text
stringlengths
216
4.52M
meta
dict
\section{Introduction} In \cite{L, L1} Lusztig proved that a quantum Kac--Moody algebra $\bu$ defined over $\bq(q)$ admits an $\A=\bz[q,q^{-1}]$--lattice $\ua$ and that any irreducible highest weight integrable representation $V$ of $\bu$ admits a corresponding $\A$--lattice, say $V_\A$. This allows us to specialize $q$ to a non--zero complex number $\zeta$ and we let $\bu_\zeta$, $W_\zeta$ denote the corresponding objects. If $\zeta$ is not a root of unity, Lusztig proved that $W_\zeta$ is irreducible and its character is the same as that of the corresponding classical representation. On the other hand, when $\zeta$ is a primitive $l^{th}$ root of unity, the situation is more interesting, even for finite--dimensional Kac--Moody algebras. In that case, $W_\zeta$ is not always irreducible: a sufficient condition for irreducibility \cite{APW} is that the highest weight $\Lambda$ of $V$ should be \lq\lq small\rq\rq \ in the sense that $(\Lambda,\alpha)<l$ for all positive roots $\alpha$. The corresponding question for infinite--dimensional Kac--Moody algebras at roots of unity is open, and in this paper we answer it in the case of level one representations of quantum affine algebras of ADE type. Note that the condition $(\Lambda,\alpha)<l$ never holds in this case; nevertheless, we find that $W_\zeta$ is irreducible provided that $l$ is coprime to the Coxeter number of the underlying finite--dimensional Lie algebra. The level one representations of an affine Lie algebra of ADE type can be explicitly constructed in the tensor product of a symmetric algebra and a twisted group algebra \cite{FK, S}. Essentially, these representations are built from the canonical representation of an infinite--dimensional Heisenberg algebra. Later, in \cite{FJ} this construction was extended to the case of the basic representations of the quantum affine algebras of ADE type. Again, the representations are built from the representation of a suitable quantum Heisenberg algebra. In this paper, we identify the natural lattice $V_\A$ of the level one representation explicitly as the tenosr product of the lattice of Schur functions tensored with the obvious $\A$--lattice in the twisted group algebra (see also \cite{J3}). We also describe the action of the divided powers of the Chevalley (and Drinfeld) generators on an $\A$--basis of $V_\A$ and this allows us to realize the level one irreducible representations $W_\zeta$ explicitly and prove that they are irreducible. Our methods also apply to the study of highest weight representations of affine Lie algebras in characteristic $p$, and the corresponding results are also new in that situation. In particular we give an explicit realization of the $\mathbb Z$-form \cite{Br} of the vertex representation of the affine Lie algebras. \section{ The algebras $\bu$, $\ua$.} Throughout this paper $\frak{g}$ will denote a simply-laced, finite-dimensional complex simple Lie algebra and $(a_{ij})_{i,j\in I}, \ I=\{1,\dots, n\},$ will denote its Cartan matrix. Let $(a_{ij})_{i,j\in\hat I}, \ \hat I=I \cup \{0\}$, be the extended Cartan matrix of $\frak g$ and let $\ag$ be the corresponding affine Lie algebra. Let $R$ (resp. $R^+$) denote a set of roots (resp. positive roots) of $\frak g$ and let $\alpha_i$ ($i\in I$) be a set of simple roots. Let $Q$ be the root lattice of $\frak g$, $P$ the weight lattice and let $\omega_i\in P$ ($i\in I$) be the fundamental weights of $\frak g$. For $\omega\in P$, $\eta\in Q$, define an integer $|\omega|\cdot|\eta|$ by extending bilinearly the assignment $|\omega_i|\cdot|\alpha_j| =\delta_{ij}$. Notice that $|\alpha_i|\cdot|\alpha_j| =a_{ij}$. Let $\theta$ be the highest root of $\frak g$. Let $q$ be an indeterminate, let $\mathbb{Q}(q)$ be the field of rational functions in $q$ with rational coefficients, and let $\mathcal{A}=\mathbb {Z}[q,q^{-1}]$ be the ring of Laurent polynomials with integer coefficients. For $r,m\in\mathbb N$, $m\ge r$, define \begin{equation*} [m]=\frac{q^m -q^{-m}}{q -q^{-1}},\ \ \ \ [m]! =[m][m-1]\ldots [2][1],\ \ \ \ \left[\begin{matrix} m\\ r\end{matrix}\right] = \frac{[m]!}{[r]![m-r]!}. \end{equation*} Then $\left[\begin{matrix} m\\r\end{matrix}\right]\in\mathcal A$ for all $m\ge r\ge 0$. \begin{prop}{\label{defnbu}} There is a Hopf algebra $\bu$ over $\bq(q)$ which is generated as an algebra by elements $E_{\alpha_i}$, $F_{\alpha_i}$, $K_i^{{}\pm 1}$ ($i\in\hat I$), $D^{\pm 1}$ with the following defining relations: \begin{align*} K_iK_i^{-1}=K_i^{-1}K_i&=1,\ \ \ \ K_iK_j=K_jK_i,\\ K_iD=DK_i,\ \ DD^{-1}&=D^{-1}D=1,\\ DE_{\alpha_i}D^{-1}=q^{\delta_{i0}}E_{\alpha_i},\ \ &DF_{\alpha_i}D^{-1}=q^{-\delta_{i0}}F_{\alpha_i}, \\ K_iE_{\alpha_j} K_i^{-1}&=q^{ a_{ij}}E_{\alpha_j},\\ K_iF_{\alpha_j} K_i^{-1}&=q^{-a_{ij}}F_{\alpha_j},\\ [E_{\alpha_i}, F_{\alpha_j} ]&=\delta_{ij}\frac{K_i-K_i^{-1}}{q-q^{-1}},\\ \sum_{r=0}^{1-a_{ij}}(-1)^r\left[\begin{matrix} 1-a_{ij}\\ r\end{matrix}\right] &(E_{\alpha_i})^rE_{\alpha_j}(E_{\alpha_i})^{1-a_{ij}-r}=0\ \ \ \ \ \text{if $i\ne j$},\\ \sum_{r=0}^{1-a_{ij}}(-1)^r\left[\begin{matrix} 1-a_{ij}\\ r\end{matrix}\right] &(F_{\alpha_i})^rF_{\alpha_j}(F_{\alpha_i})^{1-a_{ij}-r}=0\ \ \ \ \ \text{if $i\ne j$}. \end{align*} The comultiplication of $\bu$ is given on generators by \begin{align*} \Delta(E_{\alpha_i})&=E_{\alpha_i}\ot 1+K_i\ot E_{\alpha_i},\ \ \Delta(F_{\alpha_i})=F_{\alpha_i}\ot K_i^{-1} + 1\ot F_{\alpha_i},\\ \Delta(K_i)&=K_i\ot K_i, \qquad \Delta(D)=D\ot D, \end{align*} for $i\in\hat I$.\hfill\qedsymbol \end{prop} Let $\bu^+$ (resp. $\bu^-$; $\bu^0$) be the $\bq(q)$-subalgebras of $\bu$ generated by the $E_{\alpha_i}$ (resp. $F_{\alpha_i}$; $K_i^{\pm 1}$ and $D^{\pm 1}$) for $i\in \hat{I}$. The following result is well-known, see \cite{L} for instance. \begin{lem}{\label {butriangle}} $\bu\isom \bu^-\otimes\bu^0\otimes\bu^+$ as $\bq(q)$-vector spaces.\hfill\qedsymbol \end{lem} It is convenient to use the following notation: \begin{equation*}E_{\alpha_i}^{(r)}=\frac{E_{\alpha_i}^r}{[r]!}.\end{equation*} The elements $F_{\alpha_i}^{(r)}$ are defined similarly. Let $\ua$ denote the $\A$--subalgebra of $\bu$ generated by $E_{\alpha_i}^{(r)}$, $F_{\alpha_i}^{(r)}$, $K_i^{\pm 1}$ ($i\in \hat I$) and $D^{\pm 1}$. The subalgebras $\ua^\pm$ are defined in the obvious way. For $i\in \hat{I}$, $r\ge 1$, $m\in\mathbb Z$, define elements \begin{align*}\genfrac{[}{]}{0pt}{}{K_i,m}{r}& = \prod_{s=1}^r \frac{K_i q^{m-s+1} - K_i^{-1} q^{-m+ s-1}}{q^s - q^{-s}},\\ \genfrac{[}{]}{0pt}{}{D,m}{r}& = \prod_{s=1}^r \frac{D q^{m-s+1} - D^{-1} q^{-m+ s-1}}{q^s - q^{-s}}.\\ \end{align*} Let $\ua^0$ be the $\mathcal A$--subalgebra of $\ua$ generated by $K_i^{\pm 1}$, $D^{\pm 1}$, $\genfrac{[}{]}{0pt}{}{K_i,m}{r}$ and $\genfrac{[}{]}{0pt}{}{D,m}{r}$, $i\in\hat{I}$, $r\ge 1$ and $m\in\mathbb Z$. The following is well--known (see \cite{L}). \begin{lem}{\label{atriangle}} We have $\ua\cong\ua^-\ua^0\ua^+$.\end{lem} We shall also need another realization of $\bu$, due to \cite{Dr, B, J2}. \begin{thm}{\label{newr}} There is an isomorphism of $\bq(q)$-Hopf algebras from $\bu$ to the algebra with generators $x_{i,r}^{{}\pm{}}$ ($i\in I$, $r\in\bz$), $K_i^{{}\pm 1}$ ($i\in I$), $h_{i,r}$ ($i\in I$, $r\in \bz\backslash\{0\}$) and $C^{{}\pm 1}$, and the following defining relations: \begin{align*} C^{\pm 1}\ &\text{are central,}\\ K_iK_i^{-1} = K_i^{-1}K_i =1,\;\; &CC^{-1} =C^{-1}C =1,\\ K_iK_j =K_jK_i,\;\; &K_ih_{j,r} =h_{j,r}K_i,\\ K_ix_{j,r}^\pm K_i^{-1} &= q^{{}\pm a_{ij}}x_{j,r}^{{}\pm{}},\\ DD^{-1} =D^{-1}D =1,\;\;& DK_i =K_iD,\\ Dh_{j,r}D^{-1} =q^rh_{j,r},\;\;& Dx_{j,r}^{\pm 1}D^{-1} = q^rx_{j,r}^\pm,\\ [h_{i,r},h_{j,s}]&=\delta_{r,-s}\frac1{r}[ra_{ij}]\frac{C^r-C^{-r}} {q-q^{-1}},\\ [h_{i,\pm r} , x_{j,s}^{{}\pm{}}] &= \pm\frac1r[ra_{ij}]x_{j,s\pm r}^{{}\pm{}},\ \ r>0, \\ [h_{i,\mp r} , x_{j,s}^{{}\pm{}}] &= \pm\frac1rC^r[ra_{ij}]x_{j,s\pm r}^{{}\pm{}},\ \ r>0, \\ x_{i,r+1}^{{}\pm{}}x_{j,s}^{{}\pm{}} -q^{{}\pm a_{ij}}x_{j,s}^{{}\pm{}}x_{i,r+1}^{{}\pm{}} &=q^{{}\pm a_{ij}}x_{i,r}^{{}\pm{}}x_{j,s+1}^{{}\pm{}} -x_{j,s+1}^{{}\pm{}}x_{i,r}^{{}\pm{}},\\ [x_{i,r}^+ , x_{j,s}^-]=\delta_{i,j} & \frac{ C^{-s}\psi_{i,r+s}^+ - C^{-r} \psi_{i,r+s}^-}{q - q^{-1}},\\ \sum_{\pi\in\Sigma_m}\sum_{k=0}^m(-1)^k\left[\begin{matrix}m\\k\end{matrix} \right] x_{i, r_{\pi(1)}}^{{}\pm{}}\ldots x_{i,r_{\pi(k)}}^{{}\pm{}} & x_{j,s}^{{}\pm{}} x_{i, r_{\pi(k+1)}}^{{}\pm{}}\ldots x_{i,r_{\pi(m)}}^{{}\pm{}} =0,\ \ \text{if $i\ne j$}, \end{align*} for all sequences of integers $r_1,\ldots, r_m$, where $m =1-a_{ij}$, $\Sigma_m$ is the symmetric group on $m$ letters, and the $\psi_{i,r}^{{}\pm{}}$ are determined by equating powers of $u$ in the formal power series $$\sum_{r=0}^{\infty}\psi_{i,\pm r}^{{}\pm{}}u^{{}\pm r} = K_i^{{}\pm 1} exp\left(\pm(q-q^{-1})\sum_{s=1}^{\infty}h_{i,\pm s} u^{{}\pm s}\right).$$ \hfill\qedsymbol\end{thm} Following \cite[Section 3]{CP}, we define elements $P_{k,i}$ and $\tilde{P}_{k,i}$ via the generating functions \begin{equation} \label{integralimaginary} \mathcal{P}_i^\pm(u)= \sum_{k \ge 0}{ P^\pm_{i,k}} u^k = \exp \left(- \sum_{k = 1}^\infty \frac{h_{i,\pm k}}{[k]}u^k\right)= \exp \left( -\sum_{k = 1}^\infty \frac{\tilde{h}_{i,\pm k}}{k}u^k\right), \end{equation} \begin{equation} \label{tildeP} \tilde{\mathcal{P}}_i^\pm(u)=\sum_{k \ge 0}{\tilde{P}^\pm_{i, k}} u^k = \exp \left( \sum_{k = 1}^\infty \frac{h_{i,\pm k}}{[k]}u^k\right)=\exp \left( \sum_{k = 1}^\infty \frac{\tilde{h}_{i,\pm k} }{k}u^k\right), \end{equation} where $\tilde{h}_{i,k} = \frac{kh_{i,k}}{[k]}$. Notice that these formulas are exactly those that relate the elementary symmetric functions (resp. complete symmetric functions) to the power sum symmetric functions \cite{M}. For a vertex operator approach to this and to Schur functions, see \cite{J3}. The following result was proved in \cite[Section 5]{CP}. \begin{lem} For all $i\in I$, $k\in\bz$, $k\ge 0$, we have \begin{equation*} P^\pm_{i,k},\ \ \tilde{P}^\pm_{i, k}\in\ua.\end{equation*}\end{lem} Let $\tilde\ua$ (resp. $\tilde\ua^\pm$) be the $\mathcal A$--subalgebra generated by $(x_{i,n}^\pm )^{(r)}$, $r,n\in\bz$, $r\ge 0$, $i\in I$, (resp. $n\in\bz$, $\pm n\ge 0$) and $\ua^0$. The following result is proved in \cite[Section 2]{BCP}. \begin{prop}{\label{adrin}} We have, \begin{equation*}\ua=\tilde\ua,\ \ \ \ \ua^\pm\subset\tilde\ua^\pm. \ \ \ \ \ \ \qedsymbol\end{equation*} \end{prop} Finally, let $\bu(0)$ (resp. $\ua(0)$) be the $\bq(q)$--subalgebra of $\bu$ (resp. the $\A$--subalgebra of $\ua$) generated by the elements $h_{i,n}$, $i\in I$, $n\in\bz$ (resp. $P^\pm_{i,k}$, $i\in I$, $k\in\bz$, $k>0$), $C^{\pm 1}$. The subalgebras $ \bu^\pm(0)$ and $\ua^\pm(0)$ are defined in the obvious way. \begin{prop} \label{heis} \begin{enumerate} \item[(i)] The algebra $\bu(0)$ is defined by the relations \begin{align*} [h_{i,n}, h_{j,m}]& = \frac 1n\delta_{m,-n}{[na_{ij}]}\frac{C^n-C^{-n}}{q-q^{-1}},\\ C^{\pm 1}h_{i,n}& = h_{i,n}C^{\pm 1},\\ \end{align*} for all $i,j\in I$ and $m,n\in\bz$. In particular, $\bu^\pm(0)$ is commutative. \item[(ii)] For all $i\in I$, $k>0$, we have \begin{align*} P^\pm _{i,k}&=-\frac 1k\sum_{m=0}^k \tilde{h}_{i,m}P^\pm_{i,k-m},\\ \tilde{P}^\pm _{i,k}&=\frac 1k\sum_{m=0}^k \tilde{h}_{i,m}\tilde{P}^\pm_{i,k-m}.\end{align*} In particular, $\tilde{h}_{i,k}, \tilde{P}^\pm_{i,k}\in\ua(0)$ and as $\bq(q)$--spaces we have \begin{align*} \bu(0)\cong \bq(q)\otimes_\A\ua(0),\\ \bu^\pm(0)\cong \bq(q)\otimes_\A\ua^\pm(0).\end{align*}\end{enumerate} \item[(iii)]Monomials in $P^\pm_{i, n}$ (resp. $\tilde{P}^\pm_{i, n}$), $i\in I$, $n>0$, form a basis for $\ua^\pm(0)$. \end{prop} \begin{pf} Part (i) is a consequence of the PBW theorem for $\bu$ proved in \cite{B}. Parts (ii) and (iii) follow from the definition of the elements $\tilde{P}^\pm_{i,k}$ (see \cite{BCP} for details). \end{pf} \section{ The level one representations of $\bu$ and $\ua$} We begin this section by recalling the natural irreducible representation of $\bu(0)$ and we construct a natural $\ua(0)$--lattice in this representation. We then recall the definition of the highest weight representations $V_q(\Lambda)$ of $\bu$ and the lattice $V_{\mathcal A}(\Lambda)$ of $\ua$, see \cite{L}. Finally, we recall the explicit construction of the level one representations given in \cite{FJ} and state and prove the main theorem of the paper. Consider the left ideal $\mathcal I$ in $\bu(0)$ generated by $C^{\pm 1}-q^{\pm 1}$ and $\bu^+(0)$. Then, $\bu(0)/\mathcal I$ is a left $\bu(0)$--module through left multiplication. It is easy to see that as $\bq(q)$--spaces, we have $$ \bu^-(0)\cong \bu(0) /\mathcal I.$$ Thus $ \bu^-(0)$ acquires the structure of a left $\bu(0)$--module, and we let $\pi:\bu(0)\to{\text{End}}(\bu^-(0))$ be this representation. Then, elements of $\bu^-(0)$ act by left multiplication and it is easy to see that for $n>0$, $i\in I$, $\pi(h_{i,n})$ is the derivation of $\bu^-(0)$ obtained by extending the assignment, \begin{equation*}\pi(h_{i,n})h_{j,-m} = \delta_{n,m}\frac{[na_{ij}][n]}{n}.\end{equation*} \begin{prop} {\label{heisr}} \begin{enumerate} \item[(i)] $\pi$ is an irreducible representation of $\bu(0)$. \item[(ii)]For $i,j\in I$, we have, \begin{equation*}\pi(\tilde\mathcal{P}_i^+(u)).\tilde\mathcal{P}_j^-(v) = f_{i,j}(u,v)\tilde\mathcal{P}_j^-(v),\end{equation*} \begin{equation*}\pi(\mathcal{P}_i^+(u)).\mathcal{P}_j^-(v)= f_{i,j}(u,v)\mathcal{P}_j^-(v),\end{equation*} \begin{equation*}f_{i,j}(u,v)\pi(\mathcal{P}_i^+(u)).\tilde\mathcal{P}_j^-(v)= \tilde\mathcal{P}_j^-(v),\end{equation*} where the power series $f_{i, j}$ is defined by \begin{align*} {f}_{i,j}(u,v)&= 1\ \ \ \text{if $a_{ij}=0$},\\ &=(1-uv)\ \ \ \text{if $a_{ij}=-1$},\\ &=(1-quv)^{-1}(1-q^{-1}uv)^{-1}\ \ \ \text{if $a_{ij}=2$}.\end{align*} \item[(iii)] $\pi(\ua(0)) \ua^-(0)\subset \ua^-(0)$. \end{enumerate} \end{prop} \begin{proof} Part (i) is well-known. For (ii), notice that the relations in Proposition \ref{heis} imply that \begin{align*}&\pi(\tilde\mathcal{P}_-^+(u)).\tilde\mathcal{P}_j^-(v)\\ &=\exp \left( \sum_{k = 1}^\infty \frac{\pi(\tilde{h}_{i, k})} {k}u^k\right) \exp \left( \sum_{k = 1}^\infty \frac{\tilde{h}_{j,-k}}{k}u^k\right)\\ &=\exp\left(\sum_{k=1}^\infty\frac{[ka_{ij}]}{k[k]}u^kv^k\right)\exp \left( \sum_{k = 1}^\infty \frac{\tilde{h}_{j,-k}}{k}u^k\right)\exp\left(\sum_{k = 1}^\infty \frac{\pi(\tilde{h}_{i, k})} {k}u^k \right).1\\ &=f_{i,j}(u,v)\tilde\mathcal{P}_j^-(v).\end{align*} The second equality above follows by using the Campbell--Hausdorff formula. The calculation of $f_{i,j}(u,v)$ is now straightforward. The other equations are proved similarly. Part (iii) follows immediately from (ii). \end{proof} By a weight, we mean a pair $(\mu, n)\in \bz^{|\hat I|}\times \bz$. If $n=0$, we shall denote the pair $(\mu,0)$ as $\mu$. A representation $W$ of $\bu$ is said to be of type 1 if $$W=\bigoplus_{(\mu, n)} W_{\mu,n},$$ where $W_{\mu,n}=\{w\in W| K_i.w=q^{\mu_i}w,\ \ D.w =q^n w\}$. If $W_{\mu,n}\ne 0$, then $W_{\mu,n}$ is called the weight space of $W$ with weight $(\mu,n)$. Throughout this paper we will consider only type 1 representations. Writing $\theta=\sum_{i\in I}d_i\alpha_i$, we define the level of $(\mu, n)$ to be $\sum_{i\in I}d_i\mu_i+\mu_0$. For $i\in\hat I$, let $\Lambda_i$ be the $\hat I$--tuple with one in the $i^{th}$ place and zero elsewhere. Given a weight $\Lambda=\sum_in_i\Lambda_i$, $n_i\ge 0$, let $V_q(\Lambda)$ be the irreducible highest weight $\bu$--module with highest weight $\Lambda$ and let $v_{\Lambda}$ be the highest weight vector. Thus, $V_q(\Lambda)$ is generated by $v_\Lambda$ with relations, \begin{equation*} E_{\alpha_i}.v_{\Lambda}=0,\ \ K_i.v_{\Lambda}=q^{n_i}v_{\Lambda},\ \ D.v_{\Lambda}=v_{\Lambda}, \ \ F_{\alpha_i}^{n_i+1}.v_{\Lambda}=0,\ \ \end{equation*} for $i\in\hat{I}$. Clearly $V_q(\Lambda)$ is of type 1. We say that $V_q(\Lambda)$ has level one if $\Lambda$ has level one. Set \begin{equation*} V_{\mathcal A}(\Lambda)=\ua.v_\Lambda.\end{equation*}By Lemma \ref{atriangle} we see that $V_\A =\ua^-.v_\Lambda$. The following result is now an immediate consequence of Proposition \ref{adrin}. \begin{lem} We have \begin{equation*} V_\A(\Lambda)=\tilde\ua.v_\Lambda=\tilde\ua^-.v_\Lambda.\end{equation*} \hfill\qedsymbol\end{lem} The following result is due to Lusztig \cite{L}. \begin{prop}{\label{vlatt}} $V_{\mathcal A}(\Lambda)$ is a $\ua$--submodule of $V_q(\Lambda)$ such that \begin{equation*} V_q(\Lambda)\cong V_{\mathcal A}(\Lambda)\otimes_{\mathcal A}\bq(q).\end{equation*} Further, \begin{equation*}V_{\mathcal A}(\Lambda)=\bigoplus_{\mu,n} V_{\mathcal A}(\Lambda)\cap V_q(\Lambda)_{\mu,n},\end{equation*} and \begin{equation*} {\text{dim}}_{\mathcal A}(V_{\mathcal A}(\Lambda)\cap V_q(\Lambda)_{\mu,n}) ={\text{dim}}_{\bq(q)} V_q(\Lambda)_{\mu,n}.\ \ \ \ \ \ \qedsymbol\end{equation*}\end{prop} We turn now to the realization of the level one representations of $\bu$. In fact we shall restrict ourselves to constructing the basic representation of $\hat{\frak g}$, i.e. the representation corresponding to $\Lambda_0$. The construction of the other level one representations is identical except that one adjoins $v_{\Lambda_i}$ to the twisted group algebra (see \cite{FJ}). Fix a bilinear map $\epsilon: Q\times Q\to\{\pm 1\}$ such that for all $i\in I$, $\al, \beta, \ga\in Q$, we have, \begin{align*} \ep(\al, 0)&=\ep(0, \al)=1, \\ \ep(\al, \beta)\ep(\al+\beta, \ga)&=\ep(\al, \beta+\ga)\ep(\beta, \ga),\\ \epsilon(\al, \beta)\epsilon(\beta,\al)&= (-1)^{|\al|\cdot|\beta|}. \end{align*} Let $\bq(q)[Q]$ be the twisted group algebra over $\bq(q)$ of the weight lattice of $\frak{g}$. Thus, $\bq(q)[Q]$ is the algebra generated by elements $e^\eta$, $\eta\in Q$, subject to the relation, \begin{equation*} e^\eta.e^{\eta'} = \epsilon(\eta,\eta')e^{\eta+\eta'}.\end{equation*} Set \begin{equation*}\mathcal{V}_q=\bu^-(0)\otimes \bq(q)[Q].\end{equation*} Let $z^\partial_i:\mathcal{V}_q\to\mathcal{V}_q[z,z^{-1}]$ be the $\bq(q)$--linear map defined by extending \begin{equation*} z^{\partial_i}(v\otimes e^{\eta})= (v\otimes e^{\eta})z^{|\eta|\cdot|\alpha_i|},\ \ v\in\bu^-(0),\ \ \eta\in Q.\end{equation*} Define operators $X^\pm_{i,n}$ on $\mathcal{V}_q$ by means of the following generating series: \begin{align*} X_i^{+}(z)&=\pi(\tilde{\mathcal{P}}_i^-(z))\pi(\mathcal{P}_i^+(q^{-1}z^{-1}))e^{\alpha_i}z^{\partial_i}\\ &=\sum_{n\in{\mathbb Z}}X_{i,n}^{+}z^{-n-1},\\ X_i^{-}(z)&=\pi({\mathcal{P}}_i^-(qz))\pi(\tilde{\mathcal{P}}_i^+(z^{-1}))e^{-\alpha_i}z^{-\partial_i}\\ &=\sum_{n\in\bz}X_{i,n}^{-}z^{-n-1}. \end{align*} The following result was proved in \cite{FJ}. \begin{thm}{\label{realiz}} The assignment $x^\pm_{i,n}\to X^{\pm}_{i,n}$, $h_{i,n}\to \pi(h_{i,n})\otimes 1$ defines a representation of $\bu$ on $\mathcal{V}_q$. In fact as $\bu$--modules we have \begin{equation*} V_q(\Lambda_0)\cong \mathcal{V}_q.\end{equation*} Further, for all $i\in I$, $u\in\bu^-(0)$, $\eta\in Q$, we have \begin{equation*} K_i(u\otimes e^\eta) = q^{|\eta|.|\alpha_i|}u\otimes e^\eta, \ \ \ C(u\otimes e^\eta) = u\otimes e^\eta. \end{equation*} The highest weight vector in $V_q(\Lambda_0)$ maps to $1\otimes 1$ under this isomorphism. \hfill\qedsymbol\end{thm} Let $\mathcal{V}_\A$ be the image of $V_\A(\Lambda_0)$ under this isomorphism. Clearly $\mathcal{V}_\A$ is a $\ua$--submodule of $\mathcal{V}_q(\Lambda_0)$ and $\mathcal{V}_q(\Lambda_0)\cong \bq(q)\otimes_{\mathcal{A}}\mathcal{V}_\A$. Set \begin{equation*}\mathcal{L} =\ua^-(0)\otimes \A[Q],\end{equation*} where $\A[Q]$ is the $\A$--span in $\bq(q)[Q]$ of the elements $e^\eta$. It follows from Proposition \ref{heis} that \begin{equation*}{\label{lattice}} \mathcal{V}_q\cong \bq(q)\otimes_{\mathcal A} \mathcal L.\end{equation*} We now state our main result. \begin{thm}{\label {main}} The lattice $\mathcal{L}$ is preserved by $\ua$, and \begin{equation*} \mathcal{L}\cong \mathcal{V}_\A\end{equation*} as $\ua$--modules. \end{thm} \begin{rem} The case $\frak{g}=sl_2$ was studied in \cite{J3}. In that paper, the author worked over $\mathcal A = \bz[q^{\frac12}, q^{-\frac12}]$ and proved that the corresponding lattice $\mathcal L$ was preserved by $\ua$ and gave the action of the divided powers of the Drinfeld generators on the Schur functions. \end{rem} The rest of the section is devoted to proving Theorem \ref{main}. We begin with the following two lemmas which are easily deduced from the definition of $X_i^\pm(z)$ and Proposition \ref{heisr}. \begin{lem}\label{r=1} Let $i\in I$, $\eta\in Q$ and $m={|\eta|.|\alpha_i|}$. Then, \begin{align*}x^+_{i, -m-1}(1\otimes e^\eta) &= \epsilon(\alpha_i,\eta)\otimes e^{\alpha_i+\eta},\\ x^-_{i, m -1}(1\otimes e^\eta) &= \epsilon(-\alpha_i,\eta)\otimes e^{-\alpha_i+\eta}. \ \ \ \ \ \ \ \ \hfill\qedsymbol\end{align*} \end{lem} \begin{lem}\label{L:product} Let $r,l\in\bz$, $r,l\ge 0$, and let $i,j_1,j_2\cdots j_l\in I$. We have, \begin{align*} &X_i^+(z_1)X_i^+(z_2)\cdots X_i^+(z_r)\left(\tilde\mathcal P^-_{j_1}(w_1)\tilde\mathcal P^-_{j_2}(w_2)\cdots \tilde\mathcal P^-_{j_l}(w_l)\otimes e^\eta\right )\\ &\\ &=\epsilon\cdot\prod_{k=1}^rz_k^{r-k+|\eta|\cdot|\alpha_i|}\prod_{1\le k<s\le r}(f_{i,i}((qz_k)^{-1}, z_s))^{-1}\prod_{1\le k\le r, 1\le s\le l}f_{i,j_s}((qz_k)^{-1}, w_s)\\ &\times \tilde\mathcal P^-_i(z_1)\tilde\mathcal P^-_i(z_2)\cdots \tilde\mathcal P^-_i(z_r)\tilde\mathcal P^-_{j_1}(w_1)\tilde\mathcal P^-_{j_2}(w_2)\cdots \tilde\mathcal P^-_{j_l}(w_l)\otimes e^{r\alpha_i+\eta}\\ &\\ &=\epsilon\cdot (z_1z_2\cdots z_r)^{|\eta|\cdot|\alpha_i|}\prod_{1\le k<s\le r}(z_k-q^{-2}z_s)(z_k-z_s)\prod_{1\le k\le r, 1\le s\le l}f_{i,j_s}((qz_k)^{-1}, w_s)\\ & \times \tilde\mathcal P^-_i(z_1)\tilde\mathcal P^-_i(z_2)\cdots\tilde \mathcal P^-_i(z_r)\tilde\mathcal P^-_{j_1}(w_1)\tilde\mathcal P^-_{j_2}(w_2))\cdots \tilde\mathcal P^-_{j_l}(w_l)\otimes e^{r\alpha_i+\eta}, \end{align*} where $\epsilon=\ep(r\alpha_i, \eta)\prod_{k=1}^{r-1}\ep(\alpha_i, k\alpha_i)$. \hfill\qedsymbol\end{lem} Let $\frak{S}_r$ be the symmetric group on $r$ letters and for $\sigma\in\mathfrak S_r$, let $l(\sigma)$ be the length of $\sigma$. \begin{lem}\label{id} We have, \begin{equation*}\sum_{\sigma\in\frak S_r} (-1)^{l(\sigma)}\prod_{k<s}(z_{\sigma(k)}-q^{-2}z_{\sigma(s)}) = q^{-r(r-1)/2}[r]!\prod_{k<s}(z_k-z_s).\end{equation*} \end{lem} \begin{proof} Observe that the left--hand side of the equation is an antisymmetric polynomial in $z_1,z_2,\cdots ,z_r$ and hence is divisible by the right hand side. Hence, by comparing degrees, we can write, \begin{equation*}\sum_{\sigma\in\frak S_r} (-1)^{l(\sigma)}\prod_{k<s}(z_{\sigma(k)}-q^{-2}z_{\sigma(s)}) = C(q)\prod_{k<s}(z_k-z_s).\end{equation*} But it is easy to see that the coefficient of $z_1^{r-1}z_2^{r-2}\cdots z_{r-1}$ on the left hand side is \begin{equation*} \sum_{\sigma\in\frak S_r}q^{-2l(\sigma)} =q^{-r(r-1)/2}[r]!,\end{equation*} thus proving the proposition. \end{proof} \begin{lem}\label {rfact} \begin{enumerate} \item [(i)] Let $\delta=(\delta_1,\delta_2,\cdots ,\delta_r)\in\bz^r$ be the $r$--tuple $(r-1,r-2,\cdots ,1,0)$. We have, \begin{equation*}\prod_{j<k}(z_j-z_k)^2 =\left(\sum_\mu a_\mu \sum_{\rho\in\mathfrak S_r}z_1^{\mu_{\rho(1)}}z_2^{\mu_{\rho(2)}}\cdots z_r^{\mu_{\rho(r)}}\right),\end{equation*} where the sum is over $\{\delta+\tau(\delta):\tau\in\mathfrak S_r\}$ and $a_\mu = (-1)^{l(\tau)}$, if $\mu=\delta+\tau(\delta)$. \item[(ii)]Let $\mathcal R$ be a commutative ring and let $G\in\mathcal R[[z_1^{\pm1},z_2^{\pm 1},\cdots ,z_r^{\pm1}]]$ be invariant under the action of the symmetric group $\frak{S}_r$. Then, for all $n\in\bz$, the coefficient of $(z_1z_2\cdots z_r)^n$ in $\prod_{j<k}(z_j-z_k)^2G$ is divisible by $r!$. \end{enumerate} \end{lem} \begin{proof} Since, \begin{equation*}\prod_{j<k}(z_j-z_k) =\sum_{\sigma\in\mathfrak S_r} (-1)^{l(\sigma)} z_1^{\delta_{\sigma(1)}}z_2^{\delta_{\sigma(2)}} \cdots z_r^{\delta_{\sigma(r)}},\end{equation*} we get \begin{equation*}\prod_{j<k}(z_j-z_k)^2 =\sum_{\sigma,\tau\in\mathfrak S_r} (-1)^{l(\sigma)+l(\tau)} z_1^{\delta_{\sigma(1)}+\delta_{\tau(1)}}z_2^{\delta_{\sigma(2)}+\delta_{\tau(2)}} \cdots z_r^{\delta_{\sigma(r)}+\delta_{\tau(r)}},\end{equation*} which becomes the formula in (i) on putting $\rho=\sigma\tau$. Part (ii) follows trivially. \end{proof} {\it Proof of Theorem \ref{main}}. Using Lemma \ref{L:product} and Lemma \ref{id}, we get \begin{align*}&\sum_{\sigma\in\frak S_r}X_i^+(z_{\sigma(1)})X_i^+(z_{\sigma(2)})\cdots X_i^+(z_{\sigma(r)}).\left(\mathcal P^-_{j_1}(w_1)\mathcal P^-_{j_2}(w_2)\cdots \mathcal P^-_{j_l}(w_l)\otimes e^\eta\right)\\ &=q^{-r(r-1)/2}[r]!\epsilon\cdot (z_1\cdots z_r)^{|\eta|\cdot|\alpha_i|}\prod_{k<s}(z_k-z_s)^2\prod_{1\le k\le r, 1\le s\le l}f_{i,j_s}((qz_k)^{-1}, w_s)\\ &\times (\tilde\mathcal P^-_i(z_1)\tilde\mathcal P^-_i(z_2)\cdots \tilde\mathcal P^-_i(z_r)\tilde\mathcal P^-_{j_1}(w_1)\tilde\mathcal P^-_{j_2}(w_2)\cdots \tilde\mathcal P^-_{j_l}(w_l))\otimes e^{r\alpha_i+\eta}, \ \ \ \ \ \ (*)\end{align*} where the constant $\ep$ is defined in Lemma \ref{L:product}. Set $F=\prod_{k<l}(z_k-z_l)^2$ and let $G$ be the right hand side of (*) divided by $F$. Then Lemma \ref{rfact} applies, and by collecting the coefficient of $(z_1z_2\cdots z_r)^{-n-1}$ on both sides of (*), we find that \begin{equation*}x_{i,n}^{(r)}. (\tilde\mathcal P^-_{j_1}(w_1)\tilde\mathcal P^-_{j_2}(w_2)\cdots\tilde \mathcal P^-_{j_s}(w_l))w_1^{\mu_1}w_2^{\mu_2}\cdots w_l^{\mu_l} \otimes e^\eta\in \mathcal L,\end{equation*} for all $\mu_1,\mu_2,\cdots ,\mu_l\in\bz$, $\eta\in Q$, or equivalently that $$(x_{i,n}^+)^{(r)}\mathcal L\subset\mathcal L.$$ One proves similarly that $(x_{i,n}^-)^{(r)}$ preserves $\mathcal L$. In particular, by Proposition \ref{adrin} $\mathcal L$ is preserved by $\ua$. To complete the proof of the theorem we must prove that $$\mathcal L=\mathcal V_\A .$$ Since $1\otimes 1\in\mathcal V_\A$, it follows from Lemma \ref{r=1} and a simple induction that $e^\eta\in\mathcal V_\A$ for all $\eta\in Q$. Next, from Theorem \ref{realiz}, we see that for $i\in I$, $k>0$, \begin{equation*}\tilde{P}^-_{i,k}(1\otimes e^\eta) = \tilde{P}^-_{i,k}\otimes e^\eta.\end{equation*} Since by Proposition \ref{heis}, the monomials in the $\tilde{P}^-_{i,k}$'s span $\ua^-(0)$, we see that $\mathcal L\subset \mathcal V_\A$. The reverse inclusion $\mathcal V_\A\subset \mathcal L$ is now clear, for $$\mathcal V_\A =\tilde\ua(1\otimes 1)\subset\mathcal L $$ since $1\otimes 1\in\mathcal L$. \hfill\qedsymbol \section{Specialization to a root of unity} Throughout this section, we let $N$ denote the Coxeter number of $\frak{g}$. It is well--known \cite{Bo} that $N= n+1$ (resp. $2n-2$, 12, 18, 30) if $\frak g$ is of type $A_n$ (resp. $D_n$, $E_6$, $E_7$, $E_8$). Let $ \zeta\in\bc^*$ denote a primitive $l^{th}$ root of unity, where $l$ is a non--negative integer coprime to $N$. Set $n=|I|$. Finally, for any $g\in\A$ we let $g_\zeta\in\bc^*$ be the element obtained by setting $q=\zeta$. \begin{lem}\label{det} Let $[A]$ denote the $n\times n$--matrix with coefficients in $\A$ whose $(i,j)$--th entry is $[a_{ij}]$. Then, \begin{align*} {\text{det}}[A] &= [n+1],\ \ \ {\text{if $\frak g$ is of type $A_n$}},\\ &= [2](q^{n-1}+q^{n-1}), \ \ \ {\text{if $\frak g$ is of type $D_n$}},\\ &=(q^4+q^{-4}-1)(q^2+q^{-2}+1), \ \ \ {\text{if $\frak g$ is of type $E_6$}},\\ &=[2](q^6+q^{-6}-1), \ \ \ {\text{if $\frak g$ is of type $E_7$}},\\ &= q^8+q^6+q^{-6}+q^{-8}-q^2-1-q^{-2} \ \ \ {\text{if $\frak g$ is of type $E_8$}}.\end{align*} Further for all $k>0$, we have \begin{equation*}({\text{det}}[A])_{\zeta^k} = {\text{det}}[A]_{\zeta^k} \ne 0.\end{equation*} \end{lem} \begin{pf} The calculation of the determinant is straightforward. If $\frak g$ is of type $A_n$, then it is easy to see that for all $k>0$, $${\text{either}}\ \ \zeta^{2k} =1,\ \ \text{or}\ \ \zeta^{2k(n+1)} \ne 1.$$ This proves the second statement of the Lemma for $\frak g$ of type $A_n$. The other cases are proved by a similar analysis: in the hardest case $E_8$ one checks that $q^8+q^6+q^{-6}+q^{-8}-q^2-1-q^{-2}$ divides $q^{60}-1$ in $\A$. The result follows. \end{pf} Let $\bc_{\zeta}$ be the one--dimensional $\mathcal A$--module defined by sending $q\to\zeta$. Let $\bu_\zeta$ be the algebra over $\bc$ defined by, \begin{equation*}\bu_\zeta=\ua\otimes_\A \bc_\zeta.\end{equation*} The subalgebras $\bu^\pm_\zeta$ and $\bu_\zeta^\pm(0)$ of $\bu_\zeta$ are defined in the obvious way and we have $$\bu_\zeta=\bu_\zeta^-\bu_\zeta^0\bu_\zeta^+.$$ Given an element $u\in\ua$, we denote by $u$ the element $u\otimes 1$ in $\bu_\zeta$. It follows from Proposition \ref{heisr} that we have a representation $\pi_\zeta: \bu_\zeta(0)\to {\text{End}}(\bu_\zeta^-(0))$. \begin{prop} {\label{heiszeta}} \begin{enumerate} \item[(i)] For all $i\in I$, $k\in\bz$, $k>0$ there exist elements $h^{i,k}\in\bu_\zeta(0)$ such that \begin{align*} [h^{i, k}, \tilde h_{j,m}] &= \delta_{k,-m}\delta_{i,j},\\ [h^{i,k}, \tilde P^-_{j,m}]&=\delta_{k,m}\delta_{i,j}.\end{align*} \item[(ii)] $\pi_\zeta$ is an irreducible representation of $\bu_\zeta(0)$. \end{enumerate} \end{prop} \begin{proof} For $k\in\bz$, $k>0$, we know by Lemma \ref{det} that the matrix $[A]_{\zeta^k}$ is invertible. Let $b_{ij}(k)$ denote the inverse of this matrix. For $i\in I$, $k\in\bz$, $k>0$, set $$h^{i,k} =\sum_{j\in I}b_{ij}(k) \tilde h_{j,k}. $$ Clearly $h^{i, k}$ satisfies \begin{equation*} [h^{i, k}, \tilde h_{j,m}] = \delta_{k,-m}\delta_{i,j}.\end{equation*} The second formula in (i) is now clear from Proposition \ref{heis}. To prove (ii), assume that $W$ is a submodule of $\bu^-_\zeta(0)$ and let $0\ne w\in W$. By Proposition \ref{heis}, we can choose $i\in I$, $k\in \bz$, $k>0$, such that $$w=\sum_{ r=0}^n (\tilde P^-_{i,k} )^r w_r$$ where $w_r$ is a polynomial in the elements $\tilde P^-_{j,l}$, $j\ne i$, $1\le l\le k$ and $\tilde P^-_{i,l}$, $1\le l<k$. Applying $h^{i,k}$ to $w$ repeatedly we find that $w_n\in W$. Repeating the argument we find that $1\in W$ thus proving the Proposition. \end{proof} We now turn to the representations of $\bu_\zeta$. Given $\Lambda=\sum_{i\in\hat I}n_i\Lambda_i$, $n_i\ge 0$, set, $$W_\zeta(\Lambda) = V_\A(\Lambda)\otimes_\A\bc_\zeta .$$ It follows from Proposition \ref{vlatt} that $W_\zeta(\Lambda)$ is a representation of $\bu_\zeta$. Again, for $v\in V_\A(\Lambda)$, we let $v\in W_\zeta(\Lambda)$ be the element $v\otimes 1$. Clearly $\bu_\zeta^+.v =0$ and \begin{equation*} W_\zeta(\Lambda) = \bu_\zeta. v_\Lambda.\end{equation*} Set $W_\zeta(\Lambda)_{\mu,n} =(V_\A(\Lambda)\cap V_q(\Lambda)_{\mu,n})\otimes_\A \bc_\zeta$. Then one knows from \cite{L, L2} that, \begin{equation*} W_\zeta(\Lambda)=\bigoplus_{\mu,n} W_\zeta(\Lambda)_{\mu, n}, \ \ {\text{dim}}_\bc W_\zeta(\Lambda) ={\text{dim}}_{\bq(q)} V_q(\Lambda), \end{equation*} and $w\in W_\zeta(\Lambda)_{\mu,n}$ iff \begin{align*} K_i.v = \zeta^{\mu_i'}v, \ \ & \ \ \genfrac{[}{]}{0pt}{}{K_i,0}{l}.v = \mu_i''v,\\ D.v =\zeta^{n'}v,\ \ &\ \ \genfrac{[}{]}{0pt}{}{D,0}{l}.v = n''v,\end{align*} where $\mu_i=\mu_i'+l\mu_i''$, $0\le \mu_i'<l$, and $n'$ and $n''$ are defined similarly. Turning now to the level one basic representation, we see from Theorem \ref{main} that \begin{equation*} W_\zeta(\Lambda_0)\cong \bu_\zeta^-(0)\otimes \bc_\zeta[Q].\end{equation*} The main result of this section is: \begin{thm} $W_\zeta(\Lambda_0)$ is irreducible. \end{thm} \begin{pf} Set $W=W_\zeta(\Lambda_0)$ and let $0\ne W'$ be a submodule of $W$. Then $W'$ contains a non--zero vector $w\in W_{\mu,n}$ such that $$\bu_\zeta^+.w =0.$$ It is clear from Theorem \ref{realiz} that $w$ must be of the form $w_\mu\otimes e^\mu$ for some $w_\mu\in\bu_\zeta^-(0)$ with $$\bu^+_\zeta(0).w_\mu =0.$$ By Proposition \ref{heiszeta} we see that this forces $w_\mu =1$ and hence that $1\otimes e^\mu\in W'$. Proposition \ref{r=1} now shows that $1\otimes e^{\nu }$ for all $\nu\in Q$ and hence finally that $W'=W$. \end{pf}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Given a domain $\mathcal{D}\in {\mathbb R}^d$, with $d$ a positive integer, a noisy objective function is a stochastic process $f$ : $(x,\omega )\mapsto f(x,\omega )$ with $x\in\mathcal{D}$ and $\omega $ a random variable independently sampled at each call to $f$. Noisy optimization is the search of $x$ such that ${\mathbb E}\left[ f(x,\omega )\right]$ is approximately minimum. Throughout the paper, $x^{*}$ denotes the unknown exact optimum, supposed to be unique. For any positive integer $n$, $\tilde{x}_{n}$ denotes the search point used in the $n^{th}$ function evaluation. We here consider black-box noisy optimization, i.e we can have access to $f$ only through calls to a black-box which, on request $x$, (i) randomly samples $\omega $ (ii) returns $f(x,\omega )$. Among zero-order methods proposed to solve noisy optimization problems, some of the most usual are evolution strategies; \cite{abinvestigation} has studied the performance of evolution strategies in the presence of noise, and investigated its robustness by tuning the population size of the offspring and the mutation strength. Another approach consists in using resamplings of each individual (averaging multiple resamplings reduces the noise), rather than increasing the population size. Resampling means that, when evaluating $f(x,\omega )$, several independent copies $\omega _1,\dots,\omega _r$ of $\omega $ are used (i.e. the black-box oracle is called several times with a same $x$) and we use as an approximate fitness value $\frac1r \sum_{i=1}^r f(x,\omega _i)$ in the optimization algorithm. The key point is how to choose $r$, number of resamplings, for a given $x$. Another crucial point is the model of noise. Different models of noise can be considered: additive noise (Eq. \ref{additive}), multiplicative noise (Eq. \ref{multip}) or a more general model (Eq. \ref{ourmodel}). Notice that, in Eq. \ref{ourmodel} when $z>0$, the noise decreases to zero near the optimum; this setting is not artificial as we can observe this behavior in many real problems. Let us give an example in which the noise variance decreases to zero around the optimum. Consider a Direct Policy Search problem, i.e. the optimization of a parametric policy on simulations. Assume that we optimize the success rate of a policy. Assume that the optimum policy has a success rate 100\%. Then, the variance is zero at the optimum. \subsection{Convergence Rates: $\log$-linear convergence and $\log$-$\log$ convergence} Depending on the specific class of optimization problems and on some internal properties of the algorithm considered, we obtain different uniform rates of convergence (where the convergence can be almost sure, in probability or in expectation, depending on the setting); a fast rate will be a $\log$-linear convergence, as follows: \begin{eqnarray} \mbox{{\bf{Fast rate:}} }\limsup_{n} \frac{\log{|| \tilde{x}_{n}-x^{*}||}}{{n}} = -A\ <\ 0,\label{eqnf}\label{loglin} \end{eqnarray} In the noise-free case, evolution strategies typically converge linearly in $\log$-linear scale, as shown in \cite{TCSAnne04-corr,AJT,bey01,Rechenberg,fournierppsn}.\\ The algorithm presents a slower rate of convergence in case of $\log$-$\log$ convergence, as follows: \begin{eqnarray} \mbox{{\bf{Slow rate:}} }\limsup_{n} \frac{\log{|| \tilde{x}_{n}-x^{*}||}}{{\log n}} = -A\ < \ 0,\label{eqn}\label{loglog} \end{eqnarray} The $\log$-$\log$ rates are typical rates in the noisy case (see \cite{abnoise,bignoise2,chen1988,clop,fabian,shamir,decock}). Nevertheless, we will here show that, under specific assumptions on the noise (if the noise around the optimum decreases ``quickly enough'', see section \ref{sectionmodel}), we can reach faster rates: $\log$-linear convergence rates as in Eq. \ref{loglin}, by averaging a constant number of resamplings of $f(x,\omega )$. \subsection{Additive noise model} Additive noise refers to: \begin{equation} f(x,\omega )=||x-x^*||^p+ noise_{\omega },\label{additive} \end{equation} where $p$ is a positive integer and where $noise_{\omega }$ is sampled independently with a fixed given distribution. In this model, the noise has lower bounded variance, even in the neighborhood of the optimum. The uniform rate typically converges linearly in $\log-\log$ scale (cf Eq. \ref{loglog}) as discussed in \cite{abnoise,chen1988,clop,fabian,shamir,decock}. This important case in applications has been studied in \cite{chen1988,fabian,fabian2,shamir} where tight bounds have been shown for stochastic gradient algorithms using finite differences. When using evolution strategies, \cite{bignoise2} has shown mathematically that an exponential number of resamplings (number of resamplings scaling exponentially with the index of iterations) or an adaptive number of resamplings (scaling as a polynomial of the inverse step-size) can both lead to a $\log$-$\log$ convergence rate. \subsection{Multiplicative noise model} Multiplicative noise, in the unimodal spherical case, refers to \begin{equation} f(x,\omega )=||x-x^*||^p+||x-x^*||^p\times noise_{\omega }\label{multip} \end{equation} and some compositions (by increasing mappings) of this function, where $p$ is a positive integer and where $noise_{\omega }$ is sampled independently with a fixed given distribution. \cite{augerupu} has studied the convergence of evolution strategies in noisy environments with multiplicative noise, and essentially shows that the result depends on the noise distribution: if $noise_{\omega }$ is conveniently lower bounded, then some standard $(1+1)$ evolution strategy converges to the optimum; if arbitrarily negative values can be sampled with non-zero probability, then it does not converge. \subsection{A more general noise model}\label{sectionmodel} Eqs. \ref{additive} and \ref{multip} are particular cases of a more general noise model: \begin{equation} f(x,\omega )=||x-x^*||^p + ||x-x^*||^{pz/2}\times noise_{\omega}\label{ourmodel}. \end{equation} where $p$ is a positive integer, $z\geq 0$ and $noise_{\omega }$ is sampled independently with a fixed given distribution. Eq. \ref{ourmodel} boils down to Eq. \ref{additive} when $z=0$ and to Eq. \ref{multip} when $z=2$. We will here obtain fast rates for some larger values of $z$. More precisely, we will show that when $z>2$, we obtain $\log$-linear rates, as in Eq. \ref{loglin}. Incidentally, this shows some tightness (with respect to $z$) of conditions for non-convergence in \cite{augerupu}. \section{Theoretical analysis}\label{th} Section \ref{prel} is devoted to some preliminaries. Section \ref{th1} presents results for constant numbers of resamplings on our generalized noise model (Eq. \ref{ourmodel}) when $z>2$. \subsection{Preliminary: noise-free case}\label{prel} Typically, an evolution strategy at iteration $n$: \begin{itemize} \item generates $\lambda$ individuals using the current estimate $x_{n-1}$ of the optimum $x^{*}$ and the so-called mutation strength (or step-size) $\sigma_{n-1}$, \item provides a pair $(x_{n},\sigma_{n})$ where $x_{n}$ is a new estimate of $x^{*}$ and $\sigma_{n}$ is a new mutation strength. \end{itemize} From now on, for the sake of notation simplicity, we assume that $x^{*}=0$. For some evolution strategies and in the noise-free case, we know (see e.g. Theorem 4 in \cite{TCSAnne04-corr}) that there exists a constant $A$ such that : \begin{eqnarray} \frac{\log(||x_n||)}{n} \xrightarrow[n \rightarrow \infty]{a.s}-A\label{convergence}\\ \frac{\log(\sigma_n)}{n} \xrightarrow[n \rightarrow \infty]{a.s} -A\label{convergence2} \end{eqnarray} This paper will discuss cases in which an algorithm verifying Eqs. \ref{convergence}, \ref{convergence2} in the noise-free case also verifies them in a noisy setting. {\bf{Remarks:}} {\em{ In the general case of arbitrary evolution strategies (ES), we don't know if $A$ is positive, but: \begin{itemize} \item in the case of a $(1+1)$-ES with generalized one-fifth success rule, $A>0$ see \cite{oneplusonepaper}; \item in the case of a self-adaptive $(1, \lambda)$-ES with gaussian mutations, the estimate of $A$ by Monte-Carlo simulations is positive \cite{TCSAnne04-corr}. \end{itemize} }} \begin{property}\label{propprop} For some $\delta>0$, for any $\alpha$, $\alpha'$ such that $\alpha<A$ and $\alpha'>A$, there exist $C>0$, $C'>0$, $V>0$, $V'>0$, such that with probability at least $1-\delta$ \begin{eqnarray} \forall n\geq 1, C'\exp({-\alpha'}n)\leq ||x_n||\leq C\exp({-\alpha}n);\label{linearconv}\\ \forall n\geq 1, V'\exp(-\alpha' n)\leq \sigma_n\leq V\exp(-\alpha n).\label{linearconv2} \end{eqnarray} \end{property} \begin{proof} For any $\alpha<A$, almost surely, $\log(||x_{n}||)\leq -\alpha n$ for $n$ sufficiently large. So, almost surely, $\sup_{n\geq 1}\log(||x_{n}||)+\alpha n$ is finite. Consider $V$ the quantile $1-\frac{\delta}{4}$ of $\exp\left( \sup_{n\geq 1}\log(||x_{n}||)+\alpha n\right).$ Then, with probability at least $1-\frac{\delta}{4}$, $\forall n\geq 1, ||x_{n}||\leq V\exp(-\alpha n).$ We can apply the same trick for lower bounding $||x_{n}||$, and upper and lower bounding $\sigma_{n}$, all of them with probability $1-\frac{\delta}{4}$, so that all bounds hold true simultaneously with probability at least $1-\delta$.\fbox\\ \end{proof} \subsection{Noisy case}\label{th1} The purpose of this Section is to show that if some evolution strategies perform well (linear convergence in the log-linear scale, as in Eqs. \ref{convergence}, \ref{convergence2}), then, just by considering $Y$ resamplings for each fitness evaluation as explained in Alg. \ref{generales}, they will also be fast in the noisy case. Our theorem holds for any evolution strategy satisfying the following constraints: \begin{itemize} \item At each iteration $n$, a search point $x_n$ is defined and $\lambda$ search points are generated and have their fitness values evaluated. \item The noisy fitness values are averaged over $Y$ (a constant) resamplings. \item The $j^{th}$ individual evaluated at iteration $n$ is randomly drawn by $x_n+\sigma_n\mathcal{N}_d$ with $\mathcal{N}_d$ a $d$-dimensional standard Gaussian variable. \end{itemize} This framework is presented in Alg. \ref{generales}. \begin{algorithm} \begin{algorithmic} \STATE{Initialize $x_0$ and $\sigma_0$.} \STATE{$n\leftarrow 1$} \WHILE{not finished} \FOR{$i\in\{1,\dots,\lambda\}$} \STATE{Define $x_{n,i}=x_n+\sigma_n \mathcal{N}_d$.} \STATE{Define $y_{n,i}=\frac1Y \sum_{k=1}^Y f(x_{n,i},\omega _k)$.} \ENDFOR \STATE{Update: $(x_{n+1},\sigma_{n+1})\leftarrow$update($x_{n,1},\dots,x_{n,\lambda}$,$y_{n,1},\dots,y_{n,\lambda},\sigma_{n}$).} \STATE{$n\leftarrow n+1$} \ENDWHILE \end{algorithmic} \caption{\label{generales} A general framework for evolution strategies. For simplicity, it does not cover all evolution strategies, e.g. mutations of step-sizes as in self-adaptive algorithms are not covered; yet, our proof can be extended to a more general case ($x_{n,i}$ distributed as $x_n+\sigma N$ for some noise $N$ with exponentially decreasing tail). The case $Y=1$ is the case without resampling. Our theorem basically shows that if such an algorithm converges linearly (in log-linear scale) in the noise-free case then the version with $Y$ large enough converges linearly in the noisy case when $z>2$.} \end{algorithm} We now state our theorem, under $\log$-linear convergence assumption (cf assumption (\ref{assii}) below). \begin{theorem}\label{theotheo} Consider the following assumptions: \begin{enumerate}[(i)] \item the fitness function $f$ satisfies ${\mathbb E}\left[f(x,\omega )\right]=\|x\|^p$ and has a limited variance: \begin{equation} Var(f(x,\omega)) \leq \left({\mathbb E} \left[f(x,\omega)\right]\right)^{z}\mbox{ for some }z>2;\label{limvar} \end{equation} \item\label{assii} in the noise-free case, the ES with population size $\lambda$ under consideration is log-linearly converging, i.e. for any $\delta > 0$, for some $\alpha>0$, $\alpha'>0$, there exist $C>0$, $C'>0$, $V>0$, $V'>0$, such that with probability 1-$\delta$, Eqs. \ref{linearconv} and \ref{linearconv2} hold; \item the number $Y$ of resamplings per individual is constant. \end{enumerate} Then, if $z>\max\left(\frac{ 2( p \alpha'-(\alpha-\alpha') d)}{ p \alpha}, \frac{2( 2 \alpha'-\alpha)}{\alpha}\right)$, for any $\delta>0$, there is $Y_0>0$ such that for any $Y\geq Y_0$, Eqs. \ref{linearconv} and \ref{linearconv2} also hold with probability at least $(1-\delta)^2$ in the noisy case. \end{theorem} \begin{corollary}\label{coco} Under the same assumptions, with probability at least $(1-\delta)^2$, \begin{equation} \underset{n}{\lim\sup} \frac{log(||\tilde{x}_{n}||)}{n}\leq-\frac{\alpha}{\lambda Y}\nonumber \end{equation} \end{corollary} {\bf{Proof of Corollary \ref{coco} : }} Immediate consequence of Theorem \ref{theotheo}, by applying Eq. \ref{linearconv} and using $\underset{n}{\lim\sup} \frac{log(||\tilde{x}_{n}||)}{n}=\underset{n}{\lim\sup} \frac{log(||{x}_{n}||)}{\lambda Y n}$.\fbox\\ {\bf{Remarks:}} {\em{ \begin{itemize} \item {\bf{Interpretation:}} Informally speaking, our theorem shows that if an algorithm converges in the noise-free case, then it also converges in the noisy case with the resampling rule, at least if $z$ and $Y$ are large enough. \item Notice that we can choose constants $\alpha$ and $\alpha^{'}$ very close to each other. Then the assumption $z>\max\left(\frac{ 2( p \alpha'-(\alpha-\alpha') d)}{ p \alpha}, \frac{2( 2 \alpha'-\alpha)}{\alpha}\right)$ boils down to $z>2$. \item We show a log-linear convergence rate as in the noise-free case. This means that we get $\log ||\tilde{x}_{n}||$ linear in the number of function evaluations. This is as Eq. \ref{loglin}, and faster than Eq. \ref{loglog} which is typical for noisy optimization with constant variance. \item In the previous hypothesis, the new individuals are drawn following $x_n+\sigma_n\mathcal{N}_d$ with $\mathcal{N}_d$ a $d$-dimensional standard Gaussian variable, but we could substitute $\mathcal{N}_d$ for any random variable with an exponentially decreasing tail. \end{itemize} }} {\bf{Proof of Theorem \ref{theotheo} : }} In all the proof, ${\mathcal{N}_{k}}$ denotes a standard normal random variable in dimension $k$. {\bf{Sketch of proof:}} Consider an arbitrary $\delta>0$ and $\delta_n=\exp(-\gamma n)$ for some $n\geq 1$ and $\gamma>0$.\\ We compute in Lemma \ref{boundprobatwopoints} the probability that at least two generated points $x_{n,i_1}$ and $x_{n,i_2}$ at iteration $n$ are ``close'', i.e are such that $|\ ||x_{n,i_1}||^p-||x_{n,i_2}||^p\ |\leq \delta_n$; then we calculate the probability that the noise of at least one of the $\lambda$ evaluated individuals of iteration $n$ is bigger than $\frac{\delta_n}{2}$ in Lemma \ref{probabilityonnoise}. Thus, we can conclude in Lemma \ref{probamisranking} by estimating the probability that at least two individuals are misranked due to noise. \newline We first begin by showing a technical lemma. \begin{lemma}\label{technicallemma} Let $u\in \mathbb{R}^{d}$ be a unit vector and $\mathcal{N}_{d}$ a $d$-dimensional standard normal random variable. Then for $S>0$ and $\ell >0$, there exists a constant $M>0$ such that : \begin{equation*} \max_{v\geq0} {\mathbb P}(|\ ||u+S{\mathcal{N}_{d}}||^{p}-v |\leq \ell)\leq MS^{-d}\max\left(\ell,\ell^{d/p}\right). \end{equation*} \end{lemma} \begin{proof} For any $v\geq \ell$, we denote $E_{v\geq\ell}$ the set : \begin{equation*} E_{v\geq\ell}=\left\lbrace x\ ; |\ ||x||^{p}-v\ |\leq \ell\right\rbrace =\left\lbrace x\ ; \left(v-\ell\right)^{\frac{1}{p}}\leq||x||\leq\left(v+\ell\right)^{\frac{1}{p}}\right\rbrace. \end{equation*} We first compute $\mu(E_{v\geq\ell})$, the Lebesgue measure of $E_{v\geq\ell}$ : \begin{equation*} \mu(E_{v\geq\ell})=K_d\left\lbrace\left(v+\ell\right)^{\frac{d}{p}}-\left(v-\ell\right)^{\frac{d}{p}}\right\rbrace,\\ \end{equation*} with $K_d=\frac{(2\pi)^{d/2}}{2\times 4\times \dots \times d}$ if $d$ is even, and $K_d=\frac{2(2\pi)^{(d-1)/2}}{1\times 3\times \dots \times d}$ otherwise. Hence, by Taylor expansion, $\mu(E_{v\geq\ell}) \leq Kv^{\frac{d}{p} -1}\ell$, where $K=K_{d}\left(2\frac{d}{p}+\underset{v\geq \ell}{\sup}\ \underset{0<\zeta<\frac{\ell}{v}}{\sup}\frac{q''(\zeta)}{2}\frac{\ell}{v}\right)$, with $q(x)=(1+x)^{\frac{d}{p}}$.\\ $\bullet$ If $v\geq \ell$: \begin{eqnarray*} {\mathbb P}(|\ ||u+S{\mathcal{N}_{d}}||^{p}-v |\leq \ell)&=&{\mathbb P}(u+S\mathcal{N}_{d}\in E_{v\geq\ell}),\\ &\leq&S^{-d}\underset{x\in E_{v\geq\ell}}{\sup} \left(\frac{1}{\sqrt{2\pi}}\exp(-\frac{||S^{-1}(x-u)||^2}{2})\right)\mu(E_{v\geq\ell}),\\ &\leq&M_{1}S^{-d} \ell,\\ &\leq&M_{1}S^{-d}\max\left(\ell,\ell^{d/p}\right). \end{eqnarray*} where $M_{1}=\frac{K}{\sqrt{2\pi}}\underset{v\geq\ell}{\sup}\underset{x : ||x||\leq (v+\ell)^{\frac{1}{p}}}{\sup}\left[ v^{\frac{d}{p} -1}\exp\left(-\frac{||S^{-1}(x-u)||^2}{2}\right)\right].$ \begin{eqnarray*} \mbox{$\bullet$ If $v< \ell$, }& &{\mathbb P}(|\ ||u+S\mathcal{N}_{d}||^{p}-v |\leq \ell)\leq M_{2}S^{-d} \ell^{d/p} \leq M_{2}S^{-d} \max\left(\ell,\ell^{d/p}\right), \end{eqnarray*} where $M_{2}=2^{\frac{d}{p}}\frac{K_{d}}{\sqrt{2\pi}}$. Hence the result follows by taking $M=\max(M_{1},M_{2})$. \fbox\\ \end{proof} \begin{lemma}\label{boundprobatwopoints} Let us denote by $P^{(1)}_{n}$ the probability that, at iteration $n$, there exist at least two points $x_{n,i_1}$ and $x_{n,i_2}$ such that $|\ ||x_{n,i_1}||^p-||x_{n,i_2}||^p\ |\leq \delta_n$. Then \begin{equation*} P^{(1)}_{n}\leq B\lambda^{2}\exp(-\gamma'n), \end{equation*} for some $B>0$ and $\gamma'>0$ depending on $\gamma$, $d$, $p$, $C$, $C'$, $V$, $\alpha$, $\alpha'$. \end{lemma} \begin{proof} Let us first compute the probability $P^{(0)}_n$ that, at iteration $n$, two given generated points $x_{n,i_{1}}$ and $x_{n,i_{2}}$ are such that $|\ ||x_{n,i_1}||^p-||x_{n,i_2}||^p\ |\leq \delta_n$. Let us denote by ${\mathcal{N}}_{d}^{1}$ and ${\mathcal{N}}_{d}^{2}$ two $d$-dimensional standard independent random variables, $u\in \mathbb{R}^{d}$ a unit vector and $S_{n}=\frac{\sigma_{n}}{||x_{n}||}$. \begin{eqnarray*} P^{(0)}_{n}&=&{\mathbb P}\left(|\ ||x_{n}+\sigma_{n}{\mathcal{N}}_{d}^{1}||^p-||x_{n}+\sigma_{n}{\mathcal{N}}_{d}^{2}||^p\ |\leq \delta_n\right),\\ &=&{\mathbb P}\left(|\ ||u+S_{n}{\mathcal{N}}_{d}^{1}||^p-||u+S_{n}{\mathcal{N}}_{d}^{2}||^p\ |\leq \frac{\delta_n}{||x_n||^p}\right),\\ &\leq&\max_{v\geq0} {\mathbb P}\left(|\ ||u+S_{n}{\mathcal{N}}_{d}^{1}||^{p}-v |\leq \frac{\delta_n}{||x_n||^p}\right). \end{eqnarray*} Hence, by Lemma \ref{technicallemma}, there exists a $M>0$ such that $P^{(0)}_{n}\leq MS_{n}^{-d}\left(\frac{\delta_{n}}{||x_n||^{p}}\right)^{m}$, where $m$ is such that $\left(\frac{\delta_{n}}{||x_n||^{p}}\right)^{m}=\max\left(\frac{\delta_{n}}{||x_{n}||^{p}},\left(\frac{\delta_{n}}{||x_{n}||^{p}}\right)^{d/p}\right)$. Moreover $S_{n}\geq V'C^{-1}\exp(-(\alpha' - \alpha)n)$ by Assumption (\ref{assii}). Thus $P^{(0)}_{n}\leq B\exp(-\gamma' n)$, with $B=MV'^{-d}C^{d}C'^{-mp}$ and $\gamma'=d(\alpha-\alpha^{'})+m\gamma-mp\alpha'$. In particular, $\gamma'$ is positive, provided that $\gamma$ is sufficiently large.\\ By union bound, $P^{(1)}_{n}\leq \frac{(\lambda-1)\lambda}{2} P^{(0)}_n\leq B\lambda^{2}\exp(-\gamma'n)$. \fbox\\ \end{proof} We now provide a bound on the probability $P^{(3)}_{n}$ that the fitness value of at least one search point generated at iteration $n$ has noise (i.e. deviation from expected value) bigger than $\frac{\delta_{n}}{2}$ in spite of the $Y$ resamplings. \begin{lemma}\label{probabilityonnoise} $$P^{(3)}_{n}:={\mathbb P}\left(\exists i\in \{ 1,\dots,\lambda\}\ ;\ \left|\frac{1}{Y}\sum_{j=1}^{Y} f(x_{n,i},\omega_{j})- {\mathbb E}\left[f(x_{n,i},\omega_{j})\right] \right|\geq \frac{\delta_n}{2}\right)$$ $$\leq \lambda B'\exp(-\gamma''n)$$ for some $B'>0$ and $\gamma''>0$ depending on $\gamma$, $d$, $p$, $z$, $C$, $Y$, $\alpha$, $\alpha'$.\\ \end{lemma} \begin{proof} First, for one point $x_{n,i_{0}}$, $i_{0}\in \{1,\dots,\lambda\}$ generated at iteration $n$, we write $P^{(2)}_{n}$ the probability that when evaluating the fitness function at this point, we make a mistake bigger than $\frac{\delta_{n}}{2}$.\\ $P^{(2)}_{n}={\mathbb P}(|\frac{1}{Y}\sum_{j=1}^{Y} f(x_{n,i_{0}},\omega_{j})- {\mathbb E}\left[f(x_{n,i_{0}},\omega_{j})\right]|\geq \frac{\delta_n}{2})\leq B'\exp(-\gamma''n)$ by using Chebyshev's inequality, where $B'=4Y^{-1}C^{pz}$ and $\gamma''=\alpha zp-2\gamma$. In particular, $\gamma'' >0$ if $z>\frac{2 (mp \alpha'-(\alpha-\alpha') d) }{p \alpha m}$; hence, if $z\geq \max\left(\frac{ 2( p \alpha'-(\alpha-\alpha') d)}{ p \alpha}, \frac{2( 2 \alpha'-\alpha)}{\alpha}\right)$, we get $\gamma''>0$.\\ Then, $P^{(3)}_{n}\leq \lambda P^{(2)}_n$ by union bound.\fbox\\ \end{proof} \begin{lemma}\label{probamisranking} Let us denote by $P_{misranking}$ the probability that in at least one iteration, there is at least one misranking of two individuals. Then, if $z> \max\left(\frac{ 2( p \alpha'-(\alpha-\alpha') d)}{ p \alpha}, \frac{2( 2 \alpha'-\alpha)}{\alpha}\right)$ and $Y$ is large enough, $P_{misranking}\leq \delta$. \end{lemma} This lemma implies that with probability at least $1-\delta$, provided that $Y$ has been chosen large enough, we get the same rankings of points as in the noise free case. In the noise free case Eqs. \ref{linearconv} and \ref{linearconv2} hold with probility at least $1-\delta$ - this proves the convergence with probability at least $(1-\delta)^2$, hence the expected result; the proof of the theorem is complete.\fbox\\ \begin{proof} (of the lemma) We consider the probability $P_{n}^{(4)}$ that two individuals $x_{n,i_1}$ and $x_{n,i_2}$ at iteration $n$ are misranked due to noise, so \begin{eqnarray} ||x_{n,i_1}||^p&\leq& ||x_{n,i_2}||^p\label{eq1}\\ \mbox{ and }\frac{1}{Y}\sum_{j=1}^{Y} f(x_{n,i_{1}},\omega _{j})&\geq& \frac{1}{Y}\sum_{j=1}^{Y} f(x_{n,i_{2}},\omega_{j})\label{eq2} \end{eqnarray} Eqs. \ref{eq1} and \ref{eq2} occur simultaneously if either two points have very similar fitness (difference less than $\delta_n$) or the noise is big (larger than $\frac{\delta_n}{2}$). Therefore, $P^{(4)}_{n}\leq P^{(1)}_{n}+P^{(3)}_{n}\leq \lambda^{2} P^{(0)}_{n}+\lambda P^{(2)}_{n}\leq (B+B')\lambda^{2}\exp(-\min(\gamma',\gamma'')n)$.\\ $P_{misranking}$ is upper bounded by $\sum_{n\geq 1} P^{(4)}_{n}<\delta$ if $\gamma'$ and $\gamma''$ are positive and constants large enough. $\gamma'$ and $\gamma''$ can be chosen positive simultaneously if $z> \max\left(\frac{ 2( p \alpha'-(\alpha-\alpha') d)}{ p \alpha}, \frac{2( 2 \alpha'-\alpha)}{\alpha}\right)$. \fbox\\ \end{proof} \section{Experiments : how to choose the right number of resampling ?} We consider in our experiments a version of multi-membered evolution strategies, the ($\mu$,$\lambda$)-ES, where $\mu$ denotes the number of parents and $\lambda$ the number of offspring ($\mu\leq\lambda$; see Alg. \ref{es}). We denote $(x_{n}^{1},\dots,x_{n}^{\mu})$ the $\mu$ parents at iteration $n$ and $(\sigma_{n}^{1},\dots,\sigma_{n}^{\mu})$ their corresponding step-size. At each iteration, a ($\mu$,$\lambda$)-ES noisy algorithm : (i) generates $\lambda$ offspring by mutation on the $\mu$ parents, using the corresponding mutated step-size, (ii) selects the $\mu$ best offspring by ranking the noisy fitness values of the individuals. Thus, the current approximation of the optimum $x^{*}$ at iteration $n$ is $x_{n}^{1}$, to be consistent with the previous notations, we denote $x_{n}=x_{n}^{1}$ and $\sigma_{n}=\sigma_{n}^{1}$. \begin{algorithm} \begin{algorithmic} \STATE{{\bf{Parameters : }} $Y> 0$, $\lambda\geq \mu>0$, a dimension $d>0$.} \STATE{{\bf{Input : }} $\mu$ initial points $x_{1}^{1},\dots,x_{1}^{\mu} \in {\mathbb R}^{d}$ and initial step size $\sigma_{1}^{1}>0, \dots,\sigma_{1}^{\mu} >0$.} \STATE{$n\leftarrow 1$} \WHILE{(true)} \STATE{Generate $\lambda$ individuals indenpendently using :} \begin{eqnarray*} \sigma_{j}&=&\sigma_{n}^{mod(j-1,\mu)+1}\times exp(\frac{1}{2d}\times \mathcal{N}_{1})\\ i_{j}&=&x_{n}^{mod(j-1,\mu)+1}+\sigma_{j}\mathcal{N}_{d} \end{eqnarray*} \STATE{$\forall j\in \{1,\dots,\lambda\}$, evaluate $i_{j}$ $Y$ times. Let $y_{j}$ be the averaging over these $Y$ evaluations. } \STATE{Define $j_{1},\dots,j_{\lambda}$ so that $y_{j_{1}}\leq y_{j_{2}}\leq \dots \leq y_{j_{\lambda}}$. } \STATE{{\bf{Update : }} compute $\sigma_{n+1}^{k} $ and $x_{n+1}^{k}$ for $k\in \{1,\dots,\mu\}$: \begin{eqnarray*} \sigma_{n+1}^{k}&=&\sigma_{j_{k}}\\ x_{n+1}^{k}&=&x_{j_{k}} \end{eqnarray*}} \STATE{$n\leftarrow n+1$} \ENDWHILE \end{algorithmic} \caption{\label{es}An evolution strategy, with constant number of resamplings. If we consider $Y=1$, we obtain the case without resampling. ${\mathcal{N}_{k}}$ is a $k$-dimensional standard normal random variable.} \end{algorithm} Experiments are performed on the fitness function $f(x,\omega)=||x||^p + ||x||^{pz/2}\mathcal{N}$, with $x\in {\mathbb R}^{15}$, $p=2$, $z=2.1$, $\lambda=4$, $\mu=2$, and $\mathcal{N}$ a standard gaussian random variable, using a budget of $500000$ evaluations. The results presented here are the mean and the median over 50 runs. The positive results are proved, above, for a given quantile of the results. This explains the good performance in Fig. \ref{med} (median result) as soon as the number of resamplings is enough. The median performance is optimal with just 12 resamplings. On the other hand, Fig. \ref{moy} shows the mean performance of Alg. \ref{es} with various numbers of resamplings. We see that a limited number of runs diverge so that the mean results are bad even with 16 resamplings; results are optimal (on average) for 20 resamplings. Results are safer with 20 resamplings (for the mean), but faster (for the median) with a smaller number of resamplings. \begin{figure} \center \includegraphics[width=0.8\linewidth]{medianesaes15budget500000p2z21lambda4mu2moynbruns50.eps} \caption{\label{med}Convergence of Self-Adaptive Evolution Strategies: Median results.} \end{figure} \begin{figure} \center \includegraphics[width=0.8\linewidth]{moyennesaes15budget500000p2z21lambda4mu2moynbruns50.eps} \caption{\label{moy}Convergence of Self-Adaptive Evolution Strategies: Mean results.} \end{figure} \section{Conclusion} We have shown that applying evolution strategies with a finite number of resamplings when the noise in the function decreases quickly enough near the optimum provides a convergence rate as fast as in the noise-free case. More specifically, if the noise decreases slightly faster than in the multiplicative model of noise, using a constant number of revaluation leads to a log-linear convergence of the algorithm. The limit case of a multiplicative noise has been analyzed in \cite{augerupu}; a fixed number of resamplings is not sufficient for convergence when the noise is unbounded. {\bf{Further work.}} We did not provide any hint for choosing the number of resamplings. Proofs based on Bernstein races \cite{icmlbadtruc} might be used for adaptively choosing the number of resamplings. \subsection*{Acknowledgements} This paper was written during a stay in Ailab, Dong Hwa University, Hualien, Taiwan. \bibliographystyle{abbrv}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Understanding the landscape of software vulnerabilities is a key step for developing effective security solutions. It is difficult to counter a threat that is not well understood. Fortunately, there exist vulnerability databases that can be analyzed to help shed light on the nature of publicly published software vulnerabilities. The National Vulnerability Database (NVD) \cite{NVD} is one such repository. NVD catalogs publicly disclosed vulnerabilities and provides an analysis of their attributes and severity scores using the Common Vulnerability Scoring System (CVSS) \cite{CVSS}. CVSS is used extensively by security tools and databases and is maintained by the international Forum of Incident Response and Security Teams (FIRST) \cite{FIRST}. The CVSS provides a framework for describing vulnerability attributes and then scoring them as to their projected severity. The attributes are metric values that are the input to a CVSS equation that generates the score. It is the vulnerability attribute descriptions (the metric values) that are of primary interest to our work, although we also look at the raw scores. The use of CVSS by vulnerability databases provides a suite of low level metrics, encapsulated in a vector, describing the characteristics of each vulnerability. CVSS was initially released in 2005 \cite{schiffman2004common}, was completely revamped with version 2 (v2) in 2007 \cite{mell2007complete}, and was updated with new and modified metrics in 2015 with the release of version 3 (v3) \cite{CVSSv3.0}\footnote{Minor update version 3.1 was released in 2019 \cite{CVSSv3.1} but the changes therein do not effect our work.}. The software flaw vulnerability landscape was thoroughly analyzed in the scientific literature using v2 when it was first released \cite{mell2007improving, schiffman2004common, mell2006common, holm2015expert, scarfone2009analysis, wang2011improved, holm2012empirical}, but little work has been done since to evaluate changes to that landscape over time. Also in our literature survey, we did not find a single study that uses the updated and significantly modified v3 to understand the software vulnerability landscape. In this paper, we use the CVSS v2 and v3 data provided by the NVD to undertake a historical and statistical analysis of the software vulnerabilities landscape over the period 2005 to 2019. More precisely, we conduct three studies analyzing the following: \begin{itemize} \item score distributions, \item metric value distributions, \item and relative rankings of the most frequent metric values. \end{itemize} For our first study, we analyze and compare the distributions of CVSS v2 and v3 scores as generated from the NVD data. We then compare the empirical distributions against the theoretical score distributions, assuming that all CVSS vectors are equally likely (which is not the case, but it is illustrative to evaluate the differences). For our second study, we compute the distributions of the CVSS metric values (i.e., vulnerability characteristics) for each year. We then analyze the differences from 2005 to 2019 to determine if and how vulnerability characteristics change over time. For our third study, we identify the most frequent metric values and analyze their relative rankings from 2015 to 2019. For each year and for both CVSS versions, we compute the values of the top 10 observed vulnerability metrics as well as their frequencies. We then generate parallel coordinates plots showing the values and frequencies of each metric for each year. Our analysis shows that the software vulnerability landscape has been dominated by only a few vulnerability types and has changed very little from 2005 to 2019. For example, the overwhelming majority of vulnerabilities are exploitable over the network (i.e., remotely). The complexity to successfully exploit these vulnerabilities is dominantly low while attackers are generally not required to have any level of prior access to their targets (i.e., having successfully authenticated) in order to launch an attack. And most of the flaws require very limited interaction with users. On the positive side, the damage of these vulnerabilities is mostly confined within the security scope of the impacted components. Few vulnerabilities obtain greater privileges than is available to the exploited vulnerable component. Our findings are consistent to previous studies \cite{mell2007improving} (mainly based on CVSS version 2). This indicates that the same vulnerabilities are still being found in our software, suggesting that the community has not been doing a great job correcting the most common vulnerabilities. The remainder of this paper is organized as follows. Section \ref{sec:CVSS-Dataset} presents the CVSS data sets that constitute the basis of our study. Section \ref{sec:Data-Anal} gives the details of our analysis and our discussion. Section \ref{sec:relatedWork} provides a summary of related work and Section \ref{sec:conclusion} concludes. \section{The CVSS Datasets} \label{sec:CVSS-Dataset} CVSS consists of three metric groups: base, temporal, and environmental. The base group represents the intrinsic qualities of a vulnerability that are constant over time and across user environments, the temporal group reflects the characteristics of a vulnerability that change over time, and the environmental group represents the characteristics of a vulnerability that are unique to a user's environment \cite{CVSSv3.0}. In this work, we evaluate only the base metrics as no extensive database of temporal scores exists and the environment metrics are designed for an organization to customize base and temporal scores to their particular environment. Tables \ref{table:CVSS-v2} and \ref{table:CVSS-v3} show the base score metrics and possible values for v2 and v3, respectively. A particular assignment of metric values is then used as input to the CVSS base score equations to generate scores representing the inherent severity of a vulnerability in general apart from any particular environment. The raw score in the range from 0 to 10 is then often translated into a `qualitative severity rating scale' (None: 0.0, Low: 0.1 to 3.9, Medium: 4.0 to 6.9, High: 7.0 to 8.9, and Critical: 9.0 to 10.0) \cite{CVSSv3.0}. Vulnerability analysts apply the metrics to vulnerabilities to generate CVSS vector strings. The vectors describe the metric values, but not the CVSS scores, for a particular vulnerability using a simplified notation. \begin{table}[] \centering \caption{CVSS v2 metrics} \begin{tabular}{|l|l|} \hline CVSS v2 Metrics & Metric Values \\ \hline \hline Access Vector (AV) & Network (N), Adjacent (A), Local (L) \\ \hline Attack Complexity (AC) & Low (L), Medium (M), High (H) \\ \hline Authentication (Au) & Multiple (M), Single (S), None (N) \\ \hline Confidentiality (C) & Complete (C), Partial (P), None (N) \\ \hline Integrity (I) & Complete (C), Partial (P), None (N) \\ \hline Availability (A) & Complete (C), Partial (P), None (N) \\ \hline \end{tabular} \label{table:CVSS-v2} \end{table} \begin{table}[] \centering \caption{CVSS v3 metrics} \begin{tabular}{|l|l|} \hline CVSS v3 Metrics & Metric Values \\\hline\hline Attack Vector (AV) & Network (N), Adjacent (A), \\ & Local (L), Physical (P) \\\hline Attack Complexity (AC) & Low (L), High (H) \\ \hline Privileges Required (PR) & None (N), Low (L), High (H) \\ \hline User Interaction (UI) & None (N), Required (R) \\ \hline Scope (S) & Unchanged (U), Changed (C) \\ \hline Confidentiality (C) & High (H), Low (L), None (N)\\ \hline Integrity (I) & High (H), Low (L), None (N) \\ \hline Availability (A) & High (H), Low (L), None (N) \\ \hline \end{tabular} \label{table:CVSS-v3} \end{table} \begin{comment} \begin{table}[] \centering \caption{CVSS v2 metrics} \begin{tabular}{|l|l|} \hline \multicolumn{2}{|c|}{CVSS V2} \\ \hline Metric & Metric Values \\ \hline \hline Access Vector (AV) & Network (N), Adjacent (A), Local (L) \\ \hline Attack Complexity (AC) & Low (L), Medium (M), High (H) \\ \hline Authentication (Au) & Multiple (M), Single (S), None (N) \\ \hline Confidentiality (C) & Complete (C), Partial (P), None (N) \\ \hline Integrity (I) & Complete (C), Partial (P), None (N) \\ \hline Availability (A) & Complete (C), Partial (P), None (N) \\ \hline \end{tabular} \label{table:CVSS-v2} \end{table} \begin{table}[] \centering \caption{CVSS v3 metrics} \begin{tabular}{|l|l|} \hline \multicolumn{2}{|c|}{CVSS V3} \\ \hline Metric & Metric Values \\\hline\hline Attack Vector (AV) & Network (N), Adjacent (A), \\ & Local (L), Physical (P) \\\hline Attack Complexity (AC) & Low (L), High (H) \\ \hline Privileges Required (PR) & None (N), Low (L), High (H) \\ \hline User Interaction (UI) & None (N), Required (R) \\ \hline Scope (S) & Unchanged (U), Changed (C) \\ \hline Confidentiality (C) & High (H), Low (L), None (N)\\ \hline Integrity (I) & High (H), Low (L), None (N) \\ \hline Availability (A) & High (H), Low (L), None (N) \\ \hline \end{tabular} \label{table:CVSS-v3} \end{table} \end{comment} The NVD is the `U.S. government repository of standards based vulnerability management data' \cite{NVD}. It provides CVSS vectors and base scores for all vulnerabilities listed in the Common Vulnerabilities and Exposures (CVE) \cite{baker1999CVE} \cite{CVE} catalog of publicly disclosed software flaws. We use NVD to evaluate both CVSS v2 and v3 vectors and scores. The v2 data covers all CVE vulnerabilities published between 2005 and 2019. The v3 data ranges from 2015 to 2019 (only limited v3 data is available prior to 2015). These coverage dates result in the inclusion in our study of 118\,173 v2 vectors and scores and 55\,441 v3 vectors and scores. \section{Data Analysis} \label{sec:Data-Anal} We analyze the NVD CVSS data in order to better understand the software vulnerability landscape. We investigate both the current nature of the threat posed by the existence and public disclosure of these vulnerabilities as well as how this threat has changed over time. To achieve this, we conduct the three studies described previously where we analyze the following: score distributions, metric value distributions, and relative rankings of the most frequent metric values. \subsection{Score Distributions} \label{sec:ScoreDistributions} \begin{figure}[htp] \centering \includegraphics[width=0.45\textwidth]{images/Score_Theo_Emp_Anal_Hist_v3_only.png} \caption{Theoretical vs Empirical Score Distributions for CVSS version 3} \label{fig:Theo_vs_Emp} \end{figure} The top graph of Figure \ref{fig:Theo_vs_Emp} shows the theoretical distribution of the v3 scores (V2 scores are similar and not shown in the paper due to space limitation. They can be found in the appendix of \cite{OurArxivVersion}). These plots show what is expected if all CVSS vectors (i.e., vulnerability types) are equally likely to occur. Note how the theoretical distribution was designed, by the FIRST CVSS committee, to spread CVSS scores throughout the range with a somewhat normal distribution with the most probable scores occurring in the middle of the distribution (a little biased to the right). That said, it is interesting in that for both v2 and v3 some scores are not possible even though they lie within the valid range of score values. However, the empirical distribution is shown in the bottom of Figure \ref{fig:Theo_vs_Emp} for v3. The empirical data indicates a predominance of certain vectors (groupings of vulnerability characteristics) in the real world. Thus, only a few vulnerability feature sets describe the majority of publicly disclosed vulnerabilities. This leads to the frequent use of just a very small number of scores. A similar observation was made in a previous study of the v2 scoring system \cite{mell2007improving}. The results observed with v3, which uses data from 2015 to 2019 (since v3 vectors are not generally available prior to 2015) are similar to those with v2, which uses data from 2005 to 2019. Hence, the long-term obtained with CVSS v2 data is confirmed by the shorter-term data of CVSS v3. \subsection{Metric Value Distributions} \label{sec:ValueDistributions} To investigate more carefully (in order to identify) possible differences per year and trends over time, we focus on the distributions of each set of metric values per year over the time period of study. Figure \ref{fig:CVSS3-Freqs} provides the histograms for v3 from 2015 to 2019. We have also plotted the histograms for v2 \cite{OurArxivVersion}, which cover from 2005 t0 2019. The inclusion of v2 in the study allows for a comparison over 15 years as opposed to being limited to just 5 years with v3, due to its more recent development. \begin{figure*}[htp] \centering \includegraphics[width=0.9\textwidth]{images/CVSS_V3_Freq_Years_15_19.png} \caption{CVSS v3 metrics' values distributions over the years} \label{fig:CVSS3-Freqs} \end{figure*} The histograms for individual metric values for v3 appear almost the same year to year for the 5 years of study. This is the same in v2 over the longer time period of 15 years with some small exceptions: in 2014 the attack vector (AV) value of adjacent had some significance\footnote{According to the NVD team (in an email received March 10, 2020), this was a one time anomaly due to more than 800 CVEs all being announced simultaneously by an organization doing analyses on phone apps.}, the attack complexity (AC) value of medium increased some from 2007 onwards but then was steady, the authentication (Au) value of single increased slightly over the years, and the confidentiality (C), integrity (I), and availability (A) metric proportions between None, Partial, and Complete varied slightly from year to year while generally maintaining themselves about the same. Overall though, the software vulnerability landscape for publicly disclosed vulnerabilities has been almost static during the period of study. This said, doing comparisons between the v2 and v3 histograms we see some differences, but this is due to differences in the approaches of the two versions of CVSS. These differences are primarily seen in regards to the metrics C, I, and A, which we will discuss shortly. Consider the AV metric which reflects the context by which the vulnerability can possibly be exploited: Network (N), Adjacent (A), Local (L), or Physical (P). Both data sets show a high peak at N, a low peak at L and almost nothing at A and P. This indicates that the overwhelming majority of publicly disclosed software vulnerabilities are exploitable over the network (i.e., remotely), and it has been that way consistently through the period of study. The AC metric describes the conditions beyond the attacker's control that must exist in order to exploit the vulnerability. When it is low (AC:L), the attacker can expect repeatable easy successes, while when it is high (AC:H) the attack is less likely to be successful. The data shows that the AC metric is largely dominated by the values of AC:L for v3 and AC:L and AC medium (AC:M) for v2. This indicates that the set of publicly disclosed vulnerabilities have been predominantly easy to exploit. This ``easiness'' to exploit vulnerabilities is confirmed by the other metrics for each CVSS version. For v3, the Privileges Required (PR) metric describes the level of privileges an attacker must possess before successfully exploiting a vulnerability. The user interaction (UI) metric captures the requirements for a human user (other than the attacker) to participate in the successful compromising of the vulnerable components. The data shows that in most of the cases, no privilege is required and very little user interaction is needed for a successful attack. Similarly, with v2, the Au metric measures the number of times an attacker must legitimately authenticate to a target in order to be in a position to exploit a vulnerability. The data shows that almost always, there is no authentication required prior to exploiting a vulnerability. Sometimes a single authentication is required, but almost never is there a vulnerability that requires multiple authentications in order to be successfully exploited. CVSS v3 introduced a new scope (S) metric, which captures the spill-over effect: how much a vulnerability in one vulnerable component impacts resources in components outside of its security scope. When the scope is unchanged (S:U), there is no spill-over, while when the scope is changed (S:C) the vulnerability will very likely affect other components. The data shows that the scope metric has predominantly been S:U. The last three metrics C, I, and A are common to both CVSS versions. They capture the extent to which a successful exploitation of a vulnerability will affect these three principles of security on the effected component. With respect to these metrics, the v3 data shows that the impact on C, I, and A has predominantly been high (C:H, I:H, and A:H) with very similar distributions for all the years. The v2 data also shows a similar stationary behavior in the distributions. However, the difference in the fraction of high for v3 and complete for v2 is notable. One might expect the high values in CVSS v3 to be equivalent to the complete values for v2. However, this is not the case as they are defined differently. \subsection{Relative Rankings of the Most Frequent Metric Values} \label{sec:RelativeRankings} We now focus on the most prominent individual values of the metrics, evaluating the rankings of the top 10 metric values observed each year and providing a comparison between the years. Figure \ref{fig:CVSS3-Top10} shows the rankings for v3 (the same plots for v2 can be found here \cite{OurArxivVersion}). The y-axes show the top 10 most prevalent metric values, ordered from the least frequent to most frequent. Thus, the set of metric values included in the y-axis is significant (only the top ten are shown). The x-axes show the years. Each (\textit{x},\textit{y}) point indicates that on year \textit{x} the metric value at \textit{y} has a rank indicated by the number in the circle. The size of the circle is proportional to the number of times that metric value appeared in a score in that year. For example in Figure \ref{fig:CVSS3-Top10}, in 2017 the metric value AV-N was the fourth most frequent metric value within the set of all v3 vectors. However, in 2018 and 2019 this metric value became the third most frequent. Notice that in general, a value might appear in the top 10 of one year while not appearing in another year. Whenever that happens, we rank that particular value 11 for all the years in which it did not appear. For v3 (see Figure \ref{fig:CVSS3-Top10}), we observed that the same top 10 values appeared from 2016 to 2019. Furthermore, only one of those values is missing in the 2015 top 10. In addition, these values were ranked almost the same over the years. The top 2 are constant and in the same order over the time period 2015 to 2019. The top 4 and the bottom 4 (including the 11th appended value) are also constant with minor changes in the order they appear over the years. The v2 data shows similar results (see \cite{OurArxivVersion}). This is another illustration of the stationary threat landscape observed earlier. It also corroborates the observations in Figure \ref{fig:Theo_vs_Emp}, that the landscape has been dominated by just a few vulnerability types. \begin{figure*}[htp] \centering \includegraphics[width=0.9\textwidth]{images/CVSS_v3_Top10_1.png} \caption{CVSS v3 top 10 rankings} \label{fig:CVSS3-Top10} \end{figure*} In conclusion, our data indicates that the vulnerability threat landscape has been dominated by a few vulnerability types and has not evolved over the years. The overwhelming majority of software vulnerabilities are exploitable over the network (i.e., remotely). The complexity to successfully exploit these vulnerabilities is dominantly low and very little authentication to the target victim is necessary for a successful attack. Moreover, most of the flaws require very limited interaction with users. The damage of these vulnerabilities has however mostly been confined within the scope of the compromised systems. \section{Related Work} \label{sec:relatedWork} There are many efforts to understanding the software vulnerability landscape. These efforts include reports by security solutions vendors \cite{SymnatecReport2019,MCAfeeReport2019}, white papers from non-profits such as MITRE \cite{MitreTop25} and SANS \cite{SANSCyberThreat}, as well as academic papers\footnote{Any mention of commercial products or entities is for information only; it does not imply recommendation or endorsement.}. For CVSS, most studies focused on the aggregation equation that produces the CVSS numerical scores representing the severity of the vulnerability. Surprisingly, we found no studies on v3 despite its preponderance in commercial security software. Reference \cite{mell2007improving} is among the first statistical studies of the CVSS scoring system. It evaluates v1 and proposed improvements that contributed to the release of v2. Our study considers both v2 and v3 (but doesn't try to improve on either). Relative to the statistical evaluation, we consider our paper as a continuation and update of the work in \cite{mell2007improving}. However, our work uses data from a much longer time period. It also goes one step further by analyzing association rules of vulnerability metrics. It is worth noting that there are similarities between the results of the two studies. For instance, both papers show the predominance of certain types of vulnerabilities. Our temporal analysis (which was not performed in \cite{mell2007improving}) shows that this predominance is maintained over the years. Reference \cite{scarfone2009analysis} considers CVSS v1 and v2 and analyzes how effectively v2 addresses the deficiencies found in v1. It also identifies new deficiencies. In contrast, our motivation was to understand the threat landscape. Reference \cite{holm2012empirical} uses empirical data from an international cyber defense exercise to study how 18 security estimation metrics based on CVSS correlate with the actual \emph{time-to-compromised} (TTC) of 34 successful attacks. This study uses TTC as a dependent variable to analyze how well different security estimation models involving CVSS are able to approximate the actual security of network systems. The results suggest that security modeling with CVSS data alone does not accurately portray the time-to-compromise of a system. This result questions the applicability of the CVSS numerical scoring equation. Our study focused on the raw CVSS vectors, which represent the actual experts' opinions about the vulnerabilities. Reference \cite{zhang2011empirical} uses NVD data to study trends and patterns in software vulnerabilities in order to predict the time to next vulnerability for a given software application. Data mining techniques were used as prediction tools. The vulnerability features used to aid the prediction are the published time of each vulnerability and its version. We believe that these features are not sufficiently informative. Instead, we directly use the eight metrics from the CVSS base scores which constitute the best available information covering large multi-year sets of vulnerabilities. Reference \cite{Feutrill2018} also carried out a predictive study based on the NVD/CVSS and ExploitDB \cite{ExploitDB} data. Using the CVSS data, it attempts to answer two questions: \emph{ (1) Can we predict the time until a proof of concept exploit is developed based on the CVSS metrics? and (2) Are CVSS metrics populated in time to be used meaningfully for exploit delay prediction of CVEs?} The former is answered in the positive, while the latter is answered in the negative. While using the same datasets, our objective differs from that in \cite{Feutrill2018}. We did not attempt to predict the threat landscape; we provide a thorough historical and statistical study of vulnerabilities for the last fifteen years. The work in \cite{johnson2016can} is another assessment of CVSS. It evaluates the trustworthiness of CVSS by considering data found in five vulnerability databases: NVD, X-Force, OSVDB (Open Source Vulnerability Database), CERT-VN (Computer Emergency Response Team, Vulnerability Notes Database) , and Cisco IntelliShield Alerts. It then uses a Bayesian model to study consistencies and differences. It concluded that CVSS is trustworthy and robust in the sense that most of the databases generally agree. This suggests that our focus on the NVD to study the threat landscape is justified: studies using data from the other databases will likely lead to the same conclusions. All of the studies cited above are focused on v1 and v2. In our literature survey, we did not find a single study that uses the updated and significantly modified v3 to understand the software vulnerability landscape. We believe that the present paper is the first of this kind in doing so. Furthermore, our study is the first to use association rule mining and co-occurrence of vulnerability metrics' values in an attempt to characterize the software threat landscape. \section{Conclusion} \label{sec:conclusion} Our data indicates that the vulnerability threat landscape for publicly disclosed vulnerabilities has been dominated by a few vulnerability types and has not significantly changed from 2005 to 2019. However, the underlying software flaw types that enable these vulnerabilities change dramatically from year to year (for example, see \cite{NVDCWE}). This means that many flaw types result in vulnerabilities with the same properties. This is bad news because it means, as a security community, it will be difficult to eliminate certain vulnerability types because they result from a plethora of underlying software flaw types. Another concern is that the overwhelming majority of software vulnerabilities are exploitable over the network. When developing software, efforts should be made to reduce unnecessary connections, protect necessary ones, and require more authentication where possible to reduce attack surface area. Another significant issue is that most of the vulnerabilities require no sophistication to be exploited (but again this is hard to improve upon due to the many software flaw types that allow this). These two factors together combined with the finding that most vulnerabilities require very limited interaction with users facilitates the widespread hacking occurring today. Often in security literature the human is cited as the weakest link. While certainly humans can be exploited, within the set of CVE type vulnerabilities exploitation of humans plays a very minor role; training humans will have little impact in this area. Overall, this study documents the security community's inability to eliminate any types of vulnerabilities through addressing the related software flaw types. In 15 years, the vulnerability landscape hasn't changed; through the lens of the metrics in this paper we aren't making progress. Perhaps we as community need to ``stop and think'' about the ways we are developing software and/or the methods we use to identify vulnerabilities. The security community needs new approaches. We don't want to write this same paper 15 years from now showing that, once again, nothing has changed. Overall, this study shows that either we (the community) are incapable of correcting the most common software flaws, or we are focusing on the wrong flaws. In either case, it seems to us that there is a need to ``stop and think'' about the ways we are developing software and/or the methods we use to identify vulnerabilities. \begin{comment} \section*{Acknowledgement} This work was partially accomplished under NIST (National Institute for Standards and Technology) Cooperative Agreement No.70NANB19H063 with Prometheus Computing, LLC. The authors would like to thank the NVD staff for their review and consideration of this work. \end{comment}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} Learning predictive models like regressors or classifiers from data has become a routine exercise in machine learning nowadays. Nevertheless, making predictions and reasoning about classifier behavior on unseen data is still a highly challenging task for many real-world applications. This is even more true when data is affected by uncertainty, e.g., in the case of noisy or missing observations. A principled way to deal with this kind of uncertainty would be to probabilistically reason about the expected outcomes of a predictive model on a particular feature distribution. That is, to compute mathematical expectations of the predictive model w.r.t.\ a generative model representing the feature distribution. This is a common need that arises in many scenarios including dealing with missing data~\cite{little2019statistical,Khosravi2019}, performing feature selection~\cite{yu2009active,choi2012same,cvdb17}, handling sensor failure and resource scaling~\cite{GalindezNeurIPS19}, seeking explanations~\cite{ribeiro2016should,NIPS2017_7062,chang2018explaining} or determining how ``fair'' the learned predictor is~\cite{zafar2015fairness,zafar2017parity, DBLP:journals/corr/abs-1906-03843}. While dealing with the above expectations is ubiquitous in machine learning, computing the expected predictions of an \textit{arbitrary} discriminative models w.r.t. an \textit{arbitrary} generative model is in general computationally intractable~\cite{Khosravi2019,Roth1996}. As one would expect, the more expressive these models become, the harder it is to compute the expectations. More interestingly, even resorting to simpler discriminative models like logistic regression does not help reducing the complexity of such a task: computing the first moment of its predictions w.r.t. a naive Bayes model is known to be NP-hard~\cite{Khosravi2019}. In this work, we introduce a pair of expressive generative and discriminative models for regression, for which it is possible to compute not only expectations, but any moment efficiently. We leverage recent advancements in probabilistic circuit representations. Specifically, we prove that generative and discriminative circuits enable computing the moments in time polynomial in the size of the circuits, when they are subject to some structural constraints which do not hinder their expressiveness. Moreover, we demonstrate that for classification even the aforementioned structural constraints cannot guarantee computations in tractable time. However, efficiently approximating them becomes doable in polynomial time by leveraging our algorithm for the computations of arbitrary moments. Lastly, we investigate applications of computing expectations. We first consider the challenging scenario of missing values at test time. There, we empirically demonstrate that computing expectations of a discriminative circuit w.r.t.\ a generative one is not only a more robust and accurate option than many imputation baselines for regression, but also for classification. In addition, we show how we can leverage this framework for exploratory data analysis to understand behavior of predictive models within different sub-populations. \section{Expectations and higher order moments of discriminative models} We use uppercase letters for random variables, e.g.,~$X$, and lowercase letters for their assignments e.g.,~$x$. Analogously, we denote sets of variables in bold uppercase, e.g., $\ensuremath{\mathbf{X}}$ and their assignments in bold lowercase, e.g., $\ensuremath{\mathbf{x}}$. The set of all possible values that $\ensuremath{\mathbf{X}}$ can take is denoted as $\mathcal{X}$. Let $p$ be a probability distribution over $\ensuremath{\mathbf{X}}$ and $f: \mathcal{X} \to \mathbb{R}$ be a discriminative model, e.g., a regressor, that assigns a real value (outcome) to each complete input configuration $\ensuremath{\mathbf{x}}\in\mathcal{X}$ (features). The task of computing \emph{the $k$-th moment} of $f$ with respect to the distribution $p$ is defined as: \begin{equation} M_k(f,p) \triangleq \EX\nolimits_{\ensuremath{\mathbf{x}} \sim p(\ensuremath{\mathbf{x}})} \left[ (f(\ensuremath{\mathbf{x}}))^k \right]. \end{equation} Computing moments of arbitrary degree $k$ allows one to probabilistically reason about the outcomes of $f$. That is, it provides a description of the distribution of its predictions assuming $p$ as the data-generating distribution. For instance, we can compute the mean of $f$ w.r.t.\ $p$: $\EX_p [f] = M_1(f,p)$ or reason about the dispersion (variance) of its outcomes: $\ensuremath{\mathbb{VAR}}_p(f) = M_2(f,p) - (M_1(f,p))^2$. These computations can be a very useful tool to reason in a principled way about the behavior of $f$ in the presence of uncertainty, such as making predictions with missing feature values~\cite{Khosravi2019} or deciding a subset of $\ensuremath{\mathbf{X}}$ to observe~\cite{krause2009optimal,yu2009active}. For example, given a partial assignment $\ensuremath{\mathbf{x}}^o$ to a subset $\ensuremath{\mathbf{X}}^o \subseteq \ensuremath{\mathbf{X}}$, the expected prediction of $f$ over the unobserved variables can be computed as $ \EX_{\ensuremath{\mathbf{x}} \sim p(\ensuremath{\mathbf{x}} \vert \ensuremath{\mathbf{x}}^o)} \left[ f(\ensuremath{\mathbf{x}}) \right]$, which is equivalent to $M_1(f, p(.\vert \ensuremath{\mathbf{x}}^o))$. Unfortunately, computing arbitrary moments, and even just the expectation, of a discriminative model w.r.t.\ an arbitrary distribution is, in general, computationally hard. Under the restrictive assumptions that $p$ fully factorizes, i.e., $p(\ensuremath{\mathbf{X}})=\prod_{i}p(X_{i})$, and that $f$ is a simple linear model of the form $f(\ensuremath{\mathbf{x}})=\sum_{i}\phi_{i}x_{i}$, computing expectations can be done in linear time. However, the task suddenly becomes NP-hard even for slightly more expressive models, for instance when $p$ is a naive Bayes distribution and $f$ is a logistic regression (a generalized linear model with a sigmoid activation function). See~\cite{Khosravi2019} for a detailed discussion. In Section~\ref{sec:moments}, we propose a pair of a generative and discriminative models that are highly expressive and yet still allow for polytime computation of exact moments and expectations of the latter w.r.t.\ the former. We first review the necessary background material in Section~\ref{sec:background}. \begin{figure*}[t!] \begin{subfigure}[t]{0.17\textwidth} \centering \scalebox{.9}{ \begin{tikzpicture}[level distance=1.3cm, level 1/.style={sibling distance=1.1cm}, level 2/.style={sibling distance=1.1cm}] \node[circle,fill=gold1,inner sep=4pt, line width=3pt, draw=white]{} child {node {$X_1$} } child {node[circle,fill=petroil6,inner sep=4pt, line width=3pt, draw=white] {} child {node {$X_2$}} child {node {$X_3$}} }; \end{tikzpicture} } \caption{A vtree}\label{fig: vtree} \end{subfigure} \begin{subfigure}[t]{0.37\textwidth} \centering \scalebox{0.67}{ \begin{tikzpicture}[circuit logic US, nnf] \node (b1) [nnfterm] at (0.0, 0.0) {$X_2$}; \node (c1) [nnfterm] at ($(b1) + (0.8, 0.0)$) {$X_3$}; \node (not b) [nnfterm] at ($(c1) + (1.0, 0.0)$) {$\neg X_2$}; \node (not c1) [nnfterm] at ($(not b) + (1.0, 0.0)$) {$\neg X_3$}; \node (b2) [nnfterm] at ($(not c1) + (0.9, 0.0)$) {$X_2$}; \node (c2) [nnfterm] at ($(b2) + (0.8, 0.0)$) {$X_3$}; \node (not c2) [nnfterm] at ($(c2) + (0.8, 0.0)$) {$\neg X_3$}; \node (or1) [nnf2or, scale=0.75] at ($(c2) + (0.4, 1.4)$) {}; \node (and1) [nnf2and, scale=0.9,fill=petroil6, draw=none] at ($(b1) + (0.4, 1.5)$) {}; \node (and2) [nnf2and, scale=0.9,fill=petroil6, draw=none] at ($(not b) + (0.5, 1.5)$) {}; \node (and3) [nnf2and, scale=0.9,fill=petroil6, draw=none] at ($(b2) + (0.1, 2.5)$) {}; \node (or2) [nnf2or, scale=0.9] at ($(and1) + (.95, 1.5)$) {\rotatebox{270}{}}; \node (or3) [nnf3or, scale=0.9] at ($(and3) + (0.0, 1.1)$) {\rotatebox{270}{}}; \node (a) [nnfterm] at ($(or2) + (-1.2, 0.8)$) {$X_1$}; \node (not a) [nnfterm] at ($(or3) + (-1.2, 0.2)$) {$\neg X_1$}; \node (and4) [nnf2and, scale=0.9,fill=gold1, draw=none] at ($(or2) + (-0.6, 1.8)$) {}; \node (and5) [nnf2and, scale=0.9,fill=gold1, draw=none] at ($(or3) + (-0.6, 1.2)$) {}; \node (root) [nnf2or, scale=0.9] at ($(and4) + (1.25, 1.5)$) {}; \node (dummy) at ($(root) + (0.0, 0.4)$) {}; \begin{scope}[on background layer] \draw [nnfedge] (c2) -- ++ (up:0.75) -| (or1.input 1) node[pos=0.4,above left] {$.5$}; \draw [nnfedge] (not c2) -- ++ (up:0.75) -| (or1.input 2) node[pos=0.4,above right] {$.5$}; \draw [nnfedge] (b1) -- ++ (up: 0.75) -| (and1.input 1); \draw [nnfedge] (c1) -- ++ (up: 0.75) -| (and1.input 2); \draw [nnfedge] (not b) -- ++ (up: 0.75) -| (and2.input 1); \draw [nnfedge] (not c1) -- ++ (up: 0.75) -| (and2.input 2); \draw [nnfedge] (b2) -- (and3.input 1); \draw [nnfedge] (or1.output) -- ++ (up: 0.22) -| (and3.input 2); \draw [nnfedge] (and1.output) -- ++ (up: 0.45) -| (or2.input 1) node[pos=0.4,above left] {$.6$}; \draw [nnfedge] (and2.output) -- ++ (up: 0.45) -| (or2.input 2) node[pos=0.4,above right] {$.4$}; \draw [nnfedge] (and3.output) -- (or3.input 2) node[pos=0.05,above left] {$1.0$}; \draw [nnfedge] (a) -- ++ (up: 0.45) -| (and4.input 1); \draw [nnfedge] (or2.output) -- ++ (up: 0.85) -| (and4.input 2); \draw [nnfedge] (not a) -- ++ (up: 0.45) -| (and5.input 1); \draw [nnfedge] (or3.output) -- ++ (up: 0.28) -| (and5.input 2); \draw [nnfedge] (and4.output) -- ++ (up: 0.45) -| (root.input 1) node[pos=0.4,above left] {$.2$}; \draw [nnfedge] (and5.output) -- ++ (up: 0.45) -| (root.input 2) node[pos=0.4,above right] {$.8$}; \draw [nnfedge] (root.output) -- (dummy); \end{scope} \end{tikzpicture} } \caption{A Probabilistic Circuit}\label{fig:PSDD} \end{subfigure} \begin{subfigure}[t]{0.45\textwidth} \centering \scalebox{.65}{ \begin{tikzpicture}[circuit logic US, nnf] \node (b1) [nnfterm] at (0.0, 0.0) {$X_2$}; \node (c1) [nnfterm] at ($(b1) + (0.8, 0.0)$) {$X_3$}; \node (not b1) [nnfterm] at ($(c1) + (1.0, 0.0)$) {$\neg X_2$}; \node (not c1) [nnfterm] at ($(not b1) + (1.0, 0.0)$) {$\neg X_3$}; \node (or1) [nnf2or, scale=0.9] at ($(not c1) + (-0.1, 1.5)$) {}; \node (and1) [nnf2and, scale=0.9,fill=petroil6, draw=none] at ($(b1) + (0.1, 2.8)$) {}; \node (and2) [nnf2and, scale=0.9,fill=petroil6, draw=none] at ($(c1) + (0.5, 2.8)$) {}; \node (and3) [nnf2and, scale=0.9,fill=petroil6, draw=none] at ($(or1) + (-0.08, 1.3)$) {}; \node (b2) [nnfterm] at ($(not c1) + (1.0, 0.0)$) {$X_2$}; \node (not b2) [nnfterm] at ($(b2) + (0.9, 0.0)$) {$\neg X_2$}; \node (c2) [nnfterm] at ($(not b2) + (0.9, 0.0)$) {$X_3$}; \node (not c2) [nnfterm] at ($(c2) + (1.0, 0.0)$) {$\neg X_3$}; \node (or2) [nnf2or, scale=0.9] at ($(c2) + (0.1, 1.5)$) {}; \node (and4) [nnf2and, scale=0.9,fill=petroil6, draw=none] at ($(b2) + (0.1, 2.8)$) {}; \node (and5) [nnf2and, scale=0.9,fill=petroil6, draw=none] at ($(not b2) + (.1, 2.8)$) {}; \node (and6) [nnf2and, scale=0.9,fill=petroil6, draw=none] at ($(not c2) + (-0.1, 2.8)$) {}; \node (or3) [nnf3or, scale=0.9,line width=0.5mm] at ($(and2) + (0.0, 1.5)$) {}; \node (or4) [nnf3or, scale=0.9] at ($(and5) + (0.0, 1.5)$) {}; \node (a) [nnfterm] at ($(or3) + (-1.0, 0.5)$) {$X_1$}; \node (not a) [nnfterm] at ($(or4) + (-1.0, 0.5)$) {$\neg X_1$}; \node (and7) [nnf2and, scale=0.9,fill=gold1, draw=none] at ($(or3) + (-.1, 1.4)$) {}; \node (and8) [nnf2and, scale=0.9,fill=gold1, draw=none] at ($(or4) + (-.1, 1.4)$) {}; \node (root) [nnf2or, scale=0.9] at ($(and7) + (1.6, 1.2)$) {}; \node (dummy) at ($(root) + (0.0, 0.4)$) {}; \begin{scope}[on background layer] \draw [nnfedge] (c1) -- ++ (up: 0.95) -| (or1.input 1) node[pos=0.47,above left] {$-.3$}; \draw [nnfedge, draw=red, line width=0.5mm] (not c1) -- (or1.input 2) node[pos=0.4,above right] {$.5$}; \draw [nnfedge] (b1) -- (and1.input 1); \draw [nnfedge] (c1) -- ++ (up: 0.95) -| (and1.input 2); \draw [nnfedge] (b1) -- ++ (up: 1.4) -| (and2.input 1); \draw [nnfedge] (not c1) -- ++ (up: 0.45) -| (and2.input 2); \draw [nnfedge, draw=red, line width=0.5mm] (not b1) -- ++ (up: 1.95) -| (and3.input 1); \draw [nnfedge, draw=red, line width=0.5mm] (or1.output) -- (and3.input 2); \draw [nnfedge] (c2) -- (or2.input 1) node[pos=0.4,above left] {$1.7$}; \draw [nnfedge] (not c2) -- ++ (up: .75) -| (or2.input 2) node[pos=0.4,above right] {$2.8$}; \draw [nnfedge] (b2) -- (and4.input 1); \draw [nnfedge] (or2.output) -- ++ (up: 0.15) -| (and4.input 2); \draw [nnfedge] (not b2) -- (and5.input 1); \draw [nnfedge] (c2) -- ++ (up: .5) -| (and5.input 2); \draw [nnfedge] (not b2) -- ++ (up: 2.2) -| (and6.input 1); \draw [nnfedge] (not c2) -- (and6.input 2); \draw [nnfedge] (and1.output) -- ++ (up: 0.45) -| (or3.input 1) node[pos=0.4, above left] {$-5.3$}; \draw [nnfedge] (and2.output) -- (or3.input 2) node[pos=0.5, below left] {$2$}; \draw [nnfedge, draw=red, line width=0.5mm] (and3.output) -- ++ (up: 0.45) -| (or3.input 3) node[pos=0.4, above right] {$6.1$}; \draw [nnfedge] (and4.output) -- ++ (up: 0.45) -| (or4.input 1) node[pos=0.4, above left] {$3$}; \draw [nnfedge] (and5.output) -- (or4.input 2) node[pos=0.5, below left] {$-1.1$}; \draw [nnfedge] (and6.output) -- ++ (up: 0.45) -| (or4.input 3) node[pos=0.4, above right] {$-4.3$}; \draw [nnfedge, draw=red, line width=0.5mm] (a) -- ++ (up: 0.45) -| (and7.input 1); \draw [nnfedge, draw=red, line width=0.5mm] (or3.output) -- (and7.input 2); \draw [nnfedge] (not a) -- ++ (up: 0.45) -| (and8.input 1); \draw [nnfedge] (or4.output) -- (and8.input 2); \draw [nnfedge, draw=red, line width=0.5mm] (and7.output) -- ++ (up: 0.3) -| (root.input 1) node[pos=0.4, above left] {$-1.6$}; \draw [nnfedge] (and8.output) -- ++ (up: 0.3) -| (root.input 2) node[pos=0.4, above right] {$2.1$}; \draw [nnfedge] (root.output) -- (dummy); \end{scope} \end{tikzpicture} } \caption{A Logistic/Regression Circuit} \label{fig:LC} \end{subfigure} \caption{A vtree (a) over $\ensuremath{\mathbf{X}} =\{X_{1},X_{2},X_{3}\}$ and a generative and discriminative circuit pair (b, c) that conform to it. AND gates are colored as the vtree nodes they correspond to (blue and orange). For the discriminative circuit on the right, ``hot wires'' that form a path from input to output are colored red, for the given input configuration $\ensuremath{\mathbf{x}}=(X_{1}=1, X_{2}=0, X_{3}=0)$. } \label{fig:vtree-and-circuits} \end{figure*} \section{Generative and discriminative circuits} \label{sec:background} This section introduces the pair of circuit representations we choose as expressive generative and discriminative models. In both cases, we assume the input is discrete. We later establish under which conditions computing expected predictions becomes tractable. \paragraph{Logical circuits} A \emph{logical circuit}~\cite{darwiche2002knowledge,Darwiche2003} is a directed acyclic graph representing a logical formula where each node $n$ encodes a logical sub-formula, denoted as $[n]$. Each inner node in the graph is either an AND or an OR gate, and each leaf (input) node encodes a Boolean literal (e.g., $X$ or $\neg X$). We denote the set of child nodes of a gate $n$ as $\ensuremath{\mathsf{ch}}(n)$. An assignment $\ensuremath{\mathbf{x}}$ satisfies node $n$ if it satisfies the logical formula encoded by $n$, written $\ensuremath{\mathbf{x}}\models [n]$. Fig.~\ref{fig:vtree-and-circuits} depicts some examples of logical circuits. Several syntactic properties of circuits enable efficient logical and probabilistic reasoning over them~\cite{darwiche2002knowledge,NIPS2016_6363}. We now review those properties as they will be pivotal for our efficient computations of expectations and high-order moments in Section~\ref{sec:moments}. \paragraph{Syntactic Properties} A circuit is said to be \emph{decomposable} if for every AND gate its inputs depend on disjoint sets of variables. For notational simplicity, we will assume decomposable AND gates to have two inputs, denoted $\ensuremath{\mathsf{L}}$(eft) and $\ensuremath{\mathsf{R}}$(ight) children, depending on variables $\ensuremath{\mathbf{X}}^{\ensuremath{\mathsf{L}}}$ and $\ensuremath{\mathbf{X}}^{\ensuremath{\mathsf{R}}}$ respectively. In addition, a circuit satisfies \emph{structured decomposability} if each of its AND gates decomposes according to a \emph{vtree}, a binary tree structure whose leaves are the circuit variables. That is, the $\ensuremath{\mathsf{L}}$ (resp. $\ensuremath{\mathsf{R}}$) child of an AND gate depends on variables that appear on the left (resp. right) branch of its corresponding vtree node. Fig.~\ref{fig:vtree-and-circuits} shows a vtree and visually maps its nodes to the AND gates of two example circuits. A circuit is \emph{smooth} if for an OR gate all its children depend on the same set of variables \cite{ShihNeurIPS19}. Lastly, a circuit is \emph{deterministic} if, for any input, at most one child of every OR node has a non-zero output. For example, Fig.~\ref{fig:LC} highlights in red the wires that are true, and that form a path from the root to the leaves, given input $\ensuremath{\mathbf{x}}\!=\!(X_{1}\!=\!1, X_{2}\!=\!0, X_{3}\!=\!0)$. Note that every OR gate in Fig.~\ref{fig:LC} has at most one hot input wire, because of the determinism property. \paragraph{Generative probabilistic circuits} A \emph{probabilistic circuit} (PC) is characterized by its logical circuit structure and parameters $\theta$ that are assigned to the inputs of each OR gate. Intuitively, each PC node $n$ recursively defines a distribution $p_n$ over a subset of the variables $\ensuremath{\mathbf{X}}$ appearing in the sub-circuit rooted at it. More precisely: \begin{equation} p_n(\ensuremath{\mathbf{x}}) = \begin{cases} \mathds{1}_{n}(\ensuremath{\mathbf{x}}) &\text{if $n$ is a leaf,}\\ \ensuremath{p_{\Li}}(\ensuremath{\x^{\Li}}) \cdot \ensuremath{p_{\R}}(\ensuremath{\x^{\R}})&\text{if $n$ is an AND gate,}\\ \sum_{i \in\ensuremath{\mathsf{ch}}(n)} \theta_i p_i(\ensuremath{\mathbf{x}}) &\text{if $n$ is an OR gate.} \end{cases} \label{eq:prob-circuit-semantics} \end{equation} Here, $\mathds{1}_{n}(\ensuremath{\mathbf{x}})\triangleq\mathds{1}\{\ensuremath{\mathbf{x}}\models [n]\}$ indicates whether the leaf $n$ is satisfied by input $\ensuremath{\mathbf{x}}$. Moreover, $\ensuremath{\x^{\Li}}$ and $\ensuremath{\x^{\R}}$ indicate the subsets of configuration $\ensuremath{\mathbf{x}}$ restricted to the decomposition defined by an AND gate over its $\ensuremath{\mathsf{L}}$ (resp. $\ensuremath{\mathsf{R}}$) child. As such, an AND gate of a PC represents a factorization over independent sets of variables, whereas an OR gate defines a mixture model. Unless otherwise noted, in this paper we adopt PCs that satisfy \textit{structured decomposability} and \textit{smoothness} as our generative circuit. PCs allow for the exact computation of the probability of complete and partial configurations (that is, marginalization) in time linear in the size of the circuit. A well-known example of PCs is the probabilistic sentential decision diagram (PSDD)~\cite{Kisa2014}.\footnote{PSDDs by definition also satisfy determinism, but we do not require this property for computing moments.} They have been successfully employed as state-of-the-art density estimators not only for unstructured~\cite{Liang2017} but also for structured feature spaces~\cite{Choi2015,Shen2017,Shen2018}. Other types of PCs include sum-product networks (SPNs) and cutset networks, yet those representations are typically decomposable but not structured decomposable~\cite{Poon2011,Rahman2014}. \paragraph{Discriminative circuits} For the discriminative model $f$, we adopt and extend the semantics of logistic circuits (LCs): discriminative circuits recently introduced for classification \cite{LiangAAAI19}. An LC is defined by a decomposable, \textit{smooth} and \textit{deterministic} logical circuit with parameters $\phi$ on inputs to OR gates. Moreover, we will work with LCs that are \textit{structured decomposable}, which is a restriction already supported by their learning algorithms~\cite{LiangAAAI19}. An LC acts as a classifier on top of a rich set of non-linear features, extracted by its logical circuit structure. Specifically, an LC assigns an \emph{embedding} representation $h(\ensuremath{\mathbf{x}})$ to each input example $\ensuremath{\mathbf{x}}$. Each feature $h(\ensuremath{\mathbf{x}})_{k}$ in the embedding is associated with one input $k$ of one of the OR gates in the circuit (and thus also with one parameter $\phi_k$). It corresponds to a logical formula that can be readily extracted from the logical circuit structure. Classification is performed on this new feature representation by applying a sigmoid non-linearity: $f^\ensuremath{\mathsf{LC}}(\ensuremath{\mathbf{x}}) \triangleq {1}/{(1+e^{-\sum_k \phi_k h(\ensuremath{\mathbf{x}})_k})}$, and similar to logistic regression it is amenable to convex parameter optimization. Alternatively, one can fully characterize an LC by recursively defining the output of each node $m$. We use $g_m(\ensuremath{\mathbf{x}})$ to define output of node $m$ given $\ensuremath{\mathbf{x}}$. It can be computed as: \begin{equation} g_{m}(\ensuremath{\mathbf{x}}) = \begin{cases} 0 &\text{if $m$ is a leaf,}\\ \ensuremath{g_{\Li}}(\ensuremath{\x^{\Li}}) + \ensuremath{g_{\R}}(\ensuremath{\x^{\R}}) &\text{if $m$ is an AND gate,}\\ \sum\nolimits_{j\in \mathsf{ch}(m)} \mathds{1}_{j}(\ensuremath{\mathbf{x}})(\phi_{j} + g_{j}(\ensuremath{\mathbf{x}})) &\text{if $m$ is an OR gate.} \end{cases} \label{eq:glc-semantics} \end{equation} Again, $\mathds{1}_{j}(\ensuremath{\mathbf{x}})$ is an indicator for $\ensuremath{\mathbf{x}} \models [j]$, effectively using the determinism property of LCs to select which input to pass through. Then classification is done by applying a sigmoid function to the output of the circuit root $r$: $f^\ensuremath{\mathsf{LC}}(\ensuremath{\mathbf{x}}) = 1 / (1+e^{-g_r(\ensuremath{\mathbf{x}})})$. The increased expressive power of LCs w.r.t.\ simple linear regressors lies in the rich representations $h(\ensuremath{\mathbf{x}})$ they learn, which in turn rely on the underlying circuit structure as a powerful feature extractor~\cite{Vergari2017,Vergari2016a}. LCs have been introduced for classification and were shown to outperform larger neural networks~\cite{LiangAAAI19}. We also leverage them for regression, that is, we are interested in computing the expectations of the output of the root node $g_{r}(\ensuremath{\mathbf{x}})$ w.r.t.\ a generative model $p$. We call an LC when no sigmoid function is applied to $g_{r}(\ensuremath{\mathbf{x}})$ a $\emph{regression circuit}$ (RC). As we will show in the next section, we are able to exactly compute \textit{any moment} of an RC $g$ w.r.t.\ an LC $p$, that is, $ M_k(g, p) $, in time polynomial in the size of the circuits, if $p$ and $g$ share the same vtree. \section{Computing expectations and moments for circuit pairs} \label{sec:moments} We now introduce our main result, which leads to efficient algorithms for tractable Expectation and Moment Computation of Circuit pairs ({\ensuremath{\textsc{EC$_2$}}} and {\ensuremath{\textsc{MC$_2$}}}) in which the discriminative model is an RC and the generative model is a PC, and where both circuits are structured decomposable~\textit{sharing the same vtree}. Recall that we also assumed all circuits to be smooth, and the RC to be deterministic. \begin{theorem} Let $n$ and $m$ be root nodes of a PC and an RC with the same vtree over $\ensuremath{\mathbf{X}}$. Let $s_n$ and $s_m$ be their respective number of edges. Then, the $k^{\text{th}}$ moment of $g_m$ w.r.t.\ the distribution encoded by $p_n$, that is, $M_k(g_m, p_n)$, can be computed exactly in time $O(k^2 s_n s_m)$.\footnote{ This is a loose upper bound since the algorithm only looks at a small subset of pairs of edges in the circuits. A tighter bound would be $O(k^2 \sum_{v} s_{v} \, t_{v})$ where $v$ ranges over vtree nodes and $s_{v}$ (resp.\ $t_{v}$) counts the number of edges going into the nodes of the PC (resp.\ RC) that can be attributed to the vtree node $v$. } \label{th:emc2} \end{theorem} Moreover, this complexity is attained by the \ensuremath{\textsc{MC$_2$}}\ algorithm, which we describe in the next section. We then investigate how this result can be generalized to arbitrary circuit pairs and how restrictive the structural requirements are. In fact, we demonstrate how computing expectations and moments for circuit pairs \textit{not} sharing a vtree is \#P-hard. Furthermore, we address the hardness of computing expectations for an LC w.r.t.\ a PC--due to the introduction of the sigmoid function over $g$--by approximating it through the tractable computation of moments with the \ensuremath{\textsc{MC$_2$}}\ algorithm. \subsection{\ensuremath{\textsc{EC$_2$}}: Expectations of regression circuits} Intuitively, the computation of expectations becomes tractable because we can ``break it down'' to the leaves of the PC and RC, where it reduces to trivial computations. Indeed, the two circuits sharing the same vtree is the property that enables a polynomial time recursive decomposition, because it ensures that pairs of nodes considered by the algorithm depend on exactly the same set of variables. We will now show how this computation recursively decomposes over pairs of OR and AND gates, starting from the roots of the PC $p$ and RC $g$. We refer the reader to the Appendix for detailed proofs of all Propositions and Theorems in this section. Without loss of generality, we assume that the roots of both $p$ and $g$ are OR gates, and that circuit nodes alternate between AND and OR gates layerwise. \begin{proposition} \label{prop:or} Let $n$ and $m$ be OR gates of a PC and an RC, respectively. Then the expectation of the regressor $g_m$ w.r.t.\ distribution $p_n$ is: \begin{align*} M_1(g_m, p_n) = \sum\nolimits_{i\in\ensuremath{\mathsf{ch}}(n)} \theta_i \sum\nolimits_{j\in\ensuremath{\mathsf{ch}}(m)} \left( M_1(\mathds{1}_j \cdot g_j, p_i) + \phi_j M_1(\mathds{1}_j, p_i) \right). \end{align*} \end{proposition} The above proposition illustrates how the expectation of an OR gate of an RC w.r.t.\ an OR gate in the PC is a weighted sum of the expectations of the child nodes. The number of smaller expectations to be computed is quadratic in the number of children. More specifically, one now has to compute expectations of two different functions w.r.t.\ the children of PC $n$. First, $M_1(\mathds{1}_j, p_i)$ is the expectation of the indicator function associated to the $j$-th child of $m$ (see Eq.~\ref{eq:glc-semantics}) w.r.t.\ the $i$-th child node of $n$. Intuitively, this translates to the probability of the logical formula $[j]$ being satisfied according to the distribution encoded by $p_{i}$. Fortunately, this can be computed efficiently, in quadratic time, linear in the size of both circuits as already demonstrated in~\cite{Choi2015}. On the other hand, computing the other expectation term $M_1(\mathds{1}_j g_j, p_i)$ requires a novel algorithm tailored to RCs and PCs. We next show how to further decompose this expectation from AND gates to their OR children. \begin{proposition} \label{prop:and} Let $n$ and $m$ be AND gates of a PC and an RC, respectively. Let $n_\ensuremath{\mathsf{L}}$ and $n_\ensuremath{\mathsf{R}}$ (resp.\ $m_\ensuremath{\mathsf{L}}$ and $m_\ensuremath{\mathsf{R}}$) be the left and right children of $n$ (resp.\ $m$). Then the expectation of function $(\mathds{1}_m \cdot g_m)$ w.r.t.\ distribution $p_n$ is: \begin{align*} M_1(\mathds{1}_m \cdot g_m, p_n) = M_1(\mathds{1}_{m_\ensuremath{\mathsf{L}}}, p_{n_\ensuremath{\mathsf{L}}}) M_1(g_{m_\ensuremath{\mathsf{R}}}, p_{n_\ensuremath{\mathsf{R}}}) + M_1(\mathds{1}_{m_\ensuremath{\mathsf{R}}}, p_{n_\ensuremath{\mathsf{R}}}) M_1(g_{m_\ensuremath{\mathsf{L}}}, p_{n_\ensuremath{\mathsf{L}}}). \end{align*} \end{proposition} We are again left with the task of computing expectations of the RC node indicator functions, i.e., $M_1(\mathds{1}_{m_\ensuremath{\mathsf{L}}}, p_{n_\ensuremath{\mathsf{L}}})$ and $M_1(\mathds{1}_{m_\ensuremath{\mathsf{R}}}, p_{n_\ensuremath{\mathsf{R}}})$, which can also be done by exploiting the algorithm in~\cite{Choi2015}. Furthermore, note that the other expectation terms ($M_1(g_{m_\ensuremath{\mathsf{L}}}, p_{n_\ensuremath{\mathsf{L}}})$ and $M_1(g_{m_\ensuremath{\mathsf{R}}}, p_{n_\ensuremath{\mathsf{R}}})$) can readily be computed using Proposition~\ref{prop:or}, since they concern pairs of OR nodes. \begin{algorithm}[tb] \begin{algorithmic} \caption{\textsc{\ensuremath{\textsc{EC$_2$}}}{($n$, $m$)} \Comment{Cache recursive calls to achieve polynomial complexity} } \label{alg:exp} \Require{A PC node $n$ and an RC node $m$} \If {$m$ is $\mathsf{Leaf}$} \Return{0} \ElsIf {$n$ is $\mathsf{Leaf}$} \If {$[n] \models [m_\ensuremath{\mathsf{L}}]$} \Return{$\phi_{m_\ensuremath{\mathsf{L}}}$} \EndIf \If {$[n] \models [m_\ensuremath{\mathsf{R}}]$} \Return{$\phi_{m_\ensuremath{\mathsf{R}}}$} \EndIf \ElsIf {$n$,$m$ are $\mathsf{OR}$} \Return $\sum_{i\in\ensuremath{\mathsf{ch}}(n)} \theta_i \sum_{j\in\ensuremath{\mathsf{ch}}(m)} \left( \text{\ensuremath{\textsc{EC$_2$}}}(i,j) + \phi_j \textsc{Pr}(i,j) \right)$ \ElsIf {$n$,$m$ are $\mathsf{AND}$} \Return $\textsc{Pr}(n_\ensuremath{\mathsf{L}},m_\ensuremath{\mathsf{L}})\ \text{\ensuremath{\textsc{EC$_2$}}}(n_\ensuremath{\mathsf{R}},m_\ensuremath{\mathsf{R}}) + \textsc{Pr}(n_\ensuremath{\mathsf{R}},m_\ensuremath{\mathsf{R}})\ \text{\ensuremath{\textsc{EC$_2$}}}(n_\ensuremath{\mathsf{L}},m_\ensuremath{\mathsf{L}})$ \EndIf \end{algorithmic} \end{algorithm} We briefly highlight how determinism in the regression circuit plays a crucial role in enabling this computation. In fact, OR gates being deterministic ensures that the otherwise non-decomposable product of indicator functions $\mathds{1}_{m} \cdot \mathds{1}_{k}$, where $m$ is a parent OR gate of an AND gate $k$, results to be equal to $\mathds{1}_{k}$. We refer the readers to Appendix~\ref{sec:appx-regression-pf} for a detailed discussion. Recursively, one is guaranteed to reach pairs of leaf nodes in the RC and PC, for which the respective expectations can be computed in $\mathcal{O}(1)$ by checking if their associated Boolean indicators agree, and by noting that $g_{m}(\ensuremath{\mathbf{x}})=0$ if $m$ is a leaf (see Eq.~\ref{eq:glc-semantics}). Putting it all together, we obtain the recursive procedure shown in Algorithm~\ref{alg:exp}. Here, $\textsc{Pr}(n,m)$ refer to the algorithm to compute $M_1(\mathds{1}_m,p_n)$ in~\cite{Choi2015}. As the algorithm computes expectations in a bottom-up fashion, the intermediate computations can be cached to avoid evaluating the same pair of nodes more than once, and therefore keeping the complexity as stated by our Theorem \ref{th:emc2}. \subsection{\ensuremath{\textsc{MC$_2$}}: Moments of regression circuits} Our algorithmic solution goes beyond the tractable computation of the sole expectation of an RC. Indeed, any arbitrary order moment of $g_{m}$ can be computed w.r.t.\ $p_{n}$, still in polynomial time. We call this algorithm \ensuremath{\textsc{MC$_2$}}\ and we delineate its main routines with the following Propositions:\footnote{ The algorithm {\ensuremath{\textsc{MC$_2$}}} can easily be derived from \ensuremath{\textsc{EC$_2$}}\ in Algorithm~\ref{alg:exp}, using the equations in this section. } \begin{proposition} \label{prop:or-mom} Let $n$ and $m$ be OR gates of a PC and an RC, respectively. Then the $k$-th moment of the regressor $g_m$ w.r.t.\ distribution $p_n$ is: \begin{align*} M_k(g_m, p_n) = \sum\nolimits_{i\in\ensuremath{\mathsf{ch}}(n)} \theta_i \sum\nolimits_{j\in\ensuremath{\mathsf{ch}}(m)} \sum\nolimits_{l=0}^k \binom{k}{l} \phi_j^{k-l} M_l(\mathds{1}_j \cdot g_j, p_i). \end{align*} \end{proposition} \begin{proposition}\label{prop:and-mom} Let $n$ and $m$ be AND gates of a PC and an RC, respectively. Let $n_\ensuremath{\mathsf{L}}$ and $n_\ensuremath{\mathsf{R}}$ (resp.\ $m_\ensuremath{\mathsf{L}}$ and $m_\ensuremath{\mathsf{R}}$) be the left and right children of $n$ (resp.\ $m$). Then the $k$-th moment of function $(\mathds{1}_m g_m)$ w.r.t.\ distribution $p_n$ is: \begin{align*} M_k(\mathds{1}_m\cdot g_m, p_n) = \sum\nolimits_{l=0}^k \binom{k}{l} M_l(\mathds{1}_{m_\ensuremath{\mathsf{L}}} \cdot g_{m_\ensuremath{\mathsf{L}}}, p_{n_\ensuremath{\mathsf{L}}}) M_{k-l}(\mathds{1}_{m_\ensuremath{\mathsf{R}}} \cdot g_{m_\ensuremath{\mathsf{R}}}, p_{n_\ensuremath{\mathsf{R}}}) \end{align*} \end{proposition} Analogous to computing simple expectations, by recursively and alternatively applying Propositions~\ref{prop:or-mom} and~\ref{prop:and-mom}, we arrive at the moments of the leaves at both circuits, while gradually reducing the order $k$ of the involved moments. Furthermore, the lower-order moments in Proposition~\ref{prop:and-mom} that decompose to $\ensuremath{\mathsf{L}}$ and $\ensuremath{\mathsf{R}}$ children, e.g., $M_l(\mathds{1}_{m_\ensuremath{\mathsf{L}}} \cdot g_{m_\ensuremath{\mathsf{L}}}, p_{n_\ensuremath{\mathsf{L}}})$, can be computed by noting that they reduce to: \begin{align} M_k(\mathds{1}_m \cdot g_m, p_n) = \begin{cases} M_1(\mathds{1}_m, p_n) & \text{if $k=0$,} \\ M_k(g_m, p_n) & \text{otherwise.} \end{cases} \label{eq:simplify-or} \end{align} Note again that these computations are made possible by the interplay of determinism of $g$ and shared vtrees between $p$ and $g$. From the former it follows that a sum over OR gate children reduces to a single child value. The latter ensures that the AND gates in $p$ and $g$ decompose in the same way, thereby enabling efficient computations. Given this, a natural question arises: ``\textit{If we do not require a PC $p$ and a RC $g$ to have the same vtree structure, is computing $M_{k}(g, p)$ still tractable?}''. Unfortunately, this is not the case, as we demonstrate in the following theorem. \begin{theorem}\label{thm:exp-regression} Computing any moment of an RC $g_m$ w.r.t.\ a PC distribution $p_n$ where both have arbitrary vtrees is \#P-hard. \end{theorem} At a high level, we can reduce \#SAT, a well known \#P-complete problem on CNF sentences, to the moment computation problem. Given a choice of different vtrees, we can construct an RC and a PC in time polynomial in the size of the CNF formula such that its \#SAT value can be computed using the expectation of the RC w.r.t.\ the PC. We refer to Appendix~\ref{sec:appx-regression-pf} for more details. So far, we have focused our analysis to RCs, the analogous of LCs for regression. One would hope that the efficient computations of \ensuremath{\textsc{EC$_2$}}\ could be carried on to LCs to compute the expected predictions of classifiers. However, the application of the sigmoid function $\sigma$ on the regressor $g$, \textit{even when $g$ shares the same vtree as $p$}, makes the problem intractable, as our next Theorem shows. \begin{theorem}\label{thm:exp-logistic} Taking the expectation of an LC ($\sigma \circ g_m$) w.r.t.\ a PC distribution $p_n$ is NP-hard even if $n$ and $m$ share the same vtree. \end{theorem} This follows from a recent result that taking the expectation of a logistic regression w.r.t.\ a naive Bayes distribution is NP-hard~\cite{Khosravi2019}; see Appendix~\ref{sec:appx-logistic-pf} for a detailed proof. \subsection{Approximating expectations of classifiers} \label{sec:taylor} Theorem~\ref{thm:exp-logistic} leaves us with no hope of computing exact expected predictions in a tractable way even for pairs of generative PCs and discriminative LCs conforming to the same vtree. Nevertheless, we can leverage the ability to efficiently compute the moments of the RC $g_m$ to efficiently approximate the expectation of $\gamma \circ g_m$, with $\gamma$ being any differentiable non-linear function, including sigmoid $\sigma$. Using a Taylor series approximation around point $\alpha$ we define the following $d$-order approximation: \begin{align*} T_{d}(\gamma \circ g_m, p_n) \triangleq \sum\nolimits_{k=0}^{d} \frac{\gamma^{(k)}(\alpha)}{k!} M_k(g_m-\alpha, p_n) \end{align*} See Appendix~\ref{sec:appx-aprox-classification}, for a detailed derivation and more intuition behind this approximation. \begin{figure*}[!t] \begin{center} \begin{subfigure}[c]{.24\textwidth} \includegraphics[width=1.0\textwidth]{figs/abalone_trysample_sqrtmse_plot_merged-crop.pdf} \label{fig:cccccc} \end{subfigure} \begin{subfigure}[c]{.24\textwidth} \includegraphics[width=1.0\textwidth]{figs/delta-ailerons_trysample_sqrtmse_plot_merged-crop.pdf} \label{fig:bbbb} \end{subfigure} \begin{subfigure}[c]{.24\textwidth} \includegraphics[width=1.0\textwidth]{figs/insurance_trysample_sqrtmse_plot_merged-crop.pdf} \label{fig:aaaa} \end{subfigure} \begin{subfigure}[c]{.24\textwidth} \includegraphics[width=1.0\textwidth]{figs/elevators_trysample_sqrtmse_plot_merged-crop.pdf} \label{fig:ddd} \end{subfigure} \end{center} \caption{Evaluating \ensuremath{\textsc{EC$_2$}}\ for predictions under different percentages of missing features (x-axis) over four real-world \textit{regression} datasets in terms of the RMSE (y-axis) of the predictions of $g((\ensuremath{\mathbf{x}}^{m}, \ensuremath{\mathbf{x}}^{o}))$. Overall, exactly computing the expected predictions via \ensuremath{\textsc{EC$_2$}}\ outperforms simple imputation schemes like median and mean as well as more sophisticated ones like MICE~\cite{azur2011multiple} or computing the MPE configuration with the PC $p$. Detailed dataset statistics can be found in Appendix~\ref{sec:appx-datasets}.} \label{fig:regression-plots} \end{figure*} \section{Expected prediction in action} \label{sec:applications} In this section, we empirically evaluate the usefulness and effectiveness of computing the expected predictions of our discriminative circuits with respect to generative ones.% \footnote{Our implementation of the algorithm and experiments are available at \href{https://github.com/UCLA-StarAI/mc2}{https://github.com/UCLA-StarAI/mc2}.} % First, we tackle the challenging task of making predictions in the presence of missing values at test time, for both regression and classification.\footnote{In case of classification, we use the Taylor expansion approximation we discussed in Section~\ref{sec:taylor}.} Second, we show how our framework can be used to reasoning about the behavior of predictive models. We employ it in the context of exploratory data analysis, to check for biases in the predictive models, or to search for interesting patterns in the predictions associated with sub-populations in the data distribution. \subsection{Reasoning with missing values: an application} Traditionally, prediction with missing values has been addressed by imputation, which substitutes missing values with presumably reasonable alternatives such as the mean or median, estimated from training data~\cite{schafer1999multiple}. These imputation methods are typically heuristic and model-agnostic~\cite{little2019statistical}. To overcome this, the notion of expected predictions has recently been proposed in~\cite{Khosravi2019} as a probabilistically principled and model-aware way to deal with missing values. Formally, we want to compute \begin{equation} \EX\nolimits_{\ensuremath{\mathbf{x}}^{m}\sim p(\ensuremath{\mathbf{x}}^{m}|\ensuremath{\mathbf{x}}^{o})}\left[f(\ensuremath{\mathbf{x}}^m \ensuremath{\mathbf{x}}^o)\right] \label{eq:exp-miss} \end{equation} where $\ensuremath{\mathbf{x}}^{m}$ (resp. $\ensuremath{\mathbf{x}}^{o}$) denotes the configuration of a sample $\ensuremath{\mathbf{x}}$ that is missing (resp.\ observed) at test time. In the case of regression, we can exactly compute Eq.~\ref{eq:exp-miss} for a pair of generative and discriminative circuits sharing the same vtree by our proposed algorithm, after observing that \begin{align} \EX\nolimits_{\ensuremath{\mathbf{x}}^{m}\sim p(\ensuremath{\mathbf{x}}^{m}|\ensuremath{\mathbf{x}}^{o})}\left[f(\ensuremath{\mathbf{x}}^m \ensuremath{\mathbf{x}}^o)\right] = \frac{1}{p(\ensuremath{\mathbf{x}}^{o})}\EX\nolimits_{\ensuremath{\mathbf{x}}^{m}\sim p(\ensuremath{\mathbf{x}}^{m}, \ensuremath{\mathbf{x}}^{o})}\left[f(\ensuremath{\mathbf{x}}^m \ensuremath{\mathbf{x}}^o)\right] \label{eq:exp-miss-ii} \end{align} where $ p(\ensuremath{\mathbf{x}}^{m}, \ensuremath{\mathbf{x}}^{o})$ is the unnormalized distribution encoded by the generative circuit \textit{configured} for evidence $\ensuremath{\mathbf{x}}^{o}$. That is, the sub-circuits depending on the variables in $\ensuremath{\mathbf{X}}^{o}$ have been fixed according to the input $\ensuremath{\mathbf{x}}^{o}$. This transformation, and computing any marginal $p(\ensuremath{\mathbf{x}}^{o})$, can be done efficiently in time linear in the size of the PC~\cite{Darwiche2009}. To demonstrate the generality of our method, we construct a 6-dataset testing suite, four of which are common regression benchmarks from several domains~\cite{khiari2018metabags}, and the rest are classification on MNIST and FASHION datasets \cite{yann2009mnist,fashion2017}. We compare our method with classical imputation techniques such as standard mean and median imputation, and more sophisticated (and computationally intensive) imputation techniques such as multiple imputations by chained equations (MICE) \cite{azur2011multiple}. Moreover, we adopt a natural and strong baseline: imputing the missing values by the most probable explanation (MPE) \cite{Darwiche2009}, computed by probabilistic reasoning on the generative circuit $p$. Note that the MPE inference acts as an imputation: it returns the mode of the input feature distribution, while $\ensuremath{\textsc{EC$_2$}}$ would convey a more global statistic of the distribution of the outputs of such a predictive model. To enforce that the discriminative-generative pair of circuits share the same vtree, we first generate a fixed random and balanced vtree and use it to guide the respective parameter and structure learning algorithms of our circuits. In our experiments we adopt PSDDs~\cite{Kisa2014} for the generative circuits. PSDDs are a subset of PCs, since they also satisfy determinism. Although we do not require determinism of generative circuits for moment computation, we use PSDDs due to the availability of their learning algorithms. On image data, however, we exploit the already learned and publicly available LC structure in~\cite{LiangAAAI19}, which scores 99.4\% accuracy on MNIST, being competitive to much larger deep models. We learn a PSDD with the same vtree. For RCs, we adapt the parameter and structure learning of LCs~\cite{LiangAAAI19}, substituting the logistic regression objective with a ridge regression during optimization. For structure learning of both LCs and RCs, we considered up to 100 iterates while monitoring the loss on a held out set. For PSDDs we employ the parameter and structure learning of~\cite{Liang2017} with default parameters and run it up to 1000 iterates until no significant improvement is seen on a held out~set. Figure~\ref{fig:regression-plots} shows our method outperforming other regression baselines. This can be explained by the fact that it computes the exact expectation while other techniques make restrictive assumptions to approximate the expectation. Mean and median imputations effectively assume that the features are independent; MICE\footnote{On the elevator dataset, we reported MICE result only until 30\% missing as the imputation method is computationally heavy and required more than 10hr to complete.} assumes a fixed dependence formula between the features; and, as already stated, MPE only considers the highest probability term in the expansion of the expectation. Additionally, as we see in Figure~\ref{fig:classification-plots}, our approximation method for predicted classification, using just the first-order expansion $T_{1}(\gamma \circ g_m, p_n)$, is able to outperform the predictions of the other competitors. This suggests that our method is effective in approximating the true expected values. These experiments agree with the observations from \cite{Khosravi2019} that, given missing data, probabilistically reasoning about the outcome of a classifier by taking expectations can generally outperform imputation techniques. Our advantage clearly comes from the PSDD learning a better density estimation of the data distribution, instead of having fixed prior assumptions about the features. An additional demonstration of this fact comes from the excellent performance of MPE on both datasets. Again, this can be credited to the PSDD learning a good distribution on the features. \begin{figure*}[!t] \centering \begin{minipage}[c]{0.55\textwidth} \begin{subfigure}[c]{.48\textwidth} \centering \includegraphics[width=1\textwidth]{figs/mnist_trial-III_all_plot_merged-crop.pdf} \end{subfigure}\hspace{5pt} \begin{subfigure}[c]{.47\textwidth} \centering \includegraphics[width=1\textwidth]{figs/fmnist_trial-III_all_plot_merged-crop.pdf} \end{subfigure} \end{minipage}\hfill \begin{minipage}[c]{0.4\textwidth} \caption{ Evaluating the first-order Taylor approximation $T_{1}(\sigma \circ g_m, p_n)$ of the expected predictions of a \textit{classifier} for missing value imputation for different percentages of missing features (x-axis) in terms of the accuracy (y-axis). } \label{fig:classification-plots} \end{minipage} \end{figure*} \subsection{Reasoning about predictive models for exploratory data analysis} \label{sec:explore_queries} We now showcase an example of how our framework can be utilized for exploratory data analysis while reasoning about the behavior of a given predictive model. Suppose an insurance company has hired us to analyze both their data and the predictions of their regression model. To simulate this scenario, we use the RC and PC circuits that were learned on the real-world Insurance dataset in the previous section (see Figure~\ref{fig:regression-plots}). This dataset lists the yearly health insurance cost of individuals living in the US with features such as age, smoking habits, and location. Our task is to examine the behavior of the predictions, such as whether they are biased by some sensitive attributes or whether there exist interesting patterns across sub-populations of the data. We might start by asking: \textit{``how different are the insurance costs between smokers and non smokers?''} which can be easily computed as \begin{equation} M_1(f,\ p(.\ \vert\ \textit{Smoker})) - M_1(f,\ p(.\ \vert\ \textit{Non Smoker})) = 31,355 - 8,741 = 22,614 \end{equation} by applying the same conditioning as in Equations~\ref{eq:exp-miss} and~\ref{eq:exp-miss-ii}. We can also ask: \textit{``is the predictive model biased by gender?''} To answer this question, it would be interesting to compute: \begin{equation} M_1(f,\ p(.\ \vert\ \textit{Female})) - M_1(f,\ p(.\ \vert\ \textit{Male})) = 14,170 - 13,196 = 974 \end{equation} As expected, being a smoker affects the health insurance costs much more than being male or female. If it were the opposite, we would conclude that the model may be unfair or misbehaving. In addition to examining the effect of a single feature, we may study the model in a smaller sub-population, by conditioning the distribution on multiple features. For instance, suppose the insurance company is interested in expanding and as part of their marketing plan wants to know the effect of an individual's region, e.g., southeast ($\mathsf{SE}$) and southwest ($\mathsf{SW}$), for the sub-population of female ($\mathsf{F}$) smokers ($\mathsf{S}$) with one child ($\mathsf{C}$). By computing the following quantities, we can discover that the difference in their average insurance cost is relevant, but much more relevant is the difference in their standard deviations, indicating a significantly different treatment of this population between~regions: \begin{align} &\EX_{p_{\mathsf{SE}}}[f]=M_1(f,p(.\ \vert\ \mathsf{F,S,C,SE})) = 30,974,\ \ensuremath{\mathbb{STD}}_{p_{\mathsf{SE}}}[f]=\sqrt{M_2(.) - (M_1(.))^2} = 11,229 \\ &\EX_{p_{\mathsf{SW}}}[f]=M_1(f,p(.\ \vert\ \mathsf{F,S,C,SW}))=27,250,\ \ensuremath{\mathbb{STD}}_{p_{\mathsf{SW}}}[f]=\sqrt{M_2(.) - (M_1(.))^2} = 7,717 \end{align} However, one may ask why we do not estimate these values directly from the dataset. The main issue in doing so is that as we condition on more features, fewer if not zero matching samples are present in the data. For example, only 4 and 3 samples match the criterion asked by the last two queries. Furthermore, it is not uncommon for the data to be unavailable due to sensitivity or privacy concerns, and only the models are available. For instance, two insurance agencies in different regions might want to partner without sharing their data yet. The expected prediction framework with probabilistic circuits allows us to efficiently compute these queries with interesting applications in explainability and fairness. We leave the more rigorous exploration of their applications for future work. \section{Related Work} Using expected prediction to handle missing values was introduced in \citet{Khosravi2019}; given a logistic regression model, they learned a conforming Naive bayes model and then computed expected prediction only using the learned naive bayes model. In contrast, we are taking the expected prediction using two distinct models. Moreover, probabilistic circuits are much more expressive models. Imputations are a common way to handle missing features and are a well-studied topic. For more detail and a history of the techniques we refer the reader to \citet{buuren_2018,little2019statistical}. Probabilistic circuits enable a wide range of tractable operations. Given the two circuits, our expected prediction algorithm operated on the pairs of children of the nodes in the two circuits corresponding to the same vtree node and hence had a quadratic run-time. There are other applications that operate on similar pairs of nodes such as: multiplying the distribution of two PSDDs \cite{NIPS2016_6363}, computing the probability of a logical formula \cite{choi2015tractable}, and computing KL divergence \cite{LiangXAI17}. \section{Conclusion} \label{sec:conclusion} In this paper we investigated under which model assumptions it is tractable to compute expectations of certain discriminative models. We proved how, for regression, pairing a discriminative circuit with a generative one sharing the same vtree structure allows to compute not only expectations but also arbitrary high-order moments in poly-time. Furthermore, we characterized when the task is otherwise hard, e.g., for classification, when a non-decomposable, non-linear function is introduced. At the same time, we devised for this scenario an approximate computation that leverages the aforementioned efficient computation of the moments of regressors. Finally, we showcased how the expected prediction framework can help a data analyst to reason about the predictive model's behavior under different sub-populations. This opens up several interesting research venues, from applications like reasoning about missing values, to perform feature selection, to scenarios where exact and approximate computations of expected predictions can be combined. \section*{Acknowledgements} This work is partially supported by NSF grants \#IIS-1633857, \#CCF-1837129, DARPA XAI grant \#N66001-17-2-4032, NEC Research, and gifts from Intel and Facebook Research. \scalebox{0.01}{expecta bos olim herba.} \bibliographystyle{abbrvnat}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec-intro} One of the most challenging problems in high-performance computing (HPC) is using the full strength of supercomputers. It is very demanding, however, to efficiently use all the power of a supercomputer in a single run. The main barrier is that most of the currently available algorithms do not scale well on the complex multi-node, multi-core, and multi-accelerator hybrid architectures which are the dominant today and will be dominant for the nearest future. Hence the computation must be divided into millions of tasks to be scheduled on individual cores. It is thus of crucial importance to develop new, fully scalable algorithms, new programming techniques, and new methods to build programs which can efficiently use the power of supercomputers. Markov chain Monte Carlo methods ~\cite{Berg,BinderLandau} are among the prime approaches of supercomputer simulations in physics, chemistry, and materials sciences. A Monte Carlo (MC) simulation is a sequence of local updates of microscopic degrees of freedom, and the overall performance of a simulation strongly depends on these local elementary updates~\cite{BinderLandau}. Two features of the family of local MC algorithms are worth noting: first, simulations based on these algorithms are naturally embarrassingly parallel, which makes them even more attractive for massively parallel computations; second, they are applicable in the presence of external fields and/or competing ferro- and antiferromagnetic interactions, where more sophisticated schemes (e.g., cluster updates) break down. A program of the US DOE, Office of Science, entitled the Innovative and Novel Computational Impact on Theory and Experiment (INCITE)~\cite{INCITE} awarded on average 60 projects per year, from which 30 percent are in physics, 28 percent in engineering, 15 percent in materials sciences, 9 percent in earth sciences, and 7 per cent in chemistry, etc. The average number of projects which used MC methods is 5 per year in the last ten years. In the core of MC methods is the Metropolis algorithm~\cite{Metropolis} known for more than 60 years. What we find surprising is that a careful analysis of an ``old'' method could bring completely new knowledge on the computations. Namely, we note that the acceptance rate of an MC simulation is a thermodynamic function which displays a well-defined temperature dependence. These findings provide a deeper understanding of the algorithm, and we believe that our observation may influence further improvements of MC algorithms in general, and their parallel scalable versions. A Markov process is defined by its set of transition probabilities between microscopic states, which must obey a balance condition \cite{Berg,BinderLandau}. Efficiency of an MC process is governed by several factors: the typical acceptance rate (i.e., the fraction of states in the Markov chain which differ from the previous state), the computational complexity of an elementary update, and the autocorrelation time of the process. Different choices for both trial moves and their transition probabilities are possible~\cite{BinderLandau, Janke2008}. A rigorous theorem claims that a specific choice of elementary updates --- which is in fact a global update, attempting to update all degrees of freedom at once by drawing random increments from a Gaussian distribution --- maximizes the efficiency if the mean value of the acceptance rate is tuned to the special value 0.234~\cite{Baesian, Baesian2}. However, such processes have unfavorable correlation properties~\cite{PotterSwendsen2013}, and are not suitable for MC simulations of physical systems with a large number of degrees of freedom. We now make the following observation: the mean value of the acceptance rate of an MC simulation using a given set of transition probabilities has a well-defined temperature dependence. Therefore, it can be viewed as a thermodynamic function of the model under the MC dynamics, on par with other thermodynamic functions, e.g., the energy. A natural question is then: what is the relation between the mean acceptance rate and other thermodynamic quantities? We find that for the one-dimensional (1D) Ising model, the acceptance rate of the Metropolis algorithm \cite{Metropolis} is a \emph{linear} function of energy. An immediate question is then whether this linear relation is a one-off artifact of the Metropolis MC dynamics for the 1D Ising model, or whether it generalizes for other related models and MC algorithms. The rest of the paper is organized as follows. In Sec.~\ref{sec-models} we briefly describe the models for which we calculate analytically or compute numerically the acceptance rate. In Sec.~\ref{sec-analytics} we present analytical results on the acceptance rate of the Metropolis and heat-bath algorithms for the 1D Ising model. Section~\ref{sec-simulations} contains computational results for the acceptance rate and its variance for the one-dimensional models described in Sec.~\ref{sec-models}. Next Sec.~\ref{sec-2d} presents computational results for the acceptance rate and its variance for the two-dimensional models. In Sec.~\ref{sec-discussion} we summarize our findings. Two Appendixes contain more details on the analytical calculation of the acceptance rate when applying the Metropolis and heat-bath algorithms to the 1D Ising model. \section{Models and Update Algorithms} \label{sec-models} We consider several well-known classical lattice models. The Ising model is defined by the Hamiltonian function \begin{equation} H = -J \sum_{\langle ij \rangle} S_i S_{j}\;, \label{ising_S} \end{equation} where the coupling constant $J > 0$, $\langle ij \rangle$ denotes nearest-neighbor pairs, and $S_i = \pm 1$ are Ising spins, located at the sites of a $d$-dimensional lattice of linear size $L$ (and volume $V = L^d$) with periodic boundary conditions. In the $q$-state Potts model, spins can take $q$ possible values, $S_i\in {1, \ldots,q}$ \cite{Wu}: \begin{equation} H = -J\sum_{\langle ij \rangle} \delta(S_i, S_{j})\;, \label{potts_S} \end{equation} where the coupling constant $J>0$ and $\delta(S_i, S_{j})$ is the Kronecker delta symbol, which equals one whenever $S_i=S_{j}$, and zero otherwise. Finally, we consider the XY model, defined by \cite{Kosterlitz}: \begin{equation} H = -J\sum_{\langle ij \rangle} \cos{(S_i-S_{j})}\;, \label{XY_S} \end{equation} where the coupling constant $J>0$ and $S_i$ are continuous variables, $S_i \in [0, 2\pi)$. MC simulations provide a way of studying models \eqref{ising_S}-\eqref{XY_S} in thermodynamic equilibrium. An MC simulation constructs an ergodic random walk in the configuration space of a model, $$ \cdots\to \mu \to \nu \to \cdots \;, $$ which generates the equilibrium Gibbs distribution of a model as its stationary distribution \cite{BinderLandau}. For local updating schemes, successive configurations $\mu$ and $\nu$ only differ by the value of a single spin. An elementary update of the local Metropolis algorithm \cite{Metropolis} for the Ising model proceeds in two steps: (i) select a random site and (ii) flip its spin, $S_i \to -S_i$, with the probability \begin{equation} p(\mu\to\nu) = \min(1, e^{-\beta\Delta E}) \;, \label{metropolis_p} \end{equation} where $\beta$ is the inverse temperature and $\Delta E = E_\nu - E_\mu$ the energy difference between the updated and original states \cite{Janke2008}. The generalization to models with more than two states per spin is straightforward: (ii) is simply replaced by $S_i \to \widetilde{S}_i$, where $\widetilde{S}_i$ is any admissible spin value. The heat-bath algorithm for the Ising model differs from the Metropolis algorithm only in that a spin-flip update is accepted with the probability \cite{Janke2008} \begin{equation} p(\mu\to\nu) = \frac{e^{-\beta E_\nu}}{e^{-\beta E_\nu} + e^{-\beta E_\mu}}\;. \label{heat_bath_p} \end{equation} This can be recast into the form \begin{eqnarray} p(\mu\to\nu) &=& \frac{e^{-\beta \Delta E/2}}{e^{-\beta \Delta E/2} + e^{\beta \Delta E/2}}\nonumber\\ &=& \frac{1}{2}\left[1-\tanh(\beta \Delta E/2)\right]\;, \label{heat_bath_p2} \end{eqnarray} which is the general Glauber update rule \cite{Glauber} that can also be applied to models with more than two states per spin and a generalized update proposal $S_i \to \widetilde{S}_i$. Note that only for the Ising model with two states per spin, the heat-bath process coincides with the Glauber dynamics \cite{Glauber}. In the general case, the heat-bath process in a strict sense (there is some confusion with the notation in the literature) involves {\em all\/} possible spin values and is hence more complicated. \section{Acceptance rates of MC simulations of the 1D Ising model} \label{sec-analytics} To calculate the expected value of the acceptance probability of an MC simulation of the 1D Ising model we first convert Eq.\ \eqref{ising_S} to bond variables \cite{Suzuki1972, Mueller2017}. \subsection{Bond representation} \label{subsec:ising_bonds} In one dimension Eq.\ \eqref{ising_S} takes the form \begin{equation} H = -J \sum_{i=1}^{L} S_i S_{i+1}\;, \label{ising_S_1D} \end{equation} where the indices in \eqref{ising_S_1D} are taken modulo $L$, i.e., the term $S_L S_{L+1}$ is understood as $S_L S_{1}$. To calculate the expected values of the acceptance probability of a MC simulation of the 1D Ising model \eqref{ising_S_1D} for the Metropolis and heat-bath updates, we first convert the model \eqref{ising_S_1D} to a bond representation. We define for a bond connecting sites $i$ and $i+1$ the 'charge' \cite{Suzuki1972, Mueller2017}, \begin{equation} Q_i = \frac{1}{2}\left(S_i S_{i+1} + 1 \right)\;, \label{q_def} \end{equation} which takes values of 0 (for $S_i \neq S_{i+1}$) and 1 (for $S_i = S_{i+1}$ ). In this representation, Eq.\ \eqref{ising_S_1D} takes the form \begin{equation} H = - 2 J \sum_{i=1}^L Q_i + J L\;, \label{ising_Q} \end{equation} where the sum is taken over the \emph{bonds} of the lattice. With periodic boundary conditions, the number of bonds equals the number of sites of the lattice. This way, the state space of the model \eqref{ising_S_1D} is spanned by a collection of $L$ integers $Q_i = \left\{0, 1\right\}$, subject to the constraint: the parity of $\sum_{i=1}^L Q_i$ is the same as the parity of the number of bonds. We take $L$ to be even throughout, so that $$\sum_{i=1}^L Q_i \text{\qquad is even.} $$ The partition function corresponding to Eq.\ \eqref{ising_Q} then reads \begin{equation} Z = 2 x^{-L/2} \sum_{l = 0}^{L/2} \comb{L}{2l} x^{2l}\;, \label{Z} \end{equation} where $x \equiv e^{2\beta J}$ and $\beta$ is the inverse temperature. The summation runs over the values of $\sum_i Q_i = 2l$, and the binomial coefficient, $\comb{L}{2l}$, counts the number of ways of distributing $2l$ values of $Q=1$ over $L$ bonds. The factor of 2 accounts for a double-counting of the representation \eqref{q_def}: each value of $Q_i$ can be realized by two possible combinations of $S_i$ and $S_{i+1}$ (e.g., $Q_i = 0$ means that either $S_i=-1$ and $S_{i+1} = 1$ or vice versa). Performing the summation in \eqref{Z} we obtain \begin{equation} Z = x^{-L/2} \Bigl[ (x+1)^L + (x-1)^L \Bigr]\;, \label{Z_2} \end{equation} which agrees with the standard result \cite{Baxter}. \subsection{Acceptance rate of the Metropolis algorithm} We use the bond representation \eqref{ising_Q} to calculate the acceptance rates for the Metropolis and the heat-bath algorithms. In this section, we only state the main results and relegate the details of calculations to the Appendix. We start with the Metropolis update \eqref{metropolis_p}. Denoting the expected value of the acceptance probability by $R$, the expected value of the \emph{rejection} probability is \begin{equation} 1 - R = \frac{x^2-1}{Z} % \Bigl[ (x+1)^{L-2} + (x-1)^{L-2}\Bigr] x^{-L/2} \;. \label{expect_2} \end{equation} In the thermodynamic limit, $L\rightarrow \infty$, the second term in brackets is negligible, and Eq.\ \eqref{expect_2} simplifies to \begin{equation} R = \frac{2}{x+1}\;. \label{acpt_prob} \end{equation} We now compare Eq.\ \eqref{acpt_prob} to the thermodynamic mean value of the internal energy of the system, $E$. Using the partition function \eqref{Z_2}, we obtain in the thermodynamic limit the standard result \cite{Baxter} \begin{equation} \varepsilon = -\frac{x-1}{x+1}\;, \label{energy} \end{equation} where the reduced energy density $\varepsilon = E / J L$. Comparing Eqs.\ \eqref{acpt_prob} and \eqref{energy}, we find \begin{equation} R = 1 + \varepsilon \;, \label{R_vs_e} \end{equation} i.e., the expected value of the acceptance probability is a {\it linear function} of the energy. In fact, relation \eqref{R_vs_e} holds for all values of $L$, see Appendix A. \subsection{Acceptance rate of the heat-bath algorithm} For the expected value of the acceptance probability $R$ of the heat-bath update \eqref{heat_bath_p} we find in the thermodynamic limit $L\rightarrow \infty$ \begin{equation} R = \frac{x}{1 + x^2}\;. \label{acpt_HB-of-x} \end{equation} Comparing to \eqref{energy}, we have \begin{equation} R = \frac{1}{2} \frac{1 - \varepsilon^2}{1 + \varepsilon^2} \;, \label{acpt_HB} \end{equation} which approaches the linear behavior $(1+\varepsilon)/2$ for $1 + \varepsilon \ll 1$ (cf.\ Fig.\ \ref{I1M_HB_ER}). Details can be found in Appendix B. \section{Simulation results in 1D} \label{sec-simulations} In this section we first verify the analytical results for the Metropolis and heat-bath acceptance rates of the Ising model in one dimension and then test the observed qualitative features for the other models defined in Sec.\ \ref{sec-models}. Results for the generalization to two dimensions will be presented in the next section. \subsection{First moments of the energy and acceptance rate} We performed MC simulations of the 1D Ising model \eqref{ising_S_1D} using Metropolis updates \eqref{metropolis_p} and heat-bath updates \eqref{heat_bath_p} for temperatures ranging from $T/J = 0.2$ to $10$. We used $N_T=10^6$ MC steps (MCS) for thermalization and collected statistics over $N_A=10^7$ MCS. Here an MCS is defined as $L$ elementary update attempts for a chain with $L$ spins. We first focus on the collected statistics for the total energy of the system, $E$, and the acceptance rate, $R$, which we specifically define as the ratio of the numbers of accepted and attempted elementary updates. Figure \ref{I1M_HB_ER} shows the relation between the mean values of the acceptance rates of the MC process and the reduced energy density, $\varepsilon$, for a chain of $L = 512$ spins and periodic boundary conditions. The results of the MC simulations agree with Eqs.\ \eqref{R_vs_e} and \eqref{acpt_HB} in the whole range of energies (hence, temperatures). We note that for the heat-bath algorithm the dependence of $R$ on the reduced energy density is approximately linear in a wide range of temperatures: the relative difference between $R$ and its linear approximation is below 10\% for $T/J < 1.1$. \begin{figure}[tb] \includegraphics[width=.99\columnwidth]{I1_M_HB.png} \caption{Average acceptance rates of the Metropolis updates \eqref{metropolis_p} and the heat-bath updates \eqref{heat_bath_p} for the 1D Ising model \eqref{ising_S_1D}. Simulation were done for $L=512$ spins with periodic boundary conditions. Symbols are simulation results, with error bars shown at all points. Solid lines are predicted relations \eqref{R_vs_e} and \eqref{acpt_HB}. See text for discussion.} \label{I1M_HB_ER} \end{figure} It is instructive to compare the behavior of local MC algorithms for related classical spin models. We performed MC simulations of the 3- and 4-state Potts models \eqref{potts_S}, and the classical XY model \eqref{XY_S} in one dimension using the Metropolis and Glauber algorithms. There are several ways of organizing the local updates. We take the simplest possible prescription: we select a spin $S_i$ at random, and then draw a proposed value $\widetilde{S}_i$ for the update $S_i \to \widetilde{S}_i$ from a uniform discrete distribution of $q$ values for the Potts model, and from a uniform distribution on $[0, 2\pi)$ for the XY model. In these simulations we use $N_T=10^4$ MCS for thermalization and $N_A=10^5$ MCS for the averaging. The energy dependence of the acceptance rate for the Metropolis algorithm is summarized in Fig.\ \ref{ALL1M_HB_ER}(a). In general, the dependence turns out to be a non-linear featureless curve. The maximum difference is observed to be between the 3-state Potts and Ising models. The 4-state Potts model is closer to the Ising result, and for the XY model, the acceptance rate approaches that for the Ising model at very large temperatures, $T \gg J$. Results for the Glauber updates turn out to be qualitatively similar, and we present them for one spatial dimension in Fig.\ \ref{Supp_ALL1M_HB_ER}(a). \begin{figure*}[bt] \begin{tabular}{cc} \includegraphics[width=.99\columnwidth]{ALL1M-ER.png} & \includegraphics[width=.99\columnwidth]{All1M-CR-loglog.png} \\ \end{tabular} \caption{(a) Average acceptance rates of the Metropolis updates \eqref{metropolis_p} versus energy for 1D models. (b) Variance of the acceptance rate versus heat capacity on a log-log scale. Simulations were done for the Ising model (circles), the Potts model with $q=3$ (triangles) and $q=4$ (diamonds), and the XY model (squares), using chains of $L=512$ spins with periodic boundary conditions. Symbols are simulation results with error bars, lines are to guide the eye. } \label{ALL1M_HB_ER} \end{figure*} \begin{figure*}[bt] \begin{tabular}{cc} \includegraphics[width=.99\columnwidth]{ALL1HB-ER.png} & \includegraphics[width=.99\columnwidth]{All1HB-CR-loglog.png} \\ \end{tabular} \caption{(a) Average acceptance rates of the heat-bath respectively Glauber updates \eqref{heat_bath_p}, \eqref{heat_bath_p2} versus energy for 1D models. (b) Variance of the acceptance rate versus heat capacity on a log-log scale. Simulations were done for the Ising model (circles), the Potts model with $q=3$ (triangles) and $q=4$ (diamonds), and the XY model (squares), using chains of $L=512$ spins with periodic boundary conditions. Symbols are simulation results with error bars, lines are to guide the eye. } \label{Supp_ALL1M_HB_ER} \end{figure*} \subsection{Second moments of energy and acceptance rate} % Mean values of energy and acceptance rate are computed as first moments of samples generated by the MC process. Given a one-to-one, monotonic relation between the mean values, it is instructive to compare the second moments. In general, the second moment of the reduced energy density $\varepsilon = E/JV = \langle H \rangle/JV$ is related to the specific-heat capacity, \begin{equation} C = J\frac{d \varepsilon}{dT} = \frac{\langle H^2 \rangle - \langle H \rangle^2}{VT^2}\;, \label{hcap} \end{equation} where $\langle \cdots\rangle$ stands for the average over the states generated by the MC process. The variance of the acceptance rate is readily computed as the variance of a Bernoulli process of binary decisions ($1$ if an elementary update is accepted, and $0$ otherwise). For a Bernoulli process, the variance, $\var R$, is related to the mean value, $R$, via $\var R = R (1 - R)$. In the case of the 1D Ising model, given the exact results for $R$ derived above, hence also $\var R$ is known exactly. Figure \ref{ALL1M_HB_ER}(b) displays the relation between the heat capacity and the variance of the acceptance rate for the Metropolis algorithm. We scale the variance of $R$ by $T^2$ in accordance with Eq.\ \eqref{hcap}. At lowest temperatures, $T \ll J$, the heat capacity as a function of temperature has a maximum due to the well-known Schottky anomaly \cite{Kubo}. Because of this, the curves in Fig.\ \ref{ALL1M_HB_ER} form arcs for $C\sim 1$. Outside of this range, for $T > J$, the relation between second moments divided by $T^2$ is close to linear on the log-log scale for all one-dimensional models. Results of simulations using the heat-bath respectively Glauber updates, shown in Fig.\ \ref{Supp_ALL1M_HB_ER}(b), look qualitatively similar. \section{Simulation results in 2D} \label{sec-2d} It is instructive to compare the behavior of 1D models to higher dimensions, if only to see whether our observations are specific to 1D or have broader applicability. Specifically, 2D models with Hamiltonians \eqref{ising_S}--\eqref{XY_S} undergo a phase transition at a certain temperature $T_c$, between a high-temperature paramagnetic behavior and a low-temperature phase. For the Potts models, critical parameters are known analytically \cite{Wu}, $T_c / J = 1 / \ln\left(1 +\sqrt{q} \right)$, and for the Kosterlitz-Thouless transition of the XY model, MC simulations of Ref.~\cite{Weber} quote $T_c / J = 0.887(2)$. Here the specific-heat capacity stays finite everywhere, with a smooth peak located about $20\%$ above $T_c/J$. The behavior of the MC process with local updates varies significantly between the paramagnetic phase ($T > T_c$), the low-temperature phase ($T < T_c$), and the critical region ($T \approx T_c$) \cite{BinderLandau}. Figure~\ref{All2M_HB_ER} shows results of simulations with the Metropolis algorithm of the two-dimensional models with $64^2$ spins for temperatures between $T/J = 0.5$ to $10$. Here we use $N_T=10^5$ MCS for thermalization and $N_A=10^6$ MCS for averaging. The first moments of the acceptance rate and energy shown in Fig.~\ref{All2M_HB_ER}(a) are both smooth across the phase transition, with a relation which is close to linear in the critical region $T \sim T_c$. The second moments in Fig.\ \ref{All2M_HB_ER}(b) show two clearly separate branches, for $T > T_c$ and $T < T_c$, which join at the critical point. Note that while the heat capacity develops a singularity as $T\to T_c$, the variance of the acceptance rate remains smooth and does not show any signs of divergence. This can be explained by recalling that the heat capacity \eqref{hcap} is by definition proportional to the variance of the nonlocal total energy, while the acceptance rate and its variance refer to local measurements. It is also worth noting that the relative position of the low- and high-temperature branches for the Ising model differs from both Potts models and the XY model. \begin{figure*}[htb] \begin{tabular}{cc} \includegraphics[width=.99\columnwidth]{All2M-ER.png} & \includegraphics[width=.99\columnwidth]{All2M-CR-loglog.png} \\ \end{tabular} \caption{(a) Average acceptance rates of the Metropolis updates \eqref{metropolis_p} for 2D models. (b) Second moments of energy and the acceptance rate on a log-log scale. Simulations were done on a $64^2$ square lattice with periodic boundary conditions for the Ising model (circles), the Potts model with $q=3$ (triangles) and $q=4$ (diamonds), and the XY model (squares), using $N_T=10^5$ MC steps (MCS) for thermalization and $N_A=10^6$ MCS for averaging. Symbols are simulation results with error bars, lines are to guide the eye. Semi-transparent disks show the critical regions. } \label{All2M_HB_ER} \end{figure*} \begin{figure*}[htb] \begin{tabular}{cc} \includegraphics[width=.99\columnwidth]{All2HB-ER.png} & \includegraphics[width=.99\columnwidth]{All2HB-CR-loglog.png} \\ \end{tabular} \caption{(a) Average acceptance rates of the heat-bath respectively Glauber updates \eqref{heat_bath_p}, \eqref{heat_bath_p2} versus energy for 2D models. (b) Variance of the acceptance rate versus heat capacity on a log-log scale. Simulations were done on a square lattice with $64^2$ spins and periodic boundary conditions for the Ising model (circles), the Potts model with $q=3$ (triangles) and $q=4$ (diamonds), and the XY model (squares), using $N_T=10^5$ MC steps (MCS) for thermalization and $N_A=10^6$ MCS for averaging. Symbols are simulation results with error bars, lines are to guide the eye. Semi-transparent disks indicate the critical region of the corresponding model. } \label{Supp_All2M_HB_ER} \end{figure*} For heat-bath respectively Glauber updates, the results are qualitatively similar to those using Metropolis updates, see Fig.\ \ref{Supp_All2M_HB_ER}. We have also verified that simulations of the three- and four-dimensional models behave qualitatively similar to the two-dimensional models. These results will be detailed elsewhere. \section{Conclusion} \label{sec-discussion} Concluding, we start with the observation that the acceptance rate of an update proposal of a Monte Carlo simulation can be regarded on par with thermodynamic functions of a model, and thus can itself be considered a thermodynamic function of a model under a given Monte Carlo dynamics. For the one-dimensional Ising model we derive analytically that the mean value of the acceptance rate for local Metropolis updates is a linear function of energy. This linear dependence turns out to be specific for this combination of the updating scheme and the model: changing the updating algorithm to heat-bath updates changes the functional form of the relation between the mean values of the acceptance rate and energy, so that the relation is only linear away from the high-temperature region $T \gg J$. We simulate several classical models in one and two spatial dimensions, the Ising model, the 3- and 4-state Potts models, and the XY model, and compute the dependence of the first and second moments of the acceptance rate on the mean value and the second moment of energy. We find that in general the relation is not linear in a wide range of temperatures, but is close to linear in the critical region around the transition temperature $T_c$. Our result for the acceptance rate of the heat-bath algorithm for the one-dimensional Ising model can be viewed as an addition to the Glauber paper~\cite{Glauber} on the dynamics of the one-dimensional Ising model -- since in that case, the heat-bath algorithm exactly reproduces the Glauber dynamics. The acceptance rate in the Glauber dynamics \cite{Glauber} is the \emph{frequency} of the spin flips, and it is given by Eqs.\ \eqref{acpt_HB-of-x} and \eqref{acpt_HB}. The acceptance rate can also be calculated analytically for any exactly solvable model (e.g., one-dimensional $q$-state Potts models and the two-dimensional Ising model), although this is not straightforward in all cases. One can compute the acceptance rate for any local Monte Carlo algorithm. In fact, we checked that for all models and for all dimensions for which we have code on hand, the acceptance rate is linear in the energy close to a second-order phase transition, as demonstrated for some two-dimensional models in Section V. Moreover, additional simulations show that in the vicinity of a first-order phase transition, the acceptance rate is a linear function of energy as well. The source codes for our Monte Carlo simulations and data analysis are available from Ref.~\cite{repo}. \begin{acknowledgments} E.B. and M.G. acknowledge the support of the Academic Fund Program at the National Research University Higher School of Economics (HSE), grant No.~18-05-0024, and by the Russian Academic Excellence Project ``5-100''. W.J. thanks the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for support through project No.\ 189\,853\,844 -- SFB/TRR 102 (project B04). L.S. acknowledges the support from the program 0033-2018-0010 of Landau Institute for Theoretical Physics and thanks Travis S. Humble (Oak Ridge National Laboratory) for the valuable discussions of the high-performance computing and INCITE program.\\ \end{acknowledgments} \section*{Appendix: Acceptance rates of local Monte Carlo updates for the one-dimensional Ising model} \label{sec-appendix} In this appendix we turn our attention to the mathematical details of calculating the expectations of the acceptance rates of local MC updates. \subsection{Acceptance rate of the Metropolis algorithm} \label{subsec-metropolis_1D_ising} Using the bond representation \eqref{ising_Q}, we note that flipping a spin $S_i$ flips the values of two bond charges, $Q_i$ and $Q_{i-1}$. The acceptance probabilities depend on the sum $Q \equiv Q_i + Q_{i-1}$: for $Q = 0$ or $1$, the Metropolis update is always accepted, since $\Delta E \leqslant 0$. For $Q = 2$, the update is accepted with probability $e^{-4\beta J} = x^{-2}$. Denoting the expected value of the acceptance probability by $R$, the expected value of the \emph{rejection} probability is then \begin{equation} 1 - R = \sum_{l=0}^{L/2} % \left(1-x^{-2}\right) \frac{2l}{L}\frac{2l-1}{L-1} % \frac{\comb{L}{2l} x^{2l} 2 x^{-L/2} }{Z} \;. \label{expect_1} \end{equation} Here the factor $2l (2l-1) / L (L-1)$ counts the probability that, in a configuration with $\sum_i Q_i = 2l$, for a randomly chosen site $i$ we have $Q=2$, i.e., $Q_i = Q_{i-1} = 1$. The sum entering Eq.\ \eqref{expect_1} is readily computed by differentiating twice the binomial formula $$ (x + 1)^L = \sum_{k=0}^L C_L^{k} x^k \;. $$ The result is $$ 1 - R = \frac{x^2-1}{Z} % \Bigl[ (x+1)^{L-2} + (x-1)^{L-2}\Bigr] x^{-L/2} \;, $$ which is Eq.\ \eqref{expect_2}. We now compare Eq.\ \eqref{expect_2} to the internal energy of the system. The energy is given in general by $E = -d \ln Z/d \beta = (1/Z)dZ/d\beta$. Using (\ref{Z_2}) and $dx/d\beta = 2Jx$ (since $x = e^{2\beta J}$), we obtain from the product rule \begin{widetext} \begin{eqnarray} \label{lengthy} -E/J &=& \{-(L/2) x^{-L/2-1} 2 x [(x+1)^L + (x-1)^L] + 2x x^{-L/2}[L(x+1)^{L-1} + L(x-1)^{L-1}]\}/Z \nonumber\\ &=& \{-Lx^{-L/2} [(x+1)^L + (x-1)^L] + 2Lx x^{-L/2}[(x+1)^{L-1} + (x-1)^{L-1}]\}/Z \nonumber\\ &=& Lx^{-L/2}\{2x[(x+1)^{L-1} + (x-1)^{L-1}]-[(x+1)^L + (x-1)^L]\}/Z \nonumber\\ &=& Lx^{-L/2}\{[2x-(x+1)](x+1)^{L-1} +[2x-(x-1)](x-1)^{L-1}\}/Z\\ &=& Lx^{-L/2}\{(x-1)(x+1)^{L-1}+(x+1)(x-1)^{L-1}\}/Z \nonumber\\ &=& Lx^{-L/2}\{(x-1)(x+1)(x+1)^{L-2}+(x+1)(x-1)(x-1)^{L-2}\}/Z \nonumber\\ &=& L(x^2-1)x^{-L/2}\{(x+1)^{L-2}+(x-1)^{L-2}\}/Z\;,\nonumber \end{eqnarray} \end{widetext} \vspace*{-3mm} which simplifies in the thermodynamic limit $L \rightarrow \infty$ to Eq.\ \eqref{energy}. Comparing (\ref{lengthy}) with (\ref{expect_2}), one readily sees that \begin{equation} -\varepsilon = 1 - R \end{equation} \vspace*{-2mm} or \begin{equation} R = 1 + \varepsilon \end{equation} is true for {\em all} lattice sizes $L$, i.e., the relation between the acceptance rate $R$ for the Metropolis update and the reduced energy density $\varepsilon$ of the 1D Ising model does {\em not\/} depend on the length $L$ of the one-dimensional chain with periodic boundary conditions. \subsection{Acceptance rate of the heat-bath algorithm} \label{subsec-heat_bath_1D_ising} The expected value of the acceptance probability $R$ of the heat-bath update can be calculated similarly to Eqs.\ \eqref{expect_1} and \eqref{expect_2}. Here we directly compute the acceptance probability: acceptance probability of an elementary update of the spin $S_i$ is again defined by $Q \equiv Q_i + Q_{i-1}$, and the analog of Eq.\ \eqref{expect_1} is \begin{multline} R = \sum_{l=0}^{L/2} \Bigl(% \frac{1}{1 + x^2} \frac{2l}{L} \frac{2l-1}{L-1} \\ + \frac{x^2}{1 + x^2} \frac{L-2l}{L} \frac{L-2l-1}{L-1} \\ + \frac{L-2l}{L} \frac{2l}{L-1} \Bigr) \frac{\comb{L}{2l} x^{2l} 2 x^{-L/2} }{Z} \;, \label{expect_1_HB} \end{multline} where the terms in brackets correspond to $Q=2$, $Q=0$ and $Q=1$, respectively. Differentiating the binomial formula, we obtain \begin{equation} R = \frac{x}{1 + x^2} \frac{1 - \kappa^L}{1 + \kappa^L}\;, \label{acpt_HB-in-x} \end{equation} where $\kappa = (x-1)/(x+1) = e^{-1/\xi} < 1$ with $\xi$ denoting the correlation length. In the thermodynamic limit $L \rightarrow \infty$, the second factor $\frac{1 - \kappa^L}{1 + \kappa^L} = \frac{1 - e^{-L/\xi}}{1 + e^{-L/\xi}}$ in \eqref{acpt_HB-in-x} approaches unity exponentially fast, and comparing to \eqref{energy}, we obtain Eq.\ \eqref{acpt_HB}.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} We consider the situation where one is interested in the trace $\ensuremath{\mathrm{tr}}(f(A))$ of a matrix function $f(A)$. Here, $f(A) \in \mathbb{C}^{n \times n}$ is the matrix obtained from $A \in \mathbb{C}^ {n \times n}$ and the function $f: z \in D\subseteq \mathbb{C} \to f(z) \in \mathbb{C}$ in the usual operator theoretic sense; see \cite{Higham2008}, e.g. Our focus is on the inverse $A^{-1}$, i.e.\ $f(z) = z^{-1}$. Computing the trace is an important task arising in many applications. The trace of the inverse is required, for example, in the study of fractals \cite{Sapoval1991}, in generalized cross-validation and its applications \cite{GolubMatt1995,GolubHeathWahba1979}. In network analysis, the Estrada index---a total centrality measure for networks---is defined as the trace of the exponential of the adjacency matrix $A$ of a graph \cite{EstradaHigham2010,Ginosar2008} and an analogous measure is given by the trace of the resolvent $(\rho I-A)^{-1}$ \cite[Section 8.1]{Estrada2011}. For Hermitian positive definite matrices $A$, one can compute the log-determinant $\log(\det(A))$ as the trace of the logarithm of $A$. The log-determinant is needed in machine learning and related fields \cite{Rasmussen2005,Rue2005}. Further applications are discussed in \cite{meyer2021hutch++,SaadUbaru2017,SaadUbaruJie2017}. A particular application which we want to prepare with this work is in lattice quantum chromodynamics (QCD), a computational approach in Theoretical Physics to simulate the interaction of the quarks as constituents of matter. Here, the trace of the inverse of the discretized Dirac operator yields the disconnected fermion loop contribution to an observable; see \cite{SextonWeingarten1994}. As simulation methods get more and more precise, these contributions become increasingly important. It is usually unfeasible to compute the diagonal entries $f(A)_{ii}$ directly as $e_i^*f(A)e_i$, $e_i$ the $i$th canonical unit vector, and then obtain the trace by summation. For example, for the inverse this would mean that we have to solve $n$ linear systems, which is prohibitive for large values of $n$. One large class of methods which aims at circumventing this cost barrier are deterministic approximation techniques. Probing methods, for example, approximate \begin{equation} \label{eq:probing} \ensuremath{\mathrm{tr}}(f(A)) \approx \sum_{i=1}^N w_i^* f(A) w_i, \end{equation} where the vectors $w_i$ are carefully chosen sums of canonical unit vectors and $N$ is not too large. Various approaches have been suggested and explored in order to keep $N$ small while at the same time achieving good accuracy in \eqref{eq:probing}. This includes approaches based on graph colorings; see \cite{BekasKokiopoulouSaad2007,Endress:2014qpa,TangSaad2012} e.g., and the hierarchical probing techniques from \cite{Stathopoulos2013,LaeuchliStathopoulos2020}. In order for probing to yield good results, the matrix $f(A)$ should expose a decay of the moduli of its entries when we move away from the diagonal, since the sizes of the entries farther away from the diagonal determine the accuracy of the approximation. Recent theoretical results in this direction were given in \cite{FrSchiSchw2020}. Lanczos techniques represent another deterministic approximation approach and are investigated \cite{Bentbib2021,JieSaad2018,Lin2016}, e.g. Without giving details let us just mention that in order to improve their accuracy, deterministic approximation techniques can be combined with the stochastic techniques to be presented in the sequel; see \cite{SaadUbaruJie2017}, e.g. In this paper we deal with the other big class of methods which aim at breaking the cost barrier using {\em stochastic estimation}. In principle, they work for any matrix and, for example, do not require a decay away from the diagonal. Our goal is to develop a multilevel Monte-Carlo method to estimate $\ensuremath{\mathrm{tr}}(f(A))$ stochastically. Our approach can be regarded as a variance reduction technique applied to the classical stochastic ``Hutchinson'' estimator \cite{Hutchinson90} \begin{equation} \label{hutch:eq} \ensuremath{\mathrm{tr}}(f(A)) \approx \frac{1}{N} \sum_{n=1}^N x^{(n)} f(A) x^{(n)}, \end{equation} where the components of the random vectors $x^{(n)} $ obey an appropriate probability distribution. The variance of the estimator \eqref{hutch:eq} decreases only like $\frac{1}{N}$, which makes the method too costly when higher precisions are to be achieved. The multilevel approach aims at curing this by working with representations of $A$ at different levels. On the higher numbered levels, evaluating $f(A)$ becomes increasingly cheap, while on the lower levels, which are more costly to evaluate, the variance is small. This paper is organized as follows: In Section~\ref{mlmc:sec} we recall the general framework of multilevel Monte-Carlo estimators. In Section~\ref{trace_est:sec} we discuss Hutchinson's method for stochastically estimating the trace before turning to our new multilevel approach in Section~\ref{mlmc_trace:est}. This section also contains a comparison to known approaches based on deflation as a motivation of why the new multilevel method should provide additional efficiency. Finally, several numerical results are presented in Section~\ref{numerical_results:sec}. \section{Multilevel Monte-Carlo} \label{mlmc:sec} We discuss the basics of the multilevel Monte-Carlo approach as a variance reduction technique. We place ourselves in a general setting, thereby closely following \cite{Giles2015}. Assume that we are given a probability space $(\Omega, \mathcal{F}, \P)$ with sample space $\Omega$, sigma-algebra $\mathcal{F} \subseteq \Omega$ and probability measure $\P: \mathcal{F} \to [0,1]$. For a given random variable $f: \Omega \to \mathbb{C}$ , the {\em standard Monte-Carlo} approach estimates its expected value $\ensuremath{\mathbb{E}}[f]$ as the arithmetic mean \begin{equation} \label{stdMC:eq} \ensuremath{\mathbb{E}}[f] \approx \frac{1}{N} \sum_{n=1}^N f(\omega^{(n)}), \end{equation} where the $\omega^{(i)}$ are independent events coming from $(\Omega, \mathcal{F}, \P)$. The variance of this estimator is $\frac{1}{N} \ensuremath{\mathbb{V}}[f]$, so the root mean square deviation has order $\mathcal{O}(N^{-1/2})$. This indicates that the number $N$ of events has to increase quadratically with the accuracy required which is why, typically, higher accuracies require very high computational effort in this type of Monte-Carlo estimation. The idea of {\em multilevel Monte-Carlo} is to split the random variable $f$ as a sum \begin{equation} \label{multilevel_decomposition:eq} f = \sum_{\ell=1}^L g_{\ell}, \end{equation} where the random variables $g_{\ell}:\Omega \to \mathbb{C}$ are regarded as contributions ``at level $\ell$'' to $f$. This gives \[ \ensuremath{\mathbb{E}}[f] = \sum_{\ell=1}^L \ensuremath{\mathbb{E}}[g_{\ell}], \] and an unbiased estimator for $\ensuremath{\mathbb{E}}[f]$ is obtained as \[ \ensuremath{\mathbb{E}}[f] \approx \sum_{\ell=1}^L \frac{1}{N_\ell} \sum_{n=1}^{N_\ell} g_{\ell}(\omega^{n,\ell}), \] where the $\omega^{(n,\ell)}$ denote the independent events on each level. The variance of this estimator is \[ \sum_{\ell=1}^{L} \frac{1}{N_\ell} \ensuremath{\mathbb{V}}[g_\ell]. \] The idea is that we are able to find a multilevel decomposition of the form \eqref{multilevel_decomposition:eq} in which the cost $C_\ell$ to evaluate $g_\ell$ is low when the variance $V_\ell := \ensuremath{\mathbb{V}}[g_\ell]$ is high and vice versa. As is explained in \cite{Giles2015}, the solution to the minimization problem which minimizes the total cost subject to achieving a given target variance $\epsilon^2$ \[ \sum_{l=1}^L N_\ell C_\ell \to \min! \enspace \mbox{s.t. } \sum_{\ell=1}^{L} \frac{1}{N_\ell} V_\ell = \epsilon^2 \] gives $N_\ell = \mu \sqrt{V_\ell/C_\ell}$. Here, the Lagrangian multiplier $\mu$ satisfies $\displaystyle \mu = \epsilon^{-2} \sum_{\ell=1}^L \sqrt{V_\ell/C_\ell}$, and the corresponding minimal total cost is \[ C = \epsilon^{-2} \left( \sum_{\ell = 1}^L \sqrt{V_\ell C_\ell} \right)^2. \] The typical situation is that the contributions $g_\ell$ on level $\ell$ are given as differences $f_\ell - f_{\ell+1}$ of approximations $f_\ell$ to $f$ on the various levels, i.e.\ we have \begin{equation} \label{mlmc_representation:eq} f = \sum_{\ell=1}^{L-1} \underbrace{\left(f_{\ell}-f_{\ell+1}\right)}_{=g_{\ell}} + \underbrace{f_L}_{=g_L} \quad \text{with } f_1 = f. \end{equation} If we assume that the cost $\hat{C}_\ell$ to evaluate $f_\ell$ decreases rapidly with the level $\ell$, the cost ${C}_\ell$ for evaluating the differences $g_\ell = f_\ell-f_{\ell+1}$ is well approximated by $\hat{C}_\ell$. The ratio of the total cost encountered when reducing the variance to a given value between multilevel Monte-Carlo (with optimal choice of $N_\ell$) and standard Monte-Carlo \eqref{stdMC:eq} is then given by \[ \left( \sum_{\ell = 1}^L \sqrt{V_\ell C_\ell}\right)^2 \; \big/ \; \left(\ensuremath{\mathbb{V}}[f]C_1\right) . \] This is the basic quantitative relation indicating how the costs $C_\ell$ to evaluate the $f_\ell$ and the variances $V_\ell$ of the differences $f_\ell-f_{\ell+1}$ have to relate in order for the multilevel approach to be more efficient than standard Monte-Carlo estimation of $f$. \section{Stochastic estimation of the trace of a matrix} \label{trace_est:sec} We now assume that we are given, in an indirect manner, a matrix $A = (a_{ij}) \in \mathbb{C}^{n \times n}$ for which we want to compute the trace \[ \ensuremath{\mathrm{tr}}(A) = \sum_{i=1}^n a_{ii}. \] Our basic assumption is that the entries $a_{ii}$ of $A$ are neither available directly nor that they can all be obtained at decent computational cost. This is typically the case when $A$ arises as a function of a large (and sparse) matrix, the most common case being the matrix inverse. In a seminal paper \cite{Hutchinson90}, Hutchinson suggested to use a stochastic estimator to approximate $\ensuremath{\mathrm{tr}}(A)$. The following theorem summarizes his result together with the generalizations on the admissible probability spaces; see \cite{DongLiu1993,Wilcox99}, e.g. \begin{theorem} \label{hutchinson:thm} Let $\P: \Omega \to [0,1]$ be a probability measure on a sample space $\Omega$ and assume that the components $x_i$ of the vector $x \in \mathbb{C}^n$ are random variables depending on $\omega \in \Omega$ satisfying \begin{equation} \label{P_assumptions:eq} \ensuremath{\mathbb{E}}[x_i] = 0 \enspace \mbox{ and } \enspace \ensuremath{\mathbb{E}}[\overline{x}_ix_j] = \delta_{ij} \enspace \mbox{ (where $\delta_{ij}$ is the Kronecker delta)}. \end{equation} Then \[ \ensuremath{\mathbb{E}}[x^*Ax] = \ensuremath{\mathrm{tr}}(A) \mbox{ and } \ensuremath{\mathbb{V}}[x^*Ax] = \sum_{\stackrel{i,j,k,p=1}{i\neq j, k \neq p}}^n \overline{a}_{ij}a_{kp} \ensuremath{\mathbb{E}}[ \overline{x}_ix_j\overline{x}_kx_p]. \] In particular, if the probability space is such that each component $x_i$ is independent from $x_j$ for $i \neq j$, then \[ \ensuremath{\mathbb{V}}[x^*Ax] = \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ij} + \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ji} \ensuremath{\mathbb{E}}[x_i^2]\ensuremath{\mathbb{E}}[\overline{x}_j^2]. \] \end{theorem} \begin{proof} The proof is simple, but we repeat it here because the literature often treats only the real and not the general complex case. We have \[ \ensuremath{\mathbb{E}}[x^*Ax] = \sum_{i=1}^n a_{ii}\ensuremath{\mathbb{E}}(\overline{x}_ix_i) + \sum_{i,j=1, i\neq j}^n a_{ij}\ensuremath{\mathbb{E}}(\overline{x}_ix_j) = \ensuremath{\mathrm{tr}}(A), \] where the last inequality follows from \eqref{P_assumptions:eq}. Similarly \begin{eqnarray} \ensuremath{\mathbb{V}}[x^*Ax] &=& \ensuremath{\mathbb{E}} \left[ \overline{(x^*Ax-\ensuremath{\mathrm{tr}}(A))}(x^*Ax-\ensuremath{\mathrm{tr}}(A))\right] \nonumber \\ &=& \ensuremath{\mathbb{E}} \big[ \big( \sum_{\stackrel{i,j=1}{i\neq j}}^n x_i\overline{a}_{ij} \overline{x}_j\big) \big( \sum_{\stackrel{k,p=1}{k\neq p}}^n \overline{x}_k a_{kp} x_p\big)\big] \nonumber \\ &=& \ensuremath{\mathbb{E}} \big[ \sum_{\stackrel{i,j,k,p=1}{ i\neq j, k\neq p}}^n \overline{a}_{ij}a_{kp}x_i\overline{x}_j \overline{x}_k x_p\big] \, = \, \sum_{\stackrel{i,j,k,p=1}{i \neq j, k \neq p}}^n\overline{a}_{ij}a_{kp} \ensuremath{\mathbb{E}}[x_i\overline{x}_j \overline{x}_k x_p] \label{sum_prod_expected:eq}. \end{eqnarray} Since the components $x_i$ are assumed to be independent, we have $\ensuremath{\mathbb{E}}[\overline{x}_ix_j \overline{x}_k x_p] = 0$ except when $i =j, k=p$ (which does not occur in \eqref{sum_prod_expected:eq}) or $i=k, j=p$ or $i=p, j=k$. This gives \[ \sum_{\stackrel{i,j,k,p=1}{i \neq j, k \neq p}}^n\overline{a}_{ij}a_{kp} \ensuremath{\mathbb{E}}[x_i\overline{x}_j \overline{x}_k x_p] = \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ij} \ensuremath{\mathbb{E}}[x_i\overline{x}_j \overline{x}_i x_j] + \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ji} \ensuremath{\mathbb{E}}[x_i\overline{x}_j \overline{x}_j x_i], \] and in the first sum $\ensuremath{\mathbb{E}}[x_i\overline{x}_j \overline{x}_i x_i] = \ensuremath{\mathbb{E}}[\overline{x}_ix_i] \ensuremath{\mathbb{E}}[\overline{x}_jx_j] = 1$ by assumption, whereas in the second sum we have $ \ensuremath{\mathbb{E}}[x_i\overline{x}_j \overline{x}_j x_i] = \ensuremath{\mathbb{E}}[x_i^2]\ensuremath{\mathbb{E}}[\overline{x}_j^2]$. \end{proof} Note that as a definition for the variance of a complex random variable $y$ we used $\ensuremath{\mathbb{E}}[(\overline{y-E(y})(y-E[y]]$ rather than $\ensuremath{\mathbb{E}}[(y-E[y])^2] $ to keep it real and non-negative. Standard choices for the probability spaces are to take $x$ with identically and independently distributed (i.i.d.) components as \begin{align} \label{iid_components1:eq} & x_i \in \{-1,1\} \mbox{ with equal probability } \tfrac{1}{2}, \\ \label{iid_components2:eq} &x_i \in \{-1,1,-i,i\} \mbox{ with equal probability } \tfrac{1}{4}, \\ \label{iid_components3:eq} & x_i = \exp(i\theta) \mbox{ with $\theta$ uniformly distributed in $[0,2\pi]$}, \\ \label{iid_components4:eq} & x_i \mbox{ is $N(0,1)$ normally distributed}. \end{align} \begin{corollary} \label{variance:cor} If the components $x_i$ are i.i.d.\ with distribution \eqref{iid_components1:eq} or \eqref{iid_components4:eq}, then \[ \ensuremath{\mathbb{V}}[x^*AXx] = \frac{1}{2}\|\ensuremath{\mathrm{offdiag}}(A+A^T)\|_F^2, \] where $\| \cdot \|_F$ denotes the Frobenius norm and $\ensuremath{\mathrm{offdiag}}$ the offdiagonal part of a matrix. If the components are i.i.d.\ with distribution \eqref{iid_components2:eq} or \eqref{iid_components3:eq}, then \[ \ensuremath{\mathbb{V}}[x^*AXx] = \|\ensuremath{\mathrm{offdiag}}(A)\|_F^2. \] \end{corollary} \begin{proof} For the distributions \eqref{iid_components1:eq} and \eqref{iid_components4:eq}, the components $x_i$ have only real values and $\ensuremath{\mathbb{E}}[x_i^2] = 1$. Therefore \begin{eqnarray*} \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ij} + \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ji} \ensuremath{\mathbb{E}}[x_i^2]\ensuremath{\mathbb{E}}[\overline{x}_j^2] & = & \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ij} + \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ji} \\ &= & \frac{1}{2} \sum_{\stackrel{i,j}{i \neq j}}^n(\overline{a_{ij}+a_{ji}})(a_{ij}+a_{ji}) \\ &=& \frac{1}{2} \|\ensuremath{\mathrm{offdiag}}(A+A^T)\|_F^2. \end{eqnarray*} For the distributions \eqref{iid_components2:eq} and \eqref{iid_components3:eq}we have $\ensuremath{\mathbb{E}}[x_i^2] = 0$, and thus \[ \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ij} + \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ji} \ensuremath{\mathbb{E}}[x_i^2]\ensuremath{\mathbb{E}}[\overline{x}_j^2] = \sum_{\stackrel{i,j}{i \neq j}}^n\overline{a}_{ij}a_{ij} = \|\ensuremath{\mathrm{offdiag}}(A)\|_F^2. \] \end{proof} In a practical situation where we approximate $\ensuremath{\mathrm{tr}}(A)$ by averaging over $N$ samples we can compute the root mean square deviation along with the averages and rely on the law of large numbers to assess the probability that the computed mean lies within the $\sigma$, $2\sigma$ or $3\sigma$ interval. Several results on Hutchinson's method have been formulated which go beyond this asymptotic aspects by giving tail or concentration bounds; see \cite{AvronToledo2011,CortinovisKressner2020,roosta2015improved}, e.g. For the sake of illustration we here report a summary of these results as given in \cite{meyer2021hutch++}. In our numerical examples, we will simply work with the root mean square deviation to assess accuracy. \begin{theorem} Let the distribution for the i.i.d.\ components of the random vectors $x^i$ be sub-Gaussian, and let $\epsilon, \delta \in (0,1)$. Then for $N = \mathcal{O}(\log(1/\delta)/\epsilon^2)$ we have that the probability for \begin{equation} \label{eq:haim_toledo} \left| \frac{1}{N} \sum_{i=1}^n (x^i)^*Ax^i - \ensuremath{\mathrm{tr}}(A) \right| \leq \epsilon \|A\|_ F \end{equation} is $\geq 1 - \delta$. \end{theorem} Note that if $A$ is symmetric positive semidefinite with $\lambda_i$ denoting its (non-negative) eigenvalues, then \[ \|A\|_F = \left(\sum_{i=1}^n \lambda_i^2 \right)^{1/2} \leq \sum_{i=1}^n \lambda_i = \ensuremath{\mathrm{tr}}(A), \] implying that \eqref{eq:haim_toledo} yields a (probabilistic) relative error bound for the trace. Also note that the real distributions \eqref{iid_components1:eq} and \eqref{iid_components4:eq} are sub-Gaussian; see \cite{meyer2021hutch++}. \section{Multilevel Monte-Carlo for the trace of the inverse} \label{mlmc_trace:est} We now turn to the situation where we want to estimate $\ensuremath{\mathrm{tr}}(A^{-1})$ for a large and sparse matrix $A$. Direct application of Theorem~\ref{hutchinson:thm} shows that an unbiased estimator for $\ensuremath{\mathrm{tr}}(A^{-1})$ is given by \begin{equation} \label{plain_tr_estimate:eq} \frac{1}{N} \sum_{i=1}^N x^{(i)}A^{-1}x^{(i)} \approx \ensuremath{\mathrm{tr}}(A^{-1}), \end{equation} where the vectors $x^{(i)}$ are independent random variables satisfying \eqref{P_assumptions:eq}, and that its variance is \[ \frac{1}{N} \| \ensuremath{\mathrm{offdiag}}(A^{-1}+A^{-T})\|_F^2 \enspace \mbox { or } \enspace \frac{1}{N} \|\ensuremath{\mathrm{offdiag}}(A^{-1})\|_F^2, \] depending on whether the components of $x^{(i)}$ satisfy \eqref{iid_components1:eq}, \eqref{iid_components4:eq} or \eqref{iid_components2:eq}, \eqref{iid_components3:eq}, respectively. Each time we add a sample $i$ to \eqref{plain_tr_estimate:eq} we have to solve a linear system with matrix $A$ and right hand side $x^{(i)}$, and the cost for solving these linear systems determines the cost for each stochastic estimate. For a large class of matrices, multigrid methods represent particularly efficient linear solvers. We assume that this is the case for our matrix $A$ and now describe how to derive a multilevel Monte-Carlo method for the approximation of $\ensuremath{\mathrm{tr}}(A^{-1})$ which uses the multigrid hierarchy not only for the linear solver, but also to obtain a good representation \eqref{mlmc_representation:eq} required for a multilevel Monte-Carlo approach. \subsection{Derivation of a mutilevel Monte-Carlo method} \label{sect:derivation_MLMC} Multigrid methods rely on the interplay between a smoothing iteration and a coarse grid correction which are applied alternatingly. In the geometric interpretation, where we view components of vectors as representing a continuous function on a discrete grid, the smoother has the property to make the error of the current iterate smooth, i.e.\ varying slowly from one grid point to the next. Such error can be represented accurately by a coarser grid, and the coarse grid correction solves for this coarse error on the coarse grid using a coarse grid representation of the matrix. The solution is then interpolated back to the original ``fine'' grid and applied as a correction to the iterate. The principle can be applied recursively using a sequence of coarser grids with corresponding operators, the solves on the coarsest grid being obtained by direct factorization. To obtain a multilevel Monte-Carlo decomposition we discard the smoother and only consider the coarse grid operators and the intergrid transfer operators which we now describe algebraically. The coarse grid operators are given by a sequence of matrices \[ A_\ell \in \mathbb{C}^{n_\ell \times n_\ell}, \ell = 1,\ldots,L, \] representing the original matrix $A = A_1 \in \mathbb{C}^{n_1 \times n_1}$ on the different levels $\ell = 1,\ldots,L$; the prolongation and restriction operators \[ P_\ell \in \mathbb{C}^{n_{\ell}\times n_{\ell+1}}, \; R_\ell \in \mathbb{C}^{n_{\ell+1} \times n_\ell}, \; \ell = 1,\ldots,L-1. \] transfer data between the levels. Typically, when $A$ is Hermitian, one takes $P_\ell = R_\ell^*$, and for given $P_\ell, R_\ell$ the coarse system matrices $A_\ell$ are often constructed using the Petrov-Galerkin approach \[ A_{\ell+1} = R_\ell A_\ell P_\ell, \; \ell = 1,\ldots, L-1. \] Using the accumulated prolongation and restriction operators \[ \hat{P}_\ell = P_1 \cdots P_{\ell-1} \in \mathbb{C}^{n \times n_\ell}, \hat{R}_\ell = R_{\ell-1} \cdots R_1 \in \mathbb{C}^{n_\ell \times n}, \; \ell = 1,\ldots, L, \] where we put $\hat{R}_1 = \hat{P}_1 = I \in \mathbb{C}^{n \times n}$ by convention, we regard $\hat{P}_\ell A_\ell^{-1} \hat{R}_\ell$ as the approximation to $A^{-1}$ at level $\ell$. We thus obtain a multilevel decomposition for the trace as \begin{equation} \label{tr_ml_dec:eq} \ensuremath{\mathrm{tr}}(A^{-1}) = \sum_{\ell=1}^{L-1} \ensuremath{\mathrm{tr}}\left(\hat{P}_\ell A_\ell^{-1} \hat{R}_\ell- \hat{P}_{\ell+1} A_{\ell+1}^{-1} \hat{R}_{\ell+1}\right) + \ensuremath{\mathrm{tr}}(\hat{P}_L A_L^{-1} \hat{R}_L). \end{equation} This gives \[ \ensuremath{\mathrm{tr}}(A^{-1}) = \sum_{\ell=1}^{L-1} \ensuremath{\mathbb{E}}\left[(x^\ell)^*\left(\hat{P}_\ell A_\ell^{-1} \hat{R}_\ell - \hat{P}_{\ell+1} A_{\ell+1}^{-1} \hat{R}_{\ell+1} \right)x^\ell\right] + \ensuremath{\mathbb{E}}\left[(x^L)^*\hat{P}_L A_L^{-1} \hat{R}_Lx^L\right], \] with the components of $x^\ell \in \mathbb{C}^n$ being i.i.d.\ stochastic variables satisfying \eqref{P_assumptions:eq}. The unbiased multilevel Monte-Carlo estimator is then \begin{eqnarray*} \ensuremath{\mathrm{tr}}(A^{-1}) &\approx& \sum_{\ell=1}^{L-1} \sum_{i=1}^{N_\ell} \left( (x^{i,\ell})^*\hat{P}_\ell A_\ell^{-1} \hat{R}_\ell x^{i,\ell}- (x^{i,\ell})^*\hat{P}_{\ell+1} A_{\ell+1}^{-1} \hat{R}_{\ell+1}x^{i,\ell}\right) \\ & & \mbox{}+ \sum_{i=1}^{N_L}(x^{n,L})^*\hat{P}_L A_L^{-1} \hat{R}_Lx^{i,L}, \end{eqnarray*} where the vectors $x^{i,\ell} \in \mathbb{C}^n$ are stochastically independent samples of the random variable $x \in \mathbb{C}^n$ satisfying \eqref{P_assumptions:eq}. The following remarks collect some important observations about this stochastic estimator. \begin{remark} Computationally, the estimator requires to solve systems of the form $A_\ell y^{n,\ell} = z$ with $z = \hat{R}_{\ell}x^{n,\ell} $. Since the matrices $A_\ell$ arise from the multigrid hierarchy, we directly have a multigrid method available for these systems by restricting the method for $A$ to the levels $\ell, \ldots,L$. \end{remark} \begin{remark} Since for any two matrices $B = (b_{ij}) \in \mathbb{C}^{n \times m}$ and $C = (c_{kl}) \in \mathbb{C}^{m \times n}$ the trace of their product does not depend on the order, \begin{equation} \label{trace_commute:eq} \ensuremath{\mathrm{tr}}(BC) = \sum_{i=1}^n \sum_{j=1}^m b_{ij}c_{ji} = \sum_{j=1}^m \sum_{i=1}^n c_{ji}b_{ij} = \ensuremath{\mathrm{tr}}(CB), \end{equation} we have \[ \ensuremath{\mathrm{tr}}(\hat{P}_LA^{-1}_L\hat{R}_L) = \ensuremath{\mathrm{tr}}(A_L^{-1}\hat{P}_L\hat{R}_L). \] So, instead of estimating the contribution $\ensuremath{\mathrm{tr}}(\hat{P}_LA_L^{-1}\hat{R}_L)$ in \eqref{tr_ml_dec:eq} stochastically, we can also compute it directly by inverting the matrix $A_L$ and computing the product $A_L^{-1}\hat{R}_L\hat{P}_L$. Note that $\hat{R}_L$ and $\hat{P}_L$ are usually sparse with a maximum of $d$, say, non-zero entries per row. The arithmetic work for $A_L^{-1}\hat{R}_L\hat{P}_L$ is thus of order $\mathcal{O}(dN_L^2)$ for the product $\hat{R}_L\hat{P}_L$ plus $\mathcal{O}(n_L^3)$ for the inversion of $A_L$ and the product $A_L^{-1}(\hat{R}_L\hat{P}_L$). Since the variance of $x^*\hat{P}_LA^{-1}_L\hat{R}_Lx$ is presumably large, this direct computation can be much more efficient than a stochastic estimation, even when we aim at only quite low precision in the stochastic estimate. \end{remark} \begin{remark} \label{rem:simplified} There are situations where $\hat{R}_\ell \hat{P}_\ell = I \in \mathbb{C}^{n_\ell \times n_\ell}$, for example in aggregation based multigrid methods, where the columns of $P_\ell$ are orthonormal and $R_\ell = P_\ell^*$, see \cite{Braess95, Brezinaetal2005}. Then \begin{eqnarray*} \ensuremath{\mathrm{tr}}(\hat{P}_\ell A_\ell^{-1}\hat{R}_\ell) &=& \ensuremath{\mathrm{tr}}(A_\ell^{-1}\hat{R}_\ell\hat{P}_\ell) \, = \, \ensuremath{\mathrm{tr}}(A_\ell^{-1}), \end{eqnarray*} and \begin{eqnarray*} \ensuremath{\mathrm{tr}}(\hat{P}_{\ell+1} A_{\ell+1}^{-1}\hat{R}_{\ell+1}) &=& \ensuremath{\mathrm{tr}}(\hat{P}_{\ell}P_{\ell}A_{\ell+1}^{-1}R_{\ell} \hat{R}_\ell) \, = \, \ensuremath{\mathrm{tr}}(P_{\ell}A_{\ell+1}^{-1}R_{\ell} \hat{R}_\ell\hat{P}_{\ell}) \, = \, \ensuremath{\mathrm{tr}}(P_{\ell}A_{\ell+1}^{-1}R_{\ell}). \end{eqnarray*} This means that instead of the multilevel decomposition \eqref{tr_ml_dec:eq} we can use \[ \ensuremath{\mathrm{tr}}(A) = \sum_{\ell=1}^{L-1}\left( \ensuremath{\mathrm{tr}}(A_\ell^{-1})- \ensuremath{\mathrm{tr}}({P}_{\ell} A_{\ell+1}^{-1} {R}_{\ell})\right) + \ensuremath{\mathrm{tr}}(A_L^{-1}), \] in which the stochastic estimation on level $\ell$ now involves random vectors from $\mathbb{C}^{n_\ell}$ instead of $\mathbb{C}^n$. \end{remark} \subsection{Discussion of the multilevel Monte-Carlo method} A profound analysis of the proposed multilevel Monte-Carlo method must take the approximation properties of the representation of the matrix at the various levels into account. This is highly problem dependent, so that in this paper we only provide a discussion of heuristics on why the proposed approach has the potential to yield efficient multilevel Monte-Carlo schemes. To simplify the discussion to follow, let us assume that the variance of the estimator at level $\ell$ is given by the square of the Frobenius norm of the off-diagonal part. This is the case, for example, if the components are i.i.d.\ with distribution \eqref{iid_components2:eq} or \eqref{iid_components3:eq}; see Corollary~\ref{variance:cor}. This Frobenius norm can be related to the singular values of $A$. Recall that the singular value decomposition of a non-singular matrix $A$ is \begin{eqnarray} A &=& U \Sigma V^* \enspace \mbox { with } U,\Sigma,V \in \mathbb{C}^{n\times n}, U^*U = V^*V = I, \label{svd:eq}\\ & & U = [u_1 | \cdots | u_n], \; V= [v_1| \ldots |v_n], \nonumber \\ & & \Sigma = \diag(\sigma_1,\ldots,\sigma_n), 0< \sigma_1 \leq \cdots \leq \sigma_n, \nonumber \end{eqnarray} with left singular vectors $u_i$, right singular vectors $v_i$ and positive singular values $\sigma_i$ which we ordered by increasing value for convenience here. In the following we base all our discussion on singular values and vectors. It is therefore worthwhile to mention that in the case of a Hermitian matrix $A$ this discussion simplifies in the sense that then the singular values are the moduli of the eigenvalues, and left and right singular vectors are identical and coincide with the eigenvectors. \begin{lemma} \label{offdiagnorm:lem} Let $A \in \mathbb{C}^{n \times n}$ have singular values $\sigma_i, i=1,\ldots,n$. Then \begin{equation} \label{svd_estimate:eq} \| \ensuremath{\mathrm{offdiag}}(A) \|^2_F = \sum_{i=1}^n \sigma_i^2 - \sum_{i=^1}^n |a_{ii}|^2. \end{equation} \end{lemma} \begin{proof} The equality $\|A\|^2_F = \sum_{i=1}^n \sigma_i^2$ is a basic fact from linear algebra, see \cite{GvL}, e.g. The formula \eqref{svd_estimate:eq} uses this and corrects for the vanishing diagonal part in $\ensuremath{\mathrm{offdiag}}(A)$. \end{proof} For the trace of the inverse $A^{-1}$ we have \begin{equation} \label{sigular_values_offdiag:eq} \| \ensuremath{\mathrm{offdiag}}(A^{-1}) \|^2_F = \sum_{i=1}^n \sigma_i^{-2} - \sum_{i=^1}^n |(A^{-1})_{ii}|^2. \end{equation} since the reciprocals of the singular values of $A$, are the singular values of $A^{-1}$. Therefore, in a simplified manner---disregarding the second term in \eqref{sigular_values_offdiag:eq}---it appears that the small singular values of $A$ are those who contribute most to the variance for the Hutchinson estimator \eqref{plain_tr_estimate:eq} for $\ensuremath{\mathrm{tr}}(A^{-1})$. In high performance computing practice, {\em deflation} has thus become a common tool, see \cite{DeGrand:2004qw,Endress:2014qpa,Gambhir_2017,Giusti:2004yp}, e.g., to reduce the variance: One precomputes the $k$, say, smallest singular values $\sigma_1,\ldots,\sigma_k$ of $A$ in the singular value decomposition \eqref{svd:eq} together with their left singular vectors $u_1,\ldots,u_k$. With the orthogonal projector \begin{equation} \label{eq:orth_proj} \Pi = U_k U_k^*, \mbox{ where } U_k = [u_1 | \cdots | u_k], \end{equation} we now have $A^{-1} = A^{-1} (I-\Pi) + A^{-1}\Pi$ with \begin{equation} \label{eq:defl} A^{-1} (I-\Pi) = \sum_{i=k+1}^n v_i \sigma_i^{-1} u_i^*, \quad A^{-1} \Pi = \sum_{i=1}^k A^{-1}u_iu_i^*. \end{equation} This shows that in $A^{-1}(I-\Pi)$ we have deflated the small singular values of $A$, so that we can expect a reduction of the variance when estimating the trace of this part stochastically. The trace of the second part is equal to the sum $\sum_{i=1}^k u_i^* A^{-1}u_i$ (see \eqref{trace_commute:eq}), and $A^{-1}u_i = \sigma_i^{-1}v_i$. So the second part can be computed directly from the singular triplets computed for the deflation. If $A$ is Hermitian, the deflation approach simplifies and amounts to precomputing the $k$ smallest eigenpairs. We refer to the results in \cite{Gambhir_2017} for a more in-depth analysis and discussion about the heuristics just presented. The deflation approach is still quite costly, since one has to precompute the singular values and vectors, and if the size of the matrix increases it is likely that we have to increase $k$ to maintain the same reduction in the variance. Approximate deflation has thus been put forward as an alternative, see \cite{Balietal2015,Romero_2020}, where one can use larger values for $k$ while at the same time allowing that the contribution of the small singular values to the variance is eliminated only approximately. One then replaces $\Pi$ by a more general projector of the form \[ \Pi = \hat{U}_k(\hat{V}_k^*A\hat{U}_k)^{-1}\hat{V}_k^*A, \enspace \hat{U}_k, \hat{V}_k \in \mathbb{C}^{n \times k} \] where now $\hat{U}_k$ and $\hat{V}_k$ can be regarded as containing approximate left and right singular vectors, respectively, as their columns. Actually, it is sufficient that their range is spanned by such approximations to left and right singular vectors, since the construction of $\Pi$ is invariant under transformations $\hat{U} \to \hat{U}B_U, \hat{V} \to \hat{V}B_V$ with non-singular matrices $B_U,B_V \in \mathbb{C}^{k \times k}$. In the decomposition $A^{-1} = A^{-1}(I-\Pi) + A^{-1}\Pi$ we now have, again using \eqref{trace_commute:eq}, \begin{eqnarray*} \ensuremath{\mathrm{tr}}(A^{-1}(I-\Pi)) &=& \ensuremath{\mathrm{tr}}(A^{-1}) - \ensuremath{\mathrm{tr}}(\hat{U}_k(\hat{V}_k^*A\hat{U}_k)^{-1}\hat{V}_k^*) , \\ \ensuremath{\mathrm{tr}}(A^{-1}\Pi) &=& \ensuremath{\mathrm{tr}}( \hat{U}_k(\hat{V}_k^*A\hat{U}_k)^{-1}\hat{V}_k^*) . \end{eqnarray*} If $k$ is relatively small, the second trace can be computed directly as in the exact deflation approach. If we take larger values for $k$, we can estimate it stochastically. The inexact deflation approach then becomes a two-level Monte-Carlo method. If we look at our multilevel Monte-Carlo decomposition \eqref{multilevel_decomposition:eq} with just tow levels, then it differs from inexact deflation in that the value for $k$ is now very large, namely the grid size at level 2 which usually is $\mathcal{O}(n)$. The matrix $\hat{U}_k$ spanning the approximate singular vectors is replaced by the prolongation operator $P_1$, and $\hat{V}_k^*$ corresponds to the restriction operator $R_1$. The multigrid construction principle should ensure that the range of $P_1$ contains good approximations to $\mathcal{O}(n)$ left singular vectors belonging to small singular values, and similarly for $R_1^*$ with respect to right singular vectors. This is why the variance reduction can be expected to be efficient. We thus have a large value of $k$---proportional to $n$---which targets at a high reduction of the variance of the first term. The second term involves the second level matrix representation, which is still of large size, and its trace estimator will, in addition, still have large variance. This is the reason why we extend the approach to involve many levels, ideally until a level $L$ where we can compute the trace directly, so that we do not suffer from a potentially high variance of a stochastic estimator. To conclude this discussion, we note that several other techniques for variance reduction have been suggested which can also be regarded as two-level Monte-Carlo techniques. For example, \cite{liu2014polynomial,baral2019} take a decomposition $A^{-1}-p(A) + p(A)$ with an appropriately chosen polynomial $p(A)$ and then estimates $\ensuremath{\mathrm{tr}}(A^{-1}-p(A))$ stochastically. The ``truncated solver'' method of \cite{Alexandrou_2014} follows a related idea by subtracting an approximation to the inverse. A similar decomposition with $p$ being a truncated Chebyshev series approximation was considered in \cite{Hanetal2017,Han2015,SaadUbaru2017}, for example, in which case $\ensuremath{\mathrm{tr}}(A^{-1}-p(A)$ is actually neglected. The work then resides in the stochastic estimation of $\ensuremath{\mathrm{tr}}(p(A))$, thus avoiding to solve linear systems. Finally, we refer to \cite{meyer2021hutch++} for a recent further variance reduction technique for Hutchinson's method, enhancing it by using vectors of the form $Av$ with random vectors $v$. \section{Numerical results} \label{numerical_results:sec} We consider three classes of matrices: The standard discrete 2d Laplacian, the 2d gauge Laplacian, and the Schwinger model. These three classes represent an increasingly complex hierarchy of problems which will eventually lead to our final, though yet unreached target, the Wilson-Dirac matrix arising in lattice QCD. The improvements of the multilevel approach compared to ``plain'' Hutchinson \eqref{plain_tr_estimate:eq} are tremendous and typically reach two orders of magnitude or more. This is why we compare against {\em deflated Hutchinson}, where we deflate the $n_\ensuremath{\mathrm{defl}}$ smallest eigenpairs of the matrix $A$. With $U \in \mathbb{C}^{n \times n_\ensuremath{\mathrm{defl}}}$ holding the respective eigenvectors in its columns, we use the projector $\Pi = I-UU^*$ as in \eqref{eq:orth_proj}, resulting in the decomposition \eqref{eq:defl} Therein we estimate $\ensuremath{\mathrm{tr}}(A^{-1}(I-\Pi))$ with the Hutchinson estimator whereas $\ensuremath{\mathrm{tr}}(A^{-1}\Pi) = \sum_{i=1}^{n_\ensuremath{\mathrm{defl}}} \lambda_i^{-1}$ is obtained directly from the deflated eigenpairs. We always performed a rough scan to determine a number $n_\ensuremath{\mathrm{defl}}$ of deflated eigenpairs which s close to time-optimal. The deflated Hutchinson approach usually gains at least one order of magnitude in time and arithmetic cost over plain Hutchinson. All our computations were done on a single thread of an Intel Xeon Processor E5-2699 v4, with a MATLAB R2021a implementation of our numerical experiments for the 2d Laplacian, and in Python for our tests with the gauge Laplacian and the Schwinger model. By default we aimed at a relative accuracy of $\epsilon = 10^{-3}$. This is done as follows: We first perform five stochastic estimates, take their mean and subtract their root mean square deviation, giving the value $\tau$. In the deflated Hutchinson method we now perform stochastic estimates as long as their root mean square deviation exceeds $\epsilon \tau$. For the multilevel Monte-Carlo method we have to prescribe a value for the root mean square deviations $\rho_\ell$ for the stochastic estimation of each of the traces \begin{equation} \label{eq:trace_diff} \ensuremath{\mathrm{tr}}\left(\hat{P}_\ell A_\ell^{-1} \hat{R}_\ell- \hat{P}_{\ell+1} A_{\ell+1}^{-1} \hat{R}_{\ell+1}\right), \enspace \ell = 1,\ldots, L-1 \end{equation} from \eqref{tr_ml_dec:eq}, while we always compute the last term $\ensuremath{\mathrm{tr}}(\hat{P}_L A_L^{-1} \hat{R}_L)$ in \eqref{tr_ml_dec:eq} non-sto\-cha\-stic\-al\-ly as $\ensuremath{\mathrm{tr}}(A_L^{-1} \hat{R}_L\hat{P}_L)$, inverting $A_L$ explicitly. The requirement is to have \[ \sum_{\ell=1}^{L-1} \rho_\ell^2 = (\epsilon \tau)^2, \] so the obvious approach is to put $ \rho_\ell = \epsilon\tau/\sqrt{L-1}$ for all $\ell$, and this is what we do in our first two examples. It might be advantageous, though, to allow for a larger value of $\rho_\ell$ on those level differences where the cost is high, and we do so in Example~\ref{ex:Schwinger}. For each stochastic estimate for \eqref{eq:trace_diff} we have to solve linear systems with the matrices $A_\ell$ and $A_{\ell+1}$. This is done using a multigrid method based on the same prolongations $P_\ell$, restrictions $R_\ell$ and coarse grid operators $A_\ell$ that we use to obtain our multilevel decomposition \eqref{tr_ml_dec:eq}. However, when multigrid is used as a solver, we use the full hierarchy going down to coarse grids of very small sizes, whereas in the multilevel decomposition \eqref{tr_ml_dec:eq} used in multilevel Monte-Carlo we might stop at an earlier level. For all experiments we report mainly two quantities. The first is the number of stochastic estimates that are performed at each level difference \eqref{eq:trace_diff} for multilevel Monte-Carlo together with the number of stochastic estimates in deflated Hutchinson (which always require linear solves at the finest level). These numbers may be interpreted as illustrating how multilevel Monte-Carlo moves the higher variances to the coarser level differences. As a second quantity, we report the approximate arithmetic cost for both methods, deflated Hutchinson and multilevel Monte-Carlo, which we obtain using the following cost model: For every matrix-vector product of the form $Bx$ we assume a cost of $\ensuremath{\mathrm{nnz}}(B)$, the number of nonzeros in $B$. In this manner ,one unit in the cost model roughly corresponds to a multiplication plus an addition. This applies to the computation of residuals, of prolongations and restrictions and the coarsest grid solve in the multigrid solver as well as to the ``global'' restrictions and prolongations $\hat{R}_\ell, \hat{P}_\ell$ used in each stochastic estimate in multilevel Monte-Carlo. For the latter method, we also count the work for the direct computation of the trace at the coarsest level, which involves the inversion of the coarsest grid matrix and additional matrix-matrix products. This cost model thus only neglects vector-vector and scalar operations and is thus considered sufficiently accurate for our purposes. \begin{example} \label{ex:2dLaplace} The discrete 2d Laplacian is the $N^2 \times N^2$-matrix \[ L^N = B \otimes I + I \otimes B, \enspace \text{$I$ the $N \times N$ identity, } B_0 = \begin{bmatrix} 2 & -1\\ -1 & 2 & \ddots\\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{bmatrix} \in \mathbb{C}^{N \times N}, \] which results from the finite difference approximation of the Laplace operator on an equidistant grid in the unit square with Dirichlet boundary conditions. Note that the eigenvalues of $L^N$ are explicitly known, so the trace of the inverse is, in principle, directly available as the sum of the inverses of the eigenvalues. \begin{table} \centering \begin{small} \begin{tabular}{|r|l|cccccc|cc|} \hline \multicolumn{10}{|c|}{2d Laplace} \\ \hline $N$ & &$\ell = 1$&$\ell = 2$&$\ell = 3$&$\ell = 4$&$\ell = 5$&$\ell = 6$& $n_\ensuremath{\mathrm{defl}}$ & $L$ \\ \hline 63 & $n_\ell$ & $63^2$ & $31^2$ & $15^2$ & & & & 92 & 3 \\ & $\ensuremath{\mathrm{nnz}}(L^N_\ell)$ & $1.96e4$ & $8.28e3$ & $1.85e3$ & & & & & \\ \hline 127 & $n_\ell$ & $127^2$ & $63^2$ & $31^2$ & $15^2$ & & & 44 & 4 \\ & $\ensuremath{\mathrm{nnz}}(L^N_\ell)$ & $8.01e4$ & $3.50e4$ & $8.28e3$ & $1.85e3$ & & & & \\ \hline 255 & $n_\ell$ & $255^2$ & $127^2$ & $63^2$ & $31^2$ & $15^2$ & & 64 & 5 \\ & $\ensuremath{\mathrm{nnz}}(L^N_\ell)$ & $3.24e5$ & $1.44e5$ & $3.50e4$ & $8.28e3$ & $1.85e3$ & & & \\ \hline 511 & $n_\ell$ & $511^2$ & $255^2$ & $127^2$ & $63^2$ & $31^2$ & $15^2$ & 76 & 6 \\ & $\ensuremath{\mathrm{nnz}}(L^N_\ell)$ & $1.30e6$ & $5.82e5$ & $1.44e5$ & $3.50e4$ & $8.28e3$ & $1.85e3$ & & \\ \hline \end{tabular} \end{small} \caption{Parameters and quantities for Example~\ref{ex:2dLaplace}} \label{tab:2dLaplace} \end{table} For the multigrid hierarchy we choose $P_\ell$ to be the standard bilinear interpolation from a grid of size $N_{\ell+1} \times N_{\ell +1}$ to one of size $N_\ell \times N_\ell$; see \cite{Trottenberg2000}. Here, the number $N_\ell$ of grid points in one dimension on level $\ell$ is recursively given as $N_{\ell+1} = \lfloor N_{\ell}/2 \rfloor $. The restrictions $P_\ell$ are taken as the adjoints of the interpolations $P_\ell$, and the coarse grid operators $L^N_\ell$ are obtained as Galerkin approximations $L^N_{\ell+1} = R_\ell L^N_{\ell}P_\ell$. So the operator $L^N_\ell$ at level $\ell$ is an $n_\ell \times n_\ell$ matrix with $n_\ell = N_\ell^2$. V-cycle multigrid with one step of Gauss-Seidel pre- and post-smoothing was used as a solver to compute $(L^N_\ell)^{-1}x$ on the various levels. In the solver, we always use the full hierarchy down to a smallest coarsest grid operator size of $N^2 = 7^2$, where we inverted directly using a Cholesky factorization. For multilevel Monte-Carlo we took $L$, the maximum number of levels, such that $N_L = 15$, since it turned out that with this choice the work for the direct computation of the trace at level $L$---requiring the inversion of a matrix of size $15^2 = 225$---was small enough compared to the other cost. Table~\ref{tab:2dLaplace} summarizes the most important quantities for the matrices in this example. \begin{figure}[htbp] \includegraphics[width=.49\textwidth]{Figures/2d/2dN127defl_time.eps} \hfill \includegraphics[width=.49\textwidth]{Figures/2d/2dN127defl_work.eps} \centering \caption{2d Laplace: Comparison of execution times and arithmetic cost for multilevel Monte-Carlo and deflated Hutchinson, varying $n_\ensuremath{\mathrm{defl}}$. \label{fig:2dLaplace_timings}} \end{figure} Figure~\ref{fig:2dLaplace_timings} reports timings and arithmetic cost for a study to approximately find the time-optimal number of eigenvalues to deflate in the deflated Hutchinson method for $N=127$. The left part of the figure shows that there is a substantial decrease in the timings when we start to increase the number of deflated eigenvalues up to around 50, after which the cost for computing the eigenpairs becomes increasingly larger, eventually dominating the overall execution time completely. The straight horizontal line on the bottom indicates the time required in the multilevel Monte-Carlo approach using $L=4$ levels. Even with an optimal number of deflated eigenvalues, deflated Hutchinson takes about 7 times longer than multilevel Monte-Carlo. The right part of Figure~\ref{fig:2dLaplace_timings} reports the arithmetic cost. We do {\em not} include the cost for the computation of the eigenpairs, since we ignore the details of Matlab's \texttt{eigs} function that we use to compute those. The top line in the right part of the figure thus corresponds to the line marked with open squares in the left part, and we see that the cost model quite accurately matches the observed execution times. The two lines below the top line separate this work into the work spent in the computation of the projections $(I-UU^*)x = x - U(U^*x)$, which is $2nn_\ensuremath{\mathrm{defl}}$, and the work spent in the linear solves. The straight horizontal line represents the work for multilevel Monte-Carlo. We see that if a large number of eigenpairs is deflated, we have comparable work in the linear solves in deflated Hutchinson than in multilevel Monte-Carlo, due to the fact that we have to do a similar number of stochastic estimates. Each estimate, though, now becomes substantially more expensive due to the additional projections, so that we still see a factor of about 3 in favor of multilevel Monte-Carlo. We repeat that this comparison does not take the work for computing the eigenpairs in deflated Hutchinson into account. \begin{figure}[htbp] \includegraphics[width=.49\textwidth]{Figures/2d/2dests} \hfill \includegraphics[width=.49\textwidth]{Figures/2d/2dcost} \caption{Comparison of multilevel Monte-Carlo and optimally deflated Hutchinson for the 2d Laplace matrix: no of stochastic estimates on each level difference \eqref{eq:trace_diff} and total work for different $N$. \label{fig:2dLaplace_estimates_work}} \end{figure} We proceed with Figure~\ref{fig:2dLaplace_estimates_work} which reports results for the four size parameters $N=63,127,255,511$. For multilevel Monte-Carlo, the left part of the figure shows the number of stochastic estimates that we perform for each of the differences $\hat{P}_\ell (L^N_\ell)^{-1}\hat{R}_\ell - \hat{P}_{\ell+1}(L^N_{\ell+1})^{-1}\hat{R}_{\ell+1}$ in \eqref{eq:trace_diff}. For comparison, the number of stochastic estimates required in (time optimally) deflated Hutchinson are indicated as dashed horizontal lines; see Table~\ref{tab:2dLaplace} for the values of $n_\ensuremath{\mathrm{defl}}$ that we used. The right part of Figure~\ref{fig:2dLaplace_estimates_work} shows the total amount of work invested in the different methods where, again, we do not count the work for the eigenpair computation in deflated Hutchinson. The figure illustrates the fact that in multilevel Monte-Carlo we do few estimates on the fine, expensive levels and more on the coarse and cheap levels. The plots for the cost in the right part show how this translates into a reduction of the arithmetic cost, reaching about one order of magnitude for the larger values of $N$, even without accounting for the cost for the computation of the eigenpairs in deflated Hutchinson. \begin{figure}[htbp] \centerline{\includegraphics[width=.49\textwidth]{Figures/2d/2dN127scaneps}} \caption{Arithmetic cost for multilevel Monte-Carlo and standard Hutchinson as a function of the target accuracy $\epsilon$. \label{fig:2dN127scaneps}} \end{figure} Our multilevel Monte-Carlo approach performs a Monte-Carlo estimate for each level difference. It therefore exhibits the same quadratic scaling with respect to the target accuracy $\epsilon$ as standard Hutchinson does. This is illustrated in Figure~\ref{fig:2dN127scaneps} for $N=127$. This is a logscale plot, and the quadratic dependence on $\epsilon^{-1}$ is clearly visible for the smaller values of $\epsilon$. For the larger values of $\epsilon$ (to the left), the results are less significant, since we always do a minimum of at least 5 stochastic estimates for each level difference. For $\epsilon = 10^{-1}$ and $\epsilon = 10^{-1.5}$, this is precisely what happens for all the level differences, so the cost for multilevel Monte-Carlo is exactly the same (and probably unnecessarily high) for both these values of $\epsilon$. Note that the arithmetic cost of standard Hutchinson is between 2 and 3 orders of magnitude higher than that of multilevel Monte-Carlo for $\epsilon \leq 10^{-2}$. \end{example} \begin{example} \label{ex:gaugeLaplace} The gauge Laplacian $G^N$ is a modification of the standard 2d discrete Laplace matrix where now the coupling coefficients---called gauge links---are complex numbers of modulus one but with a random phase. The gauge Laplacian represents a first step towards the modeling of gauge field theories in physics, where the random coefficients model the fluctuating background gauge field. With $u_{ij}$ describing the variables belonging to a grid point $(ih,jh)$, $h=1/N$, $i,j = 0,\ldots,N-1$, the coupling described by one row of the gauge Laplacian $G^N \in \mathbb{C}^{N^2 \times N^2}$ is given as \begin{eqnarray*} &4 u_{ij} - e^{\mathrm{i}\Theta_{ij}}u_{i+1,j}-e^{\mathrm{i}\Phi_{ij}}u_{i,j+1} - e^{-\mathrm{i}\Theta_{i-1,j}}u_{i-1,j}-e^{-\mathrm{i}\Phi_{i,j-1}}u_{i,j-1},& \\ &i,j=0,\ldots,N-1,& \end{eqnarray*} where the indices are to be understood $\bmod N$ since we have periodic boundary conditions. Gauge Laplacians are Hermitian and positive semidefinite. Typically, as in our case, they are even positive definite. \begin{table} \centering \begin{small} \begin{tabular}{|r|l|cccc|cc|} \hline \multicolumn{8}{|c|}{Gauge Laplace} \\ \hline $N$ & &$\ell = 1$&$\ell = 2$&$\ell = 3$&$\ell = 4$& $n_\ensuremath{\mathrm{defl}}$ & $L$ \\ \hline 64 & $n_\ell$ & $4096$ & $1354$ & $134$ & & 60 & 3 \\ & $\ensuremath{\mathrm{nnz}}(G^N_\ell)$ & $20480$ & $24900$ & $3172$ & & & \\ \hline 128 & $n_\ell$ & $16384$ & $5440$ & $554$ & & 60 & 3 \\ & $\ensuremath{\mathrm{nnz}}(G^N_\ell)$ & $81920$ & $99448$ & $11300$ & & & \\ \hline 256 & $n_\ell$ & $65536$ & $21802$ & $2348$ & $196$ & 20 & 4 \\ & $\ensuremath{\mathrm{nnz}}(G^N_\ell)$ & $327680$ & $394628$ & $49416$ & $6352$ & & \\ \hline 512 & $n_\ell$ & $262144$ & $87296$ & $9562$ & $924$ & 20 & 4 \\ & $\ensuremath{\mathrm{nnz}}(G^N_\ell)$ & $327680$ & $394628$ & $49416$ & $6352$ & & \\ \hline \end{tabular} \end{small} \caption{Parameters and quantities for Example~\ref{ex:gaugeLaplace}} \label{tab:GaugeLaplace} \end{table} The Python package pyAMG, see \cite{OlSc2018}, contains functions to produce gauge Laplacians with prescribed distributions for the phases $\Theta_{ij}$ and $\Phi_{ij}$ of the gauge links. For our experiments, we produced gauge Laplacians with grid spacing $a=1$ and temperature $\beta = 0.009$ for the distribution of the phases of the gauge links. The pyAMG package provides a variety of algebraic multigrid methods. To build a multigrid hierarchy for gauge Laplacians, we use adaptive smoothed aggregation, where for the adaptive setup we took the parameters \[ \text{\texttt{num\_candidates} } =2, \text{\texttt{ candidate\_iters} } = 5, \text{\texttt{ improvement\_iters} } =8. \] As our linear solver we used V-cycle multigrid with one step of Gauss-Seidel pre- and post-smoothing. Table~\ref{tab:GaugeLaplace} summarizes the most important quantities for the same four lattice sizes as we had for the 2d Laplacian. The table shows that smoothed aggregation yields matrices at level 2 which are a factor 3 less in size but are significantly more dense, since they have even more non-zeros than the matrices at level 1. For the subsequent levels the coarsening is quite aggressive, reducing the sizes by factors of roughly 10, and similarly for the number of non-zeros. \begin{figure}[htbp] \includegraphics[width=.49\textwidth]{Figures/gauge/gaugeLaplace_ests.eps} \hfill \includegraphics[width=.49\textwidth]{Figures/gauge/gaugeLaplace_cost.eps} \caption{Comparison of multilevel Monte-Carlo and optimally deflated Hutchinson for the gauge Laplace matrices: no of stochastic estimates on each level difference \eqref{eq:trace_diff} and total work for different $N$. \label{fig:gaugeLaplace_estimates_work}} \end{figure} In the same manner as Figure~\ref{fig:2dLaplace_estimates_work} did for the 2d Laplacian, Figure~\ref{fig:gaugeLaplace_estimates_work} now reports the number of estimates at the various levels and the arithmetic cost for multilevel Monte-Carlo as well as for optimally deflated Hutchinson. As for the first example the figure illustrates that multilevel Monte-Carlo allows to do few estimates on the fine levels. The gain in work reaches up to a factor of 5 for the larger values of $N$, whereas $N=64$ is too small in this example for multilevel Monte-Carlo to beat deflated Hutchinson. As before, we do not count the arithmetic cost for computing the eigenpairs in deflated Hutchinson in this comparison. \end{example} \begin{example} \label{ex:Schwinger} Our third example is the Schwinger discretization of the 2-di\-men\-si\-onal Dirac operator \cite{Schwinger1962}. This operator describes quantum electrodynamics (QED), the quantum field theory of the electro-magnetic interaction between charged particles via photons. The Schwinger matrix resembles the gauge Laplacian in the sense that there is a periodic nearest neighbor coupling on an equidistant grid in the unit square and that the coupling coefficients are based on complex numbers of modulus 1 with a random phase. The difference is that for Dirac operators there are several (here: two) variables per grid point, representing different spins. With the Pauli matrices \[ \sigma_1 = \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right], \enspace \sigma_2 = \left[ \begin{array}{cc} 0 & i \\ -i & 0 \end{array} \right], \] and the understanding that a grid variable $u_{ij}$ has now two components, i.e.\ $u_{ij} \in \mathbb{C}^2$, representing the different spins at grid point $(ih,jh)$, the periodic couplings is now given as \begin{eqnarray} \label{eq:schwinger_coupling} (4+m)\cdot u_{ij} &-& e^{\mathrm{i}\Theta_{ij}}(I-\sigma_1)u_{i+1,j}-e^{\mathrm{i}\Phi_{ij}}(I-\sigma_2)u_{i,j+1} \\ &-& e^{-\mathrm{i}\Theta_{i-1,j}}(I+\sigma_1)u_{i-1,j}-e^{-\mathrm{i}\Phi_{i,j-1}}(I+\sigma_2)u_{i,j-1}, \\ & &i,j=1,\ldots,N. \end{eqnarray} Note that the Pauli matrices cross-couple the spins. Thus, if we order spin components such that the first spin componen at each grid location comes first and all second spin components follow, the Schwinger matrix has the form \begin{equation} \label{eq:matrix_Schwinger} S^N = \left[ \begin{array}{cc} G^N & B \\ -B^* & G^N \end{array} \right] \end{equation} where the matrices $G^N$ are gauge Laplacians and the matrix $B$ represents the spin cross-coupling. We used a Schwinger matrix arising from a thermalized configuration within a Markov process. This guarantees that the random gauge links obey a Boltzmann distribution with a given temperature parameter. The matrix belongs to an $N \times N$ lattice with $N = 128$, and is thus of size $2N^{2}{\times}2N^{2} = 32,768 \times 32,768$. The multigrid hierarchy for the Schwinger matrix is obtained through an aggregation based approach similar to the one typically used for the 4-dimensional Wilson-Dirac operator; see \cite{MGClark2010_1,FroKaKrLeRo13}. Giving all details would be beyond the scope of this paper, so here is a rough sketch: At each level, the operator represents a periodic nearest neighbor coupling an a 2-dimensional lattice of decreasing size. At each lattice site we have several, $d$ say, degrees of freedom (dofs), i.e.\ variables belonging to a lattice site are vectors of length $d$. When going from one level to the next, we subdivide the lattice into small sublattices---the aggregates. Each aggregate becomes a single lattice site on the next level. The corresponding restriction operator is obtained by computing (quite inexact) approximations to the $d$ smallest eigenvectors, the components of which are assembled over the aggregates and orthogonalized. This gives restriction operators which are orthonormal, and since we take the prolongations to be their adjoints, we are in the simplified situation of Remark~\ref{rem:simplified} for estimating the traces of the differences in multilevel Monte-Carlo. The Schwinger matrix is not Hermitian, but its eigenvalues come in complex conjugate pairs. This is due to a non-trivial symmetry induced by the spin structure that can be seen from \eqref{eq:matrix_Schwinger}, \[ J S^N = \left(S^N\right)^* J, \enspace \mbox{ where } J = \left[ \begin{array}{cc} I & 0 \\ 0 & -I \end{array} \right], \] so that to each right eigenpair $(x,\lambda)$ of $S^N$ corresponds a left eigenpair $((Jx)^*, \bar{\lambda})$. This spin symmetry can be preserved on the coarser levels if one doubles the dofs; see \cite{MGClark2010_1,FroKaKrLeRo13}. We built a multigrid hierarchy with four levels. For the aggregates, at all levels we always aggregated $4 \times 4$ sublattices into one lattice site on the next level, and we started with 2 dofs for the second level and 4 for all remaining levels. Those dofs are then doubled because we implemented the spin structure preserving approach. Table~\ref{tab:Schwinger} summarizes the most important information on the multigrid hierarchy. It also shows the five different (negative) values for the mass $m$ that we used in our experiments. These values are physically meaningful, and for all of them the spectrum of $S^N$ is contained in the right half plane. As $m$ becomes smaller, $S^N$ becomes more ill-conditioned, so the work for each stochastic estimate increases. When solving linear systems at the various levels, we used one V-cycle of multigrid with two steps of Gauss-Seidel pre- and post-smoothing as a preconditioner for flexible GMRES \cite{Saad2003}. Our implementation was done in Python\footnote{The programs can be found in the GitHub repository https://github.com/Gustavroot/MLMCTraceComputer}. \begin{table} \centering \begin{small} \begin{tabular}{|r|l|cccc|c|} \hline \multicolumn{7}{|c|}{Schwinger model} \\ \hline $N$ & &$\ell = 1$ &$\ell = 2$ &$\ell = 3$ &$\ell = 4$ & $L$ \\ \hline 128 & $n_\ell$ & $2\cdot 128^2$& $4\cdot 32^2$& $8\cdot 8^2$ & $8\cdot2^2$ & 4 \\ & $\ensuremath{\mathrm{nnz}}(S^N_\ell)$ & $2.94e5$ & $1.64e5$ & $2.46e4$ & $1024$ & \\ \hline \hline $m$ & \multicolumn{1}{c}{$-0.1320$} & $-0.1325$ & $-0.1329$ & $-0.1332$ & $-0.1333$ & \\ $n_\ensuremath{\mathrm{defl}}$ & \multicolumn{1}{c}{$384$} & $384$ & $512$ & $512$ & 512 & \\ \hline \end{tabular} \end{small} \caption{Parameters and quantities for Example~\ref{ex:Schwinger}} \label{tab:Schwinger} \end{table} \begin{figure}[htbp] \includegraphics[width=.49\textwidth]{Figures/schwinger/schwinger_ests.eps} \hfill \includegraphics[width=.49\textwidth]{Figures/schwinger/schwinger_cost.eps} \caption{Multilevel Monte-Carlo and deflated Hutchinson for the Schwinger matrix: no of stochastic estimates on each level difference \eqref{eq:trace_diff} and total work for different masses $m$. \label{fig:Schwinger_estimates_work}} \end{figure} Figure~\ref{fig:Schwinger_estimates_work} shows our results. We tuned the required root mean square deviation $\rho_\ell$ at each level due to the observation that this time the root mean square deviation is comparably small on the last level difference. The values we chose, independently of the mass parameter $m$, are $\rho_{1}=\sqrt{0.4}\epsilon\tau$, $\rho_{2}=\sqrt{0.55}\epsilon\tau$ and $\rho_{3} = \sqrt{0.05}\epsilon\tau$ for all masses. As in the other examples, we compare against deflated Hutchinson with a time-optimal number of deflated eigenpairs, and we do not count the work for the eigenpair computation. The figure shows that multilevel Monte-Carlo becomes increasingly more efficient over deflated Hutchinson as the masses become smaller, ending up in a one order of magnitude improvement in work for the smallest. Interestingly, we also see that the number of stochastic estimates to be performed on each level in multilevel Monte-Carlo depends on the masses only very mildly, whereas the number of stochastic estimates increase rapidly in deflated Hutchinson. \end{example} \section*{Conclusion} We presented a novel multilevel Monte-Carlo approach to stochastically estimate the trace of a matrix. The method relies on the availability of a multigrid hierarchy and estimates traces of differences of matrices at the various levels. The method is efficient if the variances at the earlier, finer level differences, are small, and this is what we observed in three different examples, two of which used an adaptive algebraic approach to determine the multigrid hierarchy. \section*{Acknowledgments} We would like to thank Francesco Knechtli and Tomasz Korzec for stimulating discussion including references on work done in the lattice QCD community, and Karsten Kahl for providing us with the thermalized Schwinger model matrix. \bibliographystyle{siamplain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Living cells actively sense and respond to the physical geometry and stiffness of their environment, which in turn affects a variety of cellular processes, such as growth, differentiation, morphogenesis, spreading and motility~\cite{Discher2005}. Cell-matrix adhesion is mediated by integrin complexes, referred to as focal adhesions, that bind to specific ligands on the underlying matrix. Focal adhesions are mechanically linked to the actomyosin cytoskeleton inside the cell, that in turn generates contractile forces on the extracellular matrix. The interplay between substrate stiffness, intracellular contractility and the extracellular adhesion forces controls the cell morphology and its mechanical behavior. For instance, cells adhering to soft substrate are generally found to spread less and have round morphology, while cells on stiff substrates have greater spread area with more branched shapes~\cite{Yeung2005}. Powerful techniques have been developed in recent years to measure the traction forces exerted by adherent cells on compliant substrates~\cite{Harris1980}. Traction Force Microscopy is used to probe the traction stresses exerted by cells on continuous elastic gels. The stresses are inferred from measurements of the displacements of fiducial markers embedded in the gel before and after cell detachment~\cite{Dembo1999,Butler2002}. In a second technique cells plated on microfabricated pillar arrays induce bending of the elastic micropillars. The traction forces are then obtained from by assuming a linear Hooke's law relation between the measured bending and the forces ~\cite{Tan2003}. These experiments have demonstrated that the mechanical response of adherent cells is controlled by a complex interplay of substrate stiffness and geometry, myosin activity and extracellular matrix proteins. Adhesive micro patterning has also been used as a tool for both controlling cell shape and study the interplay between shape and cytoskeletal organization and architecture~\cite{Thery2010}. These studies have shown that when strongly adhesive patterns force the cell boundary to exhibit regions of high curvature, traction stresses tend to be concentrated in these regions, while stress fibers develop along cell boundaries linking non-adhesive zones, confirming the crucial role of the cytoskeletal contractility and architecture in controlling cellular stresses and morphology~\cite{Rape2011}. The role of adhesion geometry in controlling traction force distribution has been addressed theoretically using network models and continuum mechanical models. While models of continuum mechanical elements coupled to bio-chemical agents have been used before to describe the traction force distribution by adherent cells~\cite{deshpande2006}, continuum minimal models inspired by thermoelasticity~\cite{Edwards2011} or active gel theory~\cite{Banerjee2011} have recently provided new key analytical results. Network models of the contractile cytoskeleton have also been used to describe the relation between force distribution and shape of adherent cells~\cite{lemmon2010,Torres2012}, including networks of Hookean springs as well as cable networks that incorporate the asymmetry of the elastic response of biopolymers such as filamentary actin to compression version extension, with and without the explicit inclusion of contractility. In particular, the active cable network reproduces the arc morphology of cell boundaries pinned by strong local adhesions that has been seen in experiments~\cite{Lemmon2005}. The relationship between cell shape and adhesion geometry has also been studied by modeling cells as contractile films bounded by the elastic cortex~\cite{Barziv1999,Bischofs2009,Banerjee2012b}. In this paper we consider a continuum model of cells as linear, active elastic media and demonstrate that the introduction of activity as a \emph{spatially homogeneous} contractile, hence negative, contribution to the pressure is sufficient to reproduce the spatial inhomogeneous distribution of traction and cellular stresses observed in experiments for a number of cell geometries. An interesting extension of our work will be to introduce nonlinearity in the continuum model to incorporate an asymmetric response to compression and stretching. This asymmetry, arising from the nonlinear force-extension curve of actin filaments, is known to be important in controlling the contractile behavior of isotropic gels~\cite{Liverpool2009,Banerjee2011b} and may alter the stress distribution in adhering cells. In the next section we introduce our continuum model of adherent cells as active contractile elastic media. We then use the model to study the effect of the geometry of the adhesion region on controlling the spatial distribution of stresses in the cell. The model can be solved analytically for a circular cell, where we obtain an expression for the cell spread area as a function of substrate stiffness and show that our results compares favorably to experiments (inset of Fig.~\ref{fig:spreading}). The cases of elliptical, square and triangular cells are solved numerically. We show that the geometry of the adhesive region strongly affect the stress distribution, with traction stresses concentrated in regions of high curvatures or at sharp corners (Fig.~\ref{fig:2dshapes}). In section 3.3 we provide an analytical argument that quantifies the correlation between traction stress magnitude and curvature of the cell boundary and discuss in section 3.4 the relative roles of shear and compressional deformations in controlling the stress distribution. We conclude with a brief discussion. \section{Adherent cell as a contractile gel} We consider a stationary cell adhering to an elastic substrate via stable focal adhesion complexes. We further assume that the cell has attained its optimum spread area on the substrate, with an average height $h$ much thinner than its perimeter. In mechanical equilibrium, the condition of local force-balance translates to $\partial_\beta \sigma_{\alpha \beta}=0$, where ${\bm \sigma}$ is the three-dimensional stress tensor of the cell with greek indices taking values $x,y$ and $z$. For a thin cellular film we average the cellular force-balance equation over the cell thickness $h$. In-plane force balance is given by \begin{equation} \label{eq:balance-2d} \partial_j\sigma_{ij} + \partial_z \sigma_{iz}=0\;, \end{equation} with $i, j$ denoting in-plane coordinates. We assume that the top surface of the cell is stress free, $\sigma_{iz}({\bf r}_\perp,z=h)=0$, whereas at the cell-substrate interface $z=0$, the cell experiences lateral traction stresses given by $\sigma_{iz}({\bf r}_\perp,z=0)=Yu_i({\bf r}_\perp,z=0)$. Here, $Y$ denotes the substrate rigidity parameter, representing the cell-substrate anchoring strength, and ${\bf u}({\bf r}_\perp,z)$ is the in-plane deformation field of the cellular medium. The thickness-averaged force balance equation then reads~\cite{Banerjee2011,Edwards2011}, \begin{equation}\label{eq:force-balance} h\partial_j\overline{\sigma}_{ij}=Yu_i\;, \end{equation} where $\overline{\sigma}_{ij}({\bf r}_\perp)=\int_0^h(dz/h)\sigma_{ij}({\bf r}_\perp,z)$. It is worthwhile to mention that the assumption of in-plane traction forces is a good approximation for fully spread stationary cells making almost zero contact angle with the substrate. During the early stages of spreading and migration, cells can exert appreciable out-of-plane traction forces via rotation of focal adhesions~\cite{Legant2013}. In the following we will drop the overbear indicating the average and refer to thickness averaged quantities throughout. The quantity $T_i=Yu_i$ is a stress in three dimensions, i.e., a force per unit area. It describes the in-plane traction force per unit area that the cell exerts on the substrate. The assumption of local elastic interactions with the substrate strictly holds on elastic substrates that are much thinner than the lateral size of the cell~\cite{Banerjee2012} or on micropillar substrates~\cite{Edwards2011}. The substrate rigidity parameter $Y$ depends on the stiffness of the underlying substrate as well as on the density $\rho_f$ and stiffness $k_f$ of focal adhesions. For an elastic substrate of shear modulus $\mu_s$ and thickness $h_s$, $Y$ takes the simple form~\cite{Banerjee2012}, $Y^{-1}=\frac{1}{k_f \rho_f} + \frac{1}{\mu_s/h_s}$. We model the cell as an isotropic and homogeneous elastic material with additional internal active stresses due to actomyosin contractility. The constitutive relation for the cellular stress tensor is then given by, \begin{equation}\label{eq:stress} \sigma_{ij}=\frac{E}{2(1+\nu)}\left(\frac{2\nu}{1-2\nu}\bm\nabla\cdot{\bf u}\ \delta_{ij} + \partial_j u_i + \partial_i u_j \right) + \sigma_a \delta_{ij} \;, \end{equation} where $E$ and $\nu$ denote the Young modulus and Poisson ratio of the cellular material, respectively. Actomyosin contractility is modeled as a negative contribution to the local pressure, corresponding to $\sigma_a>0$. The assumption of linear elasticity is valid on time scales shorter than cytoskeletal turnovers, that are indeed slowed down by strong adhesion to the substrate. Equations \eref{eq:force-balance} and \eref{eq:stress}, subject to the boundary condition $\sigma_{ij}n_j \vert_\Omega=0$, wtih $\Omega$ the cell boundary and ${\bf n}$ the outward unit normal on $\Omega$, completely describe the equilibrium of an adherent cell. As a consequence of the stress free condition at the lateral cell boundary, the net traction force transmitted by the cell to the substrate vanishes, i.e., $\int_A d^2{\bf r}\ Yu_i=\oint_\Omega ds\ \sigma_{ij}n_j=0$. It is instructive to consider two limiting cases for the anchoring strength. When the cell is rigidly anchored onto the substrate, corresponding to $Y\rightarrow \infty$, we find ${\bf u}=0$, defining the reference state for elastic deformations. In our model the reference cell shape is then dictated by the geometry of the adhesion patch, which can be controlled in experiments by micropatterning substrates by adhesion proteins. In contrast, when $Y\rightarrow 0$, the cell does not adhere to the substrate and the equilibrium state is uniformly contracted state, with a density enhancement $\delta\rho=-\bm\nabla\cdot{\bf u}=\sigma_a(1+\nu)(1-2\nu)/E(1-\nu)$. In the following, we investigate analytically and numerically solutions of the cell elasticity equations \eref{eq:force-balance} and \eref{eq:stress} subject to stress-free boundary conditions in various planar geometries. \section{Results} \subsection{Spatial distribution of traction stresses is sensitive to adhesion geometry} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{fig1.pdf} \caption{\label{fig:2dshapes}Equilibrium cell shapes for various adhesion patterns : Circle (top left), ellipse (top right), square (bottom left) and equilateral triangle (bottom right). The color map indicates magnitude of the traction $\vert {\bf T}\vert=Y\vert{\bf u}\vert$, and the arrows demote the direction of the traction vectors. The reference shapes for all the four patterns have an equal area of $1000\ \mu m^2$. The other parameters are: $E=1~{\rm kPa}$, $\nu=0.4$, $\sigma_a=1~{\rm kPa}$, $\mu_s=10~{\rm kPa}$, $h_s=30~ \mu{\rm m}m$, $h=0.2~ \mu{\rm m}$.} \end{figure} The spatial distribution of traction stresses exerted by cells on substrate and the corresponding organization of stress and deformation inside the cell are affected by the geometry of adhesive patterns. Using micropatterning techniques, cell shapes can be constrained to adhere to controlled geometrical patterns~\cite{Chen1997,Thery2006}. In our model the shape determined by the pattern in the limit of infinite adhesion strength provides the reference shape for the cell. Here we investigate four reference cell shapes: circle, ellipse, square and equilateral triangle. These are chosen to have the same reference area but different perimeters. The case of a circular cell can be treated analytically, as described below. For the other shapes the elasticity equations~\eref{eq:force-balance} and \eref{eq:stress} are solved numerically using the \textsc{MATLAB} pde toolbox. We assume the contractility $\sigma_a$ to be uniform and of order of the cellular Young's modulus. Heatmap of traction stresses are shown in Fig.~\ref{fig:2dshapes}. In all cases the traction stresses are concentrated at the cell periphery, irrespective of the reference shape. The magnitude of the local traction stress is, however, higher in regions of high curvatures or at sharp corners. For a circular cell, Eqs.~\eref{eq:force-balance} and \eref{eq:stress} can be solved analytically~\cite{Edwards2011,Mertz2012}. Assuming in-plane rotational symmetry, it is convenient to use polar coordinates $r$ and $\theta$, denoting radial and angular coordinates, and demand that no quantity depend on $\theta$. The equation for the radial displacement $u_r$ about a circular reference state of radius $R_0$, is then given by \begin{equation} \label{eq:ur} r^2\partial_r^2 u_r + r\partial_r u_r - (1+r^2/\ell_p^2) u_r =0\;, \end{equation} where the \textit{penetration length} $\ell_p$ describes the localization of traction stresses at the cell boundary. It is given by : \begin{equation} \label{eq:lp} \ell_p^2=\frac{Eh(1-\nu)}{Y(1+\nu)(1-2\nu)}\;, \end{equation} and is essentially controlled by the ratio of the cell stiffness $\sim E$ to the substrate rigidity $\sim Y$. The penetration length is short on stiff substrates and increases with decreasing substrate rigidity. The solution of Eq.~\eref{eq:ur} with the boundary conditions $\sigma_{rr}(r=R_0)=0$ and $u_r(r=0)=0$ is given in terms of modified Bessel functions of the first kind as, \begin{equation}\label{eq:circle} u_r(r)=-\sigma_a R_0\left[\frac{(1+\nu)(1-2\nu)}{E(1-\nu)}\right]I_1(r/\ell_p)g(R_0/\ell_p)\;, \end{equation} with $g(s)=\left[s I_0(s)-\frac{1-2\nu}{1-\nu}I_1(s)\right]^{-1}$. As anticipated, the deformation $u_r$ vanishes for all $r$ when $Y\rightarrow\infty$, when the adhering circular cell is maximally spread and has its largest undeformed radius $R_0$. \subsection{Cell spread area is sensitive to substrate stiffness and contractility} The optimal spread area of the cell is controlled by the interplay between cell contractility, as described by the active pressure $\sigma_a$, and the traction forces on the substrate. In the case of a circular cell, where the deformation induced by adhesion is given by Eq.~\eref{eq:circle}, the steady state cell area is given by, \begin{equation} \label{eq:area} A=\pi (R_0+u(R_0))^2\;, \end{equation} with $R_0$ the reference radius corresponding to the maximal spread area $A_\infty=\pi R_0^2$ attained on an infinitely rigid substrate, where $u_r(r)=0$. To make contact with experiments, we investigate the ratio $A/A_\infty$, the relative cell spread area, as a function of substrate stiffness and contractility. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{fig2.pdf} \caption{\label{fig:spreading}Optimal shape of a triangular cell for different values of the active pressure $\sigma_a$ and the substrate shear modulus $\mu_s$, with $E= 1~{\rm Pa}$. The color map represents the magnitude of the displacement vector $\vert{\bf u}\vert$ (proportional to the traction force) about an equilateral triangular reference shape of area $1000~\mu{\rm m}^2$. The cell spread area increases with increasing substrate stiffness and decreases with increasing $\sigma_a$. Inset (Left) : Least-square fit of the relative cell spread area $A/A_\infty$ obtained from the model using Eq.~\eref{eq:area} (solid) to the experimental data reported in Ref.~\cite{Chopra2011} (solid red circles). The fitting parameters are $E=911~{\rm Pa}$ and $\sigma_a=1589~{\rm Pa}$. Inset (Right) : Relationship between cellular Young's modulus $E_c$ and contractility $\sigma_a$. Here we tune $\sigma_a$ to desired values and then determine the fitting parameter $E_c$ using data in Ref.~\cite{Chopra2011}. Other parameters : $\nu=0.4$, $h_s=30~\mu{\rm m}$, $h=0.2~\mu{\rm m}$.} \end{figure} On stiff substrates, where $R_0\gg \ell_p$, i.e., the traction stress extends over a length much smaller than the reference cell radius, $u_r(R_0)\simeq -\sigma_a \ell_p/B$, where the compressional modulus $B$ is given by $B=E(1-\nu)/\left[(1+\nu)(1-2\nu)\right]$. The relative spread area then takes the simple form $A/A_\infty \simeq \left(1-\frac{\sigma_a}{R_0}\sqrt{h/BY}\right)^2$. Letting $Y\simeq \mu_s/h_s$, we note that increasing substrate stiffness increases relative spread area, with $A/A_\infty \rightarrow 1$ as $\mu_s \rightarrow \infty$, in qualitative agreement with experiments~\cite{Yeung2005,Ghibaudo2008,Chopra2011}. In contrast, increasing the contractile pressure $\sigma_a$ reduces the optimal cell spread area, consistent with the experimental observation that myosin-II activity retards cell spreading~\cite{Wakatsuki2003}. To make a quantitative comparison with experiments, we fit Eq.~\eref{eq:area} to experimentally reported data on the projected area of cardiac myocytes cultured on N-cadherin coated PA gels of varying stiffnesses~\cite{Chopra2011}. Here the maximal spread area $A_\infty$ is taken to be equal to the cell projected area on a glass substrate (shear modulus $\sim$ 30 GPa), which is $\simeq 690\ \mu m^2$. The fit, shown in the left inset of Fig.~\ref{fig:spreading}, is obtained using the active contractility $\sigma_a$ and the cellular Young's modulus $E$ as the fitting parameters. A least-square fit gives us $E=911~{\rm Pa}$ and $\sigma_a=1589~{\rm Pa}$. Although the strength of contractility is likely to depend on cell type, it is worth highlighting that the fit value for $\sigma_a$ is of the same order of magnitude as previously used in Ref.~\cite{Mertz2012} to fit the measured value of the surface tension of a colony of epithelial cells. Next, we tune the contractility $\sigma_a$, which can be artificially controlled through pharmacological interventions, and determine the corresponding best fit value of the cellular Young's modulus $E_c$. Our result (Fig.~\ref{fig:spreading}, right inset) indicates a linear relationship between the cellular Young's modulus and the contractile stress. There are indeed experimental data available~\cite{Wang2002} that show that the cell stiffness increases linearly with contractility for adherent cells~\footnote{We thank the anonymous referee for suggesting this fit and pointing out to us Ref.~\cite{Wang2002}.}. This suggests that our model could be used to infer contractility from measurements of cellular stiffness. Figure~\ref{fig:spreading} also demonstrates the competing roles of contractility and adhesion in controlling optimal cell shapes for a chosen triangular reference state. On softer substrates the triangular cell retains its topology and contracts by an amount proportional to $\sigma_a$, whereas on stiffer substrates the corners tend to form protrusions. \subsection{Traction forces increase with cell boundary curvature} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{fig3.pdf} \caption{\label{fig:curv}(a) Force-balance on a thin slice of cellular material at the cell boundary. (b) Force-balance at a generic sharp corner with opening angle $\phi$. (c) Traction stress magnitude at the cell edge as a function of the local curvature $\kappa$ for the elliptical cell of Fig.~\ref{fig:2dshapes}. } \end{figure} When the boundaries of the adhesion pattern exhibits non-uniform curvature, the traction stresses are higher at regions of high curvatures. This is seen for example in Fig.~\ref{fig:2dshapes} for the case of an elliptical reference shape. To justify this we propose a simple analytical argument based on local force balance. Consider a thin slice of cellular material at the cell periphery of width comparable to penetration length $\ell_p$ and arc length $R\Delta\theta $ much less than the cell perimeter (Fig.~\ref{fig:curv}(a)), with $1/R$ the local curvature of the cell element. At the outer edge of this element, the only force on the cell is the reaction to the traction by the cell on the substrate traction, of areal density $-{\bf T}$, with ${\bf T}=Y{\bf u}$. This yields an outward total force on the outer edge of the cell element of magnitude $TR\Delta\theta\ell_p$, with $T>0$. At the interior edge, the cellular element experiences a contractile force of magnitude $\sigma_n (R-\ell_p)\Delta\theta\ell_p$, where $\sigma_n$ is the normal stress pulling the inner contour inwards and has contributions from active as well as passive elastic stresses. The lateral stresses $\sigma_t$ contributes to an effective line tension $\sigma _t \ell_p R\Delta\theta$ of the cell element. Due to the curvature of the boundary element, the line tension generates an inward Laplace pressure of magnitude $\sigma_t \ell_p/R$. Local balance of forces then yields, \begin{equation} T R\Delta\theta\ell_p-\sigma_n (R-\ell_p)\Delta\theta\ell_p=R\Delta\theta\ell_p\sigma_t\frac{\ell_p}{R}\;. \end{equation} The above law can be written down in a compact form as, \begin{equation}\label{eq:curv} T=\sigma_n + (\sigma_t-\sigma_n)\ell_p \kappa\;, \end{equation} with $\kappa=1/R$, the local curvature of the boundary element. Equation~\eref{eq:curv} then tells us that local magnitude of traction increases linearly with increasing boundary curvature. The lateral and normal stresses $\sigma_t$ and $\sigma_n$ can be expressed in terms of the local cellular stresses in polar coordinates as $\sigma_t = \sigma_{\theta\theta}-\partial_\theta\sigma_{r\theta}$ and $\sigma_n=\sigma_{rr}$. The linear dependence of $T$ on $\kappa$ strictly holds in the limit $\ell_p \kappa \ll 1$. In addition, non-local elastic interactions can also affect the dependence of traction magnitude on local curvature. Figure.~\ref{fig:curv}(c) shows the dependence of the magnitude of the traction stress at the cell boundary on local curvature for an elliptical cell as shown in Fig.~\ref{fig:2dshapes}. For low $\kappa$, the traction stress magnitude increases linearly with $\kappa$ before reaching a plateau at higher values of $\kappa$. When the cell boundary exhibits a sharp corner with opening angle $\phi$, as shown in Fig.~\ref{fig:curv}(b), the local force-balance is given by, \begin{equation} T=\sigma_n + 2\sigma_t \cos{(\phi/2)}\;, \end{equation} where $\sigma_n$ acts along the bisecting line of the corner. Hence smaller the opening angle, the larger is the traction force. \subsection{Mechanical anisotropy induced by geometric anisotropy} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{fig4.pdf} \caption{\label{fig:stress}Cell shape anisotropy correlates with internal stress anisotropy. (a) Heatmap of internal compressive stress $\sigma$ (left) and maximum shear stress $\sigma_s$ (right) corresponding to various reference shapes : circle, ellipse, square and equilateral triangle. The reference shapes all have an equal area of $1000\ \mu m^2$. (b) Average maximum shear $\bar{\sigma}_{s}$ as a function of eccentricity $e$ for elliptical cells of same reference area ($1000\ \mu m^2$). Equilibrium shapes with colorplot of $\mu$ are given as plot markers. Parameters : $E=1$ kPa, $\nu=0.4$, $\sigma_a=1$ kPa, $\mu_s=10$ kPa, $h_s=30\ \mu m$, $h=0.2\ \mu m$.} \end{figure} The spatial distribution of internal stresses $\sigma_{ij}$ within the cell depends on cell shape, which is in turn controlled by the geometry of the adhesive region. Experimentally ${\bm \sigma}(x,y)$ can be obtained from the measured distribution of traction stresses ${\bf T}(x,y)$, inverting the local force-balance condition $\partial_j \sigma_{ij}= T_i$~\cite{Tambe2011}. The elasticity equations Eqs.~\eref{eq:force-balance} and \eref{eq:stress} can be recast as a single partial differential equation for the internal stress tensor $\sigma_{ij}$, given by \begin{equation} \label{eq:internal-stress} \ell_p^2\left[\partial_i\partial_k\sigma_{kj}\right]^S + \delta_{ij}\sigma_a = \sigma_{ij} + \frac{1-2\nu}{\nu}\delta_{ij}\left(\sigma_{kk}-2\sigma_a\right)\;, \end{equation} where $[...]^S$ denotes symmetrization with respect to indices that are not summed over, i.e., $\left[\partial_i\partial_k\overline{\sigma}_{kj}\right]^S=\frac12\left[\partial_i\partial_k\overline{\sigma}_{kj}+\partial_j\partial_k\overline{\sigma}_{ki}\right]$. We have investigated numerically the solution of Eq.~\eref{eq:internal-stress} with stress free boundary condition $\sigma_{ij}n_j=0$. To understand the role of shear and compressional deformations in different geometries, it is instructive to diagonalize the stress tensor and display the results in terms of linear combinations of the eigenvalues $\sigma_1$ and $\sigma_2$. The sum $\sigma=\frac{1}{2}(\sigma_1 + \sigma_2)$ is simply half the trace of the stress tensor and describes compressional deformations. The difference $\sigma_s=\frac{1}{2}\vert \sigma_1 -\sigma_2 \vert=\sqrt{[\sigma_{xx}-\sigma_{yy}]^2+4\sigma_{xy}^2}$, is controlled by normal stress $\sigma_{xx}-\sigma_{yy}$ and shear stress $\sigma_{xy}$. For an isotropic reference shape, such as the circle, $\sigma_1=\sigma_2$ and $\sigma_s= 0$, whereas for anisotropic shapes such as the ellipse, one expects nonzero values for the local maximum shear $\sigma_s$. Fig.~\ref{fig:stress}(a) shows heatmaps of the spatial distribution of $\sigma$ and $\sigma_s$ for various reference shapes - circle, ellipse, square and equilateral triangle. Irrespective of the shape of the adhesion geometry, $\sigma$ is maximum at the cell center, indicating build-up of compressive stresses. The compressional stress $\sigma$ always vanishes at the boundary, and it does so more rapidly at regions of high curvature or at sharp corners. In contrast, the shear stress $\sigma_s$ is identically zero for isotropic shapes, defined as those that have a gyration tensor that is diagonal, with equal eigenvalues. The circle, triangle and square are all in this class. Local stress anisotropy as measured by $\sigma_s$ is nonzero for elliptical shapes and shear stresses build up at the center of the ellipse. The shape anisotropy of ellipses can be quantified by their eccentricity $e=\sqrt{1 - (b/a)^2}$, with $a$ and $b$ the semi-major and semi-minor axes. Figure~\ref{fig:stress}(b) shows the spatial average of $\sigma_s$ over the area $A$ of the cell, defined as $\bar{\sigma}_s=\frac{1}{A}\int_A d^2{\bf r}\ \sigma_s$, as a function of the eccentricity $e$. The average shear stress $\bar{\sigma}_s$ increases with $e$ with a sharp rise as $e\rightarrow 1$, indicating a positive relationship between geometrical and mechanical anisotropy in adherent cells. Our theoretical model thus confirms the experimental result that cell mechanical anisotropy increases with increasing aspect ratio, as previously reported for single endothelial cells with the same spread area~\cite{roca2008}. \section{Discussion} We have used a continuum model of an adherent cell on a substrate as an active contractile medium to study the role of adhesion geometry in controlling cell shape, cell spreading and the spatial distribution of traction stresses. More realistic future modeling should take into account that a cell is a highly heterogeneous material with spatially varying stiffness~\cite{Heidemann2004}. It is however intriguing to note that the simplified assumption of homogeneity and isotropy in the underlying cytoskeletal network can reproduce several of the known experimental results. The central input of the model is the cell contractility or activity $\sigma_a$, a negative contribution to the pressure that enters the constitutive equation for the cellular material. In general, $\sigma_a$ will be determined by the concentration and activity of myosin proteins cross linking the actin cortex and controlling the formation of stress fiber. In our model $\sigma_a$ is assumed to be a constant parameter, to be determined by fitting experiments. We consider cells adhering to flat substrates that have been patterned with adhesive patches, consisting for instance of fibronectin coatings, of specific geometry and examine the role of the geometry of the adhesive patch in controlling the spatial distribution of stresses in the cellular material. The reference state for our cell is the limit of infinitely strong adhesion, where the cell shape and lateral extent and determined entirely by the shape and size of the adhesive patch. For finite adhesion strength, cell elasticity and contractility yield deviations form this reference state. We restrict ourselves to considering continuous or densely spaced adhesion sites. For discrete or sparsely distributed adhesion sites, non-adherent segments in the cell boundary could likely exhibit morphological transitions induced by contractile activity and substrate stiffness~\cite{Banerjee2012b}. In agreement with experimental observations, we find that cells spread more on stiff substrates and we provide an expression for the cell area versus substrate stiffness for the case of a circular cell. We show that this expression fit the data for spread areas of cardiac myocytes on substrates of various sitffness values(see inset of Fig. 2). We demonstrate analytically and numerically that strong traction stresses correlate with regions of high cell boundary curvature, in agreement with experimental observations. Further, as reported in experiments on single endothelial cells, our model demonstrates that cell mechanical anisotropy is higher on elongated cells than on rounded ones for fixed area~\cite{roca2008}. Understanding the relation between cell morphology, the cell's mechanical response and cell fate is an important question in cellular biophysics. Our simple model highlights the correlation between the geometry of adhesion sites and cell morphology and demonstrates that traction forces by cells can be tuned by controlling the geometry of adhesive regions. An important open question not addressed by this simple model where the adhesive patch geometry solely controls the cell shape is how cell morphology is determined by the interplay of cell-substrate adhesion and dynamical reorganization of the cytoskeletal architecture in response to the adhesion stimulus. To understand this it will be necessary to incorporate the dynamical feedback between actin reorganization and adhesion kinetics. \vspace{0.2in} This work was supported by the National Science Foundation through award DMR-1004789. \section*{References} \bibliographystyle{unsrt}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The Dirac oscillator represents an example of relativistic exactly solvable quantum model. It was firstly proposed by It\^{o} and collaborators to replace the momentum operator ${\bf P}$ in the free particle's Dirac equation by combination ${\bf P}-im\omega{\bf X}\beta$ where ${\bf X}$ was the position operator, $m$ being the particle's mass and $\omega$ the oscillator frequency. Then unusual accidental degeneracy of the Dirac oscillator's spectrum was investigated by Cook \cite{Cook_lett_NC_71}. Supersymmetric approach to the Dirac oscillator was investigated in \cite{Ui_PTP84,Balanntekin_AnnPhys85}. We note that the name Dirac oscillator for this relativistic problem was given by Moshinsky and Szczepaniak \cite{Moshinsky_JPA89} who rederived it and shown that in nonrelativistic limit the relativistic hamiltonian becomes a harmonic oscillator with a strong spin-orbit coupling term. The last work renewed interest to the Dirac oscillator and it was examined form different viewpoints, such as covariance properties \cite{Moreno_JPA89}, complete energy spectrum and wavefunctions \cite{BenitezPRL90}, Lie algebra symmetry \cite{Quesne_JPA90}, shift operators \cite{deLange_JPA91}, hidden supersymmetry \cite{BenitezPRL90, Beckers_PRD90,Martinez_PRD91,Quesne_IJMPA91}, conformal invariance \cite{Martinez_JMP92}, completeness of wavefunctions \cite{Szmytkowski_JPA01}, approach based on Clifford algebra \cite{deLimaRodrigues_PLA08}. Some generalization of Dirac oscillator was also considered \cite{Zarrinkamar_AnnPh10}. The Dirac oscillator model was applied to problems of nuclear and high energy physics. Relativistic many body systems with interactions modelled by the Dirac oscillator hamiltonians with applications to mesons and baryons was considered \cite{Moshinsky_FoundPhys93}. Thermodynamics of Dirac oscillators in $1+1$ spacetime was noted to be important in studies of quark-gluon plasma \cite{Dominguez_EPL90}. It was also utilized for developing of effective approach for description of intermediate and short-range components of nucleon-nucleon interaction \cite{Faessler_annPh05}. Dirac oscillator was used for modelling photon-nucleus scattering \cite{Grineviciute_PRC09}. Another area where the Dirac oscillator model was extensively applied is quantum optics. Relation between the Dirac oscillator and relativistic Jaynes-Cummings model was investigated \cite{Rozmej_JPA99}. Mapping of the Dirac oscillator onto Jaynes-Cummings model in case of different dimensions was examined in \cite{Torres_AIPproc_2010}. In regard to the Jaynes-Cummings model chirality quantum phase transition in $2+1$ dimensional Dirac oscillator subjected to constant magnetic field was investigated \cite{Bermudez_PRA08}. \textit{Zitterbewegung} behaviour of the Dirac oscillator and possible realization of such a system was considered in \cite{Bermudez_PRA08_01,Romera_PRA_2011,Wang_EPJB_2012}. Several attempts to get experimental realization of such a model were made \cite{Longhi_OptLett10, Franco-Villafane}. Here we consider the Dirac oscillator from a bit different point of view, namely we solve the Dirac oscillator eigenavalue problem in space with deformed Heisenberg algebra that lead to appearance of minimal uncertainties in position and momentum. The interest to the theories with deformed Heisenberg algebra was inspired by investigations in string theory and independently by several approaches to quantum gravity \cite{GrossNPB88,Maggiore_PLB93,Witten_PhysToday96} where it was suggested the existence of a finite lower bound for resolution of length $\Delta X$, so called minimal length. Deformed commutation relations that leads to existence of minimal uncertainty in position and momentum was proposed firstly by Kempf and collaborators \cite{KMM_95} and then were investigated from different viewpoints. We point out that only a few quantum mechanical problems are solved exactly, that is to say the harmonic oscillator in one \cite{KMM_95} and $D$ dimensions \cite{Chang_PRD02}, the one- \cite{Noucier_JPA06} and three-dimensional \cite{QuesneJPA05} Dirac oscillator and one dimensional Coulomb-like problem \cite{Fityo_JPA06}. Lorentz-covariant deformed algebra with minimal length was proposed and $1+1$ dimensional Dirac oscillator problem was solved \cite{Quesne_JPA06}. Minimal uncertainty for momentum can be treated as a consequence of gravity induced decoherence \cite{Kay}. Uncertainty relation that gives rise to appearance of minimal momentum is also possible in theories with position dependent effective mass \cite{Quesne_JPA_04_effmass}. We note that deformed commutation relations with minimal length and momentum were proposed even earlier in context of quantum group theory \cite{Kempf_JMP94}. Later it was shown that similar uncertainty principle with minimal length and momentum can be obtained in a gedanken experiment of measuring of position in de Sitter space \cite{Bambi_CQG}. Deformed algebra with minimal length and momentum was also obtained in context of Triply Special Relativity \cite{Kowalski-glikman_PRD04}. It should be noted that basic principles of triply special relativity adopt three fundamental constants and one of them can be identified with a cosmological constant of de Sitter space. In case of deformed algebra with minimal length and momentum only the harmonic oscillator was examined \cite{Quesne_JPA03,Quesne_JPA04,Mignemi_arxiv}. Our paper is organized as follows. In the second section an uncertainty relation obtained from deformed algebra is analyzed then the Dirac oscillator oscillator is reviewed in given representation. In the third section we obtain equations for small and large components of a wavefunction and examine requirements imposed on the wave function. In the forth section energy spectrum of Dirac oscillator is obtained. In the fifth section wavefunctions of the problem are derived. Finally, the sixth section contains the conclusions. \section{Dirac oscillator } We consider stationary Dirac oscillator equation which can be written in the form: \begin{equation}\label{dirac_eq} H\Psi=E\Psi, \quad H=\hat{{\bf\alpha}}({\bf P}-im\omega{\bf X}\hat{\beta})+m\hat{\beta} \end{equation} where \begin{eqnarray} \hat{{\bf \alpha}}= \begin{pmatrix} 0 & {\bf \sigma} \\ {\bf \sigma} & 0\\ \end{pmatrix} \hat{\beta}= \begin{pmatrix} I & 0 \\ 0 & -I\\ \end{pmatrix}\\ \end{eqnarray} and $\sigma_i$ , $i=1,2,3$ are the Pauli matrices. We also put $\hbar=c=1$. It is supposed that position $X_i$ and momentum $P_i$ operators in the equation \ref{dirac_eq} are obeyed to deformed commutation relations which take form: \begin{eqnarray}\label{algebra} \begin{array}{l} [X_i,P_j]=i\left(\delta_{ij}+\alpha X_iX_j+\beta P_jP_i+\sqrt{\alpha\beta}(P_iX_j+X_jP_i)\right),\\ \\ {[X_i, X_j]=i\beta\varepsilon_{ijk}L_{k}}, \quad [P_i, P_j]=i\alpha\varepsilon_{ijk}L_{k}. \end{array} \end{eqnarray} Here $L_{k}$ are components of angular momentum operator and parameters $\alpha$ and $\beta$ are supposed to be positive. We also note, that there is summation over dummy indices. Components of angular momentum operator are defined as follows: \begin{equation}\label{ang_mom_def} J_{ij}=\varepsilon_{ijk}L_{k}=\frac{1}{2}(X_iP_j+P_jX_i-X_jP_i-P_iX_j) \end{equation} Components of angular momentum operator fulfil the ordinary commutation relations: \begin{equation} [L_i,X_j]=i\varepsilon_{ijk}X_k, \quad [L_i,P_j]=i\varepsilon_{ijk}P_k. \end{equation} In the one-dimensional case the algebra (\ref{algebra}) takes simpler form: \begin{equation}\label{algebra_2} [X,P]=i(1+\alpha X^2+\beta P^2+\sqrt{\alpha\beta}(PX+XP)) \end{equation} We note that similar one-dimensional deformed algebra was examined in the work \cite{Quesne_07sigma} but in their case instead of factor $\sqrt{\alpha\beta}$ in the forth term in the right-hand side an independent parameter $\kappa$ was used. It is easy to show that the algebra (\ref{algebra_2}) gives rise to uncertainty relation: \begin{equation}\label{unceratinty} \Delta X\Delta P\geqslant\frac{1}{2}|1+\gamma+\alpha(\Delta X)^2+\beta(\Delta P)^2+\sqrt{\alpha\beta}\langle \hat{X}\hat{P}+\hat{P}\hat{X}\rangle| \end{equation} where $\hat{X}=X-\langle X\rangle$, $\hat{P}=P-\langle P\rangle$ and $\gamma=(\sqrt{\alpha}\langle X\rangle+\sqrt{\beta}\langle P\rangle)^2\geqslant 0$. From the inequality $|\langle \hat{A}\hat{B}+\hat{B}\hat{A}\rangle|\leqslant 2\sqrt{\langle A^2\rangle\langle B^2\rangle}$ which is valid for any two operators $\hat{A}$ and $\hat{B}$ it follows that $|\langle \hat{X}\hat{P}+\hat{P}\hat{X}\rangle|\leqslant 2\Delta X\Delta P$. Since parameters $\alpha$ and $\beta$ are positive, it leads to inequality $1+\gamma+\alpha(\Delta X)^2+\beta(\Delta P)^2>0$. Using these remarks we can rewrite the uncertainty relation (\ref{unceratinty}) in the form: \begin{equation}\label{uncert_2} \Delta X\Delta P\geqslant\frac{1}{2}(1+\gamma+\alpha(\Delta X)^2+\beta(\Delta P)^2-2\sqrt{\alpha\beta}\Delta X\Delta P). \end{equation} The latter uncertainty relation brings minimal uncertainty in position as well as in momentum: \begin{equation}\label{min_uncert} \Delta X\geqslant(\Delta X)_{min}=\sqrt{\frac{\beta(1+\gamma)}{1+2\sqrt{\alpha\beta}}}; \quad \Delta P\geqslant(\Delta P)_{min}=\sqrt{\frac{\alpha(1+\gamma)}{1+2\sqrt{\alpha\beta}}} \end{equation} It is important to emphasize that these minimal uncertainties do not appear if parameters $\alpha$ and $\beta$ are negative. Having done rescaling of uncertainties and parameters of deformation we can represent uncertainty relation in the well known form obtained by Kempf \cite{Kempf_JMP94}: \begin{equation}\label{uncert_3} \Delta\bar{X}\Delta\bar{P}\geqslant\frac{1}{2}(1+\bar{\alpha}(\Delta\bar{X})^2+\bar{\beta}(\Delta\bar{P})^2), \end{equation} where \begin{equation} \Delta\bar{X}=\sqrt{\frac{1+\sqrt{\alpha\beta}}{1+\gamma}}\Delta X,\quad \Delta\bar{P}=\sqrt{\frac{1+\sqrt{\alpha\beta}}{1+\gamma}}\Delta P, \quad \bar{\alpha}=\frac{\alpha}{1+\sqrt{\alpha\beta}}, \quad \bar{\beta}=\frac{\beta}{1+\sqrt{\alpha\beta}}. \nonumber \end{equation} It is no doubt that ``rescaled" uncertainty relation (\ref{uncert_3}) leads to the same minimal uncertainties (\ref{min_uncert}) as it should be. In multidimensional case commutation relations (\ref{algebra}) brings to uncertainty relation: \begin{equation} \Delta X_i\Delta P_j\geqslant\frac{1}{2}\left|\delta_{ij}+\gamma_{ij}+\alpha\langle\hat{X}_i\hat{X}_j\rangle+ \beta\langle\hat{P}_j\hat{P}_j\rangle+\sqrt{\alpha\beta}\langle\hat{P}_i\hat{X}_j+\hat{X}_j\hat{P}_i\rangle\right| \end{equation} where similarly as it was used in one dimensional case $\hat{X}_i=X_i-\langle X_i\rangle$, $\hat{P}_i=P_i-\langle P_i\rangle$ and $\gamma_{ij}=\alpha\langle X_i\rangle\langle X_j\rangle+\beta\langle P_i\rangle\langle P_j\rangle+2\sqrt{\alpha\beta}\langle P_i\rangle\langle X_j\rangle$. It is easy to see that in case when $i=j$ the last relation reduces to (\ref{uncert_2}) and as a consequence the minimal uncertainties for position and momentum are the same as in the one-dimensional case (\ref{min_uncert}) To solve the Dirac equation (\ref{dirac_eq}) representation of operators $X_i$, $P_j$ that obeyed to commutation relations (\ref{algebra}) should be defined. The algebra (\ref{algebra}) does not have position or momentum representations because of noncommutativity of corresponding operators. To build up representation for position and momentum operators (\ref{algebra}) it was proposed projective transformation \cite{Mignemi_arxiv} which introduce a relation between the commutation relations (\ref{algebra}) and Snyder algebra \cite{snyder}. As it was noted such a transformation is nonsymplectic. The position and momentum operators can represented as follows \cite{Mignemi_arxiv}: \begin{eqnarray} X_i=i\sqrt{1-\beta p^2}\frac{\partial}{\partial p_i}+\lambda\sqrt{\frac{\beta}{\alpha}}\frac{p_i}{\sqrt{1-\beta p^2}},\\ P_i=-i\sqrt{\frac{\alpha}{\beta}}\sqrt{1-\beta p^2}\frac{\partial}{\partial p_i}+(1-\lambda)\frac{p_i}{\sqrt{1-\beta p^2}}. \end{eqnarray} Here $p^2=p_kp_k$ and parameter $\lambda$ is arbitrary real. Since $\alpha, \beta >0$ it leads to the restriction for the absolute value of square of variable $p$: $\beta p^2<1$. To provide hermicity of position and momentum operators scalar product should be defined with a weight function. It can be written in the form: \begin{equation}\label{inner_product_general} \langle\psi|\vp\rangle=\int\frac{d{\bf p}}{\sqrt{1-\beta p^2}}\psi^*(\bf{p})\vp({\bf p}) \end{equation} We note that according to abovementioned remark the domain of integration is bounded by the sphere: $p^2\leqslant 1/\beta$. It is worth emphasizing that the weight function does not depend on the choice of parameter $\lambda$. Components of the angular momentum operator defined by formula (\ref{ang_mom_def}) are represented as follows: \begin{equation} J_{ij}=\varepsilon_{ijk}L_k=i\left(p_j\frac{\partial}{\partial p_i}-p_i\frac{\partial}{\partial p_j}\right). \end{equation} So the components of angular momentum operator take the same form as in momentum representation in ordinary quantum mechanics. Wave function of the Dirac equation (\ref{dirac_eq}) can be written as a two-component spinor $\psi=\begin{pmatrix} \psi_1 \\\psi_2\\ \end{pmatrix}$ where functions $\psi_1$ and $\psi_2$ are called large and small component respectively. The Dirac equation (\ref{dirac_eq}) can be rewritten as a system of two coupled equations: \begin{eqnarray} B^+\psi_2=(E-m)\psi_1,\label{eq_b_+} \\ B^-\psi_1=(E+m)\psi_2\label{eq_b_-}. \end{eqnarray} where \begin{equation}\label{operator_B_big} B^{\pm}=(\bm{\sigma},{\bf P})\pm im\omega(\bm{\sigma},{\bf X}) \end{equation} To get a factorized equation for the large component $\psi_1$ one should apply the operator $B^+$ (\ref{operator_B_big}) to the equation (\ref{eq_b_-}) and then in the right-hand sight of obtained equation the action of the operator on the component $\psi_2$ should be replaced by the right-hand side of the equation (\ref{eq_b_+}). As a result we arrive at: \begin{equation} B^+B^-\psi_1=(E^2-m^2)\psi_1. \end{equation} Similarly for the small component $\psi_2$ we have: \begin{equation} B^-B^+\psi_2=(E^2-m^2)\psi_2. \end{equation} The representation of position and momentum operators $X_i$ and $P_j$ allows one to get the explicit form for the operators $B^{\pm}$: \begin{equation}\label{oper_B_+} B^+=\left[-i\left(\sqrt{\frac{\alpha}{\beta}}-im\omega\right)\sqrt{1-\beta P^2}\left(\frac{\partial}{\partial p}+\frac{(\bm{\sigma}, {\bf L})+2}{p}\right)+\left(1-\lambda+im\omega\lambda\sqrt{\frac{\beta}{\alpha}}\right)\frac{p}{\sqrt{1-\beta p^2}}\right]\sigma_p \end{equation} \begin{equation}\label{oper_B_-} B^-=\sigma_p\left[-i\left(\sqrt{\frac{\alpha}{\beta}}+im\omega\right)\sqrt{1-\beta P^2}\left(\frac{\partial}{\partial p}-\frac{(\bm{\sigma}, {\bf L})}{p}\right)+\left(1-\lambda-im\omega\lambda\sqrt{\frac{\beta}{\alpha}}\right)\frac{p}{\sqrt{1-\beta p^2}}\right] \end{equation} where $\sigma_p=(\bm{\sigma},{\bf p})/p$ The equations (\ref{eq_b_+}), (\ref{eq_b_-}) and as a consequence the operators $B^{\pm}$ would take simpler form if transformation of large and small functions is performed: \begin{equation}\label{subst} \psi_i=\frac{1}{p}\vp_i \end{equation} After that transformation equations (\ref{eq_b_+}) and (\ref{eq_b_-}) can be rewritten as follows: \begin{eqnarray} \tilde{\omega}b^+\sigma_p\vp_2=(E-m)\vp_1,\label{equ_1_b+}\\ \tilde{\omega}^*\sigma_pb^-\vp_1=(E+m)\vp_2.\label{equ_1_b-} \end{eqnarray} where $\tilde{\omega}=\left(m\omega+i\sqrt{{\alpha}/{\beta}}\right)$ and $\tilde{\omega}^*$ is complex conjugate. Operators $b^{\pm}$ are obtained from relations (\ref{oper_B_+}), (\ref{oper_B_-}) and (\ref{subst}). They take form: \begin{equation}\label{op_b_+} b^+=-\sqrt{1-\beta p^2}\frac{\partial}{\partial p}-\frac{\sqrt{1-\beta p^2}}{p}((\bm{\sigma},{\bf L})+1)+\eta\frac{p}{\sqrt{1-\beta p^2}} \end{equation} \begin{equation}\label{op_b_-} b^-=\sqrt{1-\beta p^2}\frac{\partial}{\partial p}-\frac{\sqrt{1-\beta p^2}}{p}((\bm{\sigma},{\bf L})+1)+\eta^*\frac{p}{\sqrt{1-\beta p^2}}, \end{equation} here \begin{equation} \eta=\frac{1-\lambda+im\omega\lambda\sqrt{\frac{\beta}{\alpha}}}{m\omega+i\sqrt{\frac{\alpha}{\beta}}}= \frac{m\omega+i\left(m^2\omega^2\lambda\sqrt{\beta\over{\alpha}}- \sqrt{\alpha\over{\beta}}(1-\lambda)\right)}{m^2\omega^2+\frac{\alpha}{\beta}}.\nonumber \end{equation} To simplify equations (\ref{equ_1_b-}) and (\ref{equ_1_b-}) one can introduce function: \begin{equation} \tilde{\vp}_2=\sigma_p\vp_2 \end{equation} As a result we arrive at \begin{eqnarray} \tilde{\omega}b^+\tilde{\vp}_2=(E-m)\vp_1,\label{equ_1x_b+}\\ \tilde{\omega}^*b^-\vp_1=(E+m)\tilde{\vp}_2.\label{equ_1x_b-} \end{eqnarray} \section{Superpartner hamiltonians and components of radial wave function} Operators $b^{\pm}$ (\ref{op_b_+}) (\ref{op_b_-}) introduced in the previous section commute with the total angular momentum ${\bf J}={\bf L}+{\bf S}$ where ${\bf S}=\frac{1}{2}\bm{\sigma}$ as well as with ${\bf L}^2$ and ${\bf S}^2$, so the solutions $\vp_1$ and $\tilde{\vp}_2$ of equations (\ref{equ_1x_b+}) and (\ref{equ_1x_b-}) can be taken in the form representing the fact that they are eigenfunctions of operators ${\bf L}^2$, ${\bf S}^2$, ${\bf J}^2$ and $J_z$ with corresponding eigenvalues $l(l+1)$, 3/4, $j(j+1)$ and $m$ respectively. \begin{equation} \vp_1=\vp_1(p,s,j,m)=R_{1;s,j}(p)\mathcal{Y}_{s,j,m}(\th,\vp,\xi) \end{equation} \begin{equation} \vp_2=\vp_2(p,s,j,m)=R_{2;s,j}(p)\mathcal{Y}_{s,j,m}(\th,\vp,\xi) \end{equation} where \begin{equation} \mathcal{Y}_{s,j,m}(\th,\vp,\xi)=\sum_{\sigma,\mu}\langle j-s\mu,\frac{1}{2}\sigma|jm\rangle Y_{j-s,m}(\th,\vp)\chi_{\sigma}(\xi) \end{equation} is a spin spherical harmonic \cite{Edmonds} and $R_{1;s,j}(p)$ and $R_{2;s,j}(p)$ are radial wavefunctions. It should be noted that $\chi_{\sigma}(\xi)$ denotes a spinor and $\sigma=\pm\frac{1}{2}$. The main advantage of introduced function $\tilde{\vp}_2$ is caused by the fact that it has the same spin-angular part as the function $\vp_1$. Whereas for the function $\vp_2$ we have: \begin{equation} \vp_2=\sigma_p\tilde{\vp}_2=\tilde{R}_{2;s,j}(p)\sigma_p\mathcal{Y}_{s,j,m}(\th,\vp,\xi)=-\tilde{R}_{2;s,j}(p)\mathcal{Y}_{-s,j,m}(\th,\vp,\xi) \end{equation} Last relation can be written as follows: \begin{equation} \vp_2=\vp_{2;-s,j,m}(p,\th,\vp,\xi)=R_{2;-s,j}(p)\mathcal{Y}_{-s,j,m}(\th,\vp,\xi) \end{equation} and here $R_{2;-s,j}(p)=-\tilde{R}_{2;s,j}(p)$. We remark that wavefunctions $\phi_1$ and $\tilde{\phi}_2$ are characterized by the same value $l=j-s$. To make equations (\ref{equ_1x_b+}) and (\ref{equ_1x_b-}) simpler we consider relation: \begin{equation} ((\bm{\sigma},{\bf L})+1) \mathcal{Y}_{s,j,m}(\th,\vp,\xi)=({\bf J}^2-{\bf L}^2-{\bf S}^2+1)\mathcal{Y}_{s,j,m}(\th,\vp,\xi)=s(2j+1)\mathcal{Y}_{s,j,m}(\th,\vp,\xi). \end{equation} Having used last equation one arrives at a system of coupled equations for radial wavefunctions: \begin{equation}\label{equ_3_b+} \tilde{\omega}b_p^+\tilde{R}_2=(E-m)R_1, \end{equation} \begin{equation}\label{equ_3_b-} \tilde{\omega}^*b_p^-R_1=(E+m)\tilde{R}_2. \end{equation} we use notation $b^{\pm}_p$ for the radial part of operators $b^{\pm}$ and they take form \begin{equation}\label{b_p+} b^+_p=-\sqrt{1-\beta p^2}\frac{\partial}{\partial p}-\frac{k}{p}\sqrt{1-\beta p^2}+\frac{\eta p}{\sqrt{1-\beta p^2}}; \end{equation} \begin{equation}\label{b_p-} b^-_p=\sqrt{1-\beta p^2}\frac{\partial}{\partial p}-\frac{k}{p}\sqrt{1-\beta p^2}+\frac{\eta^* p}{\sqrt{1-\beta p^2}}; \end{equation} where $k=s(2j+1)$. In radial momentum space the scalar product (\ref{inner_product_general}) can be represented as follows: \begin{equation}\label{inner_prod_radial} \langle R|R'\rangle=\int^{1/\sqrt{\beta}}_0\frac{dp}{\sqrt{1-\beta p^2}}R^*(p)R'(p) \end{equation} It is easy to verify that with respect to the scalar product (\ref{inner_prod_radial}) the operators $b_p^+$ (\ref{b_p+}) and $b_p^-$ (\ref{b_p-}) are mutually hermitian conjugates. From equations (\ref{equ_3_b+}) and (\ref{equ_3_b-}) we obtain: \begin{eqnarray} b_p^+b_p^-R_1=\frac{1}{|\tilde{\omega}|^2}(E^2-m^2)R_1;\label{equ_4_b+-}\\ b_p^-b_p^+\tilde{R}_2=\frac{1}{|\tilde{\omega}|^2}(E^2-m^2)\tilde{R}_2\label{equ_4_b-+} \end{eqnarray} The radial wavefunctions $R_1$ and $\tilde{R}_2$ can be treated as eigenfunctions of two superpartner hamiltonians \cite{Cooper_PhysRept95,Junker_96}. We consider bound state problem so normalizability condition should be imposed on the relativistic wavefunction $\psi=\begin{pmatrix}\psi_1 \\\psi_2\\\end{pmatrix}$. It gives rise to the following relation: \begin{equation}\label{normal_wavefunct} \int_0^{1/\sqrt{\beta}}\frac{dp}{\sqrt{1-\beta p^2}}\left(|R_1|^2+|\tilde{R}_2|^2\right)=1. \end{equation} In the presence of deformed commutation relations additional requirements are imposed on bound state wavefunctions. In case of uncertainty principle with minimal length it is demanded that any ``physical" wavefunction belongs to the domain of operator $\bf{P}$ it means that meanvalue of square of momentum operator is finite. The deformed commutation relations (\ref{algebra}) impose stricter requirements. To be acceptable a wavefunction should belong to the domains of operators $\bf{P}$ and $\bf{X}$. As a result it leads to finite meanvalues for square of both momentum and position. Let us suppose that in the right-hand side of equations (\ref{equ_4_b+-}) and (\ref{equ_4_b-+}) we have eigenvalue $E^2=m^2$, so the corresponding wavefunctions are necessarily the solutions of equations: \begin{eqnarray} b_p^-R_{1;0}=0,\label{wavefunct_b_0-}\\ b_p^+\tilde{R}_{2;0}=0.\label{wavefunct_b_0+} \end{eqnarray} Having integrated equation (\ref{wavefunct_b_0-}) we obtain: \begin{equation}\label{wavfunct_r_backgr_zero} R_{1;0}=C_{1;0}p^k(1-\beta p^2)^{\frac{\tilde{\xi}}{2}+i\frac{\tilde{\zeta}}{2}} \end{equation} where $\tilde{\xi}=\frac{m\omega}{\alpha+\beta m^2\omega^2}$ and $\tilde{\zeta}=\frac{\sqrt{\alpha/\beta}(1-\lambda)-m^2\omega^2\lambda\sqrt{\beta/\alpha}}{\alpha+\beta m^2\omega^2}$ and $C_{1;0}$ is the normalization constant. The normalization condition (\ref{normal_wavefunct}) implies that integral from the square module of the function $R_{1;0}$ must be finite: \begin{equation}\label{funct_R^0_1} \int^{1/\sqrt{\beta}}_0\frac{dp}{\sqrt{1-\beta p^2}}\left|C_{1;0}\right|^2p^{2k}(1-\beta p^2)^{\tilde{\xi}}=\left|C_{1;0}\right|^2\int^{1/\sqrt{\beta}}_0dp p^{2k}(1-\beta p^2)^{\tilde{\xi}-\frac{1}{2}}<\infty \end{equation} For $p\rightarrow 0$ function $R_{1;0}$ behaves as $p^k$ and boundary condition $R_{1;0}=0$ leads to the restriction $k>0$ and this inequality is satisfied if $s=1/2$. When $p\rightarrow\frac{1}{\sqrt{\beta}}$ convergence of the integral (\ref{funct_R^0_1}) gives rise to the condition $\tilde{\xi}-\frac{1}{2}>-1$ or equivalently $\tilde{\xi}>-\frac{1}{2}$. But this inequality is always fulfilled because the parameter $\tilde{\xi}$ is defined as positive. So we conclude that wavefunction $R_{1;0}$ is normalizable when $s=\frac{1}{2}$. As it has been already mentioned additional ``physical" conditions should be imposed on the wave function $R_{1;0}/p$. Meanvalues of square of momentum and position operators must be finite: \begin{equation} \left\langle \frac{R_{1;0}}{p}\Big|\hat{P}^2\Big|\frac{R_{1;0}}{p}\right\rangle<\infty, \quad \left\langle \frac{R_{1;0}}{p}\Big|\hat{X}^2\Big|\frac{R_{1;0}}{p}\right\rangle<\infty. \end{equation} Meanvalue for square of momentum can be represented in the form: \begin{equation}\label{P_2_integral} \left\langle \frac{R_{1;0}}{p}\Big|\hat{P}^2\Big|\frac{R_{1;0}}{p}\right\rangle=\int^{1/\sqrt{\beta}}_0\frac{dp p^2}{\sqrt{1-\beta p^2}}\frac{R^*_{1;0}}{p}\hat{P}^2_p\frac{R_{1;0}}{p}<\infty, \end{equation} where: \begin{eqnarray}\label{P_2_operator} \hat{P}^2_p=-\frac{\alpha}{\beta}\left((1-\beta p^2)\left(\frac{1}{p^2}\frac{\partial}{\partial p}p^2\frac{\partial}{\partial p}-\frac{l(l+1)}{p^2}\right)-\beta p\frac{\partial}{\partial p}\right)-\\\nonumber 2i\sqrt{\frac{\alpha}{\beta}}(1-\lambda)p\frac{\partial}{\partial p}+(1-\lambda)\left((1-\lambda)-i\sqrt{\frac{\alpha}{\beta}}\right)\frac{p^2}{1-\beta p^2}-3i\sqrt{\frac{\alpha}{\beta}}(1-\lambda) \end{eqnarray} is the ``radial" part of square of momentum operator. It should be noted that all remarks concerning meanvalue of square of momentum can be applied to the square of position operator because both of them have similar structure. Taking into account the explicit form for operator $\hat{P}^2_p$ (\ref{P_2_operator}) and using requirement (\ref{P_2_integral}) one can obtain following condition for integral: \begin{equation}\label{int_converg} \int^{1/\sqrt{\beta}}_0dp\quad p^{2k-2}(1-\beta p^2)^{\tilde{\xi}-\frac{3}{2}}<\infty \end{equation} It is easy to convince oneself that convergence of the latter integral in the vicinity of the point $p=0$ gives rise to the condition $k>0$. From the other side convergence of the integral (\ref{int_converg}) in the vicinity of the point $1/\sqrt{\beta}$ will be provided if ${\tilde{\xi}-\frac{3}{2}}>-1$ from which we obtain restriction on the parameters of oscillator if it is supposed that parameters of deformation are held fixed: \begin{equation}\label{cond_gr_state=0} \frac{1}{\beta}(1-\sqrt{1-\alpha\beta})<m\omega<\frac{1}{\beta}(1+\sqrt{1-\alpha\beta}) \end{equation} One can conclude that in order to obtain the eigenvalue $E^2=m^2$ in the equation (\ref{equ_4_b+-}) the condition $s=1/2$ should be required. We note that in case of two-parametric deformed algebra with minimal length eigenvalue $E^2=m^2$ exists also for positive projection of spin $s=1/2$ but an additional demand for values $j$ must be satified \cite{QuesneJPA05}. The mentioned requirement disappears in the limit case when one of those parameters corresponding to our parameter $\beta$ is kept. Having integrated equation (\ref{wavefunct_b_0+}) we obtain: \begin{equation}\label{wavfunct_B_0+_integr} \tilde{R}_{2;0}=C_{2;0}p^{-k}(1-\beta p^2)^{-\frac{\tilde{\xi}}{2}+i\frac{\tilde{\zeta}}{2}} \end{equation} Again the boundary conditions are imposed on it. For the first we require that $\tilde{R}_{2;0}\rightarrow 0$ when $p\rightarrow 0$. The restriction $k<0$ or equivalently $s=-\frac{1}{2}$ follows immediately from the last requirement. From the other side one should demand $\tilde{R}_{2;0}\rightarrow 0$ when $p\rightarrow\frac{1}{\sqrt{\beta}}$ but this requirement cannot be fulfilled because $-\frac{\tilde{\xi}}{2}<0$. As a result, the function $\tilde{R}_{2;0}$ is not normalizable. To have physically acceptable function one should demand $\tilde{R}_{2;0}=0$ and $R_{1;0}\neq 0$. It is worth mentioning that the same requirement appears in case of two parametric deformed algebra with minimal length \cite{QuesneJPA05}. We also remark that the ground state wavefunction $(R_{1;0}\neq 0, \tilde{R}_{2;0}=0)$ is compatible with the positive eigenvalue $E=m$ whereas the negative one $E=-m$ will not be compatible with the system (\ref{equ_3_b+}) and (\ref{equ_3_b-}). \section{Spectrum of Dirac oscillator} In this section we will obtain energy spectrum for the Dirac oscillator. As it was shown in the previous section ground state with energy $E^2=m^2$ exists only for positive projection of spin ($s=\frac{1}{2}$). In this section we will show that the ground state with energy $E^2\neq m^2$ can take place for positive ($s=\frac{1}{2}$) as well as for negative ($s=-\frac{1}{2}$) projection of spin. These two cases that correspond different ground state energy are considered separately. \subsection{Case of zero ground state energy} As it has been already mentioned in the previous section that whenever $s=\frac{1}{2}$ and the condition (\ref{cond_gr_state=0}) is fulfilled then equation (\ref{equ_4_b+-}) has acceptable wavefunction corresponding to the ground state energy $E^2-m^2=0$. To solve eigenvalue problem (\ref{equ_4_b+-}) SUSY QM procedure is applied \cite{Cooper_PhysRept95,Junker_96}. Operator $h=b^+_pb^-_p$ is supposed to be the first member of the SUSY QM hierarchy \begin{equation} h_i=b^+_p(k_i,\eta_i)b^-_p(k_i,\eta_i)+\sum^i_{j=0}\varepsilon_j, \quad i=0,1,2,\ldots \end{equation} Imposing shape invariance condition we obtain: \begin{equation}\label{SI_condition} b^-_p(k_i,\eta_i)b^+_p(k_i,\eta_i)=b^+_p(k_{i+1},\eta_{i+1})b^-_p(k_{i+1},\eta_{i+1})+\varepsilon_{i+1} \end{equation} In explicit form we write: \begin{equation}\label{Im_eta_cond} \eta_{i+1}-\eta^*_{i+1}=\eta_i-\eta_i^* \end{equation} \begin{equation}\label{k_cond} k^2_{i+1}-k_{i+1}=k^2_i+k_i \end{equation} \begin{equation}\label{Re_eta_cond} \frac{1}{\beta}|\eta_{i+1}|^2-\eta^*_{i+1}=\frac{1}{\beta}|\eta_{i}|^2+\eta_{i} \end{equation} \begin{equation}\label{epsilon_cond} -k^2_{i+1}\beta-k_{i+1}(\eta_{i+1}+\eta^*_{i+1})-\frac{1}{\beta}|\eta_{i+1}|^2+\varepsilon_{i+1}= -k^2_{i}\beta-k_{i}(\eta_{i}+\eta^*_{i})-\frac{1}{\beta}|\eta_{i}|^2 \end{equation} In the following we use notations: ${\rm Re}\eta_i=\xi_i$ and ${\rm Im}\eta_i=\zeta_i$ Having solved the first three equations we obtain: \begin{equation}\label{iter_cond_e=0} \zeta_{i}=\zeta, \quad \xi_{i}=\xi+\beta i, \quad k_i=k+i \end{equation} we note that $\xi=\beta\tilde{\xi}$ and $\zeta=\beta\tilde{\zeta}$. It is easy to show that for obtained values $\eta_i$ and $k_i$ the hierarchy hamiltonians $h_i$ have physically acceptable solutions $R_{1;0}(k_i,\eta_i,p)$ corresponding to the energies $\Sigma^{i}_{j=0}\varepsilon_j$. Having used the equation (\ref{epsilon_cond}) we arrive at following equation for energy eigenvalues: \begin{equation}\label{eigenvalues_general_e=0} E^2_n-m^2=\left(m^2\omega^2+\frac{\alpha}{\beta}\right)\sum^n_{j=0}\varepsilon_j=4n\left(m^2\omega^2+\frac{\alpha}{\beta}\right)(\beta(n+k)+\xi) \end{equation} Since $k=s(2j+1)$ and $\xi=\frac{m\omega}{m^2\omega^2+\alpha/\beta}$ the last relation can be rewritten in the form: \begin{equation}\label{spectrum_background_zero} E^2_n-m^2=4n\left[m\omega+(m^2\omega^2\beta+\alpha)\left(n+j+\frac{1}{2}\right)\right] \end{equation} We note that in case $\alpha=0$ expression (\ref{spectrum_background_zero}) is in agreement with corresponding relation obtained in the work \cite{QuesneJPA05} when one of their parameters of deformation is set to zero. The principal quantum number $N=2n+l=2n+j-s$ can be introduced instead of $n$. Then the relation (\ref{spectrum_background_zero}) can be represented as follows: \begin{equation} E^2_n-m^2=2\left(N-j+\frac{1}{2}\right) \left[m\omega+\frac{1}{2}(m^2\omega^2\beta+\alpha)\left(N+j+\frac{3}{2}\right)\right]. \end{equation} \subsection{Nonzero ground state energy} Now we suppose that in the right-hand side of the equations (\ref{equ_4_b+-}) and (\ref{equ_4_b-+}) we have $E^2-m^2\neq 0$. It will be shown that in this case the ground state exists for the following hamiltonian: \begin{equation}\label{gr_state_0} h_0=b^+_p(k,\eta)b^-_p(k,\eta)=-\left(\sqrt{1-\beta p^2}\frac{\partial}{\partial p}\right)^2+(\eta-\eta^*)p\frac{\partial}{\partial p}+\frac{k^2-k}{p^2}+ \frac{\frac{1}{\beta}|\eta|^2-\eta^*}{1-\beta p^2}-k(\eta+\eta^*)-k^2\beta-\frac{1}{\beta}|\eta|^2 \end{equation} In order to obtain ground state energy one should re-factorize hamiltonian $h_0$. It can be represented as follows: \begin{equation}\label{gr_state_1} h_0=b^+_p(k',\eta')b^-_p(k',\eta')+\varepsilon', \end{equation} where $k'$ and $\eta'$ are new parameters in operators (\ref{b_p+}) and (\ref{b_p-}) and $\varepsilon'$ defines the ground state energy. From equations (\ref{gr_state_0}) and (\ref{gr_state_1}) it follows: \begin{equation}\label{gr_st_im_eta} \eta'-\eta'^*=\eta-\eta^* \end{equation} \begin{equation}\label{gr_st_k} k'^2-k'=k^2-k \end{equation} \begin{equation}\label{gr_st_re_eta} \frac{1}{\beta}|\eta'|^2-\eta'=\frac{1}{\beta}|\eta|^2-\eta \end{equation} \begin{equation}\label{gr_st_epsilon} -k'(\eta'+\eta'^*)-\beta k'^2-\frac{1}{\beta}|\eta'|^2+\varepsilon=-k(\eta+\eta^*)-\beta k^2-\frac{1}{\beta}|\eta|^2 \end{equation} Solving the equations (\ref{gr_st_im_eta})-(\ref{gr_st_re_eta}) we arrive at the relations: \begin{eqnarray} k'_1=k, \quad k'_2=1-k;\label{cond_gr_st_k}\\ \zeta'=\zeta,\quad \xi'_1=\xi, \quad \xi'_2=\beta-\xi.\label{cond_gr_st_eta} \end{eqnarray} Since conditions for parameters $k'$ (\ref{cond_gr_st_k}) and $\eta'$ (\ref{cond_gr_st_eta}) are obtained independently then one can combine different $k'$ and $\eta'$ to investigate whether obtained wavefunctions will be physically acceptable. For the first we consider the case $k'=k$ and $\xi'=\xi$. It follows immediately that $\varepsilon=0$. So the latter combination should be left out. Then if we suppose that $k'=k$ and $\xi'=\beta-\xi$ equation (\ref{wavefunct_b_0-}) gives us corresponding wavefunction $R_{1;0}=C_{1;0}p^k(1-\beta p^2)^{\frac{\beta-\xi}{2\beta}+i\frac{\zeta}{2\beta}}$. The first requirements imposed on this function are boundary conditions. To provide the condition $R_{1;0}=0$ at the boundaries one should demand that $k>0$ and $\beta-\xi>0$. As a consequence condition $k>0$ leads to requirement $s=\frac{1}{2}$ whereas the demand $\beta-\xi>0$ gives rise to $m\omega\in(0,\frac{1-\sqrt{1-4\alpha\beta}}{2\beta})$ or $m\omega\in(\frac{1+\sqrt{1-4\alpha\beta}}{2\beta}, \infty)$. If $4\alpha\beta>1$ the last condition can be satisfied for arbitrary $m\omega$. To make obtained wave function physically acceptable it must fulfil normalizability condition (\ref{funct_R^0_1}) and even stronger requirement (\ref{P_2_integral}). From the last requirement it follows that $-\frac{\xi}{\beta}+\frac{1}{2}>0$ and as a result it gives rise to the restrictions for the product $m\omega$: \begin{equation}\label{cond_nonzero_gr_state} m\omega\in\left(0,\frac{1-\sqrt{1-\alpha\beta}}{\beta}\right)\bigcup\left(\frac{1+\sqrt{1-\alpha\beta}}{\beta}, \infty\right). \end{equation} One can see that obtained restrictions for the product $m\omega$ are opposite to (\ref{cond_gr_state=0}). As a conclusion, if the relation (\ref{cond_gr_state=0}) is fulfilled then the ground state has zero energy. If the condition (\ref{cond_gr_state=0}) is broken the ground state with nonzero energy appears. From relation (\ref{gr_st_epsilon}) we obtain \begin{equation}\label{gr_st_energ_s=1/2} \varepsilon=(\beta-2\xi)(1+2k). \end{equation} It is easy to verify that obtained ground state energy is positive. To find other eigenvalues of the hamiltonian $h_0$ one should substitute $\xi'$ instead of $\xi$ into (\ref{eigenvalues_general_e=0}) and take into account relation (\ref{gr_st_energ_s=1/2}). After necessary transformations we arrive at: \begin{equation}\label{spectrum_s=1/2_E} E^2_n-m^2=4(n+j+1)\left[-m\omega+(m^2\omega^2\beta+\alpha)\left(n+\frac{1}{2}\right)\right] \end{equation} Similarly as in previous case the obtained relation can be represented in terms of principal quantum number \begin{equation} E^2_n-m^2=2\left(N+j+\frac{5}{2}\right)\left[-m\omega+\frac{1}{2} (m^2\omega^2\beta+\alpha)\left(N-j+\frac{3}{2}\right)\right]. \end{equation} From relations (\ref{cond_gr_st_k}) and (\ref{cond_gr_st_eta}) it follows that ground states with nonzero energy other combinations of $k'$ and $\eta'$ are also possible. We consider combination $k'=1-k$ and $\xi'=\xi$. Then the ground state wavefunction takes form: $R_{1;0}=C_{1;0}p^{1-k}(1-\beta p^2)^{\frac{\xi}{2\beta}+i\frac{\zeta}{2\beta}}$. One of boundary conditions leads to restriction $k<0$ which can be satisfied if $s=-\frac{1}{2}$ and then $k'=j+\frac{3}{2}$. Since $\xi>0$ the second boundary condition is satisfied immediately. It is easy to persuade oneself that obtained wavefunction is normalizable. Similarly as in the previous cases to make the obtained wavefunction physically acceptable we should impose condition (\ref{P_2_integral}) on it. Having used formula (\ref{gr_st_epsilon}) we obtain following relation for the ground state energy: \begin{equation} \varepsilon=(\beta+2\xi)(1-2k) \end{equation} One can see that ground state energy is also positive. Using the same procedure as in case with positive $k$ on can obtain energy spectrum: \begin{equation} E^2_n-m^2=4(n+j+1)\left[m\omega+(m^2\omega^2\beta+\alpha)\left(n+\frac{1}{2}\right)\right] \end{equation} Having introduced the principal quantum number we rewrite the last relation in the form: \begin{equation} E^2_n-m^2=2\left(N+j+\frac{3}{2}\right)\left[m\omega+\frac{1}{2} (m^2\omega^2\beta+\alpha)\left(N-j+\frac{1}{2}\right)\right]. \end{equation} In the end we can choose combination $k'=1-k$ and $\xi'=\beta-\xi$. This variant leads to the wave function $R_{1;0}=C_{1;0}p^{1-k}(1-\beta p^2)^{\frac{\beta-\xi}{2\beta}+i\frac{\zeta}{2\beta}}$. One of the boundary conditions gives rise to the demand $k<0$ or equivalently $k'=j+\frac{3}{2}$. Another boundary condition leads to inequality $\beta-\xi>0$ but as we already know normalizability condition and boundness of the square of momentum operator should be satisfied and as a consequence all these demands lead to the condition (\ref{cond_nonzero_gr_state}). For the ground state we obtain: \begin{equation}\label{gr_st_4} \varepsilon=4(\beta(1-k)-\xi). \end{equation} It can be shown that the ground state energy (\ref{gr_st_4}) is positive if $4\alpha\beta>1$. The same procedure leads us to the following expression for the spectrum: \begin{equation} E^2_n-m^2=4(n+1)\left(-m\omega+(m^2\omega^2\beta+\alpha)\left(n+j+\frac{3}{2}\right)\right) \end{equation} Again we rewrite obtained formula replacing the quantum number $n$ by the principal quantum number $N$: \begin{equation} E^2_n-m^2=2\left(N-j+\frac{3}{2}\right)\left[-m\omega+(m^2\omega^2\beta+\alpha) \left(N+j+\frac{5}{2}\right)\right]. \end{equation} We note that this case does not have ``classical" limit or in other words when parameters of deformation $\alpha,\beta\rightarrow 0$ obtained spectrum does not reduce to any solution of ordinary quantum mechanics \cite{Moshinsky_JPA89}. Similar situation appears in the case of deformed algebra with minimal length \cite{QuesneJPA05}. \section{Radial momentum wavefunctions of Dirac oscillator} In the previous section the ground state wavefunctions of hamiltonian $h=b^+_pb^-_p$ have been derived. As it was shown only the large component of wavefunction can be obeyed to all imposed requirements. In this section we calculate the remaining large and small components of radial momentum wavefunction \subsection{Zero ground state energy} The large component of radial momentum wavefunction for excited states can be calculated with help of well-known SUSY QM and SI technique \cite{Cooper_PhysRept95,Junker_96}. As it is known the wave functions of the excited states are derived form the ground state wavefunction with help of recursive procedure which is based on the relation: \begin{equation}\label{iter_proc_wf} R_{1;n} (p;k,\xi)=\frac{1}{\sqrt{e_n-e_0}}b^+_p(k,\xi)R_{1;n-1}(p;k_1,\xi_1). \end{equation} Where we used notation $e_i=(E^2_i-m^2)/|\tilde{\omega}|^2$ for simplicity. According to the conditions (\ref{iter_cond_e=0}) and (\ref{eigenvalues_general_e=0}) we should impose $e_0=0$, $k_1=k+1$ and $\xi_1=\xi+\beta$. Having substituted the explicit form of operator $b^+_p$ into the relation (\ref{iter_proc_wf}) we arrive at the equation: \begin{equation} R_{1;n} (p;k,\xi)=\frac{1}{\sqrt{e_n}}\left(-\sqrt{1-\beta p^2}\frac{\partial}{\partial p}-\frac{k}{p}\sqrt{1-\beta p^2}+\frac{(\xi+i\zeta)p}{\sqrt{1-\beta p^2}}\right)R_{1;n-1}(p;k+1,\xi+\beta). \end{equation} Given recursion procedure leads to consequence that the large component of radial wavefunction takes form: \begin{equation}\label{wf_e=0} R_{1;n}(p;k,\xi)=C_{1;n}(k,\xi)p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{a}{2}+\frac{1}{4}-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z), \end{equation} where $C_{1;n}(k,\xi)$ and $P^{(a,b)}_n(z)$ are a normalization factor a Jacobi polynomial respectively. Here we also denoted: \begin{equation} a=\frac{\xi}{\beta}-\frac{1}{2}, \quad b=k-\frac{1}{2}, \quad z=2\beta p^2-1 \quad (-1<z<1). \end{equation} It was argued in the previous section that the small component of the ground state radial wavefunction vanishes $\tilde{R}_{2;0}(p;k,\xi)=0$. For excited states small component can be found by using relation (\ref{equ_3_b-}): \begin{equation}\label{wf_tilde_R_2} \tilde{R}_{2;n}(p;k,\xi)=\frac{\tilde{\omega}^*}{E_n+m}b^-_p(k,\xi)R_{1;n}(p;k,\xi) \end{equation} Taking into account explicit expressions for the operator $b^-(k,\xi)$ (\ref{b_p-}) and wave function (\ref{wf_e=0}) one can rewrite the last relation in form: \begin{eqnarray}\label{wf_e=0_2} \tilde{R}_{2;n}(p;k,\xi)=\frac{\tilde{\omega}^*C_{1;n}(k,\xi)}{E_n+m}\sqrt{1-\beta p^2}\left(\frac{\partial} {\partial p}-\frac{k}{p}+\frac{\eta^* p}{1-\beta p^2}\right)p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{a}{2}+\frac{1}{4}-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z)\\\nonumber =\frac{\tilde{\omega}^*C_{1;n}}{E_n+m}\sqrt{1-\beta p^2}\left(\frac{\partial} {\partial p}-\frac{b+\frac{1}{2}}{p}+\frac{\left(\beta\left(a+\frac{1}{2}\right)-i\zeta\right) p}{1-\beta p^2}\right)p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z)\\\nonumber =\frac{2\beta \tilde{\omega}^*C_{1;n}(k,\xi)(n+a+b+1)}{E_n+m}p^{b+\frac{3}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{3}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a+1,b+1)}_{n-1}(z). \end{eqnarray} It should be noted that a formula of differentiation of the Jacobi polynomials was used here \cite{Bateman_1953,Abramowitz}. In the previous section it was stated that wavefunction $(R_{1;0}(p;k,\xi)\neq 0,\tilde{R}_{2;0}(p;k,\xi)=0)$ is the physically acceptable solution of the system of equations (\ref{equ_3_b+}) and (\ref{equ_3_b-}) only for $ E^2_0=m^2$. At the same time for excited states: $n=1,2,\ldots$ the solution of this system of equations is given by $(R_{1;n}(p;k,\xi),\tilde{R}_{2;n}(p;k,\xi))$. It is necessary to verify whether these function are physically acceptable or not. It is easy to persuade oneself that the Jacobi polynomials in (\ref{wf_e=0}) and (\ref{wf_e=0_2}) do not spoil the convergence of integral (\ref{normal_wavefunct}) and also meanvalues for square of momentum and position operators would be finite similarly as it was given by the condition (\ref{P_2_integral}) for the ground state wavefunction. Finally, the normalization factor $C_{1;n}$ can be found from the normalization condition (\ref{normal_wavefunct}): \begin{equation} C_{1;n}=\left(\beta^{b+1}(2n+a+b+1)\frac{n!\Gamma(n+a+b+1)}{\Gamma(n+a+1)\Gamma(n+b+1)}\frac{E_n+m}{E_n}\right)^{\frac{1}{2}} \end{equation} \subsection{Nonzero groundstate energy} To find wavefunctions of excited states in remaining cases one should follow the approach used in the previous section. Parameters $k$ and $\xi$ in the iteration equation (\ref{iter_proc_wf}) should be replaced by $k'$ and $\xi'$ correspondingly. It worth noting that at the same time parameters $k$ and $\xi$ in the equation (\ref{wf_tilde_R_2}) remain unchanged. As a consequence we can state that equation (\ref{wf_e=0_2}) remains valid if parameters $a$ and $b$ are replaced by a new one. In the case $k'=k$ that corresponds $s=\frac{1}{2}$ and $\xi'=\beta-\xi$ we obtain: \begin{equation} R_{1;n}(p;k,\xi)=C_{1;n}p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z) \end{equation} where $a=\frac{1}{2}-\frac{\xi}{\beta}$ and $b=k-\frac{1}{2}$. The relation (\ref{wf_tilde_R_2}) gives rise to: \begin{eqnarray} \tilde{R}_{2;n}(p;k,\xi)=\frac{\tilde{\omega}^*C_{1;n}}{E_n+m}\sqrt{1-\beta p^2}\left(\frac{\partial} {\partial p}-\frac{k}{p}+\frac{\eta^* p}{1-\beta p^2}\right)p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z)\\\nonumber =\frac{\tilde{\omega}^*C_{1;n}}{E_n+m}\sqrt{1-\beta p^2}\left(\frac{\partial} {\partial p}-\frac{b+\frac{1}{2}}{p}+\frac{\left(\beta\left(\frac{1}{2}-a\right)-i\zeta\right) p}{1-\beta p^2}\right)p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z)\\\nonumber =-\frac{2\beta\tilde{\omega}^*(a+n)C_{1;n}}{E_n+m}p^{b+\frac{3}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a-\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a-1,b+1)}_{n}(z) \end{eqnarray} In the case $k'=1-k$ that corresponds $s=-\frac{1}{2}$ and $\xi'=\xi$ we arrive at: \begin{equation} R_{1;n}(p;k,\xi)=C_{1;n}p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z) \end{equation} where $a=\frac{\xi}{\beta}-\frac{1}{2}$ and $b=\frac{1}{2}-k$. Again the relation (\ref{wf_tilde_R_2}) leads to: \begin{eqnarray} \tilde{R}_{2;n}(p;k,\xi)=\frac{\tilde{\omega}^*C_{1;n}}{E_n+m}\sqrt{1-\beta p^2}\left(\frac{\partial} {\partial p}-\frac{k}{p}+\frac{\eta^* p}{1-\beta p^2}\right)p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z)\\\nonumber =\frac{\tilde{\omega}^*C_{1;n}}{E_n+m}\sqrt{1-\beta p^2}\left(\frac{\partial} {\partial p}-\frac{\frac{1}{2}-b}{p}+\frac{\left(\beta\left(a+\frac{1}{2}\right)-i\zeta\right) p}{1-\beta p^2}\right)p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z)\\\nonumber =\frac{2\beta\tilde{\omega}^*(b+n)C_{1;n}}{E_n+m}p^{b+\frac{3}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{3}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a+1,b-1)}_{n}(z) \end{eqnarray} In the end we consider the case $k'=1-k$ or equivalently as previously $s=-\frac{1}{2}$ and $\xi'=\beta-\xi$. We arrive at: \begin{equation} R_{1;n}(p;k,\xi)=C_{1;n}p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z) \end{equation} where $a=\frac{1}{2}-\frac{\xi}{\beta}$ and $b=\frac{1}{2}-k$. Having used relation (\ref{wf_tilde_R_2}) we obtain: \begin{eqnarray} \tilde{R}_{2;n}(p;k,\xi)=\frac{\tilde{\omega}^*C_{1;n}}{E_n+m}\sqrt{1-\beta p^2}\left(\frac{\partial} {\partial p}-\frac{k}{p}+\frac{\eta^* p}{1-\beta p^2}\right)p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z)\\\nonumber =\frac{\tilde{\omega}^*C_{1;n}}{E_n+m}\sqrt{1-\beta p^2}\left(\frac{\partial} {\partial p}-\frac{\frac{1}{2}-b}{p}+\frac{\left(\beta\left(\frac{1}{2}-a\right)-i\zeta\right) p}{1-\beta p^2}\right)p^{b+\frac{1}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a+\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a,b)}_n(z)\\\nonumber =-\frac{2\beta\tilde{\omega}^*(n+1)C_{1;n}}{E_n+m}p^{b+\frac{3}{2}}(1-\beta p^2)^{\frac{1}{2}\left(a-\frac{1}{2}\right)-i\frac{\zeta}{2\beta}}P^{(a-1,b-1)}_{n+1}(z) \end{eqnarray} It has been already noted that the last case does not have ``classical" limit when the parameters of deformation $\alpha$, $\beta$ tend to zero. Similarly in the case of deformed algebra with minimal length \cite{QuesneJPA05} bounded states which do not have classical limit appear. \section{Discussion} In this work we considered the Dirac oscillator problem in deformed space given by the commutation relations (\ref{algebra}). It was shown that deformed commutation relations (\ref{algebra}) give rise to minimal uncertainty in position as well as in momentum. To find appropriate representation for position and momentum operators a specific nonsymplectic transformation was proposed \cite{Mignemi_arxiv}. It allows one to find some relation between given algebra (\ref{algebra}) and well known Snyder algebra. Having used proposed representation it has been solved exactly the Dirac oscillator eigenvalue problem. It has been shown that the Dirac oscillator in deformed space with commutation relations (\ref{algebra}) has some common features with conventional case as well as in case of deformation with minimal length only. A dissymmetry under the exchange of $s=\frac{1}{2}$ with $s=-\frac{1}{2}$ that appeared in nondeformed case due to specific substitution ${\bf P}\rightarrow{\bf P}-im\omega{\bf X}\hat{\beta}$ takes place in case of Snyder-de Sitter deformed algebra (\ref{algebra}). The same situation happens in case of deformed algebra with minimal length \cite{QuesneJPA05}. If we consider system of equations (\ref{equ_3_b+}) and (\ref{equ_3_b-}) and make substitution $\omega\rightarrow -\omega$ the system can be transformed to equivalent one where $s$ is replaced by $-s$ and $E, R_1, \tilde{R}_2$ are changed into $-E,-\tilde{R}_2, R_1$ respectively. This transformation is valid in nondeformed case \cite{Moshinsky_JPA89} and in the presence of deformed algebra with minimal length \cite{QuesneJPA05}. In nondeformed situation it is treated in connection with supersymmetry or, equivalently, with duality between particles and antiparticles \cite{Moshinsky_FoundPhys93}. Another similarity with previous cases lies in the absence of negative energy $E=-m$ ground states \cite{Moshinsky_JPA89,QuesneJPA05}. It has been noted above the energy spectrum of the Dirac oscillator with deformed commutation relations (\ref{algebra}) takes similar form as in case of deformed algebra with minimal length. \cite{QuesneJPA05}. In particular, the difference $E^2_n-m^2$ gets terms quadratic in $n$ instead of linear dependence in nondeformed instance. It should be noted that the relations for the energy spectrum would be in agreement with each other if the parameter $\alpha$ in our expressions is set to zero whereas in relations obtained in \cite{QuesneJPA05} the only parameter corresponding to our $\beta$ is kept. We also note that in case of deformed algebra with minimal length ground state with energy $E^2-m^2=0$ is allowed for small values $j$ only \cite{QuesneJPA05}. In contrast to it the Snyder-de Sitter algebra (\ref{algebra}) does not make any restriction for parameter $j$ similarly as it was in ordinary quantum mechanics \cite{Moshinsky_JPA89}. Ground states with nonvanishing energy $E^2-m^2\neq 0$ are allowed for both projections of spin: $s=\frac{1}{2}$ and $s=-\frac{1}{2}$. Here similarly to nondeformed situation no restriction on value of total angular momentum quantum number $j$ is imposed. It is worth stressing that in order to have physically acceptable wavefunctions parameters of oscillator should fulfil some conditions, namely product $m\omega$ can not take any value but it should satisfy such requirements as (\ref{cond_gr_state=0}) or (\ref{cond_nonzero_gr_state}). We also remark that although Dirac oscillator was introduced as a relativistic problem in our case it is not Lorentz covariant. This is caused by the fact that chosen algebra of operators (\ref{algebra}) is not a relativistic one. The algebra (\ref{algebra_2}) is obtained from the relativistic Snyder-de Sitter algebra \cite{Mignemi_arxiv,Kowalski-glikman_PRD04} and it seems that it easy to consider fully relativistic case but unfortunately some problems appear. The first one is that both time and energy will be represented by differential operators as we have here for position and momentum operators. The second problem is related to the behaviour of minimal uncertainties under Lorentz transformations. These questions need careful consideration and will examined elsewhere.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The physics of the quantum error correction process~\cite{Shor95, Steane96, Kitaev03, Dennis02, Terhal15, Brown16}, where we aim to correct a random configuration of errors incident to a system, is naturally captured by its free energy; a quantity specified by an energetic and an entropic contribution. Broadly speaking, the likelihood that the environment can introduce an error that will corrupt encoded logical information corresponds to the configurational energy, and the number of configurations which will lead the system to failure -- the entropic contribution to the free energy. Characterising both of these quantities will enable us to establish the performance of a given quantum error-correction procedure precisely. A number of studies have been conducted where threshold error rates of topological codes have been determined under a variety of noise models by mapping maximum likelihood decoding onto the partition function of a statistical mechanical Hamiltonian. Under such mappings thresholds have been found to correspond to phase transitions in the statistical mechanical model which can be estimated both analytically~\cite{Dennis02, Fowler12} and numerically~\cite{Katzgraber10, Andrist11, Bombin12, Andrist12, Katzgraber13, Andrist15, Andrist16, Kubica17, Chubb18}. More generally, it is interesting to explore how the competition between energy and entropy is manifest at low error rates. For instance, such studies~\cite{Pastawski10} have shown that certain models with a divergent code distance such as the Bacon-Shor code~\cite{Bacon06} do not present a finite error threshold due to entropic considerations that arise over the error-correction procedure. Indeed, we hope that we can develop quantum hardware~\cite{Reed12, Barends14, Nigg14, Corcoles15, Takita16, Linke16} that functions with error rates far below threshold such that the logical failure rate of the system decays rapidly as we increase the number of physical degrees of freedom. To this end it is important to characterise the energetic and entropic contribution of the free energy of different quantum error correction processes to better understand the shortcomings of their performance, and to help us design better quantum error-correcting codes in the future. \begin{figure} \includegraphics{Lattices.pdf} \caption{\label{Fig:Lattices}Square lattice surface codes on the torus with two different orientations. (Left) The square-lattice surface code with $n = 72$ and $d = 6$. A star and plaquette operator are shown at the top of the figure. A least-weight error is shown along the red path at the bottom of the lattice. The red line supports an uncorrectable error of dephasing flips with weight $d/2$. (Right) The rotated diamond-lattice surface code with $n = 144$ and $d = 12$. The coloured lines indicate the low-weight support of different low-weight logical operators, and the yellow points indicate right turns in the red path.} \end{figure} We present new insights into the statistical mechanics of topological quantum error correction by exploring the following observation, depicted in Fig.~\ref{Fig:Lattices}. The surface code~\cite{Kitaev03, Dennis02}, first defined by Kitaev with qubits lying on the edges of a square lattice has a code distance of $d = \sqrt{n / 2}$ where $n$ is the number of physical qubits in the system. In contrast, ~\cite{Bombin06a, Hastings15, Delfosse16} the model~\cite{Wen03} where the same lattice geometry is rotated by $\pi / 4$ ~\cite{Nussinov09, Brown11} has an increased code distance; $d = \sqrt{n}$. Conventional wisdom may lead us to suppose that that the rotated variant of the surface code will have a better qubit economy than the original square lattice model. However, to compare these two codes accurately, we must account for intricate combinatorial effects that emerge over the error-correction process. In fact, we find that the better logical error rate of these two models is a complicated function of their system size and error rate. In Fig.~\ref{fig:summaryintro}, we summarize the results of the paper on a diagram that maps out parameter space over system size and noise strength, indicating which analysis tools apply to what regimes, and the area of parameter space where we might prefer to employ the code with the inferior distance. In Sec.~\ref{Sec:Preliminary} we introduce the surface code, the error model, the decoding algorithm we use for numerical studies, and we define the free energy of the quantum error correction procedure. In Sec.~\ref{sec:pathcounting} we study both models analytically in the limit of low error rate, where energy dominates entropy and the rotated code outperforms the original. We review the failure rate calculation of the original model~\cite{Dennis02} in this limit and derive a new formula for the rotated model. In Sec.~\ref{sec:MC} we use numerical methods to observe that the two codes behave almost identically close to the threshold error rate. Remarkably, for finite system sizes up to $n \lesssim 2500$, we find that the unrotated variant of the code has a marginally lower logical error rate. We also use sophisticated numerical methods~\cite{Bennett76, Bravyi13} to verify our analysis in the low error rate regime. In Sec.~\ref{sec:model} we generalise the path counting calculations for the low error rate regime to gain insight into the numerical results for finite error rate. \begin{figure} \includegraphics{SummaryFigure.pdf} \caption{Summary of results. The failure probability $\overline{P}(p,n)$ of the two orientations depends on the number of qubits $n$, and physical error rate $p$. Surprisingly, we find at error rates greater than about half the threshold error rate $p_\text{th}$ for system sizes $\sqrt{n} \lesssim 50 $, that the code with smaller distance outperforms the rotated variant due to entropic effects. We hatch this region in parameter space. We use a number of approaches to explore the different parts of parameter space that are shown with different colors in the figure. In Sec.~\ref{sec:pathcounting} we find closed expressions for $p \rightarrow 0$ (green). For large $n$, these have the form $\overline{P}(p,n) \rightarrow (\gamma_0)^{\sqrt{n}} p^{d/2} $, with $\gamma^{\blacksquare}_0 = 2^{\frac{1}{\sqrt{2}}} \approx 1.6325$ and $3.4142 \approx 2+ \sqrt{2} \leq \gamma^{\blacklozenge}_0 \leq \sqrt{27/2} \approx 3.6742$. In Sec.~\ref{sec:MC} we use Monte Carlo sampling to study the finite error rate regime numerically (blue). We use the splitting method (red), introduced for topological codes by Bravyi and Vargo in Ref. ~\cite{Bravyi13}, to numerically interpolate between the Monte-Carlo studies and the analytic path counting results which verifies our analysis in the low error rate regime. Finally, in Section~\ref{sec:model} we introduce an analytical model that can be regarded as a generalisation of the path-counting method that accurately models a number of features seen in the numerical studies (yellow). \label{fig:summaryintro} } \end{figure} \section{Error correction with the surface code} \label{Sec:Preliminary} We first describe the surface code, the error model and the minimum-weight matching decoding algorithm we use to obtain our results. A detailed description of these concepts are found in the seminal references~\cite{Kitaev03, Dennis02} and in recent review articles~\cite{Terhal15, Brown16}. \subsection{The surface code} The surface code is defined on a graph embedded on a topologically non-trivial two-dimensional manifold with qubits on its edges. Encoded states of the surface code, $ | \psi \rangle $, lie in the common $+1$ eigenspace of elements of its stabilizer group, $\mathcal{S} \subseteq \mathcal{P}_n$, where $\mathcal{P}_n$ is the Pauli group acting on $n$ qubits which is generated by the Pauli operators $X_e$, $Y_e$ and $Z_e$ acting on the qubit of edge $e$~\cite{Gottesman97}. The stabilizers of the surface code, commonly known as star and plaquette operators, $S_v, S_f \in \mathcal{S}$, are associated to vertices, $v$, and faces, $f$, of the graph. Specifically, we have star operators $S_v = \prod_{\partial {e}\ni v} X_e$ and plaquette operators $S_f = \prod_{e \in \partial f} Z_e$, where $\partial e$($\partial f$) denote the set of vertices(edges) in the boundary of $e$($f$). See Fig.~\ref{Fig:Lattices} for examples of star and plaquette operators. The logical operators that generate the rotations about the encoded states of the surface code, $\overline{Z}_j$ ($\overline{X}_j$), are the tensor product of Pauli-Z(Pauli-X) operators supported along non-trivial cycles of the graph(dual-graph). Examples of non-trivial cycles of the graph are shown by colored lines in Fig.~\ref{Fig:Lattices}. Closed cycles of Pauli-operators that form contractable loops are elements of the stabilizer group. Important to our study is the code distance, $d$ which is the weight of the least-weight non-trivial logical operator. In this paper, we only consider codes with even distance. The lattice shown on the left of Fig.~\ref{Fig:Lattices}, i.e. the square-lattice code, has a code distance $d = \sqrt{n/2}$, and the lattice on the right, the rotated-lattice model, has code distance $d = \sqrt{n}$, where $n$, the code length, is the number of qubits in the system. Where there is ambiguity, we use symbols `$\blacksquare$' and `$\blacklozenge$' to distinguish the square(rotated) lattice surface codes shown to the left(right) of Fig.~\ref{Fig:Lattices}. To the best of our knowledge, the first appearance of the surface code with the square(rotated)-lattice was due to Kitaev(Wen) in Refs.~\cite{Kitaev03} and~\cite{Wen03}, respectively. \subsection{Error model} \label{sec:errormodel} Stabilizer measurements project local noise onto an encoded state that has been acted upon by some Pauli error. We thus focus our attention on Pauli errors $E \in \mathcal{P}_n$. Given that the surface code identifies Pauli-Z(Pauli-X) errors separately in an equivalent manner, and the group $\mathcal{P}_n$ is generated by operators $X_e$ and $Z_e$, we can exclusively study Pauli-Z errors that are identified by star operators. An equivalent discussion holds for Pauli-X errors that are identified by plaquette operators. We therefore consider errors of the form \begin{equation} E = \prod_{e\in\mathcal{Q}} Z_e^{x_e}\label{eqn:E}, \end{equation} where $x_e = 1$ with constant probability $p$ and $x_e = 0$ otherwise. We can concisely write the probability that an error, $E$, is drawn from the independent and identically distributed dephasing noise model $\pi(E)$ with the expression \begin{equation} \pi (E) = (1-p)^{n - \text{wt}(E)} p ^{\text{wt}(E)}=(1-p)^{n}e^{-\beta \text{wt}(E)}, \label{Eqn:iidDistribution} \end{equation} where $\text{wt}(E) = \sum_e x_e$ is the weight of error operator $E$. It will also be helpful to denote by $\pi_w$ the probability that an error of weight $w$ occurs, i.e. $\pi_w = \pi(E)$ for $\text{wt}(E) = w$. \subsection{The minimum-weight matching decoder} \label{SubSec:Decoder} Let $| \phi \rangle = E |\psi \rangle$ be a code state which has suffered error $E$. For the surface code, it is helpful to interpret errors as strings that lie along edges of the lattice. We measure defects, i.e. stabilizer measurements that return the $-1$ outcome, to determine a correction operator $C$ where $ CE \in \mathcal{S}$ such that $C | \phi \rangle = |\psi \rangle$. The defects from the stabilizer measurements are associated with end points of error strings. Assuming $E$ is low weight, we seek to find an operator $C$ that connects pairs of defects via strings of low weight. We evaluate $C$ with minimum-weight matching~\cite{Edmonds65, Kolmogorov09, Dennis02}. The algorithm takes a complete graph where vertices represent defects and the edges are assigned weights according to the separation of the defects by Manhattan distance. The algorithm returns a bipartite graph whose edges correspond to a least-weight correction. It is instructive to view error correction with the surface code from a homological perspective. Given that $C$ and $E$ must have the same boundary that is indicated by the locations of stabilizer defects, the product of $CE$ necessarily forms a closed cycle on the graph. Error correction fails if and only if $CE$ produces a non-trivial cycle as this operator will correspond to a logical operator. Otherwise we have that $CE \in \mathcal{S}$ such that error correction does not introduce a logical error to the system. For an introduction to homology theory see Ref.~\cite{Nakahara} or Appendix A of Ref.~\cite{Anwar14} for an introduction that connects topological error correction with homology. \subsection{The entropic contribution to failure rates} \label{sec:entropy} We analyse and compare the the logical failure rates of the square-lattice and rotated-lattice surface codes. We denote the logical failure probability \begin{equation} \overline{P}(p,n) = \sum_{E \in \mathcal{F}} \pi(E), \end{equation} where $\mathcal{F}$ is the set of all errors that will cause the logical failure of a code and we append a superscript $\blacksquare$ or $\blacklozenge$ symbol where there is ambiguity which model we are discussing. By expressing the likelihood of error $\pi(E)$ with $\text{wt}(E) = w$ occurring as $ \pi(E) = (1-p)^n \exp(-\beta w)$ where $\beta = \log\left[(1-p) / p \right]$ plays the role of an inverse temperature, we can express the logical failure rate as follows \begin{equation} \overline{P}(p,n) = (1-p)^{n} \sum_{w=d/2}^{n} N_{\text{fail}}(w) e^{ -\beta w}, \end{equation} where the $N_{\text{fail}}(w)$ is the number of weight-$w$ elements of $\mathcal{F}$. We remark that the summation begins at $w = d / 2$ as in the model of interest this is the least-weight error that will cause the minimum-weight matching decoder to fail. Finally, by defining entropy $S_{\text{fail}} = \log[N_\text{fail}(w)]$ we have the logical failure rate in terms of free energy such that \begin{equation} \overline{P}(p,n) = (1-p)^{n} \sum_{w=d/2}^{n} e^{-\beta F(w)}, \label{eq:entropy} \end{equation} where \begin{equation} F(w) = w - S_\text{fail}(w) / \beta, \end{equation} and the weight of the error $w$ can be regarded as the energy term of the free energy. Ubiquitous in the behaviour of physical systems is the competition between the energy term and the entropy term of their free energy. For a given $\beta$, there will be an energy $\langle w(\beta) \rangle$ which minimizes the free energy $F$, and this dominates the performance $\overline{P} \sim (1-p)^n e^{-\beta F(\langle w(\beta) \rangle)}$. For two different error correction schemes, the scheme which has the largest minimum free energy will have better performance. As we have already discussed, the distance of the surface code embedded on the rotated lattice is greater than that of the square lattice model, and therefore has an energetic advantage. However, the difference in the performance of the two models becomes less clear when the entropy term is also taken into account. As we summarise below, we use a number of different methods to compare the entropy for the square-lattice and diamond lattice surface-code models as a function of the available number of physical qubits and the physical error rate. \section{Low physical error rates} \label{sec:pathcounting} We first consider the regime of asymptotically low error rate. In this regime we can restrict our consideration to logical failures caused by errors of minimal weight, $d/2$. Specifically, we take the following limit \begin{equation} \overline{P}_{\text{low-}p}(p,n) : = \lim_{p \rightarrow 0} \overline{P}(p,n) \sim N_\text{fail}(d/2) ~p^{d/2}. \end{equation} We now focus on finding analytical expressions for $N_\text{fail}(d/2)$ for each of the models. \subsection{The square lattice model at low error rates} Path counting has already been well-studied with the square-lattice variant of the surface code~\cite{Dennis02, Watson14} where the code distance is $d= \sqrt{n / 2}$. Logical failures can occur when the error incident to the system is supported on at least half of the qubits of a horizontal or a vertical path through the lattice. There are $2d$ such paths; we illustrate one of them in red to the left of Fig.~\ref{Fig:Lattices}. We can therefore count the number of least-weight errors by multiplying the number of minimal-weight non-trivial cycles by the number of error configurations of weight $d/2$ that lie on each given cycle. The number of configurations on a given path is just the binomial coefficient $ C^{d}_{d/2}$ where $C^a_b = a! /b!(a-b)!$, yielding \begin{eqnarray} N_{\text{fail}}^{\blacksquare}(d/2) &= & \frac{1}{2} \cdot 2d \cdot C^{d}_{d/2}. \label{eq:NpcK} \end{eqnarray} The factor of $1/2$ comes from the fact that we have assumed $d$ is an even integer whereby each syndrome will occur for exactly two possible errors, $E_1$ and $E_2$, such that $E_1E_2$ is a non-trivial logical operator. Supposing then that the decoder is deterministic, and the correction operator for the given syndrome is, say $C = E_1$, then the decoder will decode this syndrome correctly in half of the instances where this syndrome appears under the error model. Indeed, on average, half of the times this syndrome occurs will be due to incident error $E_1$, and the other half will be due to error $E_2$. In the limit of large $d$ we can make use of Stirling's approximation to obtain $C^d_{d/2}\approx \sqrt{\frac{2}{\pi d}}2^d$. Dropping terms polynomial in $n$ yields the large $n$ expression \begin{eqnarray} N_{\text{fail}}^{\blacksquare}(d/2) &\sim & (\gamma^{\blacksquare}_0)^{\sqrt{n}}, \end{eqnarray} where \begin{eqnarray} \gamma^{\blacksquare}_0 = 2^{\frac{1}{\sqrt{2}}} \approx 1.6325. \end{eqnarray} \subsection{The rotated lattice at low error rates} \label{subsec:rotatedpathcounting} It is instructive to compare the rotated code to the square lattice model in the low error rate regime because we observe that, although the rotated lattice has an increased distance, $d^{\blacklozenge} = \sqrt{2} d ^\blacksquare$, it also has a considerably larger number of least weight errors. Indeed, consider moving from the left point to the right point of the gray diamond on the lattice on the right of Fig.~\ref{Fig:Lattices}. For a given starting point in Fig.~\ref{Fig:Lattices}, there are $C^{d}_{d/2}$ distinct paths within the diamond. Given that up to $C^{d}_{d/2}$ errors can be arranged along each path, we find an upper bound on the number of least-weight errors $N^\blacklozenge_\text{fail}(d/2)$ on the rotated lattice to be $\sqrt{n} \cdot 2^{2 \sqrt{n}}$. Before launching into an involved calculation of $N^\blacklozenge_\text{fail}(d/2)$ we consider some general arguments. We remark that there are also logical error paths of length $d$ that wrap around a diagonal of the lattice. We focus only on the paths that wrap horizontally or vertically around the lattice as there are significantly more of these, and for large $n$ the contribution of the failures due to $d/2$ errors lying on these diagonal paths is negligible by comparison. This upper-bound over-estimates the number of least weight errors that will cause a logical failure with the rotated lattice. To see why this is the case, recall that we are interested in the number of least-weight errors, not the number of least-weight paths that can support a least-weight error. Some of the paths we have considered in the calculation so far can overlap, see for instance, the blue and green paths at the right of Fig.~\ref{Fig:Lattices}. Indeed, with the counting given above, we have counted an error of weight $d/2$ that lies entirely on both the blue path and the green path twice. In fact, this same error could have appeared on any of $C^{d/2}_{d/4}$ different paths. Supposing that all the errors have been over counted $ C^{d/2}_{d/4} \sim 2^{\sqrt{n} / 2}$ times, we can lower bound the number of least weight errors to $n^{1/4} \cdot 2^{3 \sqrt{n} / 2}$, where we have divided by the largest possible number of non-trivial cycles that every error may have come from by which we may have over counted. To this end, we anticipate a large $n$ expression for the number of failing configurations \begin{eqnarray}\label{eqn:pc2} N_{\text{fail}}^{\blacklozenge}(d/2) &\sim & (\gamma^{\blacklozenge}_0)^{\sqrt{n}}, \end{eqnarray} where $2.8284 \approx 2 \sqrt{2} \le \gamma^{\blacklozenge}_0 \le 4$. In Sec.~\ref{sec:pathcountingWen} and in the associated Appendix~\ref{app:pc} we tighten these bounds on $N_{\text{fail}}^{\blacklozenge}(d/2)$ giving \begin{eqnarray} 3.4142 \approx 2 + \sqrt{2} \leq \gamma^{\blacklozenge}_0 \leq \sqrt{27/2} \approx 3.6742.\label{eqn:tighter} \end{eqnarray} These bounds hold for any choice of minimum weight matching decoder for the rotated lattice, although as we describe in Appendix~\ref{app:decoder}, the path counting performance of different decoders can vary. We conjecture that the upper bound is tight for any minimum weight decoder. With these ranges of $\gamma^{\blacksquare}_0$ and $\gamma^{\blacklozenge}_0$ we compare the two models by studying the ratio $ \overline{P}^{\blacklozenge}_{\text{low-p}} / \overline{P}^{\blacksquare}_{\text{low-p}} \sim \Delta(p)^{\sqrt{n}} $ where \begin{eqnarray} \Delta(p) &=& \frac{\gamma^{\blacklozenge}_0}{\gamma^{\blacksquare}_0} \cdot p^{\frac{d^{\blacklozenge}-d^{\blacksquare}}{2\sqrt{n}}} .\nonumber \end{eqnarray} Written in this form it is clear that in the regime of low $p$, $\Delta(p)<1$ meaning that the smaller entropic contribution of the square lattice compared with the rotated lattice is not enough to make up for the its smaller distance. We therefore observe that the toric code on the rotated lattice has a smaller asymptotic logical failure rate than on the square lattice for the same $n$. However, this formula for $\Delta(p)$ is greater than one for $p > 0.7 \%$, suggesting that the square lattice could outperform the rotated lattice in spite of its reduced distance for some values of $p$ below threshold. Caution must be taken with this conclusion since we are using path counting results (valid only for asymptotically small $p$) to reason about a finite $p$ regime. In Sections~\ref{sec:MC} and \ref{sec:model} we analyze the finite $p$ regime numerically and analytically, and find that this crossover survives for finite size lattices up to $n\sim 2500$. \subsection{Tighter bounds for path counting with the rotated lattice} \label{sec:pathcountingWen} We now set out to derive the tighter bounds of Eqn. (\ref{eqn:tighter}) for the number of minimum weight failing errors $N_{\text{fail}}^{\blacklozenge}(d/2)$. To do so, we define a map from each weight-$d/2$ error configuration either to a weight-$d$ logical operator containing the error, or to `null' when no such logical operator exists. Then we step through each weight-$d$ logical operator, which can be enumerated straightforwardly, and count the number of errors in its pre-image. This allows us to enumerate every weight-$d/2$ error which can cause a minimum weight decoder to fail, without counting the same error multiple times (as each error is only in the pre-image of one logical operator). We start by enumerating all horizontal logical operators, that we will also be refer to as paths, which pass through the left and right corners of the gray diamond depicted to the right of Fig.~\ref{Fig:Lattices}. Note that any minimal weight path passing through these two points will be in the gray diamond. We denote the vertices within the gray area with a coordinate system $(x,y)$ where the origin lies to the left of the figure and the axes are shown in white. The principle insight we use to calculate the number of least-weight errors that can lead to logical failure is the characterisation of the paths by the vertices a given path turns on. Indeed, turns come in two types --- left turns and right turns --- where a left(right) turn is a vertex where the direction of the path changes to move along the $y-$axis($x$-axis). The right turns made by the red path in Fig.~\ref{Fig:Lattices}(right) are marked with yellow spots. Left turns are unmarked. Given this labeling, we now define a map from a weight-$d/2$ error to a horizontal weight-$d$ logical operators $CE$ containing $E$. We define the map by imagining a deterministic decoder that returns a correction $C$ which contains no right turns. With this particular decoder, we are able to count the number of errors that give rise to a particular path \textit{uniquely} with the following observation: The path $CE$ contains a right turn if and only if there is at least one error lying on the path on an edge adjacent to every vertex that supports that right turn. We can therefore count the number of least-weight errors that maps to a particular path by counting the number of configurations where all right turns of a given path are adjacent to an edge that supports an error. It is easy to check that non-trivial cycles of length $d$ are such that every right turn is preceded by a left turn and vice versa. We count the number of configurations that lie on a path with $T$ right turns by first separating the segments of the path into two sets; those that are adjacent to right turns and those that are not. We first focus on those adjacent to the right turns. Each of the $T$ right turns must have an error on one of its adjacent edges, and we suppose that $0 \le w \le w_\text{max} \equiv \min(T,d/4)$ of the right turns support two errors. Of the set of all right turns the subset of $w$ right turns that are occupied by two errors can be configured in any of $C^T_w$ ways. Now, given that right turns with only one adjacent error can take two configurations determined by which edge the error lies on, and the remaining $w$ right turns with both edges occupied by an error can take only one configuration we find that the edges adjacent to right turns can take $C^T_w \times 2^{T-w}$ configurations if $w$ of the right turns have errors on both of their adjacent edges. The remaining $d/2 - w - T$ errors that lie on the path can be distributed arbitrarily along the remaining $d - 2T$ edges of the path. We therefore obtain an expression, $\#(T,d)$, for the number of error configurations that give rise to a path with $T$ right turns using this specialised decoder. We find \begin{equation} \#(T,d) = \sum_{w = 0 }^{w_{\text{max}}} 2^{T - w} C_w^T C_{d/2 - T - w}^{d - 2T}. \label{Eqn:Turns} \end{equation} It remains to count the number of paths around the torus with $T$ right turns. We can specify any path of length $d$ containing $t$ right turns by listing the coordinates of the right turns of the path that lie in the red-dashed square in Fig.~\ref{Fig:Lattices}. We denote these with coordinates $\{(x_1,y_1),(x_2,y_2),\dots,(x_t,y_t)\}$ . Every choice of $t$ coordinates such that $x_j$ and $y_j$ take integer values and are strictly increasing (such that $ 0 \le x_1 < x_2 < \dots < x_t \le d / 2 - 1$ and $ 1 \le y_1 < y_2 < \dots < y_t \le d / 2$) will specify a valid path. There are therefore $\left(C_{t}^{d/2}\right)^2$ length-$d$ paths with $t$ right turns from $(0,0)$ to $(d/2,d/2)$. Finally, to determine the number of paths that include $T$ right turns exactly we must look to the boundary, which may also include a right turn. The paths with an extra right turn will be precisely those which do not have $x_1=1$ and do not have $y_T=d/2-1$. There are $\left( {C_T^{d/2-1} }\right) ^2$ such paths with $T+1$ turns if we include the right turn at the boundary. We therefore subtract this number that accounts for the paths that include an extra right turn at the boundary. To complete this calculation, we then add the terms with $T$ right turns including one on the boundary. These must have $T-1$ turns in the bulk arranged such that $x_1 \not=1$ and $y_{T-1} \not=d/2-1$. We find there are $C^{T-1}_{d-1}$ such paths. We therefore arrive at $\left( {C_T^{d/2} }\right) ^2 - \left( {C_T^{d/2-1} }\right) ^2 + \left( {C_{T-1}^{d/2-1} }\right) ^2$ different paths with $T$ right corners. We finally sum over paths with $ 0 \le T \le d/2$ right turns to obtain at the bound \begin{eqnarray} \! N_{\text{fail}}^{\blacklozenge}\!\left( \! \frac{d}{2} \right) \!\! &\leq& \! d \sum_{T = 0}^{\frac{d}{2}} \! \left[ \! \left( \! {C_T^{\frac{d}{2}} } \! \right)^{\!\!2} \!\! - \! \left( \! {C_T^{\frac{d}{2}-1} }\right)^{\!\!2} \!\! +\! \left( \! {C_{T-1}^{\frac{d}{2}-1} }\right)^{\!\!2} \right] \! \#(T,d)+ \nonumber\\ &&+ ~d \cdot C^d_{d/2} - 2d^2. \label{eqn:pcW} \end{eqnarray} The sums pre-factor $d$ accounts for the number of initial points a path can begin from, where we have considered non-trivial cycles along both the horizontal and vertical directions, and the function $\#(T,d)$ is given in Eqn.~(\ref{Eqn:Turns}). The term $d \cdot C^d_{d/2}$ accounts for the errors which can fail to diagonal paths. Note however that there are $d^2$ errors that have been counted three times that run along $d/2$ contiguous edges with a common direction. We have therefore subtracted $2d^2$ paths to account for this. In Appendix~\ref{app:pc} we consider Eqn.~(\ref{eqn:pcW}) in the large $n$ limit and calculate $\gamma^{\blacklozenge}_0 \leq \sqrt{27/2} \approx 3.6742$ in the expression $N_{\text{fail}}^{\blacklozenge}(d/2) \sim (\gamma^{\blacklozenge}_0)^{\sqrt{n}}$. It is possible to further tighten the upper bound in Eq.~(\ref{eqn:pcW}) by making use of the fact that any minimum-weight decoder must correct at least one error for a given syndrome, but this does not change the bound on $(\gamma^{\blacklozenge}_0)$. In the same appendix we also prove the lower bound $\gamma^{\blacklozenge}_0 \geq 2 + \sqrt{2} \approx 3.4142$. We believe the upper bound to be tight. \section{A numerical study} \label{sec:MC} Using a low-$p$ approximation we have predicted that for $p \gtrsim 0.4 \%$ it pays to minimise the entropy of the code, and not its distance. However, this prediction is unreliable since it is based on results that only rigorously hold in the limit $p\rightarrow 0$. We next turn to numerical methods to determine the failure rates of the two models at higher error rates, up to threshold. In what follows we describe our simulations. We find that close to threshold, both models behave almost identically for a given $n$. This is in stark contrast with the low-$p$ behavior. Furthermore, we identify a region where the original lattice marginally outperforms the rotated model. We find that this region does not persist for system sizes greater than $\sqrt{n} \sim 50$ or for error rates below $p \sim 0.05$. We also use the method due to Bravyi and Vargo~\cite{Bravyi13} to verify our low-$p$ formulas given in the previous section. Finally, we fit our data to a two paramter ansatz, which helps characterise the behavior of the two orientations for small system sizes. Our numerical results are summarised in Fig.~\ref{fig:summaryintro}. Monte-Carlo sampling proceeds by generating $\eta$ instances of error operators $E$ to estimate \begin{equation} \overline{P} = 1-\text{prob}(CE \in \mathcal{S}). \end{equation} where errors $E$ are drawn from the distribution determined by the error model defined above for a given $p$, and the correction operator $C$ is evaluated using the minimum-weight matching decoder for the syndrome of each error. \begin{figure} \includegraphics[width=\columnwidth]{MagicNumbers.pdf} \caption{\label{Fig:MagicNumber} The logical failure rate for the square lattice model with $n^\blacksquare = 1152 (d^\blacksquare = 24)$, shown in blue, compared with that of the rotated lattice with $n^\blacklozenge = 1156 (d^\blacklozenge = 34)$ in yellow, calculated using $\eta \sim 10^8 \text{--} 10^9$ samples. The inset shows the ratio of the failure rates of the two models, where the ratio in excess of unity marks the region where the square lattice model outperforms the rotated model using four fewer qubits.} \end{figure} We begin by identifying a point where the original lattice outperforms the rotated lattice. In Fig.~\ref{Fig:MagicNumber} we compare the logical failure rates of the unrotated model with distance $d^\blacksquare = 24$ and the rotated model with $d^\blacklozenge = 34$ with Monte-Carlo sampling. We choose these two system sizes as they each use a similar number of qubits, $n^\blacksquare = 1152$ compared with $n^\blacklozenge = 1156$. Error bars are determined according to $\Delta \overline{P} = \sqrt{(1-\overline{P}) \overline{P} / \eta}$ where we collect $ \eta \sim 10^8 \text{--} 10^9$ samples. In the inset we take the ratio of the error rates of the two models, $\overline{P}^\blacklozenge / \overline{P}^\blacksquare$. Although the logical failure rate is almost identical for both models, remarkably, the inset shows that the ratio exceeds $1$ at around $p \sim 5\%$. Not forgetting that the two models are equivalent up to their orientation, this is surprising result given that the original lattice has a smaller distance and four fewer qubits than the rotated lattice. We have thus identified a location in parameter space where the distance is not the determining feature of the logical failure rates. We mark this area of parameter space with the hatched region in Fig.~\ref{fig:summaryintro}. It now remains to explore the extent of this behaviour. In what follows we look to determine the boundaries of this region in parameter space. \subsection{Large system sizes} \begin{figure} \includegraphics[width=\columnwidth]{LargeMCdata.pdf} \caption{\label{Fig:LargeSizes} Monte Carlo data comparing large system sizes of the original(blue) and rotated(yellow) lattice for error rates $p = 4.5\%, 5\%,5.5\%, 6\%, 7\%, 8\%, 9\%, 10\%$ running from the bottom fitting to the top fittings. The inset shows the system size, $L^*$, where the linear fittings of the two different models cross for each value of $p$. These crossing points mark the top of the hatched region in Fig.~\ref{fig:summaryintro}, above which, the rotated lattice begins to outperform the original square lattice model.} \end{figure} We next use Monte-Carlo sampling to look at large system sizes at error rates $p \gtrsim 4.5 \% $ to determine the relative performance of the two models as $n$ diverges. We find that at system sizes larger than $\sqrt{n} \sim 50$ the rotated model again outperforms the square-lattice model. In Fig.~\ref{Fig:LargeSizes} we show logical failure rates for system sizes with $ 40 \lesssim \sqrt{n} \lesssim 64$ for physical error rates $p = 4.5\%,\, 5\%,\,5.5\%,\, 6\%,\, 7\%,\, 8\%,\, 9\%,\, 10\%$ where the smallest error rates are shown by the steep straight lines fitting at the bottom of the figure, and the square(rotated) lattice data and fittings are shown in blue(yellow). Like in Fig.~\ref{Fig:MagicNumber}, the data in Fig.~\ref{Fig:LargeSizes} shows that the performance of the two models are almost indistinguishable. The separation in the two models becomes more apparent when $p$ is small. Indeed, the difference in the gradients of the fittings is appreciable at $p = 4.5\% $, but this difference rapidly vanishes as $p$ increases. To determine the smallest system size above which the rotated model outperforms the square lattice model we find the system size at which the linear fittings shown in the graph cross. We mark the crossing points with small black crosses in the main plot, and the inset shows the crossing points as a function of $p$. The crossing points numerically estimate the location of the top boundary of the hatched region shown in Fig.~\ref{fig:summaryintro}. The numerical results that we have described to this point indicate that, in contrast to the regime of low error-rates given in the previous section, the behaviour of the two models is very similar for $p \ge 5\%$ with a region of parameter space where the square lattice model slightly outperforms the rotated model. To understand the extent of the difference between the two models it is illuminating to extrapolate the fitting found in Fig.~\ref{Fig:LargeSizes} to get an optimistic sense of the magnitude of the difference between the two models using our idealised error model. We use the following Ansatz~\cite{Brown15} to fit the data in the figure \begin{equation} \overline{P}_{\text{Ansatz}} = A(p) \exp \left( \alpha(p) \log\left( \frac{p} { p_{\text{th}}} \right) \sqrt{n} \right), \label{Eqn:MCFittingAnsatz} \end{equation} where $\alpha(p)$ and $A(p)$ are free parameters that depend on $p$ and $p_{\text{th.}}$ is the threshold error rate. Unsurprisingly, we find identical threshold error rates \begin{equation} p_{\text{th.}}^\blacksquare \sim 0.1035\pm 0.0002, \quad p_{\text{th.}}^\blacklozenge \sim 0.1035 \pm 0.0002, \label{Eqn:KitaevThresholdEval} \end{equation} for the two models. We evaluate the threshold and we plot $\alpha(p)$ and $\log_{10}A(p)$ for the two models in App.~\ref{App:MonteCarlo}. As an example, we extrapolate the fittings found above with Eqn.~(\ref{Eqn:MCFittingAnsatz}) at $p = 5\%$. For this error rate we find $$ \log_{10}A^\blacksquare = -0.61\pm0.04, \quad \alpha^\blacksquare= -0.323\pm 0.003, $$ and $$ \log_{10}A^\blacklozenge = -0.47\pm0.04, \quad \alpha^\blacklozenge= -0.335 \pm 0.003. $$ With this extrapolation we find that number of qubits at which the logical failure rate of the rotated model is one half of that of the square lattice model at system is $\sqrt{n} \sim 116$. At this point where we have in excess of ten-thousand physical qubits operating at an error rate below half threshold the logical failure rate of the two models is of the order $\sim 10^{-12}$ which is a relevant error rate for large-scale quantum algorithms~\cite{Fowler12b}. On the logarithmic scale that we use to measure failure rate, a factor of one half is relatively inconsequential. Our results thus indicate that, unless we have a very large number of qubits, it may be more valuable to optimise over factors such as the performance of the two different models when performing logical gates~\cite{Brown16a} instead of code distance at high error rates below threshold. \subsection{Low error rates} We now verify the calculations made in the previous section for low error rates. We adopt the method of Bravyi and Vargo~\cite{Bravyi13} to probe logical failure rates, $\overline{P}(n, p_0)$, for low physical error rates, $p_0$, that are intractable by regular Monte Carlo sampling. The method proceeds by spliting~\cite{Rubino09} the logical failure rate into a series of ratios $R_j = \overline{P}(n, p_{j}) / \overline{P}(n, p_{j+1}) $ such that \begin{equation} \overline{P}(n, p_0) = \overline{P}(n, p_\Lambda) \prod_{j=0}^{\Lambda-1} R_j, \end{equation} where $\overline{P}(n, p_\Lambda) $ is a failure rate that can be easily determined by, say, Monte Carlo methods. Then, we evaluate ratios $R_j$ using the acceptance ratio method due to Bennett~\cite{Bennett76}. The acceptance ratio method expresses $R_j$ as the fraction of two expectation values that can be evaluated efficiently via the Metropolis-Hastings algorithm. Details of the algorithm and its implementation are given in App.~\ref{App:Splitting}. In Fig.~\ref{Fig:WenPathCounting} we show the logical failure rates for the rotated model at system sizes $\sqrt{n} = 10,\,12,\dots,\,22$ to error rates as low as $p = 2 \cdot 10^{-4}$. As we explain in detail in App.~\ref{App:Splitting}, we evaluate each expectation value using the Metropolis-Hastings algorithm where we propose $N = 10^9 $ new trials for each expectation value, and find that at least $ \sim 5 \cdot 10 ^5 $ different error configurations are accepted for each calculation for distributions at the lowest $p$ values we investigate. For larger values of $p$ we find that as many as $\sim 10^8 $ different error configurations of the $10^9$ configurations that are proposed during the Metropolis Hastings algorithm are accepted for a given expectation value. We use the method to compare the data with the low error rate limit we obtained in Eqn.~(\ref{eqn:pcW}). Our results show that the data converges onto our limit, thus verifying the predictions made in the previous section. To better illustrate the convergence of the data, the inset of Fig.~\ref{Fig:WenPathCounting} shows the ratio of the numerically evaluated failure rates and the analytical expression. The inset shows the ratio $\overline{P}^\blacklozenge / \overline{P}^\blacklozenge_{\text{low-}p}$ approaches $1$ from above as the physical error rate vanishes. This behaviour is to be expect as higher order terms in the logical error rate become less appreciable as $p$ decreases. \begin{figure} \includegraphics[width=\columnwidth]{WenPathCounting.pdf} \caption{\label{Fig:WenPathCounting} Logical failure rates obtained using the numerical method due to Bravyi and Vargo~\cite{Bravyi13} compared with the low physical error rate bound given in Eqn.~(\ref{eqn:pcW}) as a function of physical error rate $p$ for system sizes $\sqrt{n} = 10,\,12,\dots,\,22$. The inset shows the ratio of the failure rates that we obtained numerically and the low error rate bound. The convergence to unity as $p$ vanishes shows good agreement with the approximation.} \end{figure} Together with an explanation of the method due to Bravyi and Vargo, we provide an equivalent analysis using the square-lattice model in App.~\ref{App:Splitting}. The comparison with the well-established expression for logical failure rate demonstrates the accuracy of the method at low error rates. We finally examine more closely the logical failure rates as the error rate vanishes. In Fig.~\ref{Fig:alpha} we plot the fitting function $\alpha(p)$ of Eqn. (\ref{Eqn:MCFittingAnsatz}) for the two models for system sizes in the interval $10 \lesssim \sqrt{n} \lesssim 22$ over an extensive range of $p$. The logical failure rates for intermediate error rates for the square-lattice and rotated-lattice models that have not been presented explicitly so far in the text are shown in Figs.~\ref{Fig:KitSplitData} and~\ref{Fig:WenSplitData}, respectively in Appendix.~\ref{App:Splitting}. The plot shows that $\alpha^\blacksquare(p) $ tends towards $1/ 2\sqrt{2}$ and $\alpha^\blacklozenge(p)$ tends towards $1/2$ as $p$ vanishes, as expected from the low error rate analysis given previously. \begin{figure} \includegraphics[width=\columnwidth]{alpha.pdf} \caption{\label{Fig:alpha} The function $\alpha(p)$ from the fitting Ansatz using Monte Carlo samples for $p > 0.05$ and the splitting method otherwise. We collect data for system sizes $10\le n\le 22$ as shown in Figs.~\ref{Fig:KitSplitData} and~\ref{Fig:WenSplitData}. The square(rotated)-lattice model is shown in blue(yellow). We observe slow convergence of $\alpha$ to $1/2^{3/2}$($1/2$) for the square(rotated)-lattice models as we have predicted using the path-counting formulae presented in the previous Section. We also observe a crossing in the functions $\alpha^\blacksquare(p)$ and $\alpha^\blacklozenge(p)$ at around $p \sim 2\%$. } \end{figure} We observe that the values of $\alpha(p)$ for the two models cross at around $ p \sim 2 \%$ in Fig.~\ref{Fig:alpha} such that the square-lattice model will outperform the rotated lattice model if we extrapolate the system size. This is in contrast to conclusion obtained with the Monte-Carlo analysis where we study larger system sizes, see Fig.~\ref{Fig:LargeAlpha}. Indeed, we find that, in fact, $\alpha(p)$ varies slowly with $n$. As such, the Ansatz given in Eqn.~(\ref{Eqn:MCFittingAnsatz}) needs to be modified to account for the $n$ dependence of the functions $A$ and $\alpha$. Here we have implicitly assumed that the drift in these values is slow in $n$ such that we can use a linear fit for $\log_{10}\overline{P}$ in $\sqrt{n}$, provided we only study a small interval of $n$, see Figs.~\ref{Fig:KitSplitData} and~\ref{Fig:WenSplitData}. \begin{figure} \includegraphics[width=\columnwidth]{SplittingCrossings.pdf} \caption{\label{Fig:logA} The system sizes at which the square-lattice model begins to outperform the rotated-lattice model as $n$ increases. We also mark $\sqrt{n} = 10$ and $\sqrt{n} = 22$ by black dashed lines to indicate where the range of system sizes for which we collect data. These data points mark the lower boundary of the hatched region shown in Fig.~\ref{fig:summaryintro}. The plot of $\log A$ shown as a function of $p$ between the path-counting regime and the threshold error rate for the square-lattice model(blue) and the rotated-lattice model(yellow) is shown in the inset. } \end{figure} We finally compare the logical failure rates of the two models within the smaller system sizes where $10 \lesssim \sqrt{n} \lesssim 22$. Indeed, due to discrepancies in $\log_{10}A(p)$, we find that for system sizes $\sqrt{n} \lesssim 16$, the rotated-lattice model outperforms the square lattice model. We show the values of $\log_{10}A(p)$ in the inset of Fig.~\ref{Fig:logA}, where these values are evaluated using the intersect of the fitting with $n = 0$ found in Figs.~\ref{Fig:KitSplitData} and~\ref{Fig:WenSplitData}. We mark the points where the linear fittings for the square-lattice model and the rotated-lattice model cross in the main plot in Fig.~\ref{Fig:logA}. These data points mark the bottom boundary of the hatched region, thus completing our numerical estimate of the location of the boundary of the parameter regime where the square-lattice model outperforms the rotated model. It will be interesting to explain the change in $\alpha(p)$ as $n$ increases throughout the hatched region. \section{Modeling finite physical error rates}\label{sec:model} In Sec. \ref{sec:pathcounting}, we considered the first-order analytical approximation for asymptotically low error rate regime for arbitrarily large system size. Here we generalise path counting methods to model topological quantum error correction at intermediate error rates. We will argue that the following expression \begin{equation} \overline{P}_{\text{model}} = \sum_{l=d}^{n} N_{\text{con}}(l) ~\xi(p)^l~ p^{\frac{l}{2}} \left( 1 -p \right)^{\frac{l}{2}}, \label{eq:generalclosedmodel} \end{equation} provides a qualitatively accurate expression for the failure rate, where $N_{\text{con}}(l)$ is the number of non-self intersecting paths of length $l$ that wrap around the torus. The model can be interpreted as a statistical mechanics model of a string wrapping around the torus in the presence of a `background gas'. The $\xi(p)$ term in Eqn.~(\ref{eq:generalclosedmodel}) describes the interaction which can be seen as a `negative friction' term encouraging the string to fluctuate. \subsection{Upper bounding the failure probability, and recovering a lower bound of the threshold} The following discussion applies to either orientation of the surface code on $n$ qubits. As described in Section~\ref{sec:entropy}, the probability of a logical error is \begin{eqnarray} \overline{P}(p,n) &=& (1-p)^{n} \sum_{w=d/2}^{n} N_{\text{fail}}(w) \left(\frac{p}{1-p}\right)^{\!\! w}, \end{eqnarray} where $N_{\text{fail}}(w)$ is the number of weight-$w$ elements of the failing error set $\mathcal{F}$. To obtain a more transparent expression for $\overline{P}$, we need to characterize which bit strings are contained in $\mathcal{F}$. The product of an error operator and its subsequent correction, $C(E)E$, is supported on closed paths on the lattice in the form of stabilizer operators or of logical operators. When a failure occurs, at least one of the closed paths in $C(E)E$ must be non-contractible -- we call the subset of edges that form this single non-contractible closed path $L$. Suppose that $C(E)E$ contains some specific non-contractible closed path $L$ such that a logical error occurs. It must be that $L \cap E$ has weight greater than $|L|/2$, since otherwise the minimum weight decoder would have yielded a lower weight correction $C'(E) = C(E) L$, where $L$ is absent in $C'(E)E$. We can use this to help construct the following bound, \begin{equation}\label{eqn:bound} \! \overline{P} \leq (1-p)^{n} \! \sum_{L} \! \sum_{u=\frac{|L|}{2}}^{|L|} \! \sum_{v=0}^{n-|L|} \! C_{u}^{|L|} C^{n-|L|}_{v} \left(\frac{p}{1-p}\right)^{\!\! u+v}. \!\! \end{equation} The outer sum is over all non-contractible, self-avoiding closed paths in the lattice. Given such a closed path $L$, the inner sums add up the probability that an error occurs which has support on more than half of the edges of $L$. To do this, we divide the lattice into the $|L|$ edges in the non-contractible closed path $L$ and the $n-|L|$ edges in its compliment. There are $C^{|L|}_{u}$ choices of $u$ edges along the closed path, and $C^{n-|L|}_{v}$ choices of $v$ edges outside it. The probability of any error configuration with a total $u+v$ errors is $p^{u+v}(1-p)^{n-u-v}$. It is useful to define $N_{\text{con}}(l)$, the number of length-$l$ non-contractible closed paths in the lattice, constrained by the requirement that they can have no self-intersections. We then rewrite the bound as \begin{eqnarray} \overline{P} &\leq& (1-p)^{n} \sum_{l=d}^{n} N_{\text{con}}(l) \sum_{u=\frac{l}{2}}^{l} \sum_{v=0}^{n-l} C^{l}_{u} C^{n-l}_{v} \left(\frac{p}{1-p}\right)^{\!\! u + v}, \nonumber \\ & \leq & \sum_{l=d}^{n} N_{\text{con}}(l)~ 2^l ~p^{\frac{l}{2}} \left( 1 -p \right)^{\frac{l}{2}}. \label{eq:bound} \end{eqnarray} We have used an explicit computation of the sum over $v$, \begin{equation} \sum_{v=0}^{n-l} C^{n-l}_{v}\left(\frac{p}{1-p}\right)^{\!\! v} = \left( 1 -p \right)^{-n+l}, \end{equation} and we also used the following bound for the sum over $u$, \begin{equation} \sum_{u=\frac{l}{2}}^{l}C^{l}_{u} \left(\frac{p}{1-p}\right)^{\!\! u} \leq 2^l \left(\frac{p}{1-p}\right)^{\!\! l/2}, \label{eq:ineq} \end{equation} which holds for all $l$ and $0<p<1/2$. Our formula in Eqn.~(\ref{eq:bound}) for an upper bound on the failure probability can be used to give a lower bound on the error correction threshold similar to that in \cite{Dennis02}. First, we use the bound that $N_{\text{con}}(l) < N_0 c^l$ (for a constant $N_0$ and with $c = 2.638\dots$). $c$ is the expansion constant for self-avoiding walks on the square lattice \cite{Madras96}. Then we use Stirling's bound to group terms that are exponential in $l$. Finally, we find the maximum $p$ below which the upper bound of $\overline{P}$ approaches zero for large $n$. \begin{eqnarray} \overline{P} &\leq& \sum_{l=d}^{n} N_{\text{con}}(l,n)~2^l~ p^{\frac{l}{2}} \left( 1 -p \right)^{\frac{l}{2}}, \nonumber \\ &\leq& \sum_{l=d}^{n} N_0 \cdot \left( 2 c \sqrt{p(1-p)} \right)^{\! l}.~~~~~\label{eq:DennisBound} \end{eqnarray} For sufficiently large $n$, the right-hand side will approach zero with $p\rightarrow 0$ provided that the parenthesis is less than 1 in Eqn.~(\ref{eq:DennisBound}). This gives a lower bound on the threshold of $p_{\text{th}}> p_{\text{bound}}=0.0373$, which is the same as the bound in Ref.~\cite{Dennis02}. The sum in Eqn.~(\ref{eq:bound}) over-counts in two ways: some error configurations which are not present in $\mathcal{F}$ are included, and some error configurations are included more than once. Indeed, some distributions of $u> l/2$ in Eqn.~(\ref{eq:bound}) can be associated to several strings, and these are counted several times. Our goal in the following subsections will be to remove terms from the sum, and to estimate $N_{\text{con}}(l,n)$ to get closer to the true value of $\overline{P}$. In doing so, we will lose the guarantee that we overestimate $\overline{P}$. \subsection{Counting closed paths} \label{sec:countingNonContractible} An important quantity in Eqn.~(\ref{eq:bound}) is $N_{\text{con}}(l,n)$, the number of length-$l$, non-contractible, self-avoiding closed paths on the lattice. In this section we estimate $\log[N_{\text{con}}(l,n)]$ in the limit of large $n$ as shown in Fig.~\ref{fig:PlotUnconstrainedPaths}. \begin{figure} \includegraphics[width=\columnwidth]{PlotUnconstrainedAndConstrainedPaths.pdf} \caption{ The logarithm of the number of paths, normalized by $\sqrt{n/2}$ in the limit of large $n$. The dashed curves represent our estimates of $N^{\blacksquare}_{\text{con}}(l,n)$ and $N^{\blacklozenge}_{\text{con}}(l,n)$, and the solid curves are the exact limit of $N_{\text{unc}}(l;x,y)$ for $(x,y)=(\sqrt{n/2},0)$ and $(x,y)=(\sqrt{n}/2,\sqrt{n}/2)$ as calculated using Eq.~(\ref{eq:gaussianintegral}). Note that the two curves asymptotically approach one another for large $l/\sqrt{n/2}$. The blue and yellow curves are for the square diamond lattice respectively. The number of constrained and unconstrained paths match for $l=d$ for both square and diamond lattice orientations. The black line which both $N^{\blacksquare}_{\text{con}}(l,n)$ and $N^{\blacklozenge}_{\text{con}}(l,n)$ appear to be approaching is $N=c^l$ where $c= 2.638\dots$ is the square-lattice connective constant.} \label{fig:PlotUnconstrainedPaths} \end{figure} \subsubsection{Number of unconstrained paths} \label{subsec:unconstrained} We can gain insight into the behaviour of $N_{\text{con}}(l,n)$ by considering a closely related quantity, the number of unconstrained paths $N_{\text{unc}}(l;x,y)$, which is much easier to calculate. We define $N_{\text{unc}}(l;x,y)$ to be the number of length-$l$ paths from the origin to the coordinate $(x,y)$ on an infinite square lattice, where the axes are aligned along the edges of the respective lattices, see Fig.~\ref{Fig:Lattices}. When $l=x+y=d$, the path is perfectly tight, and the function $N(l;x,y)$ can be precisely related to $N_{\text{con}}(l,n)$ \begin{eqnarray} N_{\text{con}}^{\blacksquare}(d,n) & = & 2 \cdot d \cdot N_{\text{unc}}(d;d,0),\label{eqn:NconNuncK}\\ N_{\text{con}}^{\blacklozenge}(d,n) & = & 2 \cdot \frac{d}{2} \cdot N_{\text{unc}}(d;d/2,d/2) + d.\label{eqn:NconNuncW} \end{eqnarray} To see why these relations hold, first note that $N_{\text{unc}}(d;d,0)=1$, i.e. there is just one length-$d$ path from the origin to the point $(d,0)$. The multiplicative factor of $2$ accounts for the fact that in addition to the horizontal cycle on the torus, there is also a vertical cycle. The factor of $d$ accounts for translations; the horizontal path could go through the point $(0,j)$ for $j=1,2,\dots d-1$ rather than $(0,0)$. Similarly, the factor of $d/2$ for the rotated lattice accounts for the fact that the horizontal paths could go through the point $(-j,j)$ for $j=1,2,\dots d/2$ rather than $(0,0)$. The additional constant contribution $d$ for the rotated lattice counts the non-contractible closed paths which wind simultaneously both horizontally and vertically around the torus. For $l$ larger than $d$, we do not know of any clean relation like Eqn.~(\ref{eqn:NconNuncK}) and Eqn.~(\ref{eqn:NconNuncW}) between the number of constrained and unconstrained paths. We assume in the following that $l$ is even. To calculate $N_{\text{unc}}(l;x,y)$ we assign an orientation; up, down, left or right, denoted $\{\uparrow, \downarrow, \leftarrow, \rightarrow\}$, to each of the edges in the sequence of $l$ contiguous edges on a path from the origin to the point $(x,y)$. The direction of each edge is determined by the direction the path follows through the edge towards its terminal point. Obviously, the difference between the number of right edges and the number of left edges must equal $x$, i.e. $n_\rightarrow - n_\leftarrow = x$. Likewise, the difference between the number of up edges and down edges must equal $y$. Again, expressed as an equation we have $n_\uparrow - n_\downarrow = y$. By definition, we also have that $n_\uparrow + n_\downarrow +n_\rightarrow + n_\leftarrow = l$. Using these three conditions, we can rewrite $n_\downarrow, n_\rightarrow$ and $n_\leftarrow$ in terms of $n_\uparrow$ such that \begin{equation} N_{\text{unc}}(l;x,y)= \!\!\!\sum_{n_\uparrow = y}^{(l + y - x)/2} \!\!\! \frac{l !}{n_\uparrow! n_\downarrow(n_\uparrow)! n_\rightarrow(n_\uparrow)! n_\leftarrow(n_\uparrow)!}. \!\! \label{eq:Nunconstrained} \end{equation} In Appendix~\ref{app:unconstrainedpaths} we find a closed form expression for $\log[N_{\text{unc}}(l;x,y)]$, which we plot in Figure~\ref{fig:PlotUnconstrainedPaths}. Defining $r$ and $\theta$ by $x=r \cos\theta$ and $y=r \sin\theta$ expanding the expression in powers of $r/l$ yields \begin{eqnarray} \frac{\log[N_{\text{unc}}]}{l} &\! = \! & \log[4] - \frac{r^2}{l^2} \!+\! \frac{(\cos[4 \theta] \!-\! 3)r^4}{12 l^4} \!+\! \mathcal{O}\left(\frac{r^6}{l^6}\right)\!.~~~~ \end{eqnarray} This is quite informative as it suggests that for loose strings where $l \gg r$ the two lattice orientations have the same $N_{\text{unc}}(l)$. We know that for tight strings, for which $l \approx x+y$, the behaviour of $N_{\text{unc}}(l)$ is very different for the two lattice orientations. This phenomenon can be regarded as a way of recovering the Euclidean distance from the Manhattan distance. Indeed, the degeneracy of the paths to reach one point from another restores a Euclidean like metric over a square lattice that would otherwise respect the square lattice geometry. We suggest that this feature is responsible for our numerical results that show that the logical error rate for both orientations are similar at threshold. \subsubsection{Estimating the number of constrained paths} \label{subsec:constrained} Although we cannot calculate $N_{\text{con}}(l,n)$ exactly in the large $n$ limit, we can estimate it in two stages. First, we estimate $N_{\text{con}}(l,n)$ for a sequence of system sizes by randomly sampling unconstrained paths that contribute to $N_{\text{unc}}(l;x,y)$ and counting what fraction happen to satisfy the non-self intersection property. Second, we extrapolate $N_{\text{unc}}(l;x,y)$ by fitting to a functional form that allows us to estimate the limit for asymptotically large $n$. We provide further details in Appendix~\ref{sec:FiniteSizeNcl}. The results are shown for the square and diamond lattice in Fig.~\ref{fig:PlotUnconstrainedPaths}. We summarize some key features. \begin{enumerate}[(i)] \item $N^{\blacksquare}_{\text{con}}(l,n)=0$ for $l<\sqrt{n/2}$ and $N^{\blacklozenge}_{\text{con}}(l,n)=0$ for $l<\sqrt{n}$, implying that if the term multiplying $N_{\text{con}}(l,n)$ in Eqn. (\ref{eqn:bound}) is strongly suppressed when $l>\sqrt{n}$, then the failure rates will be very different. \item Both $N^{\blacksquare}_{\text{con}}(l,n)$ and $N^{\blacklozenge}_{\text{con}}(l,n)$ approach $c^l$ in the limit $l/\sqrt{n/2} \rightarrow \infty$ from numerical studies \cite{Madras96}, where $c\approx 2.638$ is the expansion constant. This should be contrasted with the unconstrained case in which $N^{\blacksquare}_{\text{unc}}(l;x,y)\sim 4^l$. The asymptotic scaling also appears to be reached more rapidly for the unconstrained paths than for the constrained ones. \item $N^{\blacklozenge}_{\text{con}}(l,n)<N^{\blacksquare}_{\text{con}}(l,n)$ for $l=\sqrt{n}$, and although our estimates indicate that the two functions cross, it is hard to conclusively claim this with our data. \end{enumerate} \subsection{Estimating the failure probability} Our goal here is to justify and further analyse the model of Eqn. (\ref{eq:generalclosedmodel}). Note that the rigorous upper bound in Eqn. (\ref{eq:DennisBound}) dramatically over counts the failure modes, since many of the $C^{l}_{l/2}\sim 2^l$ configurations of errors along a given path are counted multiple times as they are also found as configurations on other paths. In Eqn. (\ref{eq:generalclosedmodel}), we have corrected for this by replacing $2^l$ by $\xi(p)^l$. This is no longer a rigorous bound, but forms a relatively simple model applicable for all $p$ and arbitrarily large $n$. There are two limits of $1 \leq \xi(p) \leq 2$. The limit $\xi(p) = 2$ implies that any of the $ C^{l}_{l/2} \sim 2^l$ configurations of errors along a particular string of length-$l$ will result in a correction to that specific string. On the other hand, the limit $\xi(p) = 1$ implies that every string of length-$l$ is associated with just a single configuration of errors along the string. The intuition for the introduction of the phenomenological term $ \xi(p)$ comes from our analysis of the low error rate regime in Section~\ref{sec:pathcounting}. In that regime, all errors of a failing configuration lie along the non-contractible closed path which results after correction, and we were able to identify precisely which configurations of $d/2$ errors along a particular length-$d$ path would correct to that path. We found that all configurations along a completely straight path resulted in a correction to include that path, whereas only configurations with errors in some of the corners of curved paths would do so. From this perspective, it would make sense to include an explicit dependence of $\xi$ on $l/d$, since larger $l/d$ corresponds to paths with more curves. However, for a given $p$, the main contributions to the sum in Eqn.~(\ref{eq:generalclosedmodel}) are from a narrow range of $l/d$, such that $p$ is essentially a proxy for $l/d$. Away from the low error limit, there are an extensive number of errors that do not lie on the string as well, that inevitably 'interact' with the string. Below the threshold, these errors should generically not prevent a configuration from failing, although they can result in an over counting of a failing configuration, since more than one path of the same length $l$ could have $l/2$ errors in it. \subsubsection{Identifying $\xi(p)$} By fitting the model in Eqn.~(\ref{eq:generalclosedmodel}) to the data for some small system sizes, we provide a numerical estimate of $\xi^{\blacksquare}(p)$ and $\xi^{\blacklozenge}(p)$in Fig.~\ref{fig:XiPlot}. We observe a change of behavior around $p\sim 1\%$, above which both orientations have near identical values of $\xi(p)$ and below which they diverge. This points to the fact that for low error rates, the free energy at fixed energy has a higher entropy contribution for the original lattice over the rotated lattice for the entire range of sting lengths; since both contributions $\xi(p)$ and $N_{\rm con}(l)$ are larger for the original lattice. \begin{figure} \includegraphics[width=\columnwidth]{XiPlot.pdf} \caption{We identify $\xi(p)$ numerically by fitting the data obtained by Monte Carlo and the splitting method across a large range of $p$ to the model in Eq.~(\ref{eq:generalclosedmodel}). We used $d=10$ for the rotated lattice, and $d=14$ for the square lattice. The $N_{ncl}(l)$ were estimated to within 2\% accuracy for both models by sampling, as described in Appendix~\ref{sec:FiniteSizeNcl}. There are some deviations from the expected behaviour, which we expect comes from small-size effects. For example, the blue curve appears to be slightly below the expected value of 2 for $\xi^{\blacksquare}(0)$, and the values $\xi^{\blacksquare}(p_{\text{th}})$ and $\xi^{\blacklozenge}(p_{\text{th}})$ appear to differ slightly. } \label{fig:XiPlot} \end{figure} For completeness, we estimate the value of $\xi(p)$ in the limits of $p\rightarrow 0$ and $p\rightarrow p_{\rm th}$. Keeping only the lowest order terms in $p$, and comparing with the large $n$ expressions from Section~\ref{sec:pathcounting}, \begin{eqnarray} \overline{P}_{\text{model}} & \rightarrow & N_{\text{con}}(d) ~\xi(0)^d~ p^{\frac{d}{2}}.\\ \overline{P}_{\text{low-p}} & = & (\gamma_0)^{\sqrt{n}}~p^{\frac{d}{2}}. \end{eqnarray} Comparing these, and given $N_{\text{con}}^{\blacksquare}(d) = 2 \sqrt{n/2} \sim 1^{\sqrt{n}}$, $N_{\text{con}}^{\blacklozenge}(d) = d C^{d}_{d/2} + d \sim 2^{\sqrt{n}}$ and assuming that $\gamma^\blacksquare = \sqrt{27/2}$ matches the upper bound in Eq.~(\ref{eqn:tighter}), we identify \begin{eqnarray} \xi^{\blacksquare}(0) & = & 2, \\ \xi^{\blacklozenge}(0) & = & \sqrt{27/8} \approx 1.8371. \end{eqnarray} These values are encouragingly close to the values extracted from the model in Fig.~\ref{fig:XiPlot}. Near threshold, we expect that the dominant contributions to Eqn.~(\ref{eq:generalclosedmodel}) are from terms with $l\gg d$. Therefore, a typical string will be loose (i.e. have a lot of kinks), and there are $\sim p n$ ambient errors not on the string -- which implies that many of the configurations of $ C^{l}_{l/2} \sim e^{l\log[2]}$ errors along a given string will \textit{not} correct to give that particular string. This implies that near threshold, we should expect $\xi(p_{\text{th}})$ to be close to $1$; this would correspond to a single string per error configuration. Near the threshold, we also suppose that the sum is dominated by terms for which the string length is in the regime $N_{\text{con}}(l,n) \sim c^l$ for $c \approx2.638$ for both lattice orientations. Furthermore, since $N_{\text{con}}(l,n)$ is equal for both models at threshold and the threshold is the same for both orientation (since the threshold is a bulk property), then $ \xi^{\blacksquare}(p_{\text{th}}) = \xi^{\blacklozenge}(p_{\text{th}}) \equiv \xi_{\text{th}}$ since the expression in Eqn.~(\ref{eq:generalclosedmodel}) are otherwise the same for both orientations \begin{eqnarray} \overline{P}_{\text{model}} & \rightarrow & \sum_{l=d}^{n} \left(c ~\xi_{\text{th}} \sqrt{p \left( 1 -p \right)} \right)^{l}. \end{eqnarray} Let $p_c$ be the value of $p$ at which the parenthesis becomes one. For $p<p_c$, the parenthesis is less than one, such that the sum is dominated by small $l$, for which the assumption that $N_{\text{con}}(l,n) \sim c^l$ would break down. For $p>p_c$ on the other hand, the exponent is maximized for large $l$ and the total failure probability will become large. Solving for $p_c$, we obtain \begin{eqnarray} p_c=\frac{1}{2} - \sqrt{\frac{1}{4}-\frac{1}{\xi_{\text{th}}^2c^2}}. \end{eqnarray} The actual threshold value is known to be $p_{th} = 10.3\%$, which corresponds to $\xi_{\text{th}} \approx 1.2471$. This is close to the value obtained from the data in Fig.~\ref{fig:XiPlot}. \section{Conclusions} We have taken a number of different approaches to explore the configuration space of errors that can cause logical failure for the surface code with different lattice geometries. We have found that, while it pays to optimize the distance of the code, entropic factors can have a significant effect on the performance of codes at modest physical error rates. It will be interesting to explore the entropic contribution on other codes with improved encoding rates such as twisted surface codes~\cite{Yoder16}, color codes~\cite{Bombin06, Landahl11}, stellated color codes~\cite{Kesselring18} and hyperbolic codes~\cite{Freedman01, Freedman02, Delfosse13, Breuckmann16, Breuckmann17} to determine how the entropic considerations affect logical failure rates here. Indeed, one could imagine that codes that require a reduced number of physical qubits to realize a code of a given distance may suffer adversarially from entropic effects. Further, for the system sizes we have studied, the logical failure rate of the two different codes are almost indistinguishable until the physical error rate is an order of magnitude below threshold. We might like to account for this when we consider the change in distance of fault-tolerant quantum systems as we perform logical operations~\cite{Brown16a}. It may also be worthwhile studying entropic effects during fault-tolerant error correction. Correlated errors that occur during syndrome extraction manifest themselves as diagonal bonds during error correction~\cite{Fowler09, Fowler12}. Extensions of the present work may consider choosing circuits to minimize these effects. Recently, flag fault tolerance~\cite{Chao18, Chao17} has been considered in topological codes to minimize these correlated errors~\cite{Chamberland18}. One might view the extra resources used to implement these ciruits as an additional hardware expense used to reduce logical failure rates by minimzing the configuration space of errors. Ultimately, it will be very useful to determine bounds on the extent to which the physics of quantum error-correcting codes will permit us to minimize entropic factors in the logical failure rate to help us to design better fault-tolerant quantum-computational protocols in the future. \begin{acknowledgements} We are grateful for helpful and supportive discussions with H. Bomb\'{i}n, N. Delfosse, S. Flammia, R. Harper, A Hutter, D. Poulin, J. Preskill, S. Simon and J. Wootton. The authors acknowledge the facilities, and the scientific and technical assistance of the Sydney Informatics Hub at the University of Sydney and, in particular, access to the high performance computing facility Artemis. BJB and MJK are supported by Villum Fonden. MJK ackowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Projektnummer 277101999 -- TRR 183 (project B02). BJB is also supported by the University of Sydney Fellowship Programme and the Australian Research Council via the Centre of Excellence in Engineered Quantum Systems(EQUS) project number CE170100009. \end{acknowledgements}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction \label{sec:introduction:background}} Mechanical heating is required to maintain the energy balance in the solar chromosphere, as suggested by the temperature difference between the radiative equilibrium atmospheric model \citep{1989ApJ...346.1010A} and the observation-based semi-empirical model \citep{1981ApJS...45..635V}. Waves have been recognized as important contributors to chromospheric heating, although their heating mechanisms are elusive \citep[see][for a review]{2015SSRv..190..103J}. \rvr{While} the propagation of waves in the chromosphere has been well studied from both the observational and theoretical perspectives \rvr{\cite[e.g.,][]{2003ApJ...599..626B,2004A&A...422.1085H,2005ApJ...631.1270H,2008ApJ...680.1542H,2009A&A...508..951V,2011ApJ...743..142H,2012ApJ...755...18V,2013ApJ...764L..11D,2016A&A...585A.110K,2016A&A...590L...3S,2018MNRAS.479.5512K,2020ApJ...890...22A}}, a firm quantitative conclusion is still a distance away. \rv{In the chromosphere, physical parameters change drastically, leading to difficulties in studying chromospheric dynamics.} The plasma beta varies in both the vertical and horizontal directions, and waves can change their modes when crossing the equipartition layer \citep{2006RSPTA.364..333C,2019ApJ...881L..21P} where the speed of sound is identical to the Alfv\'en speed. Density stratification also adds to the complexity by increasing the amplitude of the acoustic waves, leading to increased nonlinearity and formation of shocks. In the high-beta regions of the chromosphere where the role of the magnetic field could be ignored, the propagation of acoustic waves has been well studied by hydrodynamic simulations with non-local thermodynamic equilibrium (non-LTE) radiative transfer \citep{1995ApJ...440L..29C,1997ApJ...481..500C}. In these studies, waves are generated by longitudinal piston motion. They succeed in reproducing the Ca II spectral profile which agrees with the observations. The situation becomes even more complicated in the low-beta chromosphere with the participation of the magnetic field. \cite{2016ApJ...817...94A} and \cite{2016ApJ...829...80B} show that the shock heating rate in the chromosphere is larger than or consistent with the observation-based radiative cooling rate. A similar result is also obtained in \cite{2020ApJ...891..110W} with an improved treatment of the radiative loss term introduced by \cite{2012A&A...539A..39C}. In \cite{2016ApJ...817...94A}, \cite{2016ApJ...829...80B}, and \cite{2020ApJ...891..110W}, waves are generated by artificial transverse torque or transverse motion at the bottom of the flux tube. These studies do not include the effect of waves originating from outside the flux tube. \rv{Theoretical studies could be divided into two categories, idealized models and realistic models. \cite{2016ApJ...817...94A}, \cite{2016ApJ...829...80B}, and \cite{2020ApJ...891..110W} are examples of idealized models. The physical process is clear in idealized models, but the results are affected by artificial settings in the model.} On the other hand, there are also realistic models \rvr{\cite[e.g.,][]{2011ApJ...730L..24K,2016A&A...585A...4C,2017ApJ...848...38I,2017Sci...356.1269M}} that aim to include complicated physical processes to approach reality. Realistic models are used to reproduce the synthesized images or spectral profiles for comparison with observations \cite[e.g.,][]{2013ApJ...772...90L,2009ApJ...694L.128L,2019MNRAS.486.4203Q}, but their complexity makes it difficult to understand the underlying elemental physical processes involved in heating. These studies do not focus on the physical processes that occur during wave propagation such as thermalization, nonlinear steepening, or mode conversions in the chromosphere. \rv{The purpose of our study is to conduct a quantitative investigation on wave heating in the chromosphere. \rv{In particular, previous studies do not focus on the role of the fast magnetic wave in heating the low-beta chromosphere.} We perform a realistic two-dimensional radiative MHD simulation while conducting a detailed investigation on the propagation of waves to estimate the contribution to chromospheric heating by different modes of waves. To achieve this goal, we develop a novel method of automatically identifying the mode of waves and calculating the heating rate due to different modes of waves.} \section{Numerical model}\label{sec:2} We use RAMENS code \citep{2016PhDT.........5I,2015ApJ...812L..30I} which solves MHD equations with gravity, heat conduction, equation of state under local thermodynamic equilibrium (LTE) condition, radiative transfer in the photosphere, and approximated radiative loss term in the chromosphere and the corona. The basic equations of the simulation are the same as those in \cite{2017ApJ...848...38I}. One could refer to \cite{2016PhDT.........5I} for a detailed description of this code. We modified the original RAMENS code by replacing the treatment of the chromospheric radiative loss term with the improved recipe developed by \cite{2012A&A...539A..39C}. The simulation domain is a 16 Mm $\times$ 16 Mm two-dimensional square extending from $-$ 2 Mm below the photosphere to 14 Mm in the corona with a uniform grid spacing of 8.5 km. The temperature of the corona is 1 MK which is maintained by the top boundary condition. The initial magnetic field is vertical and has a strength of 6 G. We start with a plane parallel atmosphere in the hydrostatic equilibrium state, though this setup does not strongly influence the later results obtained after the well-developed magneto-convections. The data analyzed cover 1000 s of the simulation which is approximately 10 times the transit time for acoustic waves in the chromosphere. \section{Shock identification and heating rate calculation}\label{sec:21} Our study focuses on wave heating in the low-beta chromosphere. Comparing this mechanism with other possible heating mechanisms (e.g., reconnection and turbulence with ambipolar diffusion), there is observational evidence showing that waves can carry enough energy for chromospheric heating \citep{2010ApJ...723L.134B}. Waves are generated by photospheric convection and steepen to shocks as they propagate upward in the chromosphere. To estimate the shock-heating rate, we identify the shock front in the chromosphere, determine the mode of each shock, and calculate the corresponding heating rate. The positions of the shock fronts are identified by the local minimum of $\nabla \cdot \mathbf{V}$ with \begin{equation}\label{eq:sel22} -\nabla \cdot \mathbf{V} \ge c_{\text{th}} (\cs/\Delta x), \end{equation} where $c_{\text{th}}$ is a parameter indicating the threshold for identification, $\cs$ is the speed of sound, and $\Delta x$ is the grid size. The value of the parameter $c_{\text{th}}$ should depend on the shock-capturing quality of the numerical scheme and was taken to be $c_{\text{th}}=0.25$ in this study \citep[see Appendix in][]{2020ApJ...891..110W}. \rv{The heating rate at the shock front is calculated using the following steps}. First, we extract the density, temperature, velocity, gas pressure, and magnetic pressure along the direction of propagation which is assumed to be identical to the direction of the gradient of the total pressure. The upstream and downstream quantities of the detected shock are determined as the first local maximum and minimum of $\partial^2 V_l/\partial l^2$ beside $l = l_{\text{c}}$ where $V_l$ is the velocity along the direction of propagation, $l$ is the distance along the direction of propagation, and $l_{\text{c}}$ is the position of the shock front. The upstream side is determined by the side with the lower density. We estimate the increment of the thermal energy flux at the shock front: \begin{equation}\label{eq:ql} \Delta F_{\text{th}} = U_1 \rho_1 T_1 (\smone-\smtwo), \end{equation} where $\Delta F_{\text{th}}$ is the increase in the thermal energy flux. Subscripts 1 and 2 denote the physical parameters that are sampled at the upstream and downstream region, respectively. $U$ is the shock-normal velocity in the shock rest frame, $T$ is the temperature, and $\sm$ is the entropy per unit mass. $U_1$ is calculated by mass conservation, $U_1 \rho_1 = U_2 \rho_2$, and the velocity relationship in different frames of reference, $v_1 - v_2 = U_1 - U_2$ where $\rho$ is the density and $v$ is the shock-normal velocity in the laboratory frame \rvr{(see Figure \ref{fig:sc} for a schematic plot)}. To estimate the heating rate per unit volume, we assume that the heating is evenly distributed in the volume of one grid point at the shock front. As a result, the heating rate per unit volume is calculated by \begin{equation}\label{eq:cal} Q_{\text{heat}}=\Delta F_{\text{th}}/w_{\text{shock}}, \end{equation} where $w_{\text{shock}}$ is the width of the shock wave. Although the actual thickness in the real shocks should be given by the microscopic dissipation process, we here use the grid spacing $w_{\text{shock}} = \Delta x$ for convenience. \rvr{The heating rate is calculated each time step. We assume that the heating rate at a fixed position do not change within one time step}. Note that the spatially integrated amount of $Q_{\text{heat}}$ is independent of the choice of $w_{\text{shock}}$ and is used only for the later discussion. \begin{figure} \centering \includegraphics[width=7cm]{s1.png} \caption{\rvr{A schematic figure showing the calculation of thermal energy flux. $t$ is time. $A$ is the area on the shock front. $m=U_1 \rho_1\Delta t\Delta A=U_2 \rho_2\Delta t\Delta A$ is the mass of plasma that crosses the shock front. Color in the upstream and the downstream regions denotes the value of entropy per unit mass (red: higher value, blue: lower value). $\Delta Q_{\text{m}}=T\Delta\sm$ is the increment of thermal energy per unit mass. Thus, we can obtain the thermal flux by $\Delta F_{\text{th}}=m\Delta Q_{\text{m}}/(\Delta t\Delta A)$}.} \label{fig:sc} \end{figure} \rv{Finally, we determine the mode of each shock wave \rvr{by checking whether the gas pressure and the magnetic pressure across the shock front change in the same direction}. The sign of $\int (\partial P_g / \partial l)(\partial P_m/ \partial l) \text{d} l$ across the shock front is used to determine whether it is a fast shock (positive value) or slow shock (negative value), where $P_g$ is the gas pressure and $P_m$ is the magnetic pressure. \rvr{We do not use phase speed to determine the mode of waves since it is difficult to obtain the local fast speed and slow speed in the dynamic chromosphere.}} \section{Results} \label{sec:3} Figure \ref{fig:fig1} shows the identified shock fronts in the dynamic simulation of solar chromosphere. Waves are generated by photospheric convection and they steepen to shocks in the chromosphere. Shocks dissipate their energy continuously in the chromosphere. A number of shocks gradually become undetectable during their propagation due to dissipation. When shocks impinge on the transition region, they drive the upward motion of the transition region that forms spicules. \rv{We focus on the low-beta chromospheric plasma. Due to the large deformation of the transition region by the spicules, we cannot distinguish the chromosphere and the corona using a simple threshold on the geometrical height. The \rvr{low-beta} chromospheric plasma is defined by the following criteria}: (1) $\cmass> 10^{-5.5}$ $\cmassu$, (2) $T< 10^4$ K, and (3) Alfv\'en speed is larger than sound speed. The variable $\cmass$ is the column mass, and $\cmass(z)=\int_z^{z_{\text{top}}} \rho(z') \text{d} z'$, where $z_{\text{top}}$ is the height of the top of the simulation box. The temperature and column mass threshold are used to exclude coronal plasma. The values of the thresholds are chosen from the joint probability density distribution of the temperature and the column mass (Figure \ref{fig:fig2-1}). \rv{The time and horizontal averaged radiative loss rate and heating rate of the low-beta chromospheric plasma are shown in Figure \ref{fig:fig2}}. It is shown that the shock heating is well balanced with radiative cooling below 2.5 Mm. At locations higher than 2.5 Mm, the energy balance is gradually disrupted due to the formation of spicules (in the presence of spicules, the energy balance at a fixed position is determined by the entropy flow carried by them). \begin{figure*} \centering \includegraphics[width=15cm]{fig1c.png} \caption{Snapshot of the simulation result with shock identification. The green line marks the position of the transition region (characterized by $T = 10^4$ K). The black solid lines are magnetic field lines. The gray shadow indicates the region where speed of sound is larger than the Alfv\'en speed. Identified shocks are plotted in blue (fast shock) and red (slow shock). Only a part of the simulation domain is shown in this figure. (An animation of this figure is available.)} \label{fig:fig1} \end{figure*} \begin{figure} \centering \includegraphics[width=5cm]{fig2-1.png} \caption{Joint probability density distribution of the temperature and the column mass. The yellow line shows the average temperature at each column mass. The brown lines show the threshold values for chromospheric plasma at $\cmass=10^{-5.5}$ $\cmassu$ and $T = 10^4$ K.} \label{fig:fig2-1} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{fig2bbb.png} \caption{Heating and radiative loss rate of the low-beta chromospheric plasma as a function of height. The black dashed line is the radiative loss rate in the simulation. The brown line is the sum of the heating rates due to fast and slow shocks. The blue solid line is the fast wave heating rate. The red solid line is the slow wave heating rate. For the blue and red lines, the thin lines with perturbation are the results that are directly calculated from the simulation; we also smooth the results with a Savitsky--Golay filter and plot them in thick lines. The green line represents the heating rate due to heat conduction. The average column mass at each height is shown in the secondary axis. \rvr{Only heating and cooling in the low-beta regions are included in this figure.}} \label{fig:fig2} \end{figure} \rv{Where do these fast mode waves in the low-beta regions originate? }We find that low-beta fast magnetic waves originate from high-beta fast acoustic waves through mode conversion. An example of mode conversion is shown in Figure \ref{fig:fig3}. Mode conversion occurs when fast acoustic waves propagate from the high-beta region to the low-beta region and cross the equipartition layer. An attacking angle (the angle between the wavevector and the magnetic field) close to $90^{\circ}$ will result in a larger conversion rate \citep{2006RSPTA.364..333C,2019ApJ...881L..21P}. \begin{figure*} \centering \includegraphics[width=13cm]{fig6.png} \caption{Example of a fast acoustic wave to fast magnetic wave mode conversion. The upper panels show the time evolution (from left to right: $t_0 - 12$ s, $t_0$, and $t_0 + 6$ s, where $t_0$ is the time of the snapshot shown in Figure \ref{fig:fig1}). In the upper panels, gray lines represent magnetic field lines. The blue line shows the position of a fast shock. Shadows mark the region where the speed of sound is larger than the Alfv\'en speed. Orange lines mark the position of slices used in the lower panels. The lower panels show the distribution of the gas pressure (solid line) and magnetic pressure (dashed line) across the shock front. In each panel, the horizontal axis is the distance along the slice, in which zero corresponds to the location of the shock front.} \label{fig:fig3} \end{figure*} \label{fig:fig3} \section{Discussion \label{sec:4}} The propagation of waves in MHD simulation with an idealized setting is also \rvr{carried on in previous researches \citep{2005ApJ...631.1270H,2008ApJ...680.1542H,2009A&A...508..951V,2012ApJ...755...18V}. These studies mainly focus on waves that originate inside a flux tube. For these waves, as they propagate upwards, they propagate along the magnetic field lines thus the attacking angle is small and mode conversion is less efficient}. \cite{2008ApJ...680.1542H} do mention the waves that originate outside a flux tube could generate fast magnetic waves in the flux tube through mode conversion but they do not discuss the heating by the fast magnetic waves in detail. Our result shows that, with quantification of the heating rate, fast waves do play a role in heating the low-beta chromosphere. \rvr{\cite{2006ApJ...653..739K} show that refraction could affect the propagation of fast waves and prevent their efficient energy transport to the chromosphere. They focus on waves inside a strong flux tube (sunspots). On the other hand, in our simulation, fast waves in the regions between two flux tubes are less affected by refraction since there is no substantial horizontal gradient of fast speed in these regions. In addition, the intensity of magnetic field in the flux tube is weaker in our simulation which also reduces the horizontal gradient of fast speed.} Our simulation shows that shock heating is the dominant heating process in the chromosphere. This result is consistent with those from previous studies. However, the wave modes contributing to heating are different. In \cite{2016ApJ...817...94A}, \cite{2020ApJ...891..110W} and \rvr{\cite{2004A&A...422.1085H}}, transverse waves at the foot of a low-beta flux tube undergo nonlinear mode coupling and generate slow acoustic waves. They steepen to shocks which dissipate and contribute to chromospheric heating. In our simulation, Alfv\`en waves vanish because of the two-dimensional geometry. As a result, the nonlinear mode coupling is also absent. As fast waves propagate like an expanding sphere, the strongest perturbation of the vertical velocity appears at the top of the sphere, whereas compression of the vertical magnetic field appears at the lateral sides. In our simulation, the background magnetic field is 6 G, mimicking the quiet sun region. The resultant intensity of the magnetic field perturbation in the chromosphere could be as large as 10--20 G. The combination of vertical velocity and vertical magnetic field perturbation can be used as a signal of the fast wave. Such a signal can hopefully be detected by next-generation solar telescopes such as Daniel K. Inouye Solar Telescope \citep[DKIST;][]{2020SoPh..295..172R} and Chinese Giant Solar Telescope \citep[CGST;][]{2011ASInC...2...31D}. \rvr{In order to investigate effect of the topology of magnetic field line, we carry on another simulation with the same initial and boundary condition described in Section \ref{sec:2}. The only difference is that we increase the intensity of the initial background magnetic field from 6 G to 20 G. In this new setting, the magnetic field lines are less inclined which results in smaller attacking angle for waves that propagate upward. We find that the percentage of heating by slow wave increases, especially in the higher part of the chromosphere characterized by $\cmass<10^{-4.2}$ $\cmassu$. However, our main result that fast magnetic shock waves play a significant role in heating the low-beta chromosphere remains unchanged.} \rvr{Our study is limited in the quiet region. In sunspots, observations show that wave energy is insufficient for chromospheric heating \citep{2011ApJ...735...65F}. In these regions, other effects related with magnetic field such as reconnection should be taken into consideration.} In this study, the ambipolar diffusion and dynamic ionization of hydrogen are not considered. The dissipation of ambipolar diffusion could lead to substantial heating locally \rvr{\citep{2012ApJ...747...87K,2016ApJ...819L..11S,2017Sci...356.1269M,2019ApJ...871....3S}.} On the other hand, \cite{2016ApJ...817...94A} compare the time-averaged heating rate resulting from ambipolar diffusion and shock dissipation and find that shock heating is much stronger than the heating resulting from ambipolar diffusion. \cite{2007A&A...473..625L} compare simulations with the LTE assumption and dynamic ionization. It is shown that in the simulation with dynamic ionization, shock temperatures are higher and the intershock temperatures are lower than in the simulation with the LTE assumption. This effect could affect the measurement of the entropy jump. Moreover, dynamic ionization is important to determine the electron and the ion number density and will further affect the estimation of ambipolar diffusion, especially when the ionization degree is low. Further studies that compare shock heating, turbulence heating \citep[][]{2011ApJ...736....3V}, and ambipolar diffusion \rvr{\citep[][]{2005A&A...442.1091L,2018A&A...618A..87K,2020ApJ...889...95M,2020A&A...642A.220G}} in realistic simulations are expected to be conducted in the future. \section{Conclusion \label{chap:summary}} We perform a two-dimensional MHD simulation to study the propagation of MHD waves in the chromosphere. We identify the mode of the shock waves in the chromosphere, calculate the heating rate from the entropy jump, and find that the heating rate balances with the radiative loss. Fast magnetic shock waves play a significant role in heating the low-beta chromosphere. These low-beta fast magnetic waves are generated by mode conversion. \rvr{We acknowledge the referee for valuable comments.} The authors thank M. Carlsson for providing numerical tables for the recipe of the chromospheric radiative loss. \rvr{The authors thank B. Yu's assistance in making Figure \ref{fig:sc}.} Numerical computations were carried out on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan. T.Y. is supported by JSPS KAKENHI grant No. 15H03640, No. 20KK0072, and No. 21H01124. H.I. is supported by JSPS KAKENHI grant No. 19K14756. \clearpage \bibliographystyle{apj}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Quotients of complex projective or affine varieties by linear actions of complex reductive groups can be constructed and studied using Mumford's geometric invariant theory (GIT) \cite{Dolg,GIT,New}. Given a linear action on a complex projective variety $X$ of a linear algebraic group $G$ which is {\em not} reductive, the graded algebra of invariants is not necessarily finitely generated, and even if it is finitely generated, so that there is a GIT quotient $X/\!/G$ given by the associated projective variety, the geometry of this GIT quotient is hard to describe. When $G$ is reductive then $X/\!/G$ is the image of a surjective morphism $\phi:X^{ss} \to X/\!/G$ from an open subset $X^{ss}$ of $X$ (consisting of the semistable points for the linear action), and $\phi(x) = \phi(y)$ if and only if the closures of the $G$-orbits of $x$ and $y$ meet in $X^{ss}$. When $G$ is not reductive $\phi:X^{ss} \to X/\!/G$ can still be defined in a natural way but it is not in general surjective, and indeed its image is not in general an algebraic variety, even when the algebra of invariants is finitely generated \cite{DK}. One situation in which the algebra of invariants for a non-reductive linear algebraic group action on a projective variety $X$ with respect to an ample line bundle $L$ is guaranteed to be finitely generated is when the group $H$ is a Grosshans subgroup of a reductive group $G$ and the linear action of $H$ extends to $G$. Recall that a closed subgroup $H$ of $G$ is a Grosshans subgroup \cite{Grosshans3,Grosshans} if and only if the algebra of invariants $\mathcal{O}(G)^H$ is finitely generated and $H$ is an observable subgroup of $G$ in the sense that $$H = \{ g \in G : f(gx)=f(x) \mbox{ for all $x \in X$ and } f \in \mathcal{O}(G)^H \}$$ (see $\Sigma$3). In this case $G/H$ is quasi-affine, and the finite generation of $\mathcal{O}(G)^H$ is equivalent to the existence of a finite-dimensional $G$-module $V$ and some $v \in V$ such that $H=G_v$ is the stabiliser of $v$ and $\dim(\overline{G\cdot v}\setminus G\cdot v)\le \dim(G\cdot v)-2$. Then we find that the non-reductive GIT quotient $X/\!/H$ (the projective variety associated to the algebra of invariants for the linear action of $H$) is given by the classical GIT quotient $$ X /\!/ H = (X \times {\rm Spec}(\mathcal{O}(G)^H))/\!/G$$ of $X \times {\rm Spec}(\mathcal{O}(G)^H)$ by the diagonal action of $G$. Here $ {\rm Spec}(\mathcal{O}(G)^H) \cong \overline{G\cdot v}$ is the canonical affine completion of the quasi-affine variety $G/H$. The fact that the embedding $$ G/H \to G/\!/H = {\rm Spec}(\mathcal{O}(G)^H) $$ is not in general an isomorphism means that the natural map $X^{ss} \to X/\!/H$ is not in general surjective and $X/\!/H$ cannot be described as $X^{ss}$ modulo an equivalence relation, in contrast to classical GIT. When $H$ is a unipotent subgroup of a reductive group $G$ then $G/H$ is quasi-affine. In general if $H$ is a closed subgroup of a reductive group $G$ then $G/H$ is quasi-projective but not necessarily quasi-affine, so that Grosshans theory does not apply directly. In this paper we will study families of subgroups $H$ of reductive groups $G$, where $H$ is neither reductive nor unipotent, which possess a property related to the Grosshans property. That is, given any action of $H$ on a projective variety $X$ extending to an action of $G$ which is linear with respect to an ample line bundle on $X$, then {\it provided} that we are willing to twist the linearisation of the action of $H$ by a suitable (rational) character of $H$ we find that the $H$-invariants form a finitely generated algebra; moreover the natural morphism $\phi: X^{ss} \to X/\!/H$ is surjective and satisfies $\phi(x) = \phi(y)$ if and only if the closures of the $H$-orbits of $x$ and $y$ meet in $X^{ss}$. This property is weaker than the Grosshans property in that we are only guaranteed finitely generated invariants when we twist the linearisation by suitable rational characters of $H$ (though of course unipotent groups have no non-trivial characters). However it is stronger in the sense that we obtain a surjective morphism from $X^{ss}$ to $X/\!/H$ and a geometric description of $X/\!/H$ as $X^{ss}$ modulo the equivalence relation given by $x \sim y$ if and only if the closures of the $H$-orbits of $x$ and $y$ meet in $X^{ss}$. Our first motivation for this investigation was the reparametrisation group ${\mathbb G }_n$ consisting of $n$-jets of germs of biholomorphisms of $({\mathbb C },0)$, which acts on the jet bundle $J_n(Y)$ over a complex manifold $Y$ (for some positive integer $n$). The fibre of $J_n(Y)$ over $x\in Y$ is the space of $n$-jets of germs at the origin of holomorphic curves $f:({\mathbb C },0) \to (Y,x)$, and polynomial functions on $J_n(Y)$ are algebraic differential operators $Q(f',\ldots,f^{(n)})$, called jet differentials. The reparametrisation group ${\mathbb G }_n$ acts fibrewise on the bundle $E_n(Y)$ of jet differentials. This action has played a central role in the history of hyperbolic varieties and the Kobayashi conjecture on the non-existence of holomorphic curves in compact complex manifolds of generic type (see for example \cite{demailly,dmr,Merker1,Merker2}). The reparametrisation group ${\mathbb G }_n$ is the semi-direct product ${\mathbb U }_n \rtimes {\mathbb C }^*$ of its unipotent radical ${\mathbb U }_n$ with ${\mathbb C }^*$. It is a subgroup of $\mathrm{GL}(n)$ with the upper triangular form $$ {\mathbb G }_n \cong \left\{ \left(\begin{array}{ccccc} \alpha_1 & \alpha_2 & \alpha_3 & \cdots & \alpha_n \\ 0 & \alpha_1^2 & \cdots & & \\ 0 & 0 & \alpha_1^3 & \cdots & \\ \cdot & \cdot & \cdot & \cdot & \cdot \\ 0 & 0 & 0 & \cdots & \alpha_1^n \end{array} \right) : \alpha_1 \in {\mathbb C }^*, \alpha_2,\ldots,\alpha_n \in {\mathbb C } \right\} $$ where the entries above the leading diagonal are polynomials in $\alpha_1, \ldots, \alpha_n$, and ${\mathbb U }_n$ is the subgroup consisting of matrices of this form with $\alpha_1=1$. Notice that if $n$ is odd then the embedding of ${\mathbb G }_n$ in $\mathrm{GL}(n)$ can be modified by multiplying a matrix in ${\mathbb G }_n$ with first row $(\alpha_1, \ldots, \alpha_n)$ by the scalar $(\alpha_1)^{-(n+1)/2}$. The image of this modified embedding is a subgroup $\tilde{{\mathbb G }}_n$ of $\mathrm{SL}(n)$; if $n$ is even then the corresponding subgroup $\tilde{{\mathbb G }}_n$ of $\mathrm{SL}(n)$ is a double cover of ${{\mathbb G }}_n$. This paper studies more generally actions of groups $U$, $\hat{U}$ and $\tilde{U}$ of a similar form to those of ${\mathbb U }_n$, ${\mathbb G }_n$ and $\tilde{{\mathbb G }}_n$. Let $U$ be a unipotent subgroup of $\mathrm{SL}(n)$ with a semi-direct product $$\hat{U} = U \rtimes {\mathbb C }^*$$ where ${\mathbb C }^*$ acts on the Lie algebra of $U$ with all its weights strictly positive; we call such groups graded unipotent groups. We assume that $U$ and $\hat{U}$ are upper triangular subgroups of $\mathrm{GL}(n)$ which are \lq generated along the first row' in the sense that there are integers $1 = \omega_1 < \omega_2 \leq \omega_3 \leq \cdots \leq \omega_n$ and polynomials $\mathfrak{p}_{i,j}(\alpha_1,\ldots,\alpha_n)$ in $\alpha_1,\ldots,\alpha_n$ with complex coefficients for $1<i<j \leq n$ such that \begin{equation} \label{label1} \hat{U}=\left\{\left(\begin{array}{ccccc}\alpha_1 & \alpha_2 & \alpha_3 & \ldots & \alpha_n \\ 0 & \alpha_1^{\omega_2} & p_{2,3}(\mathbf{\alpha}) & \ldots & p_{2,n}(\mathbf{\alpha}) \\ 0 & 0 & \alpha_1^{\omega_3} & \ldots & p_{3,n}(\mathbf{\alpha}) \\ \cdot & \cdot & \cdot & \cdot &\cdot \\ 0 & 0 & 0 & 0 & \alpha_1^{\omega_n} \end{array}\right : \mathbf{\alpha} =(\alpha_1,\ldots, \alpha_n) \in {\mathbb C }^* \times {\mathbb C }^{n-1} \right\} \end{equation} and $U$ is the unipotent radical of $\hat{U}$; that is, $U$ is the subgroup of $\hat{U}$ where $\alpha_1 = 1$. Now we consider the subgroup $\tilde{U}$ of $\mathrm{SL}(n)$ which is the intersection of $\mathrm{SL}(n)$ with the product $\hat{U} Z(\mathrm{GL}(n))$ of $\hat{H}$ with the central one-parameter subgroup $Z(\mathrm{GL}(n)) \cong {\mathbb C }^*$ of $\mathrm{GL}(n)$. Like $\hat{U}$, the subgroup $\tilde{U}$ of $\mathrm{GL}(n)$ is a semi-direct product $$\tilde{U} = U \rtimes {\mathbb C }^*$$ where ${\mathbb C }^*$ acts on the Lie algebra of $U$ with all weights strictly positive. Let $\tilde{U}=U \rtimes {\mathbb C }^* \subseteq \mathrm{SL}(n)$ act linearly on a projective variety $X$ with respect to an ample line bundle $L$ on $X$ and assume that the action extends to a linear action of $\mathrm{SL}(n)$. Let $\chi: \tilde{U} \to {\mathbb C }^*$ be a character of $\tilde{U}$ with kernel containing $U$; we will identify $\chi $ with the integer $r_{\chi}$ such that $$ \chi \left(\begin{array}{ccccc}t^{n\omega_1 - (\omega_1 + \cdots + \omega_n)} & 0 &0 & \ldots & 0 \\ 0 & t^{n\omega_2 - (\omega_1 + \cdots + \omega_n)}& 0 & \ldots & 0\\ 0 & 0 & t^{n\omega_3 - (\omega_1 + \cdots + \omega_n)}& \ldots & 0 \\ \cdot & \cdot & \cdot & \cdot &\cdot \\ 0 & 0 & 0 & 0 & t^{n\omega_n - (\omega_1 + \cdots + \omega_n)} \end{array}\right) = t^{r_{\chi}}.$$ Assume that the maximum \[\max \{\omega_1+\ldots +\omega_n- \omega_{i+1}+\omega_i: \omega_{i-1}<\omega_i,1\le i \le n-1 \}\] is taken at $i=i_0$. Let $c$ be a positive integer such that $$\omega_1+\ldots +\omega_n- \omega_{i_0+1}+\omega_{i_0}-n<\frac{\chi}{c(\omega_1 + \cdots + \omega_n)}<\omega_1 + \cdots + \omega_n - n;$$ we call rational characters $\chi/c$ with this property {\it well-adapted} to the linear action. The linearisation of the action of $\tilde{U}$ on $X$ with respect to the ample line bundle $L^{\otimes c}$ can be twisted by the character $\chi$; let $L_\chi^{\otimes c}$ denote this twisted linearisation. The main theorem of this paper is \begin{theorem} \label{maina} Let $\hat{U} = U \rtimes {\mathbb C }^*$ be a subgroup of $\mathrm{GL}(n)$ which is generated along its first row with positive weights $1=\omega_1 < \omega_2 \leq \ldots \leq \omega_n$ as at (\ref{label1}) above, and let $\tilde{U} = U \rtimes {\mathbb C }^*$ be the intersection of $\mathrm{SL}(n)$ with the product $\hat{U} Z(\mathrm{GL}(n))$. Suppose that $\tilde{U}$ acts linearly on a projective variety $X$ with respect to an ample line bundle $L$ and the action extends to a linear action of $\mathrm{SL}(n)$. Then the algebra of invariants $\oplus_{m=0}^\infty H^0(X,L_{\chi}^{\otimes cm})^{\tilde{U}}$ is finitely generated for any well-adapted rational character $\chi/c$ of $\tilde{U}$. In addition the projective variety $X/\!/ \tilde{U}$ associated to this algebra of invariants is a categorical quotient of an open subset $X^{ss,\tilde{U}}$ of $X$ by $\tilde{U}$ and contains as an open subset a geometric quotient of an open subset $X^{s,\tilde{U}}$ of $X$. \end{theorem} Applying a similar argument after replacing $X$ with $X \times {\mathbb P } ^1$ we can obtain geometric information on the action of the unipotent group $U$ on $X$: \begin{theorem} \label{cor:invariants} In the situation above let $\tilde{U}$ act diagonally on $X \times {\mathbb P } ^1$ and linearise this action using the tensor product of $L_\chi$ with $\mathcal{O}_{ {\mathbb P } ^1}(M)$ for suitable $M \geq 1$. Then $(X \times {\mathbb P } ^1)/\!/\tilde{U}$ is a projective variety which is a categorical quotient by $\tilde{U}$ of a $\tilde{U}$-invariant open subset of $X \times {\mathbb C }$ and contains as an open subset a geometric quotient of a $U$-invariant open subset $X^{\hat{s},U}$ of $X$ by $U$. \end{theorem} \begin{rem}\label{cor:invariants2} This theorem's proof also shows that the algebra $\oplus_{m=0}^\infty H^0(X\times {\mathbb P } ^1,L_{\chi}^{\otimes cm} \otimes \mathcal{O}_{ {\mathbb P } ^1}(M))^{\tilde{U}}$ of $\tilde{U}$-invariants is finitely generated for a well-adapted rational character $\chi/c$ of $\tilde{U}$ when $c$ is a sufficiently divisible positive integer. This graded algebra can be identified with the subalgebra of the algebra of $U$-invariants $\oplus_{m=0}^\infty H^0(X,L^{\otimes cm})^{{U}}$ generated by those weight vectors for the action of ${\mathbb C }^* \leq \tilde{U}$ on $\oplus_{m=0}^\infty H^0(X,L^{\otimes cm})^{{U}}$ with non-positive weights after twisting by the well-adapted character $\chi$. \end{rem} Note that if $U$ is {\it any} unipotent complex linear algebraic group which has an action of ${\mathbb C }^*$ with all weights strictly positive (that is, $U$ is graded unipotent), then $U$ can be embedded in $\mathrm{GL}({\rm Lie}(U \rtimes {\mathbb C }^*))$ via its adjoint action on the Lie algebra ${\rm Lie}({U}\rtimes {\mathbb C }^*)$ as the unipotent radical of a subgroup $\hat{U}$ of this form which is generated along the first row, and as the unipotent radical of the associated subgroup $\tilde{U}$ of $\mathrm{SL}(n)$. We will call this the adjoint form of $U$. However there are many examples (including the reparametrisation groups for jet differentials) of subgroups of $\mathrm{GL}(n)$ of the form (\ref{label1}) where the action of $U$ is not equivalent to its adjoint action on ${\rm Lie}\hat{U}$. Theorem \ref{maina} gives us the following result for the adjoint form of a graded unipotent group: \begin{corollary} Let $U$ be any unipotent complex linear algebraic group with an action of ${\mathbb C }^*$ with strictly positive weights and associated ${\mathbb C }^*$ extension $\hat{U}$ and adjoint embedding in $\mathrm{GL}({\rm Lie}(\hat{U}))$ . If $U$ acts linearly on a projective variety $X$ and the action extends to a linear action of $\mathrm{GL}(\mathrm{Lie}\hat{U})$ then the conclusions of Theorem \ref{cor:invariants} hold. \end{corollary} \begin{rem} In the situation when $\hat{U}={\mathbb G }_n$ we cannot apply Theorems \ref{maina} and \ref{cor:invariants} directly to the action of ${\mathbb G }_n$ on the fibre $J_n(Y)_x$ of the jet bundle over $x\in Y$, where $Y$ is a $k$-dimensional complex manifold, as this fibre is not projective. However, $J_n(Y)_x$ can be identified with the set of $n\times k$ matrices with nonzero first column which forms an open subset of the affine space of all $n\times k$ matrices, and we can apply the argument to the associated projective space $ {\mathbb P } ^{nk-1}$, on which ${\mathbb G }_n$ acts linearly with respect to the hyperplane line bundle $L$. When $n \geq 2$ the fibre at $x$ of the bundle $E_n(Y)$ of jet differentials can then be identified with the algebra $\oplus_{m=0}^\infty H^0( {\mathbb P } ^{nk-1},L^{\otimes m})$ and the Demailly algebra $E_n(Y)_x^{{\mathbb U }_n}$ of ${\mathbb U }_n$-invariant jet differentials can be identified with its subalgebra $\oplus_{m=0}^\infty H^0( {\mathbb P } ^{nk-1},L^{\otimes m})^{{{\mathbb U }_n}}$. \label{Demconj} \end{rem} \noindent We will use a generalisation of a criterion in \cite{DK} to prove Theorem \ref{maina} as follows. In \Sigma\ref{sec:construction} we obtain an explicit $\mathrm{GL}(n)$-equivariant embedding of the quasi-affine variety $\mathrm{GL}(n)/\hat{U}$ (which can also be identified with the quotient of $\mathrm{SL}(n)/U$ by a finite central subgroup of $\hat{U} \cap \tilde{U}$) in the Grassmannian $\mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$ of $n$-dimensional linear subspaces of \[\mathrm{Sym}^{\mathbf{\weight}}\CC^n={\mathbb C }^n \oplus \mathrm{Sym}^{\omega_2}({\mathbb C }^n) \oplus \ldots \oplus \mathrm{Sym}^{\omega_n}({\mathbb C }^n)\] where $\mathrm{Sym}^{k}({\mathbb C }^n)$ is the $k$th symmetric product of ${\mathbb C }^n$. Using Pl\"{u}cker coordinates we thus obtain an explicit projective embedding $\mathrm{GL}(n)/\hat{U} \hookrightarrow {\mathbb P } (\wedge^n\mathrm{Sym}^{\mathbf{\weight}}\CC^n).$ In fact $\mathrm{GL}(n)/\hat{U}$ embeds into the open affine subset of $ {\mathbb P } (\wedge^n\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$ where the coordinate corresponding to the one-dimensional summand $\wedge^n{\mathbb C }^n$ of $\wedge^n\mathrm{Sym}^{\mathbf{\weight}}\CC^n$ does not vanish. The advantage of this embedding lies in the fact that we can control the boundary of the orbit $\mathrm{GL}(n)/\hat{U}$ in $ {\mathbb P } (\wsymk n)$; we will prove that \[\overline{(\mathrm{GL}(n)/\hat{U})} \setminus (\mathrm{GL}(n)/\hat{U}) \] is contained in the union of two subspaces $ {\mathbb P } (\mathcal{W}_{v_1})$ and $ {\mathbb P } (\mathcal{W}_{\det})$ of $ {\mathbb P } (\wedge^n\mathrm{Sym}^{\mathbf{\weight}}\CC^n).$ In order to prove Theorem \ref{maina} we show that if $X$ is a nonsingular complex projective variety on which $\mathrm{SL}(n)$ acts linearly with respect to a very ample line bundle $L$, and if the linear action of $\tilde{U} \leq \mathrm{SL}(n)$ on $X$ is twisted by a well-adapted rational character $\chi/c$, then $ {\mathbb P } (\mathcal{W}_{v_1})\times X$ and $ {\mathbb P } (\mathcal{W}_{\det})\times X$ are unstable with respect to the induced linear action of $\mathrm{SL}(n) \times {\mathbb C }^*$ on $ {\mathbb P } (\wedge^n\mathrm{Sym}^{\mathbf{\weight}}\CC^n) \times X$ for an appropriate linearisation. A similar argument applies when $X$ is replaced with $X \times {\mathbb P } ^1$ and enables us to prove Theorem \ref{cor:invariants}. The layout of this paper is as follows. $\Sigma$2 provides a very brief review of classical geometric invariant theory and some non-reductive GIT. In $\Sigma$3 we describe our embedding of $\mathrm{GL}(n)/\hat{U}$ into $\mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$. In $\Sigma$4 we recall the original motivation for this work from global singularity theory and jet differentials and discuss the link between jet differentials and the curvilinear component of Hilbert schemes of points. In $\Sigma$5 we complete the proofs of Theorems \ref{maina} and \ref{cor:invariants}. Finally in $\Sigma$6 we consider our results in the cases of jet differentials and the adjoint forms of graded unipotent groups. \medskip \noindent\textbf{Acknowledgments} The authors thank Brent Doran, Thomas Hawes and and Rich\'ard Rim\'anyi for helpful discussions on this topic. \section{Classical and non-reductive geometric invariant theory} Let $X$ be a complex quasi-projective variety on which a complex reductive group $G$ acts linearly; that is, there is a line bundle $L$ on $X$ (which we will assume to be ample) and a lift $\mathcal{L}$ of the action of $G$ to $L$. Then $y \in X$ is said to be {\em semistable} for this linear action if there exists some $m > 0$ and $f \in H^0(X, L^{\otimes m})^G$ not vanishing at $y$ such that the open subset $$ X_f = \{ x \in X \ | \ f(x) \neq 0 \}$$ is affine ($X_f$ is automatically affine if $X$ is projective or affine), and $y$ is {\em stable} if also $f$ can be chosen so that the action of $G$ on $X_f$ is closed with all stabilisers finite. The open subset $X^{ss}$ of $X$ consisting of semistable points has a quasi-projective categorical quotient $X^{ss} \to X/\!/G$, which restricts to a geometric quotient $X^s \to X^s/G$ of the open subset $X^s$ of stable points (see \cite{GIT} Theorem 1.10). When $X$ is projective then $X_f$ is affine for any nonzero $f \in H^0(X, L^{\otimes m})^G$ (since we are assuming $L$ to be ample), and there is an induced action of $G$ on the homogeneous coordinate ring \[\hat{\mathcal{O}}_L(X) = \bigoplus_{m \geq 0} H^0(X, L^{\otimes m}) \] of $X$. The subring ${\hat{\mathcal{O}}}_L(X)^G$ consisting of the elements of ${\hat{\mathcal{O}}}_L(X)$ left invariant by $G$ is a finitely generated graded complex algebra because $G$ is reductive, and the GIT quotient $X/\!/G$ is the associated projective variety ${\rm Proj}({\hat{\mathcal{O}}}_L(X)^G)$ \cite{Dolg, GIT, New}. When $X$ is affine and the linearisation of the action of $G$ is trivial then the algebra $\mathcal{O}(X)^G$ of $G$-invariant regular functions on $X$ is finitely generated and $X^{ss} = X$ and $X/\!/G = {\rm Spec}(\mathcal{O}(X)^G)$ is the affine variety associated to $\mathcal{O}(X)^G$. Suppose now that $H$ is any linear algebraic group, with unipotent radical $U \unlhd H$ (so that $H/U$ is reductive), acting linearly on a complex projective variety $X$ with respect to an ample line bundle $L$. Then the scheme ${\rm Proj}({\hat{\mathcal{O}}}_L(X)^H)$ is not in general a projective variety, since the graded complex algebra of invariants $${\hat{\mathcal{O}}}_L(X)^H = \bigoplus_{m \geq 0} H^0(X, L^{\otimes m})^H$$ is not necessarily finitely generated, and geometric invariant theory (GIT) cannot be extended immediately to this situation (cf. \cite{DK,F2,F1,GP1,GP2,KPEN,W}). However in some cases it is known that $\hat{\mathcal{O}}_L(X)^U$ is finitely generated, which implies that \[{\hat{\mathcal{O}}}_L(X)^H = \left( {\hat{\mathcal{O}}}_L(X)^U \right)^{H/U}\] is finitely generated since $H/U$ is reductive, and then the {\em enveloping quotient} in the sense of \cite{BDHK,DK} is given by \[X/\!/H={\rm Proj}(\hat{\mathcal{O}}_L(X)^H).\] Moreover, there is a morphism \[q: X^{ss} \to X/\!/H,\] where $X^{ss}$ is defined as in the reductive case above, which restricts to a geometric quotient \[q:X^s \to X^s/H\] for an open subset $X^s \subset X^{ss}$. In such cases we have a GIT-like quotient $X/\!/H$ and we would like to understand it geometrically. However there is a crucial difference here from the case of reductive group actions, even though we are assuming that the invariants are finitely generated: the morphism $X^{ss} \to X/\!/H$ is not in general surjective, so we cannot describe $X/\!/H$ geometrically as $X^{ss}$ modulo some equivalence relation. In this paper we will study the situation when $U$ is a unipotent group with a one-parameter group of automorphisms $\lambda:{\mathbb C }^* \to \mbox{Aut}(U)$ such that the weights of the induced ${\mathbb C }^*$ action on the Lie algebra ${\mathfrak u}$ of $U$ are all strictly positive. We will call such a group $U$ a graded unipotent group. In this situation we can form the semidirect product $$\hat{U} = {\mathbb C }^* \ltimes U$$ given by ${\mathbb C }^* \times U$ with group multiplication $$(z_1,u_1).(z_2,u_2) = (z_1 z_2, (\lambda(z_2^{-1})(u_1))u_2).$$ Note that the centre of $\hat{U}$ is finite and meets $U$ in the trivial subgroup, so we have an inclusion given by the composition $$ U \hookrightarrow \hat{U} \to \mbox{Aut}(\hat{U}) \hookrightarrow \mathrm{GL}(\mbox{Lie}(\hat{U})) =\mathrm{GL}({\mathbb C } \oplus {\mathfrak u})$$ where $\hat{U}$ maps to its group of inner automorphisms and ${\mathfrak u} = \mbox{Lie}(U)$. Thus we find that $U$ is isomorphic to a closed subgroup of the reductive group $G=\mathrm{SL}({\mathbb C } \oplus {\mathfrak u})$ of the form $$ U=\left\{\left(\begin{array}{ccccc} 1 & \alpha_2 & \alpha_3 & \ldots & \alpha_n \\ 0 & 1 & p_{2,3}(\alpha_2 \ldots ,\alpha_n) & \ldots & p_{2,n}(\alpha_2,\ldots, \alpha_n) \\ 0 & 0 & 1 & \ldots & p_{3,n}(\alpha_2,\ldots, \beta_n) \\ \cdot & \cdot & \cdot & \cdot &\cdot \\ 0 & 0 & 0 & 0 & 1 \end{array}\right) :\alpha_2, \ldots, \alpha_n \in {\mathbb C }\right\} $$ where $n = 1 + \dim U$ and $p_{i,j}(\alpha_2,\ldots,\alpha_n)$ is a polynomial in $\alpha_2,\ldots,\alpha_n$ with complex coefficients for $1<i<j \leq n$. Our aim is to study linear actions of subgroups $U$ and $\hat{U}$ of $\mathrm{GL}(n)$ of this form (but with the embedding in $\mathrm{GL}(n)$ not necesssarily induced by the adjoint action on the Lie algebra of $\hat{U}$) which extend to linear actions of $\mathrm{GL}(n)$ itself, by finding explicit affine and projective embeddings of the quasi-affine varieties $\mathrm{GL}(n)/\hat{U}$. In cases where the action of $\hat{U}$ on $X$ extends to an action of $\mathrm{GL}(n)$ there is an isomorphism of $\mathrm{GL}(n)$-varieties \begin{equation} \label{9Febiso} G \times_U X \cong (G/U) \times X \end{equation} given by $ [g,x] \mapsto (gU, gx)$. Here $\mathrm{GL}(n) \times_{\hat{U}} X$ denotes the quotient of $\mathrm{GL}(n) \times X$ by the free action of $\hat{U}$ defined by $u(g,x)=(g u^{-1}, ux)$ for $u \in \hat{U}$, which is a quasi-projective variety by \cite{PopVin} Theorem 4.19. Then there is an induced $\mathrm{GL}(n)$-action on $\mathrm{GL}(n) \times_{\hat{U}} X$ given by left multiplication of $\mathrm{GL}(n)$ on itself. Geometric invariant theory for linear actions of a unipotent group $U$ on a projective variety was studied in \cite{DK}. If $U$ is a unipotent subgroup of the reductive group $G$ then $U$-invariants on $X$ can be related to $G$-invariants of appropriate projective compactifications $\overline{G \times_U X}$ of the quasi-projective variety $G\times_U X$ where $\overline{G \times_U X}$ has a suitable $G$-linearisation extending the linearisation for the $U$ action on $X$. \begin{theorem}{(\cite{DK} Corollary 5.3.19)}\label{thm:fgcriterion} Let $X$ be a nonsingular complex projective variety on which $U$ acts linearly with respect to an ample line bundle $L$. Let $L'$ be a $G$-linearisation over a projective completion $\overline{G \times_{U} X}$ of $G \times_{U} X$ extending the $G$ linearisation over $G \times_{U} X$ induced by $L$. Let $D_1, \ldots , D_r$ be the codimension $1$ components of the boundary of $G \times_U X$ in $\overline{G \times_U X}$ and suppose that for all sufficiently divisible $N$ $L'_N=L^{\prime}[N \sum_{j=1}^r D_j]$ is an ample line bundle on $\overline{G \times_{U} X}$. Then the algebra of invariants $\bigoplus_{k \geq 0} H^0( X,L^{\otimes k})^U$ is finitely generated if and only if for all sufficiently divisible $N$ any $G$-invariant section of a positive tensor power of $L'_N$ vanishes on every codimension one component $D_j$. \end{theorem} \begin{rem} This result appears in \cite{DK} as a corollary to a theorem (Theorem 5.3.18 in \cite{DK}) which claims without the additional hypothesis that $L'_N$ is ample for sufficiently large $N$, that $\bigoplus_{k \geq 0} H^0( X,L^{\otimes k})^U$ is finitely generated if and only if any $G$-invariant section of a positive tensor power of $L'_N$ vanishes on every codimension $1$ component $D_j$ in boundary of $G\times_H X$ in $\overline{G\times_H X}$. However, there is an error in the proof of that theorem: it requires the algebra of $G$-invariants $\oplus_{k\ge 0}H^0(\overline{G\times_H X},(L'_N)^{\otimes k}$ to be finitely generated . Since $G$ is reductive and $\overline{G\times_H X}$ is projective, this is true when $L'_N$ is an ample line bundle for sufficiently divisible $N$, but does not follow in general without assuming such additional hypothesis. Since \cite{DK} Corollary 5.3.19 includes the hypothesis that $L'_N$ is ample line bundle for $N$ sufficiently divisible, its validity is unaffected by this error. \end{rem} Theorem \ref{thm:fgcriterion} can be generalised to allow us to study $H$-invariants for linear algebraic groups $H$ which are neither unipotent nor reductive. Over ${\mathbb C }$ any linear algebraic group $H$ is a semi-direct product $H=U_H\rtimes R$ where $U_H \subset H$ is the unipotent radical of $H$ (its maximal unipotent subgroup) and $R\simeq H/U_H$ is a reductive subgroup of $H$. If $H$ is a subgroup of a reductive group $G$ then there is an induced right action of $R$ on $G/U_H$ which commutes with the left action of $G$. Similarly if $H$ acts on a projective variety $X$ then there is an induced action of $G\times R$ on $G\times_{U_H}X$ with an induced $G\times R$-linearisation. The same is true if we replace the requirement that $H$ is a subgroup of $G$ with the existence of a group homomorphism $H\to G$ whose restriction to $U_H$ is injective. \begin{definition} A group homomorphism $H \to G$ from a linear algebraic group $H$ to a reductive group $G$ will be called $U_H$-faithful if its restriction to the unipotent radical $U_H$ of $H$ is injective. \end{definition} The proof of \cite{DK} Theorem 5.1.18 gives us \begin{theorem}\label{thm:fgcriteriongeneral} Let $X$ be a nonsingular complex projective variety acted on by a linear algebraic group $H=U_H\rtimes R$ where $U_H$ is the unipotent radical of $H$ and let $L$ be a very ample linearisation of the $H$ action defining an embedding $X\subseteq {\mathbb P } ^n$. Let $H \to G$ be an $U_H$-faithful homomorphism into a reductive subgroup $G$ of $\mathrm{SL}(n+1)$ with respect to an ample line bundle $L$. Let $L'$ be a $G\times R$-linearisation over a normal nonsingular projective completion $\overline{G \times_{U_H} X}$ of $G \times_{U_H} X$ extending the $G\times R$ linearisation over $G \times_{U_H} X$ induced by $L$. Let $D_1, \ldots , D_r$ be the codimension one components of the boundary of $G \times_{U_H} X$ in $\overline{G \times_{U_H} X}$, and suppose for all sufficiently divisible $N$ that $L'_N=L^{\prime}[N \sum_{j=1}^r D_j]$ is an ample line bundle on $\overline{G \times_{U_H} X}$. Then the algebra of invariants $\bigoplus_{k \geq 0} H^0( X,L^{\otimes k})^H$ is finitely generated if and only if for all sufficiently divisible $N$ any $G\times R$-invariant section of a positive tensor power of $L'_N$ vanishes on every codimension one component $D_j$. \end{theorem} \begin{proof} For the forward direction first note that by restriction $$\bigoplus_{k \geq 0} H^0( \overline{G \times_{U_H} X},(L_N')^{\otimes k})^{G \times R} \subseteq \bigoplus_{k \geq 0} H^0( {G \times_{U_H} X},(L_N')^{\otimes k})^{G \times R} = (\bigoplus_{k \geq 0} H^0( {G \times_{U_H} X},(L_N')^{\otimes k})^{G})^{ R} $$ $$\cong (\bigoplus_{k \geq 0} H^0( { X},L^{\otimes k})^{U_H})^ { R} = \bigoplus_{k \geq 0} H^0( { X},L^{\otimes k})^H. $$ We can identify $H^0( \overline{G \times_{U_H} X},L_n')$ with a subspace of $H^0( \overline{G \times_{U_H} X},L_{n+1}')$ for any natural number $n$, so that a section $f$ of $L_{n+1}')$ extends to a section $F$ of $L_{n}')$ if and only if it vanishes on each $D_j$ as a section of $L_{n+1}')$. Any given $G\times R$-invariant section of $L^{\otimes k}$ over $G \times_{U_H} X$ extends to a section of $(L_N')^{\otimes k}$ over each $D_j$ for large enough $N$ and thus by normality extends over $ \overline{G \times_{U_H} X}$ (cf. the proof of Converse 1.13 on page 41 of \cite{GIT}). So if the algebra of invariants $\bigoplus_{k \geq 0} H^0( { X},L^{\otimes k})^H$ is finitely generated, for large enough $N$ the finitely many generators will all extend over and vanish on every $D_j$ as a section of a tensor power of $L_N'$, and hence every element of $\bigoplus_{k \geq 0} H^0( { X},L^{\otimes k})^H$ will have the same property. The reverse direction follows by proving that for any such $N$ the ring of invariants $$ \bigoplus_{k \geq 0} H^0( { X},L^{\otimes k})^H \cong \bigoplus_{k \geq 0} H^0( {G \times_{U_H} X},(L_N')^{\otimes k})^{G \times R} $$ is isomorphic to the ring of invariants $\bigoplus_{k \geq 0} H^0( \overline{G \times_{U_H} X},(L_N')^{\otimes k})^{G \times R}$, which is finitely generated since $ \overline{G \times_{U_H} X}$ is a projective variety acted on linearly with respect to the ample linearisation $L′_N$ by the reductive group $G$. This isomorphism arises since any $G \times R$-invariant section $s$ over $G \times_{U_H} X$ of $L′_N$ extends as above to an invariant section of $L_{N′}'$ over $\overline{G \times_{U_H} X}$ for some $N′ > N$. By hypothesis this section vanishes on each $D_j$ and hence defines an invariant section of $L_{N'-1}'$ extending $s$. Repeating this argument enough times we find that $s$ extends to a section of $L′_N$. The same argument applies to any invariant section $s$ over $G \times_{U_H} X$ of a positive tensor power $(L′_N)^{\otimes k}$ of $L′_N$, so we have $$\bigoplus_{k \geq 0} H^0( { X},L^{\otimes k})^H \cong \bigoplus_{k \geq 0} H^0( \overline{G \times_{U_H} X},(L_N')^{\otimes k})^{G \times R}$$ as required. \end{proof} \begin{rem} The proof of Theorem \ref{thm:fgcriteriongeneral} tells us that when the hypotheses hold and the algebra of invariants $\bigoplus_{k \geq 0} H^0( X,L^{\otimes k})^H$ is finitely generated then the enveloping quotient \begin{equation}\label{equotient} X/\!/H =\mathrm{Proj}(\oplus_{k \geq 0} H^0( X,L^{\otimes k})^H)\simeq \overline{G \times_{U_H} X}/\!/_{L'_N} (G\times R) \end{equation} for sufficiently divisible $N$. \end{rem} \begin{rem} Note that in this argument there is in fact no requirement for $U=U_H$ to be the full unipotent radical of $H$; all we need is that $U$ is a normal subgroup of $H$ and $R=H/U$ is reductive. \end{rem} In general even when the algebra of invariants $\bigoplus_{k \geq 0} H^0( X,L^{\otimes k})^H$ on $X$ is finitely generated and \eqref{equotient} is true, the morphism $X \to X/\!/_eH$ is not surjective and in order to study the geometry of $X/\!/_eH$ by identifying it with $\overline{G \times_{U_H} X}/\!/_{L'_N} (G\times R)$ we need information about the boundary $\overline{G \times_{U_H} X}\setminus G\times_{U_H} X$ of $\overline{G \times_{U_H} X}$. If, however, we are lucky enough to to find a $G \times R$-equivariant projective completion $\overline{G \times_{U_H} X}$ with a linearisation $L$ such that for sufficiently large $N$ $L'_N$ is an ample line bundle and the boundary $\overline{G \times_{U_H} X}\setminus G\times_{U_H} X$ is unstable for $L'_N$ then we have a situation which is almost as well behaved as for reductive group actions on projective varieties with ample linearisations as follows. \begin{definition} Let $X^{\overline{ss}}=X\cap \overline{G \times_{U_H} X}^{ss,G\times R}$ and $X^{\overline{s}}=X\cap \overline{G \times_{U_H} X}^{s,G\times R}$ where $X$ is embedded in $G \times_{U_H} X$ in the obvious way as $x\mapsto [1,x]$. \end{definition} \begin{theorem}\label{thm:geomcor} Let $X$ be a complex projective variety acted on by a linear algebraic group $H=U_H\rtimes R$ where $U_H$ is the unipotent radical of $H$ and let $L$ be a very ample linearisation of the $H$ action defining an embedding $X\subseteq {\mathbb P } ^n$. Let $H \to G$ be an $U_H$-faithful homomorphism into a reductive subgroup $G$ of $\mathrm{SL}(n+1)$ with respect to an ample line bundle $L$. Let $L'$ be a $G\times R$-linearisation over a projective completion $\overline{G \times_{U_H} X}$ of $G \times_{U_H} X$ extending the $G\times R$ linearisation over $G \times_{U_H} X$ induced by $L$. Let $D_1, \ldots , D_r$ be the codimension $1$ components of the boundary of $G \times_{U_H} X$ in $\overline{G \times_{U_H} X}$, and suppose that some integral multiple of each $D_j$ is Cartier and for all sufficiently divisible $N$ that $L'_N=L^{\prime}[N \sum_{j=1}^r D_j]$ is an ample line bundle on $\overline{G \times_{U_H} X}$. If for all sufficiently divisible $N$ any $G\times R$-invariant section of a positive tensor power of $L'_N$ vanishes on the boundary of $G \times_{U_H} X$ in $\overline{G \times_{U_H} X}$, then \begin{enumerate} \item the algebra of invariants $\bigoplus_{k \geq 0} H^0( X,L^{\otimes k})^H$ is finitely generated; \item the enveloping quotient $X/\!/H \simeq \overline{G \times_{U_H} X}/\!/_{L'_N} (G\times R)\simeq \mathrm{Proj}(\oplus_{k \geq 0} H^0( X,L^{\otimes k})^H)$ for sufficiently divisible $N$; \item $\overline{G \times_{U_H} X}^{ss,G\times R, L'_N} \subseteq G\times_{U_H} X$ and therefore the morphism \[ \phi: X^{\overline{ss}} \rightarrow X/\!/H\] is surjective and $X/\!/H$ is a categorical quotient of X^{\overline{ss}}$; \item if $x,y \in X^{\overline{ss}}$ then $\phi(x) = \phi(y)$ if and only if the closures of the $H$-orbits of $x$ and $y$ meet in X^{\overline{ss}}$; \item $\phi$ restricts to a geometric quotient $X^{\overline{s}} \rightarrow X^{\overline{s}}/H \subseteq X/\!/H$. \end{enumerate} \end{theorem} \begin{rem} Note that the hypotheses in Theorem \ref{thm:fgcriteriongeneral} that $X$ should be nonsingular and that $\overline{G \times_{U_H} X}$ should be normal and nonsingular are not needed in Theorem \ref{thm:geomcor}. This is because these hypotheses are only required to ensure that sections extend over irreducible components of codimension at least two in the boundary which are not unstable; in the circumstances of Theorem \ref{thm:geomcor} there are no such irreducible components. \end{rem} \begin{proof} If $N$ is sufficiently divisible then the composition \begin{equation}\label{comp} X^{\overline{ss}}\to \overline{G \times_{U_H} X}^{ss,G\times R,L'_N} \to \overline{G \times_{U_H} X}/\!/_{L'_N}(G\times R)\end{equation} is an $H$-invariant morphism, and $\overline{G \times_{U_H} X}/\!/_{L'_N}(G\times R)$ has an ample line bundle $\mathcal{L}$ which pulls back to a positive tensor power $L^{\otimes r}$ of the restriction to $X^{\overline{ss}}$ of the linearisation $L$ of the $H$ action on $X$. $X^{\overline{ss}}$ is an open subset of $X^{ss,fg}$ and \eqref{comp} factors through the quotient map \[q:X^{ss,fg} \to q(X^{ss,fg})\subseteq \mathcal{U}\] where $\mathcal{U}$ is a quasi-projective open subset of the enveloping quotient $X/\!/_eH$ with a birational morphism $\tau: \mathcal{U} \dasharrow \overline{G \times_{U_H} X}/\!/_{L'_N}(G\times R)$ as \[X^{\overline{ss}} \hookrightarrow X^{ss,fg} \xrightarrow{q} \mathcal{U} \xrightarrow{\tau} \overline{G \times_{U_H} X}/\!/_{L'_N}(G\times R).\] The hypothesis that the boundary of $\overline{G \times_{U_H} X}$ is unstable ensures that this composition is surjective. Moreover, $\overline{G \times_{U_H} X}/\!/_{L'_N}(G\times R)$ is a categorical quotient of $G\times_{U_H} X^{\overline{ss}}$ by $G\times R=G\times (H/U_H)$, so it is also a categorical quotient of $G\times X^{\overline{ss}}$ by $G\times H$ and a categorical quotient of $X^{\overline{ss}}$ by $H$, and (4) and (5) now follow from the analogous properties for classical GIT applied to the reductive group $G \times R$. The $H$-invariant morphism $q: X^{ss,fg} \to \mathcal{U}$ then factors through a birational morphism \[\sigma: \overline{G \times_{U_H} X}/\!/_{L'_N}(G\times R) \to \mathcal{U}.\] Since $\sigma$ is surjective and $\overline{G \times_{U_H} X}/\!/_{L'_N}(G\times R)$ is projective it follows that $\mathcal{U}$ is projective which means that $\mathcal{U}=X/\!/_e H$ and $q$ is surjective. Furthermore $\sigma$ and $\tau$ are mutually inverse isomorphisms between $X/\!/_e H$ and $\overline{G \times_{U_H} X}/\!/_{L'_N}(G\times R)$. Finally, since \[\overline{G \times_{U_H} X}^{ss,G\times R, L'_N} \subset G\times_{U_H}X\] we have \begin{multline} \bigoplus_{k\ge 0} H^0(X,L^{\otimes rk})^H \simeq \bigoplus_{k\ge 0} H^0(G \times_{U_H} X,(L'_N)^{\otimes rk})^{G\times R}\simeq \\ \bigoplus_{k\ge 0} H^0(\overline{G \times_{U_H} X}^{ss,G\times R,L'_N},(L'_N)^{\otimes rk})^{G\times R}\simeq \oplus_{k\ge 0} H^0(\overline{G \times_{U_H} X}/\!/_{L'_N}(G\times R),\mathcal{L}^{\otimes k}). \end{multline} Thus $\bigoplus_{k\ge 0} H^0(X,L^{\otimes rk})^H$ is a finitely generated graded algebra and \[X/\!/_eH \simeq \overline{G \times_{U_H} X}/\!/_{L'_N} (G\times R)\simeq \mathrm{Proj}(\oplus_{k \geq 0} H^0( X,L^{\otimes k})^H).\] \end{proof} \begin{rem} Note that in the circumstances of Theorem \ref{thm:geomcor} so that $\overline{G \times_{U_H} X}^{ss,G\times R, L'_N}=G\times_{U_H} X^{\overline{ss}}$ we get a nice geometric description of $X/\!/H$. We know from classical GIT that the morphism from $\overline{G \times_{U_H} X}^{ss,G\times R, L'_N}=G\times_{U_H} X^{\overline{ss}}$ to $X/\!/H$ is $G$-invariant and surjective, and maps two points of $X^{\overline{ss}}$ to the same point of $X/\!/H$ if and only if the closures of their $G \times R$-orbits meet in $\overline{G \times_{U_H} X}^{ss,G\times R, L'_N}=G\times_{U_H} X^{\overline{ss}}$. Since the $G \times R$-sweep in $G\times_{U_H} X^{\overline{ss}}$ of any closed $H$-invariant subset of $X^{\overline{ss}}$ is closed in $G\times_{U_H} X^{\overline{ss}}$, it follows that the $H$-invariant morphism $\phi: X^{\overline{ss}} \twoheadrightarrow X/\!/_LH$ is surjective and if $x_1,x_2\in X^{\overline{ss}}$ then $\phi(x_1)=\phi(x_2)$ if and only if $\overline{Hx_1} \cap \overline{Hx_2}\cap X^{\overline{ss}}\neq \emptyset$, as in Theorem \ref{thm:geomcor} (3) and (4). We can also use the Hilbert--Mumford criteria for (semi)stability from classical GIT to determine the subsets $X^{\overline{s}}$ and $X^{\overline{ss}}$ of $X$ in an analogous way. \end{rem} \subsection{Symplectic geometry of $X/\!/H$} Suppose that the action of $H$ on $X$ extends to a linear action of $G$ on $X$ and that the projective completion $\overline{G \times_{U_H} X}$ is of the form $\overline{G/U_H} \times X$ where $\overline{G/U_H}$ is a $G\times R$-equivariant projective completion of $G/U_H$ and $G \times_{U_H} X$ is identified with $G/U_H \times X$ via the $G\times R$-equivariant isomorphism \[[g,x] \mapsto (gU_H, gx).\] If furthermore $K$ is a maximal compact subgroup of $G$ such that $K_R=K\cap R$ is a maximal compact subgroup of $R$ then we can give a moment map description of $X/\!/H$. For this we choose coordinates for the projective embedding of $X$ defined by $L^N$ and of $\overline{G/U_H}$ such that $K$ acts unitarily. Then we have moment maps \[\mu_X:X \to k^* \text{ and } \mu_{\overline{G/U_H}}: \overline{G/U_H} \to k^* \times k_R^*\] for the actions of $K$ on $X$ and of $K\times K_R$ on $\overline{G/U_H}$ such that the moment map for the action of $K \times K_R$ on $\overline{G/U_H} \times X$ with respect to $L'_N$ is given by \[\mu:(y,x) \mapsto (N\mu_{\overline{G/U_H}}(y)+\mu_X(x),0) \in k^* \times k^*_R\] We can identify $X/\!/H$ with \[\mu^{-1}(0)/(K \times K_R)=\left\{(y,x)\in (\pi_{k_R} \circ \mu_{\overline{G/U_H}})^{-1}(0) \times X:\mu_X(x)=-N\pi_k\mu_{\overline{G/U_H}}(y)\right\}/(K \times K_R)\] where \[\pi_k: k^* \times k^*_R \to k^* \text{ and } \pi_{k_R}: k^* \times k^*_R \to k^*_R \] are the projections. Given a good understanding of the moment map $\mu_{\overline{G/U_H}}:\overline{G/U_H} \to k^*\times k_R^*$ this can provide a nice description of $X/\!/H$ in terms of $\mu_X$. \begin{ex} When the unipotent radical $U_H$ of $H$ is a maximal unipotent subgroup of $G$ we can use the theory of symplectic implosion, due to Guillemin, Jeffrey and Sjamaar \cite{Guillemin-JS:implosion} (or more generally when $U$ is the unipotent radical of a parabolic subgroup of $G$ we can use a generalised version of symplectic implosion \cite{Ksympimpl}). Let us choose a $K$-invariant inner product on the Lie algebra \( \lie k \) of a maximal compact subgroup \( K \) of $G$, which allows us to identify \( \lie k \) with its dual \( \lie k^* \). Let $\lie t_+$ be a positive Weyl chamber in the Lie algebra \( \lie t \) of a maximal torus \( T \) of $K$. Given a symplectic manifold \( M \) with a Hamiltonian symplectic action of \( K \), the implosion \( M_{\mathrm{impl}} \) is a stratified symplectic space with a Hamiltonian action of the maximal torus \( T \) of \( K \), such that there is an identification of reduced spaces $$ M \symp_\lambda^s K = M_\textup{impl} \symp^s_\lambda T = (M \times \mathsf O_{-\lambda}) \symp_0^s K = \mu^{-1}(\lambda)/{\Stab_K(\lambda)} $$ for all \( \lambda \) in the closure of the fixed positive Weyl chamber in \( \lie t^* \), where \( \symp_\lambda^s \) denotes symplectic reduction at level \( \lambda \) and \( \mathsf O_\lambda \) is the coadjoint orbit of \( K \) through \( \lambda \) with its canonical symplectic structure, while \( \mu\colon M \to \lie k^* \) is the moment map for the \( K \)-action on \( M \) and \( \Stab_K(\lambda) \) is the stabiliser in \( K \) of \( \lambda \in \lie k^* \) under the coadjoint action of \( K \). When \( M \) is the cotangent bundle \( T^*K \) (which may be identified with \( G = K_{\mathbb C } \)) then \( (T^*K)_\textup{impl} \) is obtained from \( K \times \lie t_{+} \) by identifying \( (k_1, \xi) \) with \( (k_2, \xi) \) if \( k_1, k_2 \) lie in the same orbit of the commutator subgroup of \( \Stab_K(\xi)\). If \( \xi \) is in the interior of the chamber, its stabiliser is a torus and no identifications are made: an open dense subset of \( (T^*K)_\textup{impl} \) is just the product of \( K \) with the interior of the Weyl chamber. As \( T^*K \) has a Hamiltonian \( K \times K \)-action its implosion inherits a Hamiltonian \( K \times T \)-action. The moment map for the $K$-action is induced by the map \( K \times \lie t_{+} \to \lie k \cong \lie k^* \) given by $(k,\xi) \mapsto k(\xi)$ while the moment map for the $T$-action is induced by the projection onto $\lie t_+ \subseteq \lie t \cong \lie t$. For a general symplectic manifold \( M \) with a Hamiltonian \( K \)-action the imploded space \( M_\textup{impl} \) is the symplectic quotient \( (M \times (T^*K)_\textup{impl}) \symp_0^s K \), with its induced Hamiltonian \( T \)-action. This can be obtained from $\mu^{-1}(\lie t_+)$ by identifying $x$ with $y$ if $\mu(x) = \mu(y) = \xi$ and furthermore $x$ and $y$ lie in the same orbit of the commutator subgroup of \( \Stab_K(\xi)\). \( (T^*K)_\textup{impl} \) can be identified with the affine variety which is the non-reductive GIT quotient \begin{equation*} K_{\mathbb C } \symp U = {\rm Spec}(\mathcal{O}(K_{\mathbb C })^U) = \overline{G/U}, \end{equation*} of the complex reductive group \( G = K_{\mathbb C } \) by a maximal unipotent subgroup \( U \); here $\mathcal{O}(K_{\mathbb C })^U$ is always finitely generated. This variety has a stratification by quotients of \( K_{\mathbb C } \) by commutators of parabolic subgroups; the open stratum is just \( K_{\mathbb C }/U \) and \( K_{\mathbb C } \symp U \) is the canonical affine completion of the quasi-affine variety \( K_{\mathbb C } / U \). Thus when $G$ acts linearly on a projective variety $X$ with an ample linearisation $L$, then the enveloping quotient $X/\!/ U$ has a description in terms of the corresponding moment map $\mu_{X,K}: X \to \lie k^*$: it can be obtained from $\mu_{X,K}^{-1}(\lie t_+)$ by identifying $x$ with $y$ if $\mu_{X,K}(x) = \mu_{X,K}(y) = \xi$ and furthermore $x$ and $y$ lie in the same orbit of the commutator subgroup of \( \Stab_K(\xi)\). There is a similar description for $X/\!/U$ when $U$ is the unipotent radical of any parabolic subgroup of $G$. Moreover when $H$ is a subgroup of the normaliser of $U$ in $G$ with reductive quotient $R = H/U$ which can be identified with the complexification of a subgroup $K_R$ of $K$, then we get an induced moment map for the action of $K \times K_R$ on $X \times \overline{G/U} $ and thus a description of $X/\!/H$ in terms of $\mu_{X,K}$ and the moment map $\mu_R$ for the action of $K \cap R$ on $\overline{G/U}$. In the situation of Theorem \ref{thm:geomcor} we can identify $X/\!/H$ with $\mu_{X,K}^{-1}(\lie t_+^R)/(K \cap R)$ where $\lie t_+^R$ is a $K \cap R$-invariant subset of $\lie t_+$ whose intersection with the image of $\mu_{X,K}$ does not meet the boundary of $\lie t_+$. \end{ex} \section{Embeddings in Grassmannians}\label{sec:construction} Let $U$ be a unipotent subgroup of the complex special linear group $\mathrm{SL}(n)$ and let $\hat{U}=U \rtimes {\mathbb C }^*$ be a subgroup of the complex general linear group $\mathrm{GL}(n)$ which is a ${\mathbb C }^*$-extension of $U$ such that the weights of the ${\mathbb C }^*$ action on $\mathrm{Lie}(U)$ are all strictly positive. Let us suppose also that $U$ and $\hat{U}$ are upper triangular subgroups of $\mathrm{GL}(n)$ which are generated along the first row; that is, there are integers $1 = \omega_1 < \omega_2 \leq \omega_3 \leq \cdots \leq \omega_n$ and polynomials $p_{i,j}(\alpha_1,\ldots,\alpha_n)$ in $\alpha_1,\ldots,\alpha_n$ with complex coefficients for $1<i<j \leq n$ such that \begin{equation}\label{presentation} \hat{U}=\left\{\left(\begin{array}{ccccc}\alpha_1 & \alpha_2 & \alpha_3 & \ldots & \alpha_n \\ 0 & \alpha_1^{\omega_2} & p_{2,3}(\mathbf{\alpha}) & \ldots & p_{2,n}(\mathbf{\alpha}) \\ 0 & 0 & \alpha_1^{\omega_3} & \ldots & p_{3,n}(\mathbf{\alpha}) \\ \cdot & \cdot & \cdot & \cdot &\cdot \\ 0 & 0 & 0 & 0 & \alpha_1^{\omega_n} \end{array}\right : \mathbf{\alpha} =(\alpha_1,\ldots, \alpha_n) \in {\mathbb C }^* \times {\mathbb C }^{n-1} \right\} \end{equation} and $$ U=\left\{\left(\begin{array}{ccccc} 1 & \alpha_2 & \alpha_3 & \ldots & \alpha_n \\ 0 & 1 & p_{2,3}(\alpha) & \ldots & p_{2,n}(\alpha) \\ 0 & 0 & 1 & \ldots & p_{3,n}(\alpha) \\ \cdot & \cdot & \cdot & \cdot &\cdot \\ 0 & 0 & 0 & 0 & 1 \end{array}\right) :\alpha=(1,\alpha_2, \ldots, \alpha_n) \in {\mathbb C }^{n-1}\right\}. $$ This implies that the Lie algebra ${\mathfrak u} = {\rm Lie}(U)$ has a similar form: $$ {\mathfrak u}=\left\{\left(\begin{array}{ccccc} 0 & a_2 & a_3 & \ldots & a_n \\ 0 & 0 & q_{2,3}(a) & \ldots & q_{2,n}(a) \\ 0 & 0 & 0 & \ldots & q_{3,n}(a) \\ \cdot & \cdot & \cdot & \cdot &\cdot \\ 0 & 0 & 0 & 0 & 0 \end{array}\right) :a=(a_2, \ldots, a_n) \in {\mathbb C }^{n-1}\right\} $$ where the $q_{i,j}$ are linear forms in the parameters $a=(a_2,\ldots, a_n) \in {\mathbb C }^{n-1}$ satisfying the following properties: \begin{enumerate} \item[(i)] $q_{i,j}=0$ for $i\le j$. \item[(ii)] Let $\hat{\omega}_i=\omega_i-1$ for $i=1,\ldots n$ be the weights of the adjoint action of the subgroup ${\mathbb C }^*$ of $\hat{U}$ on $\hat{{\mathfrak u}}=\mathrm{Lie} \hat{U}$, so that $\hat{\omega}_1=0$ and $\hat{\omega}_i > 0$ if $i=2,\ldots, n$. Then the ${\mathbb C }^*$-action makes ${\mathfrak u} = \mathrm{Lie} U$ into a graded Lie algebra, and therefore \begin{equation} \label{structure} q_{i,j}(a_2,\ldots, a_n)=\sum_{\ell:\hat{\omega}_\ell+\hat{\omega}_i=s_j}c_j^{\ell i}a_\ell \end{equation} for some structure coefficients $c_j^{\ell i} \in {\mathbb C }$. \end{enumerate} \begin{rem} \label{rmk3.2} In particular, \eqref{structure} implies that $c^{\ell i}_j=0$ for $\ell\ge j$ unless $i=1$. But $q_{1,j}=a_j$ so this means that for $i\ge 2$ \[q_{i,j}(a_2,\ldots, a_n)=q_{ij}(a_2,\ldots, a_{j-1})\] is a linear form in the first $j-1$ free parameters. It follows immediately that for $j \geq i\geq 2$ \[p_{i,j}(\mathbf{\alpha}) = p_{i,j}(\alpha_1, \ldots, \alpha_{j-1})\] depends only on $\alpha_1, \ldots, \alpha_{j-1}$. \end{rem} \begin{prop}\label{homogprop} Let the weighted degree of $\alpha_s$ be $\deg(\alpha_s)=\omega_s$ for $1 \leq s \leq n$. Then \begin{enumerate} \item[(i)] the polynomial $p_{i,j}(\mathbf{\alpha})$ which is the $(i,j)$th entry of the element of $\hat{U}$ parametrised by $\mathbf{\alpha} = (\alpha_1,\ldots,\alpha_n) \in {\mathbb C }^* \times {\mathbb C }^{n-1}$ is homogeneous of degree $\omega_i$ in $\alpha_1,\ldots,\alpha_n$; \item[(ii)] $p_{i,j}(\mathbf{\alpha})$ is weighted homogeneous of degree $\omega_j$ in $\alpha_1,\ldots,\alpha_n$. \end{enumerate} \end{prop} \begin{proof} The first (respectively second) part of the statement follows from the fact that $\hat{U}$ is closed under multiplication by its subgroup \[ {\mathbb C }^*=\left \{ \left(\begin{array}{cccc} \alpha_1 & 0 & \ldots & 0 \\ 0 & \alpha_1^{\omega_2} & \ldots & 0 \\ \cdot & \cdot & \cdot & \cdot \\ 0 & \cdot & \cdot & \alpha_1^{\omega_n} \end{array} \right): \alpha_1 \in {\mathbb C }^* \right \}\] on the left (respectively right). \end{proof} \subsection{The construction} For a vector of positive integers $\mathbf{\omega}=(\omega_1,\ldots, \omega_n)$ we introduce the notation \[\mathrm{Sym}^{\mathbf{\weight}}\CC^n={\mathbb C }^n \oplus \mathrm{Sym}^{\omega_2}({\mathbb C }^n) \oplus \ldots \oplus \mathrm{Sym}^{\omega_n}({\mathbb C }^n),\] where $\mathrm{Sym}^{s}({\mathbb C }^n)$ is the $s$th symmetric power of ${\mathbb C }^n$. Any linear group action on ${\mathbb C }^n$ induces an action on $\mathrm{Sym}^{\mathbf{\weight}}\CC^n$. The most straightforward way to find an algebraic description of the quotient $\mathrm{GL}(n)/\hat{U}$ is to find a $\mathrm{GL}(n)$-module $W$ with a point $w \in W$ whose stabiliser is $\hat{U}$. Then the orbit $\mathrm{GL}(n)\cdot w$ is isomorphic to $\mathrm{GL}(n)/\hat{U}$ as a quasi-affine variety, and its closure $\overline{\mathrm{GL}(n)\cdot w}$ in $W$ is an affine completion of $\mathrm{GL}(n)/\hat{U}$, while its closure in a projective completion of $W$ is a compactification of $\mathrm{GL}(n)/ \hat{U}$. \begin{theorem} \label{embed} Let $\hat{U}=U \rtimes {\mathbb C }^*$ be a ${\mathbb C }^*$ extension of a unipotent subgroup $U$ of $\mathrm{SL}(n)$ with positive weights $1=\omega_1 < \omega_2 \le \cdots \le \omega_n$ and a polynomial presentation \eqref{presentation}. Fix the standard basis $\mathcal{E}=\{e_1,\ldots, e_n\}$ of ${\mathbb C }^n$ and define \begin{equation}\label{pn} \mathfrak{p}_n=[e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge (e_j + \sum_{i=2}^j p_{i,j}(e_1,\ldots,e_{j-1}))\wedge \ldots \wedge (e_n + \sum_{i=2}^n p_{i,n}(e_1,\ldots,e_n))] \end{equation} $$ \in \mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)\subset {\mathbb P } (\wedge^n\mathrm{Sym}^{\mathbf{\weight}}\CC^n).$$ Then the stabiliser $\mathrm{GL}(n)_{\mathfrak{p}_n}$ of $\mathfrak{p}_n$ in $\mathrm{GL}(n)$ is $\hat{U}$. \end{theorem} \begin{corollary}\label{phi} The map $\phi_n:\mathrm{GL}(n) \to {\mathbb P } [\wedge^n \mathrm{Sym}^{\mathbf{\weight}}\CC^n]$ which sends a matrix with column vectors $v_1,\ldots, v_n$ to the point \begin{equation}\label{phidef} (v_1,\ldots, v_n) \mapsto [v_1 \wedge (v_2+v_1^{\omega_2}) \wedge \ldots \wedge (v_n + \sum_{i=2}^n p_{i,n}(v_1,\ldots,v_n))] \end{equation} is invariant under right multiplication of $\hat{U}$ on $\mathrm{GL}(n)$ and $\mathrm{GL}(n)$-equivariant with respect to left multiplication on $\mathrm{GL}(n)$ and the induced action on $ {\mathbb P } [\wedge^n\mathrm{Sym}^{\mathbf{\weight}}\CC^n]$. It therefore defines a $\mathrm{GL}(n)$-equivariant embedding \begin{equation}\label{embedding} \phi_n: \mathrm{GL}(n)/\hat{U} \hookrightarrow \mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n). \end{equation} \end{corollary} \begin{rem} \label{afemb} Note that the image of the embedding $\phi_n:\mathrm{GL}(n) \to {\mathbb P } [\wedge^n \mathrm{Sym}^{\mathbf{\weight}}\CC^n]$ lies in the open affine subset defined by the non-vanishing of the coordinate in $\wedge^n \mathrm{Sym}^{\mathbf{\weight}}\CC^n$ corresponding to the one-dimensional summand $\wedge^n {\mathbb C }^n$ of $\wedge^n \mathrm{Sym}^{\mathbf{\weight}}\CC^n$ spanned by $e_1 \wedge \cdots \wedge e_n$. \end{rem} \begin{proof}[Proof of Theorem \ref{embed}] First we prove that $\hat{U}$ is contained in the stabiliser $\mathrm{GL}(n)_{\mathfrak{p}_n}$. For $(\alpha_1,\ldots, \alpha_n)\in {\mathbb C }^* \times {\mathbb C }^{n-1}$ let \[u(\alpha_1, \ldots, \alpha_n) =\left(\begin{array}{ccccc}\alpha_1 & \alpha_2 & \alpha_3 & \ldots & \alpha_n \\ 0 & \alpha_1^{\omega_2} & p_{2,3}(\mathbf{\alpha}) & \ldots & p_{2,n}(\mathbf{\alpha}) \\ 0 & 0 & \alpha_1^{\omega_3} & \ldots & p_{3,n}(\mathbf{\alpha}) \\ \cdot & \cdot & \cdot & \cdot &\cdot \\ 0 & 0 & 0 & 0 & \alpha_1^{\omega_n} \end{array}\right)\in \hat{U} \] denote the element of $\hat{U}$ determined by the parameters $(\alpha_1, \ldots, \alpha_n)$ and for an $n$-tuple of vectors $\mathbf{v}=(v_1,\ldots, v_n)\in ({\mathbb C }^n)^{\oplus n}$ forming the columns of the $n \times n$-matrix $A\in \mathrm{GL}(n)$ we similarly define the matrix \[u(A)=u(v_1, \ldots, v_n) =\left(\begin{array}{ccccc}v_1 & v_2 & v_3 & \ldots & v_n \\ 0 & v_1^{\omega_2} & p_{2,3}(\mathbf{v}) & \ldots & p_{2,n}(\mathbf{v}) \\ 0 & 0 & v_1^{\omega_3} & \ldots & p_{3,n}(\mathbf{v}) \\ \cdot & \cdot & \cdot & \cdot &\cdot \\ 0 & 0 & 0 & 0 & v_1^{\omega_n} \end{array}\right)\in M_{n \times n}(\mathrm{Sym}^{\mathbf{\weight}}\CC^n) \] with entries in $\mathrm{Sym}^{\mathbf{\weight}}\CC^n$. Then the map $\phi$ in \eqref{phidef} is the composition \[\phi(v_1,\ldots, v_n)=(u\circ \pi)(v_1,\ldots, v_n)\] where the rational map $\pi:M_{n \times n}(\mathrm{Sym}^{\mathbf{\weight}}\CC^n) \dasharrow \mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$ restricts to a morphism on an open subset of $M_{n \times n}(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$ containing the image of $u:\mathrm{GL}(n) \to M_{n \times n}(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$. Now, since $\hat{U}$ is a group, the $(i,j)$ entry of the product of two elements is the polynomial $p_{i,j}$ in the entries of the first row of the product; that is, \[u(\beta_1,\ldots, \beta_n)u(\alpha_1,\ldots, \alpha_n)=u(\alpha_1\beta_1,\alpha_1^{\omega_2}\beta_2+\beta_1\alpha_2, \ldots,\sum_{m=1}^n p_{m,n}(\alpha_1,\ldots \alpha_n)\beta_m)\] for any $\alpha_1, \ldots, \alpha_n,\beta_1, \ldots, \beta_n$. This implies that \[u(e_1,\ldots, e_n)\cdot u(\alpha_1,\ldots, \alpha_n)=u(\alpha_1e_1,\alpha_1^{\omega_2}e_2+\alpha_2e_1, \ldots,\sum_{m=1}^n p_{m,n}(\alpha_1,\ldots \alpha_n)e_m)\] where $\{e_1, \ldots, e_n\}$ is the standard basis for ${\mathbb C }^n$. However, the $n$-tuple $$(\alpha_1e_1,\alpha_1^{\omega_2}e_2+\alpha_2e_1, \ldots,\sum_{m=1}^n p_{m,n}(\alpha_1,\ldots \alpha_n)e_m)) \in ({\mathbb C }^n)^{\oplus n}$$ on the right hand side forms the columns of the matrix $ u(\alpha_1,\ldots,\alpha_n)$, so we arrive at \begin{equation}\label{groupproperty} u(e_1,\ldots, e_n)\cdot u(\alpha_1,\ldots, \alpha_n)=u(u(\alpha_1,\ldots, \alpha_n)\cdot e_1,\ldots , u(\alpha_1,\ldots, \alpha_n)\cdot e_n). \end{equation} Since $u(\alpha_1,\ldots, \alpha_n)$ lies in the standard Borel subgroup $B_n$ of $\mathrm{GL}(n)$, the matrices $u(e_1,\ldots, e_n)$ and $u(e_1,\ldots, e_n)\cdot u(\alpha_1,\ldots, \alpha_n)$ represent the same element in $\mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$; that is, in $\mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$ we have \begin{multline}\nonumber \mathfrak{p}_n=\pi(u(e_1,\ldots ,e_n))=\pi(u(e_1,\ldots, e_n)\cdot u(\alpha_1,\ldots, \alpha_n))=\\ \pi(u(u(\alpha_1,\ldots, \alpha_n)\cdot e_1,\ldots , u(\alpha_1,\ldots, \alpha_n)\cdot e_n) \end{multline} which completes the proof that $\hat{U} \subseteq \mathrm{GL}(n)_{\mathfrak{p}_n}$. It remains to prove that $\mathrm{GL}(n)_{\mathfrak{p}_n}\subseteq \hat{U}$. Suppose that $g = (g_{ij})_{i,j=1}^n \in \mathrm{GL}(n)_{\mathfrak{p}_n}$; we want to show that $g \in \hat{U}$. For $1 \leq m \leq n$ let $$g^{\leq m} = (g_{ij})_{i,j=1}^m \in \mathrm{GL}(m)$$ be the upper left $m\times m$ block of $g$. Recall that by Remark \ref{rmk3.2} if $j \geq i \geq 2$ then $p_{i,j}(\alpha_1,\ldots,\alpha_n) = p_{i,j}(\alpha, \ldots, \alpha_{j-1})$ depends only on $\alpha_1,\ldots,\alpha_{j-1}$. We will prove by induction on $m$ that $$g^{\leq m} = u(g_{11},g_{12}, \ldots, g_{1m})$$ This is clear for $m=1$ since $g^{\leq 1} =(g_{11})=u(g_{11})$. Suppose that it is true for some $m<n$. Since $g \in \mathrm{GL}(n)_{\mathfrak{p}_n}$ the Pl\"{u}cker coordinates $$ e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^n p_{i,n}(e_1,\ldots, e_n) $$ of $\mathfrak{p}_n$ agree up to multiplication by a nonzero scalar with the Pl\"{u}cker coordinates $$ g e_1 \wedge (g e_2+g e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^n p_{i,n}(g e_1,\ldots, g e_n) $$ of $g \mathfrak{p}_n$, where $g e_j = \sum_{s=1}^n g_{sj}e_s$ and $p_{i,j}(g e_1,\ldots,g e_n) \in \mathrm{Sym}^{\omega_i}({\mathbb C }^n) \subseteq \mathrm{Sym}^{\mathbf{\weight}}\CC^n$. By the inductive hypothesis we have $$g_{ij} = p_{i,j}(g_{11}, \ldots, g_{1j})$$ for $1 \leq i \leq m$ and $1 \leq j \leq m$, so with our previous notation \[g^{\le m}=u(g_{11},\ldots, g_{1m})\in \hat{U}\] holds, and therefore $g^{\le m}$ fixes $\mathfrak{p}_m$; thus \[\mathfrak{p}_m=\pi(u(e_1,\ldots, e_m))=\pi(u(g^{\le m}e_1,\ldots ,g^{\le m}e_m)).\] In coordinates this means that $$ \cdot e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^m p_{i,m}(e_1,\ldots, e_m)$$ agrees up to multiplication by a nonzero scalar with $$ g e_1 \wedge (g e_2+g e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^m p_{i,m}(g e_1,\ldots, g e_m). $$ Therefore $$ e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^m p_{i,m}(e_1,\ldots, e_m) \wedge \sum_{i=1}^{m+1} p_{i,m+1}(e_1,\ldots, e_{m+1}) \wedge \ldots \wedge \sum_{i=1}^n p_{i,n}(e_1,\ldots, e_n)$$ and $$ e_1 \wedge ( e_2+ e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^m p_{i,m}(e_1,\ldots, e_m) \wedge \sum_{i=1}^{m+1} p_{i,m+1}(ge_1,\ldots, g e_{m+1})\wedge \ldots \wedge \sum_{i=1}^n p_{i,n}(g e_1,\ldots, g e_n)$$ agree up to multiplication by a nonzero scalar. Applying the identification \begin{equation}\label{wedge} \bigwedge^n (\oplus_{i=1}^tV_i)=\bigoplus_{p_1+\ldots+p_t=n}\left( \wedge^{p_1}V_1 \otimes \ldots \otimes \wedge^{p_t}V_t\right), \end{equation} with $V_1=\bigwedge^{m+1}({\mathbb C }^n \oplus \mathrm{Sym}^{\omega_2}{\mathbb C }^n \oplus \cdots\oplus \mathrm{Sym}^{\omega_{m+1}}{\mathbb C }^n)$ and $$V_2=\mathrm{Sym}^{\omega_{m+2}}{\mathbb C }^n,\ldots, V_{n-m}=\mathrm{Sym}^{\omega_n}{\mathbb C }^n$$ we get a natural $\mathrm{GL}(n)$-equivariant projection to the direct summand corresponding to $p_1=m+1,p_2=\ldots =p_{n-m}=1$ given by $$\pi: \bigwedge^n \mathrm{Sym}^{\mathbf{\weight}}\CC^n \to \bigwedge^{m+1}({\mathbb C }^n \oplus \mathrm{Sym}^{\omega_2}{\mathbb C }^n \oplus \cdots\oplus \mathrm{Sym}^{\omega_{m+1}}{\mathbb C }^n) \otimes \mathrm{Sym}^{\omega_{m+2}} \otimes \cdots \otimes \mathrm{Sym}^{\omega_n}{\mathbb C }^n$$ which takes $e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^n p_{i,n}(e_1,\ldots, e_n)$ to $$e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^m p_{i,m}(e_1,\ldots, e_m) \wedge \sum_{i=1}^{m+1} p_{i,m+1}(e_1,\ldots, e_{m+1}) \otimes e_1^{\omega_{m+2}} \otimes \cdots \otimes e_1^{\omega_n}.$$ This must agree up to multiplication by a nonzero scalar with the projection \begin{multline}\nonumber \pi\left(g e_1 \wedge (g e_2+g e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^m p_{i,m}(e_1,\ldots, e_m) \wedge \sum_{i=1}^n p_{i,n}(g e_1,\ldots, g e_n)\right)=\\ e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^{m+1} p_{i,m+1}(g e_1,\ldots, g e_{m+1}) \otimes {q_{m+2}} \otimes \cdots \otimes {q_n} \end{multline} for some $q_j \in \mathrm{Sym}^{\omega_j}{\mathbb C }^n$ for ${m+2}\leq j \leq n$. It follows from this that \begin{multline}\label{transformed} \lambda e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^{m+1} p_{i,m+1}(e_1,\ldots, e_{m+1})=\\ e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^{m+1} p_{i,m+1}(g e_1,\ldots, g e_{m+1}), \end{multline} for some nonzero scalar $\lambda$. Now, $g^{\le m}=u(g_{11},\ldots, g_{1m})$ and therefore by \eqref{groupproperty} \[u(g^{\le m}e_1,\ldots, g^{\le m}e_m)=u(e_1,\ldots, e_n)\cdot u(g_{11},\ldots, g_{1m}).\] But if $m+1 \geq i\ge 2$ then $p_{i,m+1}(\alpha_1,\ldots,\alpha_{m})$ is a polynomial in $\alpha_1,\ldots, \alpha_{m}$, and does not depend on $\alpha_{m+1},\ldots, \alpha_n$. Therefore \begin{multline} p_{i,m+1}(ge_1,\ldots, ge_{m})=\sum_{s=2}^{n}p_{is}(e_1,\ldots, e_m) p_{s,m+1} (g_{11},\ldots,g_{1,m+1})\text{ for } 2\le i \le m+1 \end{multline} and \[p_{1,m+1}(ge_1,\ldots, ge_{m+1})=ge_{m+1}=\sum_{i=1}^n g_{i,m+1}e_i. \] Substituting this into \eqref{transformed} we arrive at the equation \begin{multline}\label{transformed2} \lambda \cdot \left(e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge \sum_{i=1}^{m+1} p_{i,m+1}(e_1,\ldots, e_{m+1})\right)=\\ =e_1 \wedge (e_2+e_1^{\omega_2}) \wedge \ldots \wedge \left(\sum_{i=2}^{m+1} \sum_{s=1}^{n}p_{s,m+1} (g_{11},\ldots,g_{1,m+1})p_{is}(e_1,\ldots, e_{m+1})+\sum_{s=2}^n g_{s,m+1}e_i\right). \end{multline} There is another $\mathrm{GL}(n)$-equivariant projection to the direct summand corresponding to $V_i=\mathrm{Sym}^{\omega_i}{\mathbb C }^n$ and $p_1=2,p_2=\ldots =p_{m}=1$ in \eqref{wedge}, given by \[ \rho:\bigwedge^{m+1}({\mathbb C }^n \oplus \mathrm{Sym}^{\omega_2}{\mathbb C }^n \oplus \cdots\oplus \mathrm{Sym}^{\omega_{m+1}}{\mathbb C }^n) \to \\ \wedge^2{\mathbb C }^n \otimes \mathrm{Sym}^{\omega_2}{\mathbb C }^n \otimes \cdots \otimes \mathrm{Sym}^{\omega_m}{\mathbb C }^n \] which takes the left hand side of \eqref{transformed2} to \[\lambda (e_1\wedge e_{m+1}) \otimes e_1^{\omega_2} \otimes \ldots \otimes e_1^{\omega_m}\] and the right hand side to \[\left(e_1 \wedge (\Sigma_{s=2}^m(p_{s,m+1} (g_{11},\ldots,g_{1,m+1})-g_{s,m+1})e_s+g_{m+1,m+1}e_{m+1})\right) \otimes e_1^{\omega_2} \otimes \ldots \otimes e_1^{\omega_m} . \] These two are equal, so we obtain \begin{equation}\label{lambda1} g_{s,m+1}=p_{s,m+1}(g_{11},\ldots, g_{1,m+1}) \text { for } s \neq 1,m+1 \,\,\, \mbox{ and } \,\,\, \lambda=g_{m+1,m+1}. \end{equation} Note that the right hand side of \eqref{transformed2} is independent of $b_{1,m+1}$, which can be chosen arbitrarily, as we expect. Finally, for $s=m+1$, we take the third $\mathrm{GL}(n)$-equivariant projection corresponding to $V_i=\mathrm{Sym}^{\omega_i}{\mathbb C }^n$ and $p_1=\ldots =p_n=1$ in \eqref{wedge}, given by \begin{multline}\nonumber \xi:\bigwedge^{m+1}(\mathrm{Sym}^{\omega_1}{\mathbb C }^n \oplus \mathrm{Sym}^{\omega_2}{\mathbb C }^n \oplus \cdots\oplus \mathrm{Sym}^{\omega_{m+1}}{\mathbb C }^n) \to \\ {\mathbb C }^n \otimes \mathrm{Sym}^{\omega_2}{\mathbb C }^n \otimes \cdots \otimes \mathrm{Sym}^{\omega_m}{\mathbb C }^n \otimes \mathrm{Sym}^{\omega_{m+1}}{\mathbb C }^n, \end{multline} and project the equation \eqref{transformed2}. We get \[\lambda\cdot e_1^{\omega_1} \otimes e_1^{\omega_2} \otimes \ldots \otimes e_1^{\omega_{m+1}}=e_1^{\omega_1} \otimes e_1^{\omega_2} \otimes \ldots \otimes e_1^{\omega_m} \otimes p_{m+1,m+1}(b_{11},\ldots ,b_{1,m+1})e_1^{\omega_{m+1}}\] which gives $\lambda=p_{m+1,m+1}(b_{11},\ldots ,b_{1,m+1})$. From \eqref{lambda1} we get $$g_{m+1,m+1}=p_{m+1,m+1}(b_{11},\ldots ,b_{1,m+1})$$ and Theorem \ref{embed} is proved. \end{proof} \subsection{Changing the basis of ${\mathfrak u}$} We observed in Proposition \ref{homogprop} that the left-right multiplication action of the subgroup ${\mathbb C }^*$ of $\hat{U}$ implies that the polynomial entry $p_{i,j}(\alpha)$ of an element of $\hat{U}$ with parameters $\alpha$ in the first row has degree $i$ and weighted degree $\omega_j$ in $\alpha$. Similarly we have a bigrading on $\mathrm{Sym}^{\mathbf{\weight}}\CC^n$ as follows: the Lie algebra ${\mathfrak u}=\mathrm{Lie}(U)$ decomposes into eigenspaces for the adjoint action of $\mathrm{Lie} {\mathbb C }^*={\mathbb C } z = {\mathfrak u}_1$ as \[{\mathfrak u}=\oplus_{i=1}^r {\mathfrak u}_i,\] where $z \in {\mathfrak u}_1 \setminus \{0\}$ and \[{\mathfrak u}_i=\{x \in {\mathfrak u}:[x,z]=(\tilde{\omega}_i-1) x\}\] if $\tilde{\omega}_1,\ldots, \tilde{\omega}_r$ are the different weights among $\omega_1,\ldots, \omega_n$. This induces a decomposition \[\mathrm{Sym}^{\mathbf{\weight}}\CC^n={\mathbb C }^n \oplus ({\mathfrak u}_2 \otimes \mathrm{Sym}^{\tilde{\omega}_2}{\mathbb C }^n) \oplus \ldots \oplus ({\mathfrak u}_r \otimes \mathrm{Sym}^{\tilde{\omega}_r}{\mathbb C }^n)\] of $\mathrm{Sym}^{\mathbf{\weight}}\CC^n$. Let $\mathrm{Sym}^{a}_{b}{\mathbb C }^n=\oplus_{\omega_{i_1}+\ldots +\omega_{i_a}=b} ({\mathbb C } e_{i_1}\ldots e_{i_a})\subseteq \mathrm{Sym}^{a}{\mathbb C }^n$ and define \[\sym^{\mathbf{\weight}}_{\Delta}\CC^n =\oplus_{i,j=1}^r (\mathfrak{u}_i \otimes \mathfrak{u}_j \otimes \mathrm{Sym}^{\tilde{\omega}_i}_{\tilde{\omega}_j}{\mathbb C }^n).\] The image of the embedding $\phi_n$ of $\mathrm{GL}(n)/\hat{U}$ sits in the subset $\mathrm{Grass}_n( \sym^{\mathbf{\weight}}_{\Delta}\CC^n)$ of $\mathrm{Grass}_n( \mathrm{Sym}^{\mathbf{\weight}}\CC^n)$, and the group \[\widetilde{\mathrm{GL}}({\mathfrak u})={\mathbb C }^* \times \mathrm{GL}({\mathfrak u}_2) \times \ldots \times \mathrm{GL}({\mathfrak u}_r) \subset \mathrm{GL}(\hat{\mathfrak{u}})\] acts on $\sym^{\mathbf{\weight}}_{\Delta}\CC^n$ via conjugation and thus on $\mathrm{Grass}(n,\sym^{\mathbf{\weight}}_{\Delta}\CC^n)$. If $g \in \widetilde{\mathrm{GL}}({\mathfrak u})$ then the subgroup \[ g^{-1}\hat{U} g \] of $\mathrm{GL}(n)$ with Lie subalgebra $g^{-1}{\mathfrak u} g $ has the same form as $\hat{U}$ and so we can compare the corresponding embeddings $\phi_n$ of $\mathrm{GL}(n)/\hat{U}$ and $\mathrm{GL}(n)/g^{-1}\hat{U} g$ in $\mathrm{Grass}(n,\sym^{\mathbf{\weight}}_{\Delta}\CC^n)$; let us denote these by $\phi_{\hat{U}}$ and $\phi_{g^{-1}\hat{U} g}$. The linear forms in the first row of $g^{-1}{\mathfrak u} g$ (and the same linear forms in the first row of $g^{-1} \hat{U} g$) are linearly independent, and give parameters $b_1,\ldots, b_n$ for the group and its Lie algebra. The corresponding embedding is then $\phi_{g^{-1}\hat{U} g}$, and we have \begin{prop}\label{changebasis} A linear change of basis of $\hat{{\mathfrak u}}$ by any element of $\widetilde{\mathrm{GL}}({\mathfrak u})$ does not change the closure of the image of the embedding $\phi_{\hat{U}}$ of $\mathrm{GL}(n)/\hat{U}$ into the Grassmannian $\mathrm{Grass}(n,\sym^{\mathbf{\weight}}_{\Delta}\CC^n)$ up to isomorphism. \end{prop} \begin{proof} This follows from the commutativity of the diagram \begin{diagram}[LaTeXeqno,labelstyle=\textstyle]\label{diagram} \mathrm{GL}(n)/\hat{U} & \rInto^{\phi_{\hat{U}}} & \mathrm{Grass}(n,\sym^{\mathbf{\weight}}_{\Delta}\CC^n) \\ \dTo_{\mathrm{conj(g)}} & & \dTo_{\mathrm{conj}(g)\circ (g_{11}\cdot g^{-1})}\\ \mathrm{GL}(n)/g^{-1}\hat{U} g & \rInto^{\phi_{g^{-1}\hat{U} g}} & \mathrm{Grass}(n,\sym^{\mathbf{\weight}}_{\Delta}\CC^n), \end{diagram} where \begin{enumerate} \item the left vertical $\mathrm{conj}(g)$ is the conjugation action sending the coset $\hat{U} h\in \mathrm{GL}(n)/\hat{U}$ to $(g^{-1}\hat{U} g)(g^{-1}hg)=g^{-1}\hat{U} hg \in \mathrm{GL}(n)/g^{-1}\hat{U} g$; \item the right vertical map is the composition of the multiplication by the scalar $g_{11}$ and the matrix $g^{-1}$ on ${\mathbb C }^n$, and conjugation with $g \in \widetilde{\mathrm{GL}}({\mathfrak u})$ on $\mathrm{Sym}^{\mathbf{\weight}}\CC^n$. \end{enumerate} \end{proof} \section{Singularities, jet differentials and curvilinear Hilbert schemes}\label{sec:jetdifferentials} In this section we will study an important example of a group of the form $\hat{U}$ and its projective embedding $\phi_{\hat{U}}:\mathrm{GL}(n)/\hat{U} \hookrightarrow \mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$ given by Theorem \ref{embed} whose image is contained in the affine open subset of the Grassmannian $\mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$ where the coordinate corresponding to $\wedge^n {\mathbb C }^n$ is nonzero. We will see that here the codimension-$2$ property does not hold. Nonetheless in the next section we will see that a modification of this embedding can be used to find an affine embedding of $\mathrm{SL}(n)/U \rtimes F_{(M)}$ (where $U \rtimes F_{(M)}$ is an extension of $U$ by a finite subgroup of $\mathrm{SL}(n)$) for which the boundary does have codimension at least two. The example we will study in this section is given by $\hat{U} = \mathbb{G}_n \leq \mathrm{GL}(n)$, where as in the introduction $\mathbb{G}_n$ is the group of polynomial reparametrisations of $n$-jets of holomorphic germs $({\mathbb C },0) \to ({\mathbb C },0)$. This group plays a central role in global singularity theory \cite{arnold} and in the recent history of hyperbolic varieties \cite{demailly,dmr,kobayashi, siu}. We will see that the compactification $\overline{\mathrm{GL}(n)/{\mathbb G }_n}$ constructed in $\Sigma$4 as the closure of an orbit of $\mathrm{GL}(n)$ with stabiliser $\mathbb{G}_n$ in a Grassmannian $\mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$ is isomorphic to the so-called curvilinear component of the punctual Hilbert scheme on ${\mathbb C }^n$ \cite{b2,bertin}. \subsection{Singularity theory in a nutshell \cite{arnold,b2,bsz,gaffney,mather,porteous,ronga}} Let $J_n(m,l)$ denote the space of $n$-jets of holomorphic map germs from ${\mathbb C }^m$ to ${\mathbb C }^l$ mapping the origin to the origin. This is a finite dimensional complex vector space, and there is a complex linear composition of jets \[J_n(m,l) \otimes J_n(l,p) \to J_n(m,p).\] Let $J_n^\mathrm{reg}(m,l)$ denote the open dense subset of $J_n(m,l)$ consisting of jets whose linear part is regular (that is, of maximal rank). Note that $$\mathbb{G}_n=J_n^{\mathrm{reg}}(1,1)$$ becomes a group under composition of jets, and it acts via reparametrisation on $J_n(1,n)$. If $z$ denotes the standard complex coordinate on ${\mathbb C }$, then elements of the vector space $J_n(1,1)$ can be identified with polynomials of the form $p(z)=\alpha_1z+\ldots +\alpha_nz^n$ with coefficients in ${\mathbb C }$, so $\{z,z^2,\ldots, z^n\}$ is a natural basis for $J_n(1,1)$ over ${\mathbb C }$. The composition of $p(z)$ with $q(z)=\beta_1z+\ldots +\beta_nz^n$ is \[(p \circ q)(z)=(\alpha_1\beta_1)z+(\alpha_2\beta_1+\alpha_1^2\beta_2)z^2+\ldots \] which corresponds (with respect to the basis $\{z,z^2,\ldots, z^n\}$) to multiplication on the right by the matrix \begin{equation}\label{bbg} \left( \begin{array}{ccccc} \alpha_1 & \alpha_2 & \alpha_3 & \ldots & \alpha_n\\ 0 & \alpha_1^2 & 2\alpha_1\alpha_2 & \ldots & 2\alpha_1\alpha_{n-1}+\ldots \\ 0 & 0 & \alpha_1^3 & \ldots & 3\alpha_1^2\alpha_ {n-2}+ \ldots \\ 0 & 0 & 0 & \ldots & \cdot \\ \cdot & \cdot & \cdot & \ldots & \alpha_1^n \end{array} \right) \end{equation} where the polynomial in the $(i,j)$ entry is \[p_{i,j}({\alpha}_1, \ldots, \alpha_n)=\sum_{\ell_1+\ell_2+\ldots +\ell_i=j}\alpha_{\ell_1}\alpha_{\ell_2} \ldots \alpha_{\ell_i}.\] Thus the subgroup $\mathbb{G}_n$ of $\mathrm{GL}(n)$ is an extension by ${\mathbb C }^*$ of its unipotent radical $\mathbb{U}_n$, and both $\mathbb{G}_n$ and $\mathbb{U}_n$ are generated along the first row and have the form (\ref{presentation}) with weights $1,2,\ldots,n$. We can think of the quotient $J^n(1,n)/\mathbb{G}_n$ as the moduli space of $n$-jets of entire holomorphic curves in ${\mathbb C }^n$. Global singularity theory studies global and local behavior of singularities of holomorphic maps between complex manifolds; \cite{arnold} is a standard reference. For a holomorphic map $f:M \to N$ with $f(p)=q \in N$ the local algebra is $A(f)=\mathfrak{m}_p /f^*\mathfrak{m}_q$; if $\mathfrak{m}_p$ is a finite $\mathfrak{m}_q$-module, then $p$ is an isolated singularity. For a complex nilpotent algebra $A$ with $\dim_{{\mathbb C }} A=n$ we define \[\Sigma_A(m,l)=\left\{f \in J_n(m,l):A(f)\simeq A \right\}\] to be the subset of $J_n(m,l)$ consisting of germs with local algebra at the origin isomorphic to $A$; these are known as the $A$-singularity germs. There is a natural hierarchy of singularities where for two algebras $A$ and $A'$ of the same dimension $n$ we have \[A>A' \text{ if } \Sigma_{A}(m,l) \subset \overline{\Sigma_{A'}(m,l)} \text{ for } l>>m.\] When $A_n=z{\mathbb C }[z]/z^{n+1}$ is the nilpotent algebra generated by one element, the corresponding singularities are the so-called $A_n$-singularities (also known as Morin singularities or curvilinear singularities). These vanish to order $n$ in some direction, giving us the geometric description \[\Sigma_{A_n}(m,l)=\{\psi\in J_n(m,l): \exists \gamma \in J_n(1,m) \text{ such that } \gamma \circ \psi=0\}.\] If $\psi\in J_n(m,l)$ and a test curve $\gamma_0 \in J_n(1,m)$ exists with $ \gamma_0 \circ \psi=0 $, then there is a whole family of such test curves. Indeed, for any $\beta \in J_n^\mathrm{reg}(1,1)$, the curve $\beta \circ \gamma_0$ is also a test curve, and in fact if $\psi \in J_n^{\mathrm{reg}}(m,l)$ then we get all test curves $\gamma \in J_n(1,m)$ with $ \gamma \circ \psi=0 $ in this way. This description of the curvilinear jets using the so-called \lq test-curve model' goes back to Porteous, Ronga and Gaffney \cite{gaffney,porteous,ronga}. This means that the regular part of $\Sigma_{A_n}(m,l)$ fibres over the quotient $J_n^\mathrm{reg}(1,m)/\mathbb{G}_n$, which can be thought of as representing moduli of $n$-jets of holomorphic germs in ${\mathbb C }^m$. We can identify $J_n(1,m)$ with the set $M_{m \times n}({\mathbb C })$ of $m\times n$ complex matrices by putting the $i$th derivative of $\gamma \in J_n(1,m)$ into the $i$th column of the corresponding matrix, and then $J_n^\mathrm{reg}(1,m)$ consists of the matrices in $M_{m\times n}({\mathbb C })$ with nonzero first column. Therefore when $m=n$ the quotient $J^{\rm{reg}}_n(1,n)/\mathbb{G}_n$ contains $\mathrm{GL}(n)/\mathbb{G}_n$ as a dense open subset. In \cite{bsz} the first author and Szenes use this model of the Morin singularities and the machinery of equivariant localization to compute some useful invariants of $A_n$ singularities: their Thom polynomials. These ideas were later generalised in \cite{kazarian,rf}. The hierarchy of singularities is only partially understood, but there are well-known singularity classes in the closure of the $A_n$-singularities (for details see \cite{arnold,rimanyi}). In particular, for $n=4$, the so called $I_{a,b}$ singularities with $a+b=4$ are defined by the algebra \[A_{I_{a,b}}=(x,y)/(xy,x^a+y^b)\] and it is well known (see \cite{rimanyi,rf}) that \[\Sigma_{I_{2,2}}(m,l) \subset \overline{\Sigma_{A_4}(m,l)}\] has codimension $1$ in $\overline{\Sigma_{A_4}(m,l)}$. But as we have just seen, a dense open subset of $\Sigma_{A_4}(4,l)$ fibres over $\mathrm{GL}(4)/\mathbb{G}_4$, and the latter is embedded via $\phi_4$ (see Corollary \ref{phi}) into $\mathrm{Grass}_4(\mathrm{Sym}^{\mathbf{\weight}}\CC^n)$ where $\mathbf{\omega}=(1,2,3,4)$ as at \eqref{bbg}. When $l=1$, then in fact \[\overline{\Sigma_{A_4}(4,1)}=\overline{\phi_4(\mathrm{GL}(4)} \subseteq \mathrm{Grass}_4(\mathrm{Sym}^{\mathbf{\weight}}\CC^n),\] because the fibres are trivial. So it follows that $\Sigma_{I_{2,2}}(4,1)$ lies in the boundary of $\overline{\phi_4(\mathrm{GL}(4)}$ and has codimension one. In fact \[\mathfrak{p}_{2,2}=\lim_{t \to 0}\left(\begin{array}{cccc}t & t^{-2} & -t^{-5} & 0\\ 0 & 1 & -2t^{-3} & 0\\ 0 & 0 & t^{-1} & 0\\ 0 & 0 & 0 & 1 \end{array}\right)\cdot \mathfrak{p}_n=e_1 \wedge e_2 \wedge (e_3+e_1^2) \wedge (e_4+e_1e_3+e_2^2+e_1^3)\] sits in $\Sigma_{I_{2,2}}(4,1)$ and its orbit has codimension $1$ in $\overline{\phi(\mathrm{GL}(4)}$. Indeed it can be checked by direct computation that the stabiliser of $\mathfrak{p}_{2,2}$ is \[ \left\{ \left(\begin{array}{cccc}t & a & b & c\\ 0 & t^{3/2} & -2t^{1/2}a & d\\ 0 & 0 & t^{2} & tb+a^2\\ 0 & 0 & 0 & t^3 \end{array}\right): t \in {\mathbb C }^*, a,b,c,d \in {\mathbb C } \right\} \] which has dimension $5$, whereas the stabiliser ${\mathbb G }_4$ of $\mathfrak{p}_4$ in $\mathrm{GL}(4)$ has dimension $4$. \subsection{Invariant jet differentials and the Demailly bundle} Jet differentials have played a central role in the study of hyperbolic varieties. Their contribution can be traced back to the work of Bloch \cite{bloch}, Cartan \cite{cartan}, Ahlfors \cite{ahlfors}, Green and Griffiths \cite{gg}, Siu \cite{siu}, whose ideas were extended in the seminal paper of Demailly \cite{demailly}, and recently used by Diverio, Merker and Rousseau \cite{dmr} and the first author in \cite{b} to prove the Green Griffiths conjecture for generic projective hypersurfaces of high order; see also the survey papers \cite{kobayashi,demailly,dr} for more details. Let \[f:{\mathbb C } \to X,\ \ t\to f(t)=(f_1(t),f_2(t), \ldots ,f_d(t))\] be a curve written in local holomorphic coordinates $(z_1,\ldots ,z_d)$ on a complex manifold $X$, where $d=\dim(X)$. Let $J_n(X)$ be the $n$-jet bundle over $X$ of holomorphic curves, whose fibre $(J_n(X))_x$ at $x \in X$ is the space of $n$-jets of germs at $x$ of holomorphic curves in $X$. This fibre can be identified with $J_n(1,d)$. The group of reparametrisations $\mathbb{G}_n=J_n^{\mathrm{reg}}(1,1)$ acts fibrewise on $J_n(X)$, and the action is linearised as at \eqref{bbg}. For $\lambda \in {\mathbb C }^*$ we have \[(\lambda \cdot f)(t)=f(\lambda \cdot t),\text{ so } \lambda \cdot (f',f'',\ldots ,f^{(k)})=(\lambda f',\lambda^2 f'',\ldots ,\lambda^k f^{(k)}).\] Polynomial functions on $J_n(X)$ correspond to algebraic differential operators called jet differentials; these have the form \[Q(f',f'',\ldots ,f^{(k)})=\sum_{\alpha_i \in \mathbb{N}^n}a_{\alpha_1,\alpha_2,\ldots \alpha_k}(f(t))(f'(t)^{\alpha_1}f''(t)^{\alpha_2}\cdots f^{(n)}(t)^{\alpha_n}),\] where $a_{\alpha_1,\alpha_2,\ldots \alpha_n}(z)$ are holomorphic coefficients on $X$ and $t \mapsto f(t)$ is the germ of a holomorphic curve in $X$. Here $Q$ is homogeneous of weighted degree $m$ under the ${\mathbb C }^*$ action if and only if \[Q(\lambda f',\lambda^2 f'',\ldots ,\lambda^k f^{(n)})=\lambda^m Q(f',f'',\ldots ,f^{(n)})\] for every $\lambda \in {\mathbb C }$. \begin{defn} (i) (Green-Griffiths \cite{gg}) Let $E_{n,m}^{GG}$ denote the sheaf on $X$ of jet differentials of order $n$ and weighted degree $m$. (ii) (Demailly, \cite{demailly}) The bundle of invariant jet differentials of order $n$ and weighted degree $m$ is the subbundle $E_{n,m}$ of $E_{n,m}^{GG}$ whose elements are invariant under the action of the unipotent radical $\mathbb{U}_n$ of the reparametrisation group $\mathbb{G}_n$ and transform under the action of $\mathbb{G}_n$ as \[Q((f\circ \phi)',(f \circ \phi)'',\ldots ,(f \circ \phi)^{(n)})=\phi'(0)^mQ(f',f'',\ldots, f^{(n)})\] for $\phi \in \mathbb{G}_n$. \end{defn} Thus the fibres of the Demailly bundle $\bigoplus_{m \geqslant 0} E_{n,m}$ are isomorphic to ${\mathbb C }[J_n(1,d)]^{\mathbb{U}_n}$, where $\mathbb{U}_n$ is the unipotent radical of $\mathbb{G}_n$. Demailly in \cite{demailly} conjectured that this algebra of invariant jet differentials is finitely generated. Rousseau (\cite{rousseau}) and Merker (\cite{Merker1,Merker2}) showed that when both $n$ and $\dim X$ are small then this conjecture is true, and in \cite{Merker2} Merker provided an algorithm which produces finite sets of generators when they exist for any $\dim X$ and $n$. In \cite{BK} the authors put forward a proof that ${\mathbb U }_n$ is a Grosshans subgroup of $\mathrm{SL}(n)$, with the Demailly conjecture as an immediate corollary, but we later discovered a gap in that proof. In this paper we are studying quotient constructions for linear actions such as that of ${\mathbb U }_n$ on a fibre of the Demailly bundle $\bigoplus_{m \geqslant 0} E_{n,m}$ from a more geometric point of view; however it will follow from this point of view (see Remark \ref{Demconj}) that the subalgebra of ${\mathbb C }[J_n(1,d)]^{\mathbb{U}_n}$ spanned by the jet differentials which are weight vectors with non-positive weight for the action of ${\mathbb C }^* \leq \tilde{{\mathbb G }}_n$ twisted by a well adapted rational character is finitely generated (cf. \cite{Merker1,Merker2}). \subsection{Curvilinear Hilbert schemes}\label{subsec:hilb} In \cite{b3} the closure $\overline{J_n(1,d)/\mathbb{G}_n}$ of ${J_n(1,d)/\mathbb{G}_n}$ embedded in $\mathrm{Grass}_n(\oplus_{i=1}^n \,{\rm Sym} \,^i{\mathbb C }^d)$ is identified with the curvilinear component of the $n+1$-point punctual Hilbert scheme on ${\mathbb C }^d$; this geometric component of the punctual Hilbert scheme on ${\mathbb C }^d$ is thus the compactification of a non-reductive quotient. Hilbert schemes of points on surfaces form a central object of geometry and representation theory and have a rich literature (see for example \cite{nakajima,bertin}). Recently many interesting connections between Hilbert schemes of points on planar curve singularities and the topology of their links have been discovered \cite{shende,oblomkovshende,ors,maulik}. However, much less is known about Hilbert schemes or punctual Hilbert schemes on higher dimensional manifolds. As above let $\mathbb{G}_n=\jetreg 11$ denote the group of $n$-jets of reparametrisation germs of ${\mathbb C }$, which acts on the space $\jetreg 1d$ of $n$-jets of germs of curves $f:({\mathbb C },0) \to ({\mathbb C }^d,0)$ with nonzero linear part. As in $\Sigma$4 we have a map \[\phi: \jetreg 1d \to \mathrm{Grass}_n(\oplus_{i=1}^n \,{\rm Sym} \,^i{\mathbb C }^d)\] \[(v_1,\ldots, v_n) \mapsto [v_1\wedge (v_2 + v_1^2)\wedge \ldots \wedge (\sum_{a_1+a_2+\ldots +a_i=n}v_{a_1}v_{a_2} \ldots v_{a_i})]\] where $v_i \in {\mathbb C }^d$ is the degree $i$ part of the germ in $\jetreg 1d$, so that $v_1\neq 0$. This map is invariant under the action of $\mathbb{G}_n = \jetreg 11$ on the left, and gives us an embedding \[\jetreg 1d/\mathbb{G}_n \hookrightarrow \mathrm{Grass}_n(\oplus_{i=1}^n \,{\rm Sym} \,^i{\mathbb C }^d)).\] Let $X_{n,d} = \overline{\jetreg 1d/\mathbb{G}_n}$ denote the closure of the image of this embedding. In \cite{b3} it is proved that $X_{n,d}$ is the curvilinear component of the punctual Hilbert scheme of $n+1$ points on ${\mathbb C }^d$. This component is defined as follows. Let $({\mathbb C }^d)^{[n]}$ denote the Hilbert scheme of $n$ points on ${\mathbb C }^d$; that is, the set of zero-dimensional subschemes of ${\mathbb C }^d$ of length $n$. The punctual Hilbert scheme $({\mathbb C }^d)^{[n]}_0$ consists of those subschemes which are supported at the origin in ${\mathbb C }^d$. The components of the punctual Hilbert scheme are not known for $d\ge 3$ but there is a distinguished component containing all curvilinear subschemes. \begin{defn} A subscheme $\xi \in ({\mathbb C }^d)^{[n]}_0$ is called curvilinear if $\xi$ is contained in some smooth curve $C\subset {\mathbb C }^d$. Equivalently, one might say that $\mathcal{O}_\xi$ is isomorphic to the ${\mathbb C }$-algebra ${\mathbb C }[z]/z^{n}$. The punctual curvilinear locus is the set of curvilinear subschemes supported at the origin in ${\mathbb C }^d$ and its closure $\mathcal{C}^{[n]}_d$ is the (punctual) curvilinear component of $({\mathbb C }^d)^{[n]}_0$. \end{defn} Let $\mathfrak{m}=(x_1,\ldots, x_d)\subset \mathcal{O}_{{\mathbb C }^d,0}$ denote the maximal ideal of the local ring at the origin. Then \[\mathcal{C}^{[n]}_d=\overline{\{I \subset \mathfrak{m}:\mathfrak{m}/I \simeq t{\mathbb C }[t]/t^{n}\}}.\] Note that $ \,{\rm Sym} \,^{\le n} {\mathbb C }^d=\mathfrak{m}/\mathfrak{m}^{n+1}=\oplus_{i=1}^n \,{\rm Sym} \,^i{\mathbb C }^d$ consists of function-germs of degree $\le n$, and the punctual Hilbert scheme sits naturally in its Grassmannian \[\rho:({\mathbb C }^d)^{[n+1]}_0 \hookrightarrow \mathrm{Grass}(n, \,{\rm Sym} \,^{\le n} {\mathbb C }^d)\] \[I \mapsto \mathfrak{m}/I. \] The idea of \cite{b3} to describe the curvilinear component is the observation that curvilinear subschemes have test curves; that is, map germs $\gamma \in J_n(1,d)$ on which they vanish up to order $n$, so that $\gamma({\mathbb C }) \subseteq \mathrm{Spec}(\mathfrak{m}/I)$. Such a test curve is unique up to polynomial reparametrisation of $({\mathbb C },0)$. Therefore the image of $\phi$ is the same as the image of $\rho$ and their closures coincide. \begin{prop} For $d,n \in \mathbb{Z}^{>0}$ we have $\mathcal{C}^{[n+1]}_d=X_{n,d}$. \end{prop} When $d=2$ the curvilinear component $\mathcal{C}^{[n+1]}_2$ is dense in $({\mathbb C }^2)^{[n+1]}_0$, and therefore the full punctual Hilbert scheme is equal to the closure of the image of $\phi$. \begin{corollary} $({\mathbb C }^2)^{[n+1]}_0=X_{n,2}$ for any positive integer $n$. \end{corollary} This description of the curvilinear component becomes particularly useful when $n \le d$ so that the number of points is not more than the dimension $d$ plus $1$. In this case, the curvilinear component $\mathcal{C}^{[n+1]}_d$ is the closure of a $\mathrm{GL}(n)$-orbit in the Grassmannian $\mathrm{Grass}_n( \,{\rm Sym} \,^{\le n}{\mathbb C }^n)$. In fact, for any fixed basis $\{e_1,\ldots ,e_d\}$ of ${\mathbb C }^d$, we have $X_{n,d}=\overline{\mathrm{GL}(n) \cdot \mathbf{e}_{n,d}}$ where \[\mathbf{e}_{n,d}=e_1 \wedge (e_2\oplus e_1^2) \wedge \ldots \wedge (\sum_{\substack{a_1+\ldots+a_l=n\\ l\le d}} e_{a_1}\ldots e_{a_l}).\] This follows when $n \le d$ from the fact that $\phi$ is $\mathrm{GL}(n)$-equivariant, but for $n>d$ it cannot be true as the dimension of the quotient is larger than the dimension of $\mathrm{GL}(n)$. In particular, when $d=n$ we have $\mathrm{GL}(n) \subset J_n^\mathrm{reg}(1,n)$, and an embedding \[\mathrm{GL}(n)/\mathbb{G}_n \subseteq \mathrm{Grass}_n( \,{\rm Sym} \,^{\le n}{\mathbb C }^n)\] and the closure of the image $X_{n,n}=\mathcal{C}^{[n+1]}_n$ is the curvilinear component of the punctual Hilbert scheme of $n+1$ points on ${\mathbb C }^n$. In \cite{b3} this parametrisation of the curvilinear Hilbert scheme is used to develop an iterated residue formula for cohomological intersection numbers of tautological bundles over the curvilinear component. \section{Proof of the theorems} \subsection{Boundary components of $\mathrm{GL}(n)/\hat{U}$ in $ {\mathbb P } (\wsymk n)$}\label{affineboundary} Let us now return to the situation in $\Sigma$4 where $\hat{U}$ and $U$ are subgroups of $\mathrm{GL}(n)$ of the form described at (\ref{presentation}). In \Sigma \ref{sec:construction} we embedded $\mathrm{GL}(n)/\hat{U}$ in the Grassmannian $$\mathrm{Grass}_n(\mathrm{Sym}^{\mathbf{\weight}}\CC^n) \subseteq {\mathbb P } (\wsymk n)$$ as the $\mathrm{GL}(n)$ orbit of \[\mathfrak{p}_n=\phi_n(e_1,\ldots, e_n)=[e_1\wedge (e_2 + e_1^{\omega_2})\wedge \ldots \wedge (\sum_{i=1}^n p_{in}(e_1,\ldots, e_n))]\in {\mathbb P } [\wsymk n],\] and observed at Remark \ref{afemb} that the image of this embedding lies in the open affine subset defined by the non-vanishing of the coordinate in $ {\mathbb P } (\wsymk n)$ corresponding to the one-dimensional summand $\wedge^n {\mathbb C }^n$ of $\wsymk n$ spanned by $e_1 \wedge \cdots \wedge e_n$. In $\Sigma$5 we saw that there exist examples where the image has codimension-one boundary components which meet this affine open subset, and therefore the Grosshans principle is not applicable in this situation. In this section we study first the boundary of the orbit $ \mathrm{GL}(n)\mathfrak{p}_n $ in the affine space $\mathcal{W}=\wsymk n$. The stabiliser of $\mathfrak{p}_n$ in $\mathrm{GL}(n)$ is $U$. Let $\mathcal{W}_{v_1} $ be the linear subspace $$\mathcal{W}_{v_1} = \bigoplus_{(k_1,k_2,\ldots,k_n) \neq (1,1,\ldots,1)} \wedge^{k_1} ({\mathbb C }^n) \otimes \wedge^{k_2} (\mathrm{Sym}^{\omega_2} {\mathbb C }^n) \otimes \cdots \otimes \wedge^{k_n} (\mathrm{Sym}^{\omega_n} {\mathbb C }^n) $$ of $ \mathcal{W}$ where the coefficients corresponding to $v_1 \wedge v_1^{\omega_2} \wedge \ldots \wedge v_1^{\omega_n}$ are zero; that is, if $\pi^{\wedge}:\mathcal{W} \to {\mathbb C }^n \wedge \,{\rm Sym} \,^{\omega_2} {\mathbb C }^n \wedge \ldots \wedge \,{\rm Sym} \,^{\omega_n} {\mathbb C }^n$ denotes the projection onto the corresponding summand of $ \mathcal{W}$ then \[\mathcal{W}_{v_1}=\{w\in \mathcal{W}: \pi^{\wedge}(w)=0\} \subset \mathcal{W}.\] Similarly, let \[\mathcal{W}_{\det}=\{w\in \mathcal{W}: \pi^{\det}(w)=0\} \subset \mathcal{W}\] denote the kernel of the coordinate corresponding to $v_1\wedge \ldots \wedge v_n$, or equivalently the projection $\pi^{\det}:\mathcal{W} \to \wedge^n {\mathbb C }^n$. \begin{prop}\label{boundary} The boundary of the orbit $\mathrm{GL}(n)(\mathfrak{p}_n)$ in $\mathcal{W}$ is contained in the union of the subspaces $\mathcal{W}_{v_1}$ and $\mathcal{W}_{\det}$: \[\overline{\mathrm{GL}(n)(\mathfrak{p}_n)} \setminus \mathrm{GL}(n)(\mathfrak{p}_n )\subset \mathcal{W}_{v_1} \cup \mathcal{W}_{\det}\] \end{prop} \begin{proof} Let $B_n \subset \mathrm{GL}(n)$ denote the standard upper triangular Borel subgroup of $\mathrm{GL}(n)$ which stabilises the +filtration ${\mathbb C } e_1 \subset {\mathbb C } e_1 \oplus {\mathbb C } e_2 \subset \cdots \subset {\mathbb C }^n$. Since $\mathrm{GL}(n)/B_n$ is projective we have \[\overline{\mathrm{GL}(n)\cdot (\mathfrak{p}_n \oplus e_1^r)}= \mathrm{GL}(n) \overline{B_n\cdot (\mathfrak{p}_n \oplus e_1^r)} .\] Let $$w=\lim_{m \to \infty} b^{(m)}(\mathfrak{p}_n \oplus e_1^r) \in \overline{B_n(\mathfrak{p}_n \oplus e_1^r)} \subseteq \mathcal{W}$$ be a limit point where \begin{equation}\label{bmform} b^{(m)}=\left(\begin{array}{cccc}b^{(m)}_{11} & b_{12}^{(m)} & \ldots & b_{1n}^{(m)} \\ 0 & b^{(m)}_{22} & \ldots & b^{(m)}_{2n} \\ & & \ddots & \\ 0 & 0 & \ldots & b^{(m)}_{nn}\end{array}\right)\in B_{n} \subset \mathrm{GL}(n) \end{equation} Now expanding the wedge product in the definition of $\mathfrak{p}_n$ we get \[b^{(m)}(\mathfrak{p}_n)=(\det(b^{(m)}) e_1 \wedge \ldots \wedge e_n+\ldots +(b^{(m)}_{11})^{1+\omega_2+\ldots +\omega_n}e_1 \wedge e_1^{\omega_2} \wedge \ldots \wedge e_1^{\omega_n})\] so by considering the coefficient of $e_1 \wedge \ldots \wedge e_n$ we see that the determinant $\det(b^{(m)})$ tends to a limit in ${\mathbb C }$ as $m \to \infty$. If this limit is zero then the limit point $w$ sits in $\mathcal{W}_{\det}$, so we will focus on the other case when $\lim_{m\to \infty}\det(b^{(m)}) \in {\mathbb C } \setminus \{0\}$. Then we have to show that if $w$ is a boundary point then $w\in \mathcal{W}_{v_1}$, that is, $\lim_{m\to \infty} b_{11}^{(m)}=0$. We show indirectly that $b^{(\infty)}_{11}=\lim_{m\to \infty} b^{(m)}_{11}\in {\mathbb C } \setminus \{0\}$ implies that $w \in B_n (\mathfrak{p}_n \oplus e_1)$ sits in the orbit. Here \begin{multline} b^{(m)}\mathfrak{p}_n = b^{(m)}_{11}e_1 \wedge (b^{(m)}_{22}e_2+(b^{(m)}_{11})^{\omega_2}e_1^{\omega_2}) \wedge \ldots \wedge (b^{(m)}_{nn}e_n+b^{(m)}_{n-1n}e_{n-1}+\ldots +b^{(m)}_{1n}e_1+ \nonumber \\ +\sum_{s=2}^{n-1} p_{sn}( b^{(m)}_{11}e_{1}, b^{(m)}_{22}e_{2}+b^{(m)}_{12}e_{1}, \ldots ,b^{(m)}_{nn} e_n+ \ldots +b^{(m)}_{1n}e_{1})+(b^{(m)}_{11})^{\omega_i}e_1^{\omega_i} ). \end{multline} Now look at the coefficient of \[e_1 \wedge e_1^{\omega_2} \wedge \ldots \wedge e_1^{\omega_{i-1}} \wedge e_j \wedge e_1^{\omega_{i+1}} \wedge \ldots \wedge e_1^{\omega_n}\] in $b^{(m)}(\mathfrak{p}_n)$ when $1 \leq j \leq i \leq n$; we see that \[(b^{(m)}_{11})^{1+\omega_2+\ldots +\omega_{i-1} + \omega_{i+1} +\ldots +\omega_n}b^{(m)}_{ji} \] tends to a limit in ${\mathbb C }$ as $m \to \infty$, and so since $b^{(\infty)}_{11} \neq 0$ \[b^{(m)}_{ji} \to b^{(\infty)}_{ji} \in {\mathbb C }.\] Also \[\lim_{m\to \infty} \det (b^{(m)})=b^{(\infty)}_{11} b^{(\infty)}_{22} \cdots b^{(\infty)}_{nn} \in {\mathbb C } \setminus \{0\},\] so $b^{(m)} \to b^{(\infty)} \in \mathrm{GL}(n)$. Therefore $w = b^{(\infty)}(\mathfrak{p}_n \oplus e_1)$ lies in the orbit $\mathrm{GL}(n)(\mathfrak{p}_{n} \oplus e_1)$ as required. \end{proof} \begin{corollary} \label{corbound} The boundary of the orbit $\mathrm{GL}(n)[\mathfrak{p}_n]$ in $ {\mathbb P } (\mathcal{W})$ is contained in the union of the subspaces $ {\mathbb P } (\mathcal{W}_{v_1})$ and $ {\mathbb P } (\mathcal{W}_{\det})$. \end{corollary} \begin{proof} By rescaling using elements of ${\mathbb C }^*=\hat{U}/U$ we can assume that \[\lim_{m\to \infty} b^{(m)}[\mathfrak{p}_n]=[\lim_{m\to \infty} b^{(m)}\mathfrak{p}_n].\] Proposition \ref{boundary} then gives us the statement. \end{proof} \subsection{Well-adapted characters} Let $X$ be a nonsingular complex projective variety on which $\hat{U}$ acts linearly with respect to a very ample line bundle $L$ inducing a $\hat{U}$-equivariant embedding of $X$ in $ {\mathbb P } ^N$. Let $\mathrm{GL}(n) \times_{\hat{U}} X$ denote the quotient of $\mathrm{GL}(n) \times X$ by the free action of $\hat{U}$ defined by $\hat{u}(g,x)=(g \hat{u}^{-1}, \hat{u}x)$ for $\hat{u} \in \hat{U}$, which is a quasi-projective variety by \cite{PopVin} Theorem 4.19. Then there is an induced $\mathrm{GL}(n)$-action on $\mathrm{GL}(n) \times_{\hat{U}} X$ given by left multiplication of $\mathrm{GL}(n)$ on itself. In cases where the action of $\hat{U}$ on $X$ extends to an action of $\mathrm{GL}(n)$ there is an isomorphism of $\mathrm{GL}(n)$-varieties \begin{equation} \label{basic} \mathrm{GL}(n) \times_{\hat{U}} X \cong (\mathrm{GL}(n)/\hat{U}) \times X \end{equation} given by $ [g,x] \mapsto (g\hat{U}, gx). $ In this case the linearisation $L$ on $X$ extends to a very ample $\mathrm{GL}(n)$-linearisation $L^{(p,q)}$ on $\mathrm{GL}(n) \times_{\hat{U}} X$ and its closure $\overline{\mathrm{GL}(n) \times_{\hat{U}} X}$ using the inclusions \[ \mathrm{GL}(n) \times_{\hat{U}} X \hookrightarrow \mathrm{GL}(n) \times_{\hat{U}} \mathbb{P}^N \cong (\mathrm{GL}(n)/{\hat{U}}) \times \mathbb{P}^N \hookrightarrow {\mathbb P } (\wsymk n) \times {\mathbb P } ^N\] and the very ample line bundle $\mathcal{O}_{ {\mathbb P } (\wsymk n)}(p) \otimes \mathcal{O}_{ {\mathbb P } ^N}(q)$. Here the $\mathrm{GL}(n)$-invariants on $\mathrm{GL}(n) \times_{\hat{U}} X$ are given by \begin{equation} \label{name} \bigoplus_{m \geq 0} H^0(\mathrm{GL}(n) \times_{\hat{U}} X, L^{\otimes pm})^{\mathrm{GL}(n)} \cong \bigoplus_{m \geq 0} H^0(X, L^{\otimes pm})^{\hat{U}} = {\hat{\mathcal{O}}}_{L^{\otimes p}}(X)^{\hat{U}}.\end{equation} Note that the normaliser $N_{\mathrm{GL}(n)}(\hat{U})$ of $\hat{U}$ in $\mathrm{GL}(n)$ acts on the right on $\mathrm{GL}(n) \times_{\hat{U}} X$ via $$n[g,x] = [gn,n^{-1}x].$$ The central one-parameter subgroup $Z\mathrm{GL}(n)$ of $\mathrm{GL}(n)$ normalises $\hat{U}$, and since $gn=ng$ for every $n \in Z\mathrm{GL}(n)$ and $g \in \mathrm{GL}(n)$, the right action of $Z\mathrm{GL}(n)$ on $\mathrm{GL}(n) \times_{\hat{U}} X$ extends to a linear action on $ {\mathbb P } (\wsymk n) \times {\mathbb P } ^N$ given by $n(y,x) = (ny,x)$. Note also that the induced right action of $\hat{U}$ on $\mathrm{GL}(n) \times_{\hat{U}} X$ is trivial and its closure in $ {\mathbb P } (\wsymk n) \times {\mathbb P } ^N$ is trivial, and that the image of $Z\mathrm{GL}(n)$ in $N_{\mathrm{GL}(n)}(\hat{U})/\hat{U}$ is the same as the image of the one-parameter subgroup ${\mathbb C }^*$ of $\tilde{U}$. However the induced right action of $\hat{U}$ on the line bundle $\mathcal{O}_{ {\mathbb P } (\wsymk n)}(p) \otimes \mathcal{O}_{ {\mathbb P } ^N}(q)$ is not trivial; it is multiplication by $(\omega_1 + \omega_2 + \cdots \omega_n)$ times the character $\hat{U} \to {\mathbb C }^*$ with kernel $U$. Thus the weights $w$ and $\tilde{w}$ of the right actions of $Z\mathrm{GL}(n)$ and the one-parameter subgroup ${\mathbb C }^* \leq \tilde{U}$ are related by $$\tilde{w} = (1 + \omega_2 + \cdots + \omega_n)(w - n)$$ when we choose the basis vector $\mbox{diag}(1,1,\ldots,1)$ for $\mbox{Lie}Z\mathrm{GL}(n)$ and the basis vector $$\mbox{diag}( 1 + \omega_2 + \cdots + \omega_n - n, 1 + \omega_2 + \cdots + \omega_n - n\omega_2, \ldots , 1 + \omega_2 + \cdots + \omega_n - n\omega_n)$$ for the Lie algebra of ${\mathbb C }^* \leq \tilde{U}$. When $\wsymk n$ is identified with the sum of summands $$ \wedge^{k_1} ({\mathbb C }^n) \otimes \wedge^{k_2} (\mathrm{Sym}^{\omega_2} {\mathbb C }^n) \otimes \cdots \otimes \wedge^{k_n} (\mathrm{Sym}^{\omega_n} {\mathbb C }^n) $$ over non-negative integers $k_1,\ldots, k_n$ such that $k_1 + \cdots + k_n = n$, the weight of the $Z\mathrm{GL}(n)$ action on the summand $ \wedge^{k_1} ({\mathbb C }^n) \otimes \wedge^{k_2} (\mathrm{Sym}^{\omega_2} {\mathbb C }^n) \otimes \cdots \otimes \wedge^{k_n} (\mathrm{Sym}^{\omega_n} {\mathbb C }^n) $ is $$k_1\omega_1 + \ldots + k_n \omega_n.$$ Thus the weight of the right action of the one-parameter subgroup ${\mathbb C }^* \leq \tilde{U}$ on this summand is $$(k_1\omega_1 + \ldots + k_n \omega_n - n)(\omega_1 + \omega_2 + \cdots + \omega_n).$$ The weights for the $Z\mathrm{GL}(n)$ action on $\overline{\mathrm{GL}(n)/\hat{U}}$ satisfy $ k_j + k_{j+1} + \cdots + k_n \leq n-j+1$ for $1 \leq j \leq n$ and therefore \[\omega_{\min}=n\omega_1 \le k_1\omega_1 + \ldots + k_n \omega_n \le \omega_{\max}=\omega_1+\ldots +\omega_n\] where the minimum weight $\omega_{\min}=n\omega_1=n$ is taken on the summand spanned by $e_1\wedge \ldots \wedge e_n$ whereas the maximum weight $\omega_{\max}=\omega_1+\ldots +\omega_n$ is taken on the summand spanned by $v_1^{\omega _1} \wedge \ldots \wedge v_1^{\omega_n}$. In fact, since since $1 = \omega_1 < \omega_2 \leq \omega_3 \leq \ldots \leq \omega_n$ holds, this is the only summand where the value $\omega_{\max}$ is taken. Let $\omega_{\max-1}<\omega_{\max}$ denote the second highest weight for the $Z\mathrm{GL}(n)$ which must have the form $$\omega_{\max - 1} = \omega_1 + \omega_2 + \cdots + 2\omega_{i}+\omega_{i+2}+\cdots + \omega_n= \omega_{\max} - \omega_{i+1} + \omega_i$$ for some $1\le i \le n-1$. Let $\chi: \hat{U} \to {\mathbb C }^*$ be a character of $\hat{U}$. We will want to choose $p$ and $\chi$ such that $$ p (\omega_{\max} - n) (\omega_1 + \cdots + \omega_n) - \chi > 0 > p (\omega_{\max - 1} - n) (\omega_1 + \cdots + \omega_n) - \chi $$ or equivalently \begin{equation} \label{welladapted} \omega_{\max-1}-n < \frac{\chi}{p(\omega_1 + \cdots \omega_n)} < \omega_1 + \cdots + \omega_n - n. \end{equation} We call rational characters $\chi/p$ with this property {\it well-adapted}. The linearisation of the action of $\hat{U}$ on $X$ with respect to $L^{\otimes p}$ can be twisted by $\chi$ so that the weights $\rho_j$ of $Z\mathrm{GL}(n)$ are replaced with $\rho_j p-\chi$ for $j=0,\ldots, s$. Let $L_\chi^{\otimes p}$ denote this twisted linearisation. \subsection{Hilbert-Mumford for the left action of $\mathrm{SL}(n)$} Recall that $\mathrm{SL}(n) = \mathrm{SU}(n) B_{\mathrm{SL}(n)}$ where $B_{\mathrm{SL}(n)}$ is the standard (upper triangular) Borel subgroup of $\mathrm{SL}(n)$ and $\mathrm{SU}(n)$ is compact, so that $$\overline{\mathrm{GL}(n)/\hat{U}} = \overline{\mathrm{SL}(n)[\mathfrak{p}_n] } = \mathrm{SU}(n) ( \overline{B_{\mathrm{SL}(n)}[\mathfrak{p}_n]} ) .$$ Moreover $ {\mathbb P } (\mathcal{W}_{\det})$ is $\mathrm{SL}(n)$-invariant, so $$ {\mathbb P } (\mathcal{W}_{\det}) \cap \overline{\mathrm{GL}(n)/\hat{U}} = \mathrm{SU}(n) ( {\mathbb P } (\mathcal{W}_{\det}) \cap \overline{B_{\mathrm{SL}(n)}[\mathfrak{p}_n]} ) .$$ Now fix positive integers $\rho_1 \gg \rho_2 \gg \ldots \gg \rho_{n-1}>0$ and consider the left action of the one-parameter subgroup ${\mathbb C }^*_\rho \leq \mathrm{SL}(n)$ given by \[ t \mapsto \left(\begin{array}{ccccc} t^{\rho_1} & & & &\\ & t^{\rho_2} & & & \\ & & \ddots & & \\ & & & t^{\rho_{n-1}} & \\ & & & & t^{-(\rho_1 + \rho_2 + \cdots + \rho_{n-1})} \end{array}\right) \mbox{ for $t\in {\mathbb C }^*$ }. \] The weights of ${\mathbb C }^*_\rho$ acting on $ {\mathbb P } (\mathcal{W}_{\det}) \cap \overline{B_{\mathrm{SL}(n)}[\mathfrak{p}_n]}$ are all of the form $k_1 \rho_1 + k_2 \rho_2 + \cdots + k_{n-1} \rho_{n-1}$ where $k_1, \ldots , k_{n-1}\geq 0$. By Remark \ref{rmk3.2} and Proposition \ref{homogprop} $\overline{B_{\mathrm{SL}(n)}[\mathfrak{p}_n]}$ is contained in the subspace \[ {\mathbb P } ^*= {\mathbb P } (W_1 \wedge \ldots \wedge W_n) \subset {\mathbb P } (\wsymk n)\] where the subspaces \[W_i=\mathrm{Span}_{\mathbb C }(e_\tau: \mathrm{supp}(\tau)\subseteq \{1,\ldots i\}, \sum_{t\in \tau}\omega_t \le \omega_i)\subset \mathrm{Sym}^\omega {\mathbb C }^n\] are invariant under the upper Borel subgroup $B_n \subset \mathrm{GL}(n)$ which preserves the flag $\mathrm{Span}(e_1)\subset \mathrm{Span}(e_1,e_2) \ldots \subset \mathrm{Span}(e_1,\ldots, e_n)$. Here $\tau=(\tau_1\le \tau_2 \le \ldots \le \tau_r)$ is a sequence whose support $\mathrm{supp}(\tau)$ is the set of elements in $\tau$ and $e_{\tau}=\prod_{j\in \tau}e_j=\prod_{i=1}^r e_{\tau_i} \in \mathrm{Sym}^{r}{\mathbb C }^n$. Basis elements of $ {\mathbb P } ^*$ are parametrised by {\it admissible} sequences of partitions $\mathbf{\pi}=(\pi_1,\ldots, \pi_n)$. We call a sequence of partitions $\mathbf{\pi}=(\pi_1 \ldots \pi_n)\in \Pi^{\times n}$ admissible if \begin{enumerate} \item $\mathrm{supp}(\pi_l)\subseteq \{1,\ldots l\}$ \item $\sum_{t\in \pi_l}\omega_t \le \omega_l$ for $1\le l \le n$, and \item $\pi_l\neq\pi_m$ for $1\leq l\neq m\leq n$. \end{enumerate} We will denote the set of admissible sequences of length $n$ by $\mathbf{\Pi}$. The corresponding basis element is then $e_{\pi_1} \wedge \ldots \wedge e_{\pi_n} \in W_1 \wedge \ldots \wedge W_n$. \begin{lemma} For $\rho=(\rho_1 \gg \rho_2 \gg \ldots \gg \rho_{n-1}>0)$ and $\mathbf{\pi} \in \mathbf{\Pi}$ the weight of the left ${\mathbb C }^*_{\rho}$ action on $e_\pi$ is strictly positive unless $\mathbf{\pi}=(1,2,\ldots, n)$ corresponding to the basis element $e_{(1,2,\ldots, n)}=e_1\wedge \ldots \wedge e_n$. \end{lemma} \begin{proof} The weight $\rho_\pi$ of the left ${\mathbb C }^*_{\rho}$ action on $e_{\mathrm{\pi}}=e_{\pi_1}\wedge \ldots \wedge e_{\pi_n}$ is \[\rho_{\mathrm{\pi}}=\sum_{i=1}^n \sum_{j \in \pi_l} \rho_j \ge \sum_{i=1}^{n-1} \rho_i+\sum_{j \in \pi_n} \rho_j\] whenever $\rho_1 \gg \rho_2 \gg \ldots \gg \rho_{n-1}>0$ holds by (1) and (2) in the definition of admissible sequences. Moreover, equality holds if and only if $\pi_l=(l)$ for $1\le l \le n-1$. Finally $\sum_{j \in \pi_n} \rho_j\ge -(\rho_1+\ldots +\rho_{n-1})$ with equality if and only if $\pi_n=(n)$, otherwise $e_n$ does not appear in $e_{\pi_n}$. \end{proof} Let $\eta_{\min} = \eta_1 < \cdots < \eta_\rho = \eta_{\max}$ be the weights of the action of ${\mathbb C }^*_\rho$ on $X$ with respect to the linearisation $L$ of the $\mathrm{SL}(n)$ action. Then if $q\eta_{\min} + p n > 0$ it follows that every point of $( {\mathbb P } (\mathcal{W}_{\det})\times X) \cap \overline{B_{\mathrm{SL}(n)}[\mathfrak{p}_n] \times X} )$ is unstable for the left action of this one parameter subgroup of $\mathrm{SL}(n)$ with respect to the linearisation $L^{(p,q)}$ (or equivalently $L^{(p,q)}_\chi$). It follows that \begin{lemma} \label{lempdet} If $p > - q\eta_{\min}/n$ then every point of \[( {\mathbb P } (\mathcal{W}_{\det}) \times X) \cap \overline{(\mathrm{GL}(n)/\hat{U}) \times X} = \mathrm{SU}(n) ( ( {\mathbb P } (\mathcal{W}_{\det}) \times X) \cap \overline{B_{\mathrm{SL}(n)}[\mathfrak{p}_n]} \times X )\] is unstable for the left action of $\mathrm{SL}(n)$ with respect to the linearisation $L^{(p,q)}$ (or equivalently $L^{(p,q)}_\chi$). \end{lemma} \subsection{Hilbert-Mumford for the right action of ${\mathbb C }^* \leq \tilde{U}$} Recall that the boundary of $\mathrm{GL}(n) \times_{\hat{U}} X = (\mathrm{GL}(n)/{\hat{U}}) \times X$ in the projective completion \[\overline{\mathrm{GL}(n)/{\hat{U}}} \times X \subset {\mathbb P } (\wsymk n) \times X\] is contained in the union of $ {\mathbb P } (\mathcal{W}_{v_1}) \times X$ and $ {\mathbb P } (\mathcal{W}_{\det}) \times X$. If we twist the linear action of $\tilde{U}$ on $X$ with respect to $L^{\otimes p}$ which extends to a linear action of $\mathrm{SL}(n)$ by a character $\chi$, then the induced right action of the one parameter subgroup ${\mathbb C }^* \leq \tilde{U}$ on $\overline{\mathrm{GL}(n)/{\hat{U}}} \times X$ with respect to the line bundle $\mathcal{O}_{ {\mathbb P } (\wsymk n)}(p) \otimes \mathcal{O}_{ {\mathbb P } ^N}(q)$ has weights $$ p ( k_1 \omega_1 + \cdots + k_n \omega_n - n) (\omega_1 + \cdots + \omega_n) - \chi.$$ If the rational character $\chi/p$ is well adapted in the sense of (\ref{welladapted}) then the twisted $\mathrm{SL}(n) \times {\mathbb C }^*$-linearisation $L^{(p,q)}_{\chi}$ on $\overline{\mathrm{GL}(n)/\hat{U}} \times X$ has strictly negative weights under the right action of the one parameter subgroup ${\mathbb C }^* \leq \tilde{U}$ on $ {\mathbb P } (\mathcal{W}_{v_1}) \times X$ and therefore all points of $ {\mathbb P } (\mathcal{W}_{v_1}) \times X$ are unstable with respect to this linear action of $\mathrm{SL}(n) \times {\mathbb C }^*$. We know from Corollary \ref{corbound} that the boundary of the orbit $\mathrm{GL}(n)[\mathfrak{p}_n]$ in $ {\mathbb P } (\mathcal{W})$ is contained in the union of the subspaces $ {\mathbb P } (\mathcal{W}_{v_1})$ and $ {\mathbb P } (\mathcal{W}_{\det})$. So combining this with Lemma \ref{lempdet} we obtain \begin{prop} \label{propbound} If $p >> q > 0$ and the rational character $\chi/p$ is well adapted in the sense of (\ref{welladapted}), then the boundary of the closure $\overline{\mathrm{GL}(n)/\times_{\hat{U}} X} \cong \overline{\mathrm{GL}(n)/\hat{U} } \times X$ of $\mathrm{GL}(n) \times_{\hat{U}} X$ in $ {\mathbb P } (\wsymk n) \times X $ is unstable for the linear action of $\mathrm{SL}(n) \times {\mathbb C }^* = \mathrm{SL}(n) \times (\tilde{U}/U)$ with respect to the linearisation $L^{(p,q)}_\chi$. \end{prop} Recall that $\mathrm{GL}(n)/\hat{U} = \mathrm{SL}(n) /(\hat{U} \cap \mathrm{SL}(n))$ where $\hat{U} \cap \mathrm{SL}(n)$ is a finite extension of $U$ which is contained in $\tilde{U}$ with $\tilde{U}/(\hat{U} \cap \mathrm{SL}(n)) \cong {\mathbb C }^*$. It thus follows immediately from Theorem \ref{thm:geomcor} that we have \begin{theorem} \label{thmbound} Let $X$ be a projective variety acted on linearly by $\tilde{U}$, and suppose that the action extends to a linear action of $\mathrm{SL}(n)$ with respect to an ample linearisation. If the linearisation of the $\tilde{U}$-action is twisted by a well-adapted rational character $\chi/p$ for sufficiently divisible $p$, then \begin{enumerate} \item the algebra of invariants $\bigoplus_{k \geq 0} H^0( X,L^{\otimes kp})^{\tilde{U}}$ is finitely generated; \item the enveloping quotient $X/\!/\tilde{U} \simeq (\overline{\mathrm{GL}(n)/\hat{U}} \times X) /\!/_{L^{(p,1)}_\chi} (\mathrm{SL}(n)\times {\mathbb C }^*)\simeq \mathrm{Proj}(\oplus_{k \geq 0} H^0( X,L^{\otimes kp})^{\tilde{U}})$; \item the morphism \[ \phi:X^{ss,\tilde{U}} \rightarrow X/\!/\tilde{U}\] is surjective and $X/\!/\tilde{U}$ is a categorical quotient of $X^{ss,\tilde{U}}$ with $\phi(x) = \phi(y)$ if and only if the closures of the $\tilde{U}$-orbits of $x$ and $y$ meet in $X^{ss,\tilde{U}}$. \end{enumerate} \end{theorem} \subsection{The action of $\tilde{U}$ on $X \times {\mathbb P } ^1$} Now let us consider the diagonal action of $\tilde{U}$ on $X \times {\mathbb P } ^1$ where $\tilde{U}$ acts on $ {\mathbb P } ^1$ linearly with respect to $\mathcal{O}_{ {\mathbb P } ^1}(1)$ by $$ \tilde{u} [ x_0:x_1] = [\chi_0(\tilde{u})x_0: x_1]$$ where $\chi_0: \tilde{U} \to {\mathbb C }^*$ is the character with kernel $U$ given by \[ \chi_0 \left(\begin{array}{cccc} t^{n\omega_1 -(\omega_1+ \cdot + \omega_n)} & & & \\ & t^{n\omega_2-(\omega_1+ \cdots + \omega_n)} & & \\ & & \ddots & \\ & & & t^{n\omega_n - (\omega_1+ \cdots + \omega_n)} \end{array}\right) = t. \] We can adapt the arguments of $\Sigma$5.3 and $\Sigma$5.4 to the induced linear action of $\mathrm{SL}(n) \times {\mathbb C }^*$ on $$ \overline{\mathrm{GL}(n)/\hat{U}} \times X \times {\mathbb P } ^1 \subseteq {\mathbb P } (\wsymk n) \times {\mathbb P } ^N \times {\mathbb P } ^1 $$ for the linearisation $L^{(p,q,r)}_\chi$ defined with respect to the line bundle $\mathcal{O}_{ {\mathbb P } (\wsymk n)}(p) \otimes \mathcal{O}_{ {\mathbb P } ^N}(q) \otimes \mathcal{O}_{ {\mathbb P } ^1}(r)$ where the action of $\tilde{U}$ on $X$ extends to a linear action of $\mathrm{SL}(n)$ but is then twisted by a rational character $\chi/p$. Now we want to choose $\chi, p,q$ and $r$ such that $\chi/p$ is well adapted in the sense of (\ref{welladapted}), and $p > -q \eta_{\min}/n$ as before, and also $r > - q \eta_{\min}$ in order that all points of $\overline{\mathrm{GL}(n)/\hat{U}} \times X \times \{ \infty \}$ will be unstable for the action of $\mathrm{SL}(n) \times {\mathbb C }^*$. Note that $\eta_{\min} <0$ unless the action of $\tilde{U}$ is trivial, and then these two conditions will be satisfied if $p>>q$ and $r>> q$. So the proofs of Proposition \ref{propbound} and Theorem \ref{thmbound} give us \begin{prop} \label{propbound2} If $p >> q > 0$ and $r>> q$ and the rational character $\chi/p$ is well adapted in the sense of (\ref{welladapted}), then the boundary of the closure $ \overline{\mathrm{GL}(n)/\hat{U} } \times X \times {\mathbb P } ^1$ of $\mathrm{GL}(n) \times_{\hat{U}} (X \times {\mathbb C })$ in $ {\mathbb P } (\wsymk n) \times X \times {\mathbb P } ^1$ is unstable for the linear action of $\mathrm{SL}(n) \times {\mathbb C }^* = \mathrm{SL}(n) \times (\tilde{U}/U)$ with respect to the linearisation $L^{(p,q,r)}_\chi$. \end{prop} \begin{defn} Let $X^{\hat{s},U}$ denote the $U$-invariant open subset of $X$ such that $$\{ [\mathfrak{p}_n] \} \times X^{\hat{s},U} \times \{[1:1]\} = (\{ [\mathfrak{p}_n] \} \times X \times \{[1:1]\}) \cap ( {\mathbb P } (\wsymk n) \times X \times {\mathbb P } ^1)^{s,\mathrm{SL}(n) \times {\mathbb C }^*}$$ where $( {\mathbb P } (\wsymk n) \times X \times {\mathbb P } ^1)^{s, \mathrm{SL}(n) \times {\mathbb C }^*}$ denotes the stable subset of $ {\mathbb P } (\wsymk n) \times X \times {\mathbb P } ^1$ with respect to the linearisation $L^{(p,1,r)}_\chi$. \end{defn} \begin{theorem} \label{thmbound2} Let $X$ be a projective variety acted on linearly by $\tilde{U}$, and suppose that the action extends to a linear action of $\mathrm{SL}(n)$ with respect to an ample linearisation. If the linearisation of the diagonal action of $\tilde{U}$ on $X \times {\mathbb P } ^1$ is twisted by a well-adapted rational character $\chi/p$ for sufficiently divisible $p$, then \begin{enumerate} \item the algebra of invariants $\bigoplus_{k \geq 0} H^0( X \times {\mathbb P } ^1,L^{\otimes kp})^{\tilde{U}}$ is finitely generated; \item the enveloping quotient $(X \times {\mathbb P } ^1)/\!/\tilde{U} \simeq (\overline{\mathrm{GL}(n)/\hat{U}} \times X \times {\mathbb P } ^1) /\!/ (\mathrm{SL}(n)\times {\mathbb C }^*)\simeq \mathrm{Proj}(\oplus_{k \geq 0} H^0( X \times {\mathbb P } ^1,L^{\otimes kp} \otimes \mathcal{O}_{ {\mathbb P } ^1}(r))^{\tilde{U}})$ for $r>> 1$; \item there is a surjective $\tilde{U}$-invariant morphism \[ \phi:(X \times {\mathbb C })^{ss,\tilde{U}} \rightarrow (X \times {\mathbb P } ^1)/\!/\tilde{U}\] from a $\tilde{U}$-invariant open subset $(X \times {\mathbb C })^{ss,\tilde{U}}$ of $X \times {\mathbb C }$ making $X/\!/\tilde{U}$ is a categorical quotient of $(X\times {\mathbb C })^{ss,\tilde{U}}$ with $\phi(x) = \phi(y)$ if and only if the closures of the $\tilde{U}$-orbits of $x$ and $y$ meet in $(X \times {\mathbb C })^{ss,\tilde{U}}$; \item this morphism $\phi$ restricts to a geometric quotient $X^{\hat{s},U} \to X^{\hat{s},U}/U$ for the action of $U$ on the $U$-invariant open subset $X^{\hat{s},U}$ of $X$. \end{enumerate} \end{theorem} \section{Some applications} Recall that if $U$ is { any} unipotent complex linear algebraic group of dimension $n-1$ which has an action of ${\mathbb C }^*$ with all weights strictly positive, then $U$ can be embedded in $\mathrm{GL}({\rm Lie}(U \rtimes {\mathbb C }^*))$ via its adjoint action on the Lie algebra ${\rm Lie}({U}\rtimes {\mathbb C }^*)$ as the unipotent radical of a subgroup $\hat{U}$ of the form (\ref{label1}) which is generated along the first row, and as the unipotent radical of the associated subgroup $\tilde{U}$ of $\mathrm{SL}(n)$ (which we are calling the adjoint form of $U$). Here the weights of the action of ${\mathbb C }^*$ on the Lie algebra of $U$ are $\omega_j - 1$ for $j=2,\ldots, n$. We can apply Theorems \ref{thmbound} and \ref{thmbound2} to this situation, and also in the situation of jet differentials considered in $\Sigma$5. Here $U$ is the unipotent radical ${\mathbb U }_n$ of the reparametrisation group ${\mathbb G }_n$, and $\tilde{U}$ is the associated subgroup $\tilde{{\mathbb G }}_n \cong {\mathbb U }_n \rtimes {\mathbb C }^*$ of $\mathrm{SL}(n)$ which is isomorphic to ${\mathbb G }_n$ when $n$ is odd and is a double cover of ${\mathbb G }_n$ when $n$ is even. We will finish this section by describing two examples of algebras of invariants in the case of jet differentials. \begin{ex} \textbf{Invariant jet differentials of order $2$ in dimension $2$.} When $n=2$ then ${\mathbb U }_2$ (which is the unipotent radical of the standard Borel subgroup of $\mathrm{SL}(2)$) is a Grosshans subgroup of $\mathrm{SL}(2)$. As usual let $\{e_1,e_2\}$ be the standard basis for ${\mathbb C }^2$, and consider the group \[{\mathbb G }_2 = \left\{ \left(\begin{array}{cc} \alpha_1 & \alpha_2 \\ 0 & \alpha_1^2 \end{array} \right) : \alpha_1 \in {\mathbb C }^*, \alpha_2\in {\mathbb C } \right.\}={\mathbb C }^* \ltimes {\mathbb C }^{+}\] with maximal unipotent ${\mathbb C }^+$ acting on ${\mathbb C }^2$ by translation. Then $\mathrm{Sym}^{\mathbf{\weight}}\CC^n={\mathbb C }^n \oplus \mathrm{Sym}^2 {\mathbb C }^2$ has an induced basis $\{e_1,e_2,e_1^2,e_1e_2,e_2^2\}$. Let $x_{ij}$ denote the standard coordinate functions on $\mathrm{SL}(2)\subset ({\mathbb C }^2)^* \otimes {\mathbb C }^2$. Then in the notation of $\Sigma$4 \[\phi_1(x_{11},x_{12},x_{21},x_{22})=(x_{11},x_{21}),\] and \[\phi_2(x_{11},x_{12},x_{21},x_{22})=(x_{11},x_{21})\wedge ((x_{12},x_{22})+ (x_{11}^2,2x_{11}x_{21},x_{21}^2)),\] and $\mathcal{O}(\mathrm{SL}(n))^U$ is generated by $x_{11},x_{21}$ and the $2 \times 2$ minors of \[\left(\begin{array}{ccccc} x_{11} & x_{21} & 0 & 0 & 0 \\ x_{12} & x_{22} & x_{11}^2 & 2x_{11}x_{21} & x_{21}^2 \end{array}\right). \] Since the determinant is $1$ this set of generators reduces to two generators $x_{11},x_{21}$, as expected since $\mathrm{SL}(2)/{\mathbb C }^+ \cong {\mathbb C }^2 \setminus \{ 0 \}$ and its canonical affine completion $\mathrm{SL}(2)/\!/{\mathbb C }^+$ is ${\mathbb C }^2$. \end{ex} \begin{ex} \textbf{Invariant jet differentials of order $3$ in dimension $3$.} When $n=3$ the finite generation of the Demailly-Semple algebra $\mathcal{O}((J_3)_x)^{{\mathbb U }_3}$ was proved by Rousseau in \cite{rousseau}. Here \[{\mathbb G }_3 = \left\{ \left(\begin{array}{ccc} \alpha_1 & \alpha_2 & \alpha_3\\ 0 & \alpha_1^2 & 2\alpha_1\alpha_2 \\ 0 & 0 & \alpha_1^3 \end{array} \right) : \alpha_1 \in {\mathbb C }^*, \alpha_2,\alpha_3 \in {\mathbb C } \right.\}={\mathbb C }^* \ltimes U\] while $\mathrm{Sym}^{\mathbf{\weight}}\CC^n={\mathbb C }^n \oplus \mathrm{Sym}^2 {\mathbb C }^3 \oplus \mathrm{Sym}^3 {\mathbb C }^3$ has basis $\{e_1,e_2,e_3,e_1^2,e_1e_2,\ldots, e_3^3\}$. Let $x_{ij}$ denote the standard coordinate functions on $\mathrm{SL}(3)$. Then in the notation of $\Sigma$4 \begin{multline} \phi_3(x_{11},\ldots, x_{33})=(x_{11},x_{21},x_{31})\wedge ((x_{12},x_{22},x_{32}))+ (x_{11}^2,2x_{11}x_{21},x_{21}^2,2x_{21}x_{31},2x_{11}x_{31},x_{31}^2)\\ \wedge ((x_{12},x_{22},x_{32})+(2x_{11}x_{12},\ldots, 2x_{13}x_{23})+(x_{11}^3,\ldots, x_{31}^3)) \end{multline} and $\mathcal{O}(\mathrm{SL}(3))^U$ is generated by those minors of \[\left(\begin{array}{ccccccccccc} x_{11} & x_{21} & x_{31} & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 \\ x_{12} & x_{22} & x_{32} & x_{11}^2 & 2x_{11}x_{21} & \cdots & x_{33}^2 & 0 & 0 & \cdots & 0 \\ x_{13} & x_{23} & x_{33} & x_{11}x_{12} & x_{11}x_{22}+x_{12}x_{21} & \cdots & x_{31}x_{32} & x_{11}^3 & x_{11}^2x_{21} & \cdots & x_{31}^3 \end{array} \right) \] whose rows form an initial segment of $\{1,2,3\}$, that is the minors $\Delta_{\mathbf{i}_1,\ldots \mathbf{i}_s}$ with rows $1,\ldots, s$ and columns indexed by $\mathbf{i}_1,\ldots, \mathbf{i}_s$, where $s=1,2$ or $3$ and $|\mathbf{i}_j|\le 3$. \end{ex}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:intro} An \emph{$n$--chain link} consists of $n$ unknotted circles embedded in $S^3$, linked together in a closed chain. Notice that links of a chain can be connected with an arbitrary amount of twisting. In particular, if we embed the first link in the plane of projection, the next perpendicular to the plane of projection, the next again in the plane of projection, and so on, then the last link may include any integer number of half--twists. See, for example, Figure \ref{fig:chain-link1}. \begin{figure}[h!] \includegraphics{figures/6-chain} \hspace{.5in} \includegraphics{figures/7-chain} \hspace{.5in} \includegraphics{figures/6-chain-twist} \caption{Left: Minimally twisted $6$--chain link. Middle: Minimally twisted $7$--chain link. Right: $6$--chain link with more half--twists.} \label{fig:chain-link1} \end{figure} Hyperbolic structures on $n$--chain link complements have been studied, for example, by Neumann and Reid \cite{neumann-reid:arithmetic}. They show any $n$--chain link complement with $n\geq 5$ admits a hyperbolic structure. In this paper, we are primarily interested in hyperbolic manifolds, so we restrict our attention to $n\geq 5$. A \emph{minimally twisted $n$--chain link} is an $n$--chain link such that, if $n$ is even, each link component alternates between lying embedded in the projection plane and lying perpendicular to the projection plane. If $n$ is odd, the link may be arranged such that each component alternates between lying in the projection plane and perpendicular to it, except a single link component which connects a link which is embedded in the projection plane to one which is perpendicular, with no twisting. See Figure \ref{fig:chain-link1}. Notice that there are actually two choices for the minimally twisted $n$--chain link for $n$ odd, depending on which way the last links are connected. However, these are isometric by an orientation reversing isometry, so we will not distinguish between them. In \cite{agol:min-vol}, Agol conjectures that minimally twisted $n$--chain link complements, for $n\leq 10$, are the smallest volume hyperbolic 3--manifolds with exactly $n$ cusps, but notes that Venzke has pointed out they cannot be smallest for $n\geq 11$, as the $(n-1)$--fold cyclic cover over one component of the Whitehead link has smaller volume. This statement is included in Venzke's thesis \cite{venzke:thesis}. However, Venzke does not give a proof. In this paper, we give a rigorous proof for $n\geq 60$. The following theorem is the main result of this paper. \begin{named}{Theorem \ref{thm:not-minvolume}} For $n\geq 60$, the minimally twisted $n$--chain link complement has volume strictly greater than that of the $(n-1)$--fold cyclic cover over one component of the Whitehead link. Hence the minimally twisted $n$--chain link complement cannot be the smallest volume hyperbolic 3--manifold with $n$ cusps, $n\geq 60$. \end{named} For $n$ between $11$ and $59$, inclusive, we present computer tabulation of volumes, compared with the volumes of the $(n-1)$--fold cyclic cover of the Whitehead link. See Table \ref{table:volumes}. By inspection, Theorem \ref{thm:not-minvolume} also holds for these manifolds. In Section \ref{sec:compute}, we explain how these computations can be made completely rigorous --- at least for those values of $n$ for which the minimally twisted $n$--chain link complement is triangulated with fewer than $100$ tetrahedra. In this case, this includes $n$ between 12 and 25, inclusive. Finally, in Section \ref{sec:arblinks} we evaluate volumes of arbitrarily twisted $n$--chain link complements. The main result of that section is Theorem \ref{thm:vol-withtwist}, which states that no $n$--chain link complement can be the minimal volume $n$--cusped hyperbolic 3--manifold, provided either $n\geq 60$, or the chain link contains at least $17$ half--twists. We present computational data to show that similarly, for $11 \leq n \leq 59$, no $n$--chain link complement can be minimal volume. When $5 \leq n \leq 10$, we rigorously prove, by computer, that no $n$--chain link which is \emph{not} minimally twisted can be the minimal volume $n$--cusped hyperbolic 3--manifold. \begin{remark} Since a version of this paper was made public, Hidetoshi Masai has pointed out to us that for even chain links, more can be said. In \cite[Chapter 6]{thurston}, Thurston finds a formula for the volumes of minimally twisted chain links with an even number of link components. Namely, $${\rm vol}(S^3\setminus C_{2n}) = 8n \left( \Lambda\left(\frac{\pi}{4} + \frac{\pi}{2n}\right) + \Lambda\left(\frac{\pi}{4} - \frac{\pi}{2n}\right)\right),$$ where $\Lambda$ is the Lobachevsky function. Masai notes that the difference of this volume and the volume of the $(2n-1)$--cyclic cover over a component of the Whitehead link is an increasing function in $n$, for $n \geq 6$. This result will give a rigorous proof that minimally twisted $n$--chain links are not minimal volume for $17$ additional chain links, namely for $n = 26, 28, 30, \dots, 56, 58$. In addition, this gives an alternate proof that minimally twisted $2n$--chain links are not minimal volume for larger $n$. However, the result for odd links requires other techniques, for instance those in this paper. \end{remark} \subsection{Acknowledgements} We thank Ian Agol and Hidetoshi Masai for helpful conversations. We also thank Peter Milley, Harriet Moser, and Saul Schleimer for their assistance with computational aspects of this project. Authors Kaiser and Rollins were both supported in part by Brigham Young University mentoring funds for undergraduate research. All authors were supported in part by a grant from the National Science Foundation. \section{Slope lengths on covers}\label{sec:slopes} To prove the main theorems of this paper, for $n\geq 60$, we will obtain the complement of the minimally twisted $n$--chain link by Dehn filling a manifold $\widehat{W}_n$ which is geometrically explicit, constructed by gluing together manifolds isometric to the Whitehead link complement, cut along 2--punctured disks. We work with the diagram of the Whitehead link as in Figure \ref{fig:w1}, left, with a link component denoted $K$. Note that by switching the direction of a pair of crossings, we obtain a link whose complement is isometric to that of the Whitehead link complement by an orientation reversing isometry. The isometry takes $K$ to a link component we denote $\overline{K}$, as in Figure \ref{fig:w1}, right. \begin{figure} \input{figures/Whitehead.pdf_t} \hspace{.5in} \input{figures/Whitehead-refl.pdf_t} \caption{The Whitehead link (left), and its reflection (right), with component labeled $K$ and $\overline{K}$, respectively.} \label{fig:w1} \end{figure} \begin{lemma} The shape of the cusp of $K$ is a parallelogram with meridian and longitude meeting at angle $-\pi/4$ (measured from meridian to longitude). Similarly, the shape of the cusp of $\overline{K}$ is a parallelogram with meridian and longitude meeting at angle $\pi/4$ (measured from meridian to longitude). When we take a maximal horocusp about $K$ or $\overline{K}$, the meridian has length $\sqrt{2}$, and the longitude has length $4$. \label{lemma:k_shape} \end{lemma} Lemma \ref{lemma:k_shape} is illustrated in Figure \ref{fig:whitehead-cusp}. \begin{figure} \includegraphics{figures/cusp-whitehead} \caption{Cusp shape of component $K$ of Whitehead link. Meridian runs horizontally across the top, longitude runs diagonally.} \label{fig:whitehead-cusp} \end{figure} \begin{proof} The first part of the lemma, and lengths of slopes on $K$, are well known for the Whitehead link. See, for example \cite{neumann-reid:arithmetic}. As for $\overline{K}$, the orientation reversing isometry taking the Whitehead link to its reflection takes a meridian of $K$ to a meridian of $\overline{K}$, and reflects the longitude. Since this is an isometry, the meridian and longitude of $\overline{K}$ have the same lengths as those of $K$, but the angle from the meridian to longitude is reflected across the meridian, to be $\pi/4$. \end{proof} Now, the manifold $\widehat{W}_n$ can be described as the minimally twisted $n$--chain link embedded in a standard solid torus. Therefore, to obtain the complement of the minimally twisted $n$--chain link, we will Dehn fill $\widehat{W}_n$ along a standard longitude of the solid torus boundary component. To build $\widehat{W}_n$ from the Whitehead link complement, proceed as follows. First, cut the Whitehead link complement along the 2--punctured disk bounded by $K$ to get a clasp in a cylinder, which we call $W_1$, shown second from left in Figure \ref{fig:wn-construct}. Similarly, cut the reflected Whitehead link complement along the 2--punctured disk bounded by $\overline{K}$ to get a clasp in the opposite direction in a cylinder, which we call $\overline{W}_1$, shown third from left in Figure \ref{fig:wn-construct}. \begin{figure} \includegraphics{figures/wn-construct} \caption{Constructing the manifold $\widehat{W}_n$: cut the Whitehead link complement along a 2--punctured disk to obtain $W_1$ (second from left). Its reflection is $\overline{W}_1$ (third from left). Attach these to form a link in a solid cylinder, right.} \label{fig:wn-construct} \end{figure} Now, attach a copy of $W_1$ to $\overline{W}_1$ via an isometry of the 2--punctured disk as on the right in Figure \ref{fig:wn-construct}. In particular, boundary components are glued as shown without twisting. Call the resulting link in a solid cylinder $W$. For $n$ even, glue ${n}/{2}$ copies of $W$ together end to end, without twisting, followed by gluing the remaining two ends. For $n$ odd, glue $(n-1)/{2}$ copies of $W$ together without twisting, then glue a single copy of $W_1$, and attach the ends without twisting. This completes the construction of $\widehat{W}_n$. \begin{lemma} Let $\epsilon = n\mod{2}$. The minimally twisted $n$--chain link in a solid torus, $\widehat{W}_n$, has solid torus boundary component comprised of $\lfloor n/2 \rfloor + \epsilon$ copies of the cusp $K$ coming from $W_1$, and $\lfloor n/2 \rfloor$ copies of the cusp $\overline{K}$ coming from $\overline{W}_1$. The standard longitude of the solid torus follows a meridian of each copy of $K$ and $\overline{K}$, where the meridians of each copy of $K$ are orthogonal to the meridians of each copy of $\overline{K}$. The length of the longitude of the solid torus boundary component is $\sqrt{n^2 + \epsilon}$. \label{lemma:torus_boundary_component} \end{lemma} \begin{proof} This follows from the construction of $\overline{W}_n$ and Lemma \ref{lemma:k_shape}. The boundary component corresponding to the solid torus comes from $n/2$ copies of the cusp $K$ and $n/2$ copies of the cusp $\overline{K}$ for $n$ even, and $(n-1)/2 + 1$ copies of the cusp $K$ and $(n-1)/2$ copies of the cusp $\overline{K}$, for $n$ odd. These are glued together along their respective longitudes. Since a copy of the cusp of $K$ is glued to one of $\overline{K}$ along the longitude of each, the meridians meet at right angles. See Figure \ref{fig:cusp-wnbar}. \begin{figure} \input{figures/cusp-constr.pdf_t} \caption{The cusp construction of $\widehat{W_n}$.} \label{fig:cusp-wnbar} \end{figure} The longitude of the solid torus of $\widehat{W_n}$ is given by following each meridian of the copies of $K$ and $\overline{K}$ that glue to give the solid torus boundary component. Since these meridians always meet at right angles, the length of the longitude of the solid torus may be determined by the Pythagorean theorem. By Lemma \ref{lemma:k_shape}, the length of the meridian of the cusp $K$, and that of the cusp $\overline{K}$, is $\sqrt{2}$. We see that the length of the longitude of the solid torus boundary component of $\widehat{W}_n$ is $\sqrt{\left(\sqrt{2} \left( \lfloor \frac{n}{2} \rfloor + \epsilon \right)\right)^2 + \left( \sqrt{2} \lfloor \frac{n}{2} \rfloor \right)^2} = \sqrt{n^2 + \epsilon}$. \end{proof} Notice that while the construction of $\widehat{W}_n$ as described above uses $\lfloor n/2 \rfloor + \epsilon$ copies of $W_1$ and $\lfloor n/2 \rfloor$ copies of $\overline{W}_1$, we could have constructed it using $\lfloor n/2 \rfloor$ copies of $W_1$ and $\lfloor n/2 \rfloor + \epsilon$ copies of $\overline{W}_1$ instead. The result of this modified method of construction would be isometric to that of the original construction, by an orientation reversing isometry. To obtain $C_n$ from $\widehat{W}_n$ we Dehn fill a slope on the solid torus boundary component of $\widehat{W}_n$ that follows one standard longitude of the solid torus. Hence we Dehn fill along a slope of length $\sqrt{n^2 + (n\mod{2})}$. \section{Volumes, large minimally twisted chains}\label{sec:volumes} Using the information on slopes above, we may deduce geometric information on minimally twisted chain links by applying appropriate theorems bounding change in geometry under Dehn filling. This will give the desired result when $n\geq 60$. \subsection{Dehn filling and volume} We use the following theorem, which is a slightly simpler version of the main theorem in \cite{fkp:dfvjp}. \begin{theorem}[Futer--Kalfagianni--Purcell \cite{fkp:dfvjp}] Let $M$ be a complete, finite--volume hyperbolic manifold with (at least one) cusp, with horoball neighborhood $C$ about that cusp, and let $s$ be a slope on $\partial C$ with length $\ell(s) > 2\pi$. Then the manifold $M(s)$ obtained by Dehn filling $M$ along $s$ is hyperbolic, with volume: $$ {\rm vol}(M(s)) \: \geq \: \left(1-\left(\frac{2\pi}{\ell(s)}\right)^2\right)^{3/2} {\rm vol}(M).$$ \label{thm:fkp} \end{theorem} Putting this theorem together with Lemma \ref{lemma:torus_boundary_component}, we obtain the following. \begin{theorem} For $n\geq 7$, the volume of the complement of the minimally twisted $n$--chain link $C_n$ is at least: $${\rm vol}(S^3\setminus C_n) \geq n\, v_8 \, \left( 1 - \frac{4\pi^2}{ n^2 + \epsilon } \right)^{3/2},$$ where $v_8 = 3.66386\ldots$ is the volume of a hyperbolic regular ideal octahedron, and $\epsilon = n\mod{2}$. \label{thm:volume} \end{theorem} \begin{proof} The chain link complement is obtained by Dehn filling the longitude of $\widehat{W_n}$. This is obtained by gluing copies of the Whitehead link and its reflection along totally geodesic 3--punctured spheres, hence has volume $n$ times the volume of the Whitehead link, $n\cdot v_8$. By Lemma \ref{lemma:torus_boundary_component}, we know the length of the Dehn filling slope is $\sqrt{n^2 + \epsilon}$. The result follows by putting this data into Theorem \ref{thm:fkp}. \end{proof} We now give a proof of the main theorem. \begin{theorem} For $n\geq 60$, the minimally twisted $n$--chain link complement has volume strictly greater than that of the $(n-1)$--fold cyclic cover over one component of the Whitehead link. Hence the minimally twisted $n$--chain link complement cannot be the smallest volume hyperbolic 3--manifold with $n$ cusps, $n\geq 60$. \label{thm:not-minvolume} \end{theorem} \begin{proof} The volume of the $(n-1)$--fold cyclic cover of the Whitehead link is $(n-1)\,v_8$. By Theorem \ref{thm:volume}, the volume of the complement of the minimally twisted $n$--chain link is $${\rm vol}(S^3\setminus C_n) \geq n\,v_8\,\left(1-\frac{4\pi^2}{n^2+\epsilon}\right)^{3/2} \geq n\,v_8\,\left(1-\frac{4\pi^2}{n^2}\right)^{3/2}.$$ We want to find $n$ for which the following inequality holds: $$n\,v_8\,\left(1-\frac{4\pi^2}{n^2}\right)^{3/2} > (n-1)\,v_8,$$ or \begin{equation}\label{ineq} \left( \frac{n}{n-1} \right)\left(1-\frac{4\pi^2}{n^2}\right)^{3/2} -1 > 0. \end{equation} Let $f(n)$ be the function on the left side of inequality \eqref{ineq}. Using calculus, one sees that $\lim_{n\to\infty}f(n)=0$, $f$ is increasing between $n=7$ and $n=6\pi^2 + 2\pi\sqrt{9\pi^2 -2} \approx 117.8$, and decreasing for larger $n$, which implies $f$ has at most one root for $n\geq 7$, and that $f$ is positive to the right of any root. The Intermediate Value Theorem implies that there is a root between $n=59$ and $n=59.1$. Hence the inequality is satisfied for $n\geq 60$. \end{proof} \section{Computations of volume, smaller minimally twisted chains}\label{sec:compute} Now we analyze volumes of minimally twisted $n$--chain links for $n$ between $11$ and $59$, since the main method of proof of Theorem \ref{thm:not-minvolume} will not apply to these manifolds. For $n$ between $11$ and $59$ inclusive, in Table \ref{table:volumes} we present computational data using SnapPea (SnapPy) \cite{weeks:snappea, snappy} that shows that the minimally twisted $n$--chain link complement cannot be the minimal volume hyperbolic 3--manifold with $n$ cusps. In particular, $W_{n-1}$, the $(n-1)$--fold cyclic cover over one component of the Whitehead link, has smaller volume. The volume of $W_{n-1}$ is always $(n-1)\,v_8$, where $v_8 = 3.66386\ldots$ is the volume of a hyperbolic regular ideal octahedron, which is the volume of the Whitehead link complement. Notice that for $n\geq 11$, the volume of $S^3\setminus C_n$ is strictly larger than that of $W_{n-1}$. \begin{table} \input{volume-table.tex} \caption{Volumes of the complement of the minimally twisted $n$--chain link $C_n$, compared to volumes of $W_{n-1}$, the $(n-1)$--fold cyclic cover over a component of the Whitehead link, for $5 \leq n \leq 60$. Note that $S^3\setminus C_n$ has greater volume for $n\geq 11$.} \label{table:volumes} \end{table} It would be nice to turn this data into a rigorous proof that the minimally twisted $n$--chain links for $n$ between $11$ and $59$ cannot be minimal volume. One way to do this would be to use the methods of Moser \cite{moser} and Milley in \cite{milley:minvol}. Milley has written a program to rigorously prove that a hyperbolic 3--manifold with hyperbolic structure computed by Snap \cite{goodman:snap} has volume greater than some constant. This program, which is available as supplementary material with \cite{milley:minvol}, is in theory exactly what we need for these chain link examples. However, in practice, making Moser and Milley's programs work with the chain links has proven to be difficult, due to the computational complexity of the chain links. While Milley worked with small manifolds, for example with less than $10$ tetrahedra, and Moser's largest manifold included $57$ tetrahedra, our triangulations of minimally twisted $n$--chain link complements include between $40$ and $236$ tetrahedra. We were successfully able to run Moser's algorithm for $n$ between $11$ and $25$, inclusive, which gives results for those manifolds triangulated with up to $100$ tetrahedra, but then the program failed. We were able to run Milley's algorithm for all values of $n$ for which Moser's algorithm applied. However, Milley's algorithm only returned a positive result for $n$ between 12 and 25, inclusive. Therefore, we have the following result. \begin{theorem}\label{thm:vol-computational} For $n$ between $12$ and $25$, inclusive, the minimally twisted $n$--chain link complement has volume strictly greater than that of the $(n-1)$--fold cyclic cover over one component of the Whitehead link, hence cannot be the smallest volume hyperbolic 3--manifold with $n$ cusps. \end{theorem} \begin{proof} The proof is identical to that of Milley \cite{milley:minvol}, and uses his code, included as supplementary material with that reference \cite{milley:minvol}, modified to read in minimally twisted $n$--chain links rather than Dehn fillings of census manifolds. The first step is to feed the triangulations of the minimally twisted $n$--chain links into Snap, and ensure that the triangulations used in the computation are geometric, that is, all tetrahedra are positively oriented. This is true for all minimally twisted $n$--chain links, $11 \leq n \leq 59$. Next, use Moser's algorithm \cite{moser} to find a value $\delta$ which measures the maximal error between Snap's computed solution and the true solution. Moser's algorithm gave us such a value for $11 \leq n \leq 25$, but failed thereafter, presumably due to computational complexity of the chain link complements. Finally, for each $n$ between $12$ and $25$, inclusive, input the Snap triangulation data and Moser's value $\delta$ into Milley's program {\tt{rigorous\_volume.C}}, along with the constant value $(n-1)*3.66386237670888$. The program checks rigorously whether the volume of the given $n$--chain link is larger than the given constant. For $12 \leq n \leq 25$, the program definitively proved that the volumes of the minimally twisted $n$--chain link complement are strictly larger than that of the $(n-1)$--fold cyclic cover over a component of the Whitehead link. \end{proof} \begin{remark} Note that the above theorem does not hold for $n=11$. Although the triangulation of the minimally twisted 11--chain link is positively oriented, and Moser's algorithm returns a value of $\delta$ for this link, Milley's program {\tt{rigorous\_volume.C}} is unable to verify that its volume is larger than that of the 10--fold cyclic cover of the Whitehead link. When $n=11$, the volumes of these manifolds are too close for rigorous checking. \end{remark} What about the volumes output by SnapPea for $26 \leq n \leq 59$? Note in Table \ref{table:volumes} that the minimally twisted $n$--chain link for these $n$ has volume greater than $2$ plus the volume of $W_{n-1}$. It is highly unlikely that SnapPea's computation would be so far off as to make the theorem untrue for any of these values of $n$. However, since we do not have a rigorous proof at this time, we do not include the result as a theorem. \section{Arbitrary chain links}\label{sec:arblinks} \input{twisting} \bibliographystyle{amsplain} \subsection{Chain links with 5 through 10 link components} Our methods can be used to show that of all $n$--chain links, only the minimally twisted $n$--chain link can possibly be the minimal volume manifold with $n$ cusps for $n$ between $5$ and $10$, inclusively. In fact, because the complexity of these manifolds was comparatively small, we ran them through Milley's algorithm \cite{milley:minvol}, to rigorously check this fact. The algorithm successfully applied, and we have the following theorem. \begin{theorem}\label{thm:small-chains} Let $n$ be an integer between $5$ and $10$, inclusively. If $C_n$ is an $n$--chain link that is \emph{not} minimally twisted, then the complement $S^3\setminus C_n$ cannot be the minimal volume $n$--cusped hyperbolic manifold. \end{theorem} \begin{proof} Theorem \ref{thm:vol-withtwist} implies that for these $n$, and those chain links with at least $11$ half--twists, the volume is strictly greater than that of the $(n-1)$--fold cyclic cover over a component of the Whitehead link, which is known to have larger volume than that of the minimally twisted $n$--chain links in these cases. The remaining cases to check are Dehn fillings $(1, \pm 1), (1, \pm 2), \dots, (1, \pm 5)$ on manifolds $\widehat{W}_n$ for $n$ odd, Dehn fillings $(1,1), \dots, (1,5)$ on manifolds $\widehat{W}_n$ for $n$ even, and Dehn fillings $(1,0), (1,1), \dots, (1,5)$ on manifolds $\overline{W}_n$ for $n$ even. These cases were run through algorithms of Moser \cite{moser} and Milley \cite{milley:minvol}, and their programs rigorously proved that the volumes of these chain links were larger than that of the corresponding minimally twisted $n$--chain link. Programs are available from Milley \cite{milley:minvol} or the second author. \end{proof} In Table \ref{table:small}, we show the volumes of $n$--chain link complements, $n$ between $5$ and $10$, whose volumes are not automatically larger than the minimally twisted $n$--chain link by Theorem \ref{thm:vol-withtwist}. These are compared with the volume of the minimally twisted chain link. \begin{table} \input{table-small} \smallskip \caption{Volumes of small chain links obtained by Dehn filling $\widehat{W}_n$ or $\overline{W}_n$ along slope $s = (1,m)$, where $m$ is the integer at the top of the column, compared with the volume of the minimally twisted chain link.} \label{table:small} \end{table}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Holomorphic tubular neighborhood} \label{section:tubular-neighborhood} We first show that there is a special holomorphic tubular neighborhood for each divisor $D_i = \sum_{j=1}^{m_i} {C_{ij}}$ in the surface $Z$ with a vanishing cohomology condition; Proposition~\ref{theorem:tubular-neighborhood}. \begin{lemma}\label{lemma:tubular-neighborhood} Let $C$ be a smooth rational curve with $C^2 \le -1$ on $Z$. Then there is a holomorphic tubular neighborhood $V$ of $C$ in $Z$ such that $H^2(Z \setminus V, \sheaf{T_{Z \setminus V}})=0$. \end{lemma} \begin{proof} We use a similar method in Takamura~\cite{Takamura-1996}. Let $A \in \linsys{lH}$ ($l \gg 0$) be a smooth irreducible hyperplane section of $Z$. Fix a base point $p \in C \cup A$ of the Abel-Jacobi map $\alpha: \divisorgroup(A) \to \jacobian(A)$. Let $V_p$ be a small open neighborhood of $p$ in $A$. Choose a nonzero section $s \in H^0(A, \sheaf{O_A}(A))$. Set $R = \divisor(s)$, which is an effective divisor on $A$ of degree $d=A^2$. Note that the fundamental domain of the complex torus $\jacobian(A)$ is bounded and $\alpha(V_p)$ is an open set (by shrinking $V_p$ if necessarily) which contains the origin in $\jacobian(A)$; hence we have $\alpha(\symmetric^{nd}{V_p}) = \jacobian(A)$ for sufficiently large $n$ because the Abel-Jacobi map $\alpha$ is a homomorphism of abelian groups. Therefore we have $\alpha(nR) \in \alpha(\symmetric^{nd}{V_p})$ for sufficiently large $n$. Since $\alpha^{-1}(\alpha(nR)) = \mathbb{P}H^0(A, \sheaf{O_A}(nA))$, we have \[\alpha^{-1}(\alpha(nR)) \cap \symmetric^{nd}{V_p} = \mathbb{P}H^0(A, \sheaf{O_A}(nA)) \cap \symmetric^{nd}{V_p} \neq \varnothing.\] Choose an effective divisor $R' \in \mathbb{P}H^0(A, \sheaf{O_A}(nA)) \cap \symmetric^{nd}{V_p}$ and let $s' \in H^0(A, \sheaf{O_A}(nA))$ be a section such that $R'=\divisor(s')$. Note that $\supp(R') \subset V_p$. Since $A$ is a hyperplane section, we have $H^1(Z, \sheaf{O_Z}((n-1)A)=0$; hence the restriction map \begin{equation*} H^0(Z, \sheaf{O_Z}(nA)) \to H^0(A, \sheaf{O_H}(nA)) \end{equation*} is surjective. Therefore there is an smooth irreducible curve $A' \in \linsys{nA}$ of $Z$ whose restriction on $A$ is $R'$. Then $A \cap A' \subset V_p$. We may assume that $A'$ intersects $C$ transversely and that $p \not\in A' \cap C$ because $A'$ is also a hyperplane section. Since $\ext^1(\sheaf{N_{C,Z}}, \sheaf{T_C}) = 0$, we have $\sheaf{T_Z}|_C = \sheaf{T_C} \oplus \sheaf{N_{C,Z}}$. Therefore there is a holomorphic tubular neighborhood $V$ of $C$ in $Z$. We may assume the followings: (i) $V \supset V_p$ by shrinking $V_p$ if necessarily, (ii) $V \supset A \cap A'$, and (iii) at least one component of $V \cap A$ (and also $V \cap A'$) is a fiber of $V$, where $V$ is considered to be a holomorphic disk bundle over $C$. We remark the following observation: Let $N$ be a holomorphic line bundle over an open Riemann surface $S$ and let $N_0$ be obtained by deleting a tubular neighborhood of the zero section. Then $N_0$ is biholomorphic to $S \times B$ where $B$ is a complement of an open disk in $\mathbb{C}$ because any holomorphic line bundle over an open Riemann surface is trivial. Since both $S$ and $B$ are Stein, so is $N$. In particular, the boundary $\partial N_0$ is pseudoconvex. In our case, the set $Z_0 = Z \setminus (A \cup V)$ (or $Z_0' = Z \setminus (A' \cup V$) is pseudoconvex in the affine variety $Z \setminus A$ (respectively, $Z \setminus A'$) by the above observation. Therefore $Z_0$ and $Z_0'$ are Stein. And $Z_0 \cap Z_0'$ is also stein. Since $A \cap A' \subset V_p$, we have $Z \setminus V = Z_0 \cup Z_0'$. By the Mayer-Vietoris sequence, we have the exact sequence \[H^1(Z_0 \cap Z_0', \sheaf{T_Z}) \to H^2(Z_0 \cup Z_0', \sheaf{T_Z}) \to H^2(Z_0, \sheaf{T_Z}) \oplus H^2(Z_0', \sheaf{T_Z}).\] Therefore, according to the vanishing of higher cohomology of a Stein space, we have $H^2(Z \setminus V, \sheaf{T_{Z \setminus V}})=0$. \end{proof} \begin{theorem}\label{theorem:tubular-neighborhood} There is a holomorphic tubular neighborhood $U_i$ for each $D_i=\sum_{j=1}^{m_i} C_{ij}$ in $Z$ such that $H^2(Y, \sheaf{T_{Y_i}})=0$ where $Y_i = Z \setminus U_i$. Furthermore, setting $Y=Z \setminus (U_1 \cup \dotsb \cup U_k)$, we have $H^2(Y, \sheaf{T_Y})=0$. \end{theorem} \begin{proof} By Lemma~\ref{lemma:tubular-neighborhood} there is a holomorphic neighborhood $W_{ij}$ of $C_{ij}$ such that $H^2(Y_{ij}, \sheaf{T_{Y_{ij}}})=0$ for each $j=1,\dotsc,m_i$ where $Y_{ij} = Z \setminus W_{ij}$. Set $U_i = \cup_{j=1}^{m_i} W_{ij}$ and $Y_i = Z-U_i = \cap_{j=1}^{m_i} Y_{ij}$. By the Mayer-Vietoris sequence and the induction on $m_i$ it is easy to show that $H^2(Y_i, \sheaf{T_{Y_i}})=0$. Since $Y=\cup_{i=1}^{k}{Y_i}$ it follows that $H^2(Y, \sheaf{T_Y})=0$. \end{proof} \begin{remark} Since $Y = Z \setminus U$ is not compact, the vanishing does not imply that a infinitesimal deformation on $Z \setminus U$ is automatically integrable. \end{remark} \section{Infinitesimal deformations} \label{section:infinitesimal-deformations} Let $H$ be an effective ample divisor on $Z$. We construct infinitesimal deformations of the open surface $Y=Z \setminus U$ derived from $H$; Proposition~\ref{theorem:infinitesimal_deformation}. \begin{lemma}\label{lemma:E-and-L} There exists an effective divisor $L = x_0 H + E$ where \[E=\sum_{j=1}^{m_1}{x_{1j} C_{1j}} + \dotsb + \sum_{j=1}^{m_k}{x_{kj} C_{kj}}\] satisfying the following properties: \begin{enumerate}[\rm (i)] \item $x_{ij} \ge 1$ for all $i$, $j$, \item $LC_{ij}=0$ for all $i$, $j$, \item $L$ descends to an ample divisor on $X$. \end{enumerate} Furthermore a tuple of coefficients $(x_0, x_{11}, \dotsc, x_{km_k})$ is uniquely determined by $H$ up to a constant multiple. \end{lemma} \begin{proof} For simplicity we first consider the case $k=1$. We denote $C_{1j}$ by $C_j$, $b_{1j}$ by $b_j$, and $m_1$ by $m$. Set $a_j = HC_j > 0$ for $j=1, \dotsc, m$. If $m=1$, set $L = b_1H + a_1C_1$. If $m=2$, set $L = (b_1b_2-1)H + (a_1b_2+a_2)C_1 + (a_1+a_2b_1)C_2$. For $m \ge 3$, the condition (i) would be interpreted as the system of linear equations \begin{equation}\label{equation:H} BX=0 \end{equation} where \begin{equation*} B=\begin{pmatrix} -b_1 & 1 & & & & a_1 \\ 1 & -b_2 & 1 & & & a_2 \\ & & \ddots & & & \vdots \\ & & 1 & -b_{m-1} & 1 & a_{m-1} \\ & & & 1 & -b_m & a_{m} \end{pmatrix}, \quad X=\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \\ x_0 \end{pmatrix}. \end{equation*} Since $B$ is of full rank, the solution of \eqref{equation:H} is unique up to constant multiple. Since $a_i, b_i > 0$, it is easy to show that there are positive integers $x_0, \dotsc, x_m$ satisfying \eqref{equation:H}. Therefore there is an effective divisor $L = x_0 H + x_1 C_1 + \dotsb + x_m C_m$ such that $LC_i=0$ for all $i=1,\dotsc,m$. We now show that $L$ descends to an ample divisor on $X$. Let $E = x_1 C_1 + \dotsb + x_m C_m$. Consider the exact sequence \begin{equation*} H^0(Z, nx_0H) \xrightarrow{\phi} H^0(Z, nL) \xrightarrow{\psi} H^0(nE, nL|_{nE}) \to H^1(Z, nx_0H). \end{equation*} Since $H$ is ample, $H^1(Z, nx_0H)=0$ for $n \gg 0$; hence $\psi$ is surjective for $n \gg 0$. On the other hand, setting $V=f(U)$ and $p=f(D)$, $(U, D) \to (V, p)$ is the minimal resolution of the rational singularity $p$ of $X$. We may assume that $V \subset X$ is Stein and contractible by shrinking $U$ if necessary. Then $H^1(U, \sheaf{O_U})=H^2(U, \sheaf{O_U})=0$ and $D$ is a deformation retract of $U$, in particular the exponential sequence on $U$ gives an isomorphism $\picard(U) = H^2(U, \mathbb{Z})$. Therefore a line bundle $\sheaf{L}$ on $U$ is trivial if and only if $\sheaf{L} \cdot C_i=0$ for every irreducible component $C_i$ of $D$. Therefore the restriction of $L$ on $U$ is trivial and $nL$ is also trivial on $U$. In particular, the restriction map $H^0(nE, nL|_{nE}) \to H^0(D, nL|_D)$ is surjective. Since $nL|_D$ is trivial, we can choose $s_0 \in H^0(Z, nL)$ such that $s_0|_D$ is a nonzero constant. On the other hand, since $H$ is ample, we may choose $\{\widetilde{s}_1, \dotsc, \widetilde{s}_l\} \in H^0(Z, nx_0H)$ so that they gives an embedding $Z \hookrightarrow \mathbb{P}^{l-1}$ for $n \gg 0$. Set $s_i = \phi(s_i)$ ($i=1,\dotsc,l$). Since $s_i|_D=0$ for $i=1,\dotsc,l$, the map $\pi : Z \to \mathbb{P}^l$ defined by $\pi(x) = (s_0(x),s_1(x),\dotsc,s_l(x))$ contracts $D$ and gives an embedding of $Z \setminus D$; hence, the map $\pi$ is an embedding of $X$, which implies that $L$ descends to an ample divisor on $X$. In case of $k \ge 2$, one may consider the following equation instead of \eqref{equation:H}: Setting $a_{ij} = HC_{ij} > 0$, \begin{equation*} \begin{pmatrix} B_1 & & & & A_1\\ & B_2 & & & A_2\\ & & \ddots & & \vdots \\ & & & B_k & A_k \end{pmatrix} X = 0 \end{equation*} where \begin{equation*} B_i=\begin{pmatrix} -b_{i1} & 1 & & & \\ 1 & -b_{i2} & 1 & & \\ & & \ddots & & \\ & & 1 & -b_{i(m_i-1)} & 1 \\ & & & 1 & -b_{m_i} \end{pmatrix}, A_i=\begin{pmatrix} a_{i1} \\ a_{i2} \\ \vdots \\ a_{i(m_i-1)} \\ a_{im_i} \end{pmatrix}, X=\begin{pmatrix} x_{11} \\ x_{12} \\ \vdots \\ x_{km_k} \\ x_0 \end{pmatrix}. \end{equation*} Then the proof is identical to the proof of the case $k=1$. \end{proof} Our infinitesimal deformations of $Y$ are obtained by restricting to $Y$ meromorphic $1$-forms on $Z$ with poles along $C_{ij}$. In order to construct such infinitesimal deformations we use a similar strategy in Takamura~\cite{Takamura-1996}. \begin{theorem}\label{theorem:infinitesimal_deformation} Let $E$ be the effective divisor in Lemma~\ref{lemma:E-and-L} corresponding to the ample divisor $H$. Then the restriction map $H^1(Z, \sheaf{T_Z}(nE)) \to H^1(Y, \sheaf{T_Y})$ is injective for any $n \gg 1$. \end{theorem} \begin{proof} The divisor $L = x_0H + E$ descends to an ample divisor on $X$ by Lemma~\ref{lemma:E-and-L}. Let $\overline{L} = L|X$. Since $\overline{L}$ is ample, we may choose an irreducible smooth curve $C \in \linsys{n\overline{L}}$ for $n \gg 0$ such that $C$ does not pass through the singular point of $X$. We denote again by $C$ the inverse image of $C$ under the contraction map $f$. Then $C$ is linearly equivalent to $nL$ and $C \cap \supp(E) = \varnothing$. Consider the exact sequence \begin{equation*} H^1(Z, \sheaf{T_Z}(nE-C)) \to H^1(Z, \sheaf{T_Z}(nE)) \xrightarrow{\alpha} H^1(C, \sheaf{T_Z}(nE) \otimes \sheaf{O_{C}}). \end{equation*} Since $C-nE=nH$ is ample, we have \begin{equation*} \begin{split} H^1(Z, \sheaf{T_Z}(nE-C)) &= H^1(Z, \Omega_Z(K_Z + C - nE)) \\ &=H^1(Z, \Omega_Z(K_Z + nH)) = 0. \end{split} \end{equation*} Therefore the map $\alpha$ is injective. On the other hand, since $C \cap \supp(E) = \varnothing$, the map $\alpha$ factors through $H^1(Y, \sheaf{T_Y})$, i.e., \begin{equation*} \alpha: H^1(Z, \sheaf{T_Z}(nE)) \to H^1(Y, \sheaf{T_Y}) \to H^1(C, \sheaf{T_Z}(nE) \otimes \sheaf{O_{C}}). \end{equation*} Therefore the restriction map $H^1(Z, \sheaf{T_Z}(nE)) \to H^1(Y, \sheaf{T_Y})$ is injective for any $n \gg 1$. \end{proof} \begin{remark}[Integrability] Since $Y$ is not compact the vanishing of $H^2(Y, \sheaf{T_Y})$ does not necessarily imply the integrability of the infinitesimal deformations induced by $H^1(Z, \sheaf{T_Z}(nE))$. This question needs further investigation. \end{remark} The infinitesimal deformation $H^1(Z, \sheaf{T_Z}(nE))$ is nonempty; Proposition~\ref{proposition:grows_quadrically}. We use a similar method in Takamura~\cite{Takamura-2000}. \begin{lemma}\label{lemma:h^1(T_Z(nE)|nE)} The dimension $h^1(nE, \sheaf{T_Z}(nE) \otimes \sheaf{O_{nE}})$ grows at least quadratically in $n \gg 0$. \end{lemma} \begin{proof} Since the supports of the divisors $D_i$ are disjoint we may assume that $k=1$. For simplicity we denote $C_{1j}$ by $C_j$, $b_{1j}$ by $b_j$, $m_1$ by $m$, and $x_{1j}$ by $x_j$. We divide the proof into two cases. Case 1: $m=1$. Let $C=C_1$, $b=b_1$, and $x=x_1$ for brevity. We first claim that $h^0(\sheaf{T_Z}(nE-lC)|_C)=0$ and $h^1(\sheaf{T_Z}(nE-lC)|_C)=2nx-2lb+b$ for $0 \le l \le nx$. Consider the exact sequence \[0 \to \sheaf{T_C}(nE-lC) \to \sheaf{T_Z}(nE-lC)|_C \to \sheaf{N_{C,Z}}(nE-lC) \to 0\] induced from the tangent-normal bundle sequence. Since \begin{align*} \deg{\sheaf{T_C}(nE-lC)} &= -2-nxb+lb < 0,\\ \deg{\sheaf{N_{C,Z}}(nE-lC)} &= -b-nxb+lb < 0 \end{align*} for $0 \le l \le nx$, it follows from the above exact sequence and the Riemann-Roch theorem that $h^0(\sheaf{T_Z}(nE-lC)|_C)=0$ and $h^1(\sheaf{T_Z}(nE-lC)|_C)=2nx-2lb+b$. Consider the decomposition sequence \[0 \to \sheaf{T_Z}(nE-(l+1)C)|_{(nx-l-1)C} \to \sheaf{T_Z}(nE-lC)|_{(nx-l)C} \to \sheaf{T_Z}(nE-lC)|_C \to 0.\] for $0 \le l \le nx_1-2$. Since $h^0(\sheaf{T_Z}(nE-lC)|_C)=0$, we have \[h^1(\sheaf{T_Z}(nE-lC)|_{(nx-l)C}) = h^1(\sheaf{T_Z}(nE-lC)|_C) + h^1(\sheaf{T_Z}(nE-(l+1)C)|_{(nx-l-1)C}).\] Therefore \begin{equation*} \begin{split} h^1(\sheaf{T_Z}(nE)|_{nxC}) &= h^1(\sheaf{T_Z}(nE)|_C) + h^1(\sheaf{T_Z}(nE-C)|_{(nx-1)C}) \\ &=\cdots\\ &= \sum_{l=0}^{nx-1} h^1(\sheaf{T_Z}(nE-lC)|_C) \\ &= \sum_{l=0}^{nx-1} (2nx-2lb+b) \\ &\sim O(n^2). \end{split} \end{equation*} Case 2: $m \ge 2$. The proof is similar to the case $m=1$. For the convenience of the reader, we briefly sketch the proof. We claim that $h^0(\sheaf{T_Z}(nE-lC)|_C)=0$ and $h^1(\sheaf{T_Z}(nE-lC)|_C)=2na_1x_0-2lb_1+b_1$ for $0 \le l \le \left[ \frac{na_1x_0+2}{b_1}\right] -1$. Set $\alpha = \left[ \frac{na_1x_0+2}{b_1}\right]$. Consider the exact sequence \[0 \to \sheaf{T_C}(nE-lC) \to \sheaf{T_Z}(nE-lC)|_C \to \sheaf{N_{C,Z}}(nE-lC) \to 0.\] Since $-x_1b_1 + x_2 = -a_1x_0$ by \eqref{equation:H}, we have \begin{align*} \deg{\sheaf{T_C}(nE-lC)} &= -2 - nx_1b_1 + nx_2 + lb_1 = -2 - na_1x_0 + lb_1 < 0,\\ \deg{\sheaf{N_{C,Z}}(nE-lC)} &= -b_1 - nx_1b_1 + nx_2 + lb_1 = -b_1 - na_1x_0 + lb_1 < 0 \end{align*} for $0 \le l \le \alpha-2$. Therefore $h^0(\sheaf{T_Z}(nE-lC)|_C)=0$. By using $\sheaf{T_Z}|_C = \sheaf{T_C} \oplus \sheaf{N_{C,Z}}$ and the Riemann-Roch, we have $h^1(\sheaf{T_Z}(nE-lC)|_C)=2na_1x_0-2lb_1+b_1$. Hence the claim follows. From the decomposition sequence \[0 \to \sheaf{T_Z}(nE-(l+1)C)|_{(nx-l-1)C} \to \sheaf{T_Z}(nE-lC)|_{(nx-l)C} \to \sheaf{T_Z}(nE-lC)|_C \to 0\] and the above claim, we have \[h^1(\sheaf{T_Z}(nE-lC)|_{(nx-l)C}) = h^1(\sheaf{T_Z}(nE-lC)|_C) + h^1(\sheaf{T_Z}(nE-(l+1)C)|_{(nx-l-1)C})\] for $0 \le l \le \alpha -1$. Therefore \begin{equation*} \begin{split} h^1(\sheaf{T_Z}(nE)|_{nxC}) &= h^1(\sheaf{T_Z}(nE)|_C) + h^1(\sheaf{T_Z}(nE-C)|_{(nx-1)C})\\ &=\cdots\\ &= \sum_{l=0}^{\alpha-1} h^1(\sheaf{T_Z}(nE-lC)|_C) + h^0(\sheaf{T_Z}(nE-\alpha C_1)|_{(nx_1-\alpha)C_1})\\ &= \sum_{l=0}^{\alpha-1}(2na_1x_0-2lb_1+b_1) + h^0(\sheaf{T_Z}(nE-\alpha C_1)|_{(nx_1-\alpha)C_1})\\ &\sim O(n^2) \end{split} \end{equation*} if $n \gg 0$ because $\alpha \sim O(n)$ for $n \gg 0$. \end{proof} \begin{proposition}\label{proposition:grows_quadrically} The dimension $h^1(Z, \sheaf{T_Z}(nE))$ grows at least quadratically in $n \gg 0$. \end{proposition} \begin{proof} Consider the exact sequence \begin{equation*} \cdots \to H^1(\sheaf{T_Z}(nE)) \xrightarrow{\alpha} H^1(nE, \sheaf{T_Z}(nE) \otimes \sheaf{O_{nE}}) \xrightarrow{\beta} H^2(Z, \sheaf{T_Z}) \to \cdots \end{equation*} induced from the exact sequence \begin{equation*} 0 \to \sheaf{T_Z} \to \sheaf{T_Z}(nE) \to \sheaf{T_Z}(nE) \otimes \sheaf{O_{nE}} \to 0. \end{equation*} Then we have \begin{equation*} h^1(nE, \sheaf{T_Z}(nE) \otimes \sheaf{O_{nE}}) = \dim \image{\alpha} + \dim \image{\beta}. \end{equation*} By Lemma~\ref{lemma:h^1(T_Z(nE)|nE)}, the left hand side of the above equation grows quadratically for $n \gg 3$, but $\dim \image{\beta}$ bounds for $h^2(Z, \sheaf{T_Z})$. Hence $\dim \image{\alpha}$ must grows quadratically for $n \gg 0$, which implies that $h^1(Z, \sheaf{T_Z}(nE))$ grows at least quadratically in $n \gg 0$. \end{proof} The infinitesimal deformation spaces $H^1(Z, \sheaf{T_Z}(nE))$ ($n \gg 0$) form a stratification of $H^1(Y, \sheaf{T_Y})$; Proposition~\ref{proposition:injective}. \begin{lemma}\label{lemma:H^0(T_Z(nE-lC_1)|C_1)=0} For each pair $(i,j)$, we have $H^0(x_{ij}C_{ij}, \sheaf{T_Z}(nE) \otimes \sheaf{O_{x_{ij}C_{ij}}}) = 0$ for every $n \gg 0$. \end{lemma} \begin{proof} We only prove the lemma in case $i=1$, $j=1$, and $m_1 \ge 2$. The proof of the other cases are similar. For simplicity we denote $C_{1j}$ by $C_j$, $b_{1j}$ by $b_j$, $m_1$ by $m$, and $x_{1j}$ by $x_j$. According to the claim in the proof of Lemma~\ref{lemma:h^1(T_Z(nE)|nE)}, we have $H^0(C_1, \sheaf{T_Z}(nE-lC_1)|_{C_1})=0$ for $0 \le l \le \left[ \frac{na_1x_0+2}{b_1}\right] -1$. Note that $x_1 \le \left[ \frac{na_1x_0+2}{b_1}\right] + 1$ for $n \gg 0$. Therefore \begin{equation}\label{equation:H^0(T_Z(nE-lC_1)|C_1)=0} H^0(C_1, \sheaf{T_Z}(nE-lC_1)|_{C_1})=0 \end{equation} for $0 \le l \le x_1-2$. Consider the decomposition sequence \begin{multline*} 0 \to \sheaf{T_Z}(nE-(l+1)C_1)|_{(x_1-l-1)C_1} \\ \to \sheaf{T_Z}(nE-lC_1)|_{(x_1-l)C_1} \\ \to \sheaf{T_Z}(nE-lC_1)|_{C_1} \to 0. \end{multline*} By \eqref{equation:H^0(T_Z(nE-lC_1)|C_1)=0} and the induction on $l$, it follows that $H^0(\sheaf{T_Z}(nE)|_{x_1C_1})=0$. \end{proof} From the exact sequence \begin{equation*} 0 \to \sheaf{T_Z}(nE) \to \sheaf{T_Z}((n+1)E) \to \sheaf{T_Z}((n+1)E) \otimes \sheaf{O_E} \to 0, \end{equation*} we have an induced map $H^1(Z, \sheaf{T_Z}(nE)) \to H^1(Z, \sheaf{T_Z}((n+1)E))$. \begin{proposition}\label{proposition:injective} The map $H^1(Z, \sheaf{T_Z}(nE)) \to H^1(Z, \sheaf{T_Z}((n+1)E))$ is injective for every $n \gg 0$. \end{proposition} \begin{proof} Let $E_i = \sum_{j=1}^{m_j} x_{ij}C_{ij}$. Since the supports of $D_i$'s are disjoint we have $H^1(Z, \sheaf{T_Z}(nE)) = \oplus_{i=1}^{k} H^1(Z, \sheaf{T_Z}(nE_i))$. Therefore it is enough to show that $H^1(Z, \sheaf{T_Z}(nE_i)) \to H^1(Z, \sheaf{T_Z}((n+1)E_i))$ is injective for every $n \gg 0$. We prove only the case of $i=1$ and $m_i \ge 2$. For simplicity we denote $C_{1j}$ by $C_j$, $b_{1j}$ by $b_j$, $m_1$ by $m$, and $x_{1j}$ by $x_j$. Consider the exact sequence \begin{equation*} 0 \to \sheaf{T_Z}(nE-x_1C_1)|_{E-x_1C_1} \to \sheaf{T_Z}(nE)|_E \to \sheaf{T_Z}(nE)|_{x_1C_1} \to 0. \end{equation*} By Lemma~\ref{lemma:H^0(T_Z(nE-lC_1)|C_1)=0}, we have $H^0(\sheaf{T_Z}(nE)|_{x_1C_1})=0$; hence, \begin{equation}\label{equation:H^0(T_Z(nE)|E)} H^0(\sheaf{T_Z}(nE)|_E) = H^0(\sheaf{T_Z}(nE-x_1C_1)|_{E-x_1C_1}) \subset H^0(\sheaf{T_Z}(nE)|_{E-x_1C_1}). \end{equation} Consider the exact sequence \begin{equation*} 0 \to \sheaf{T_Z}(nE-x_2C_2)|_{E-x_1C_1-x_2C_2} \to \sheaf{T_Z}(nE)|_{E-x_1C_1} \to \sheaf{T_Z}(nE)|_{x_2C_2} \to 0. \end{equation*} By Lemma~\ref{lemma:H^0(T_Z(nE-lC_1)|C_1)=0}, we have $H^0(\sheaf{T_Z}(nE)|_{x_2C_2})=0$. Therefore it follows from \eqref{equation:H^0(T_Z(nE)|E)} that \begin{equation*} \begin{split} H^0(\sheaf{T_Z}(nE)|_E) &\subset H^0(\sheaf{T_Z}(nE)|_{E-x_1C_1}) = H^0(\sheaf{T_Z}(nE-x_2C_2)|_{E-x_1C_1-x_2C_2}) \\ & \subset H^0(\sheaf{T_Z}(nE)|_{E-x_1C_1-x_2C_2}). \end{split} \end{equation*} Repeating this process, then we have \begin{equation*} H^0(\sheaf{T_Z}(nE)|_E) \subset H^0(\sheaf{T_Z}(nE)|_{E-x_1C_1}) \subset \dotsb \subset H^0(\sheaf{T_Z}(nE)|_{x_mC_m}) = 0. \qedhere \end{equation*} \end{proof} \section{Properties of the infinitesimal deformations} \label{section:nontrivial} A small deformation of the domain $Y$ in $Z$ changes the complex structure only near the boundary of $Y$ while keeping unchanged the complex structure away from $\partial Y$. In this section we will show that the deformations induced from $H^1(Z, \sheaf{T_Z}(nE))$ are not small deformations. In fact, any non-zero $\alpha \in H^1(Z, \sheaf{T_Z}(nE))$ induces a non-trivial infinitesimal deformation of a certain curve away from $\partial Y$; Theorem~\ref{theorem:nontrivial}. Since the divisor $L$ in Lemma~\ref{lemma:E-and-L} descends to an ample divisor on $X$, we may choose an irreducible smooth curve $C \in \linsys{nL}$ ($n \gg 0$) on $Z$ such that $C \cap U \neq \varnothing$. \begin{theorem}\label{theorem:nontrivial} Any infinitesimal deformation induced from a non-zero element $H^1(Z, \sheaf{T_Z}(nE))$ preserves $C \cap Y$ but changes infinitesimally the complex structure of $C \cap Y$. \end{theorem} \begin{proof} We use a similar method in Takamura~\cite{Takamura-1996}. Let $C'=C \cap Y$. Consider the exact sequence \begin{equation*} H^1(Y, \sheaf{T_Y}(-\log{C'})) \xrightarrow{\psi} H^1(Y, \sheaf{T_Y}) \to H^1(C', \sheaf{N_{C',Y}}) \end{equation*} induced from \begin{equation*} 0 \to \sheaf{T_Y}(-\log{C'}) \to \sheaf{T_Y} \to \sheaf{N_{C',Y}} \to 0. \end{equation*} Since $C \cap U \neq \varnothing$, $C'$ is stein; hence, $H^1(C', \sheaf{N_{C',Y}})=0$ and the map $\psi$ is surjective. Therefore any infinitesimal deformation of $Y$ induced from a non-zero element in $H^1(Z, \sheaf{T_Z}(nE))$ preserves $C'$. Let $\alpha \in H^1(Z, \sheaf{T_Z}(nE))$ be a non-zero element. We denote again by $\alpha$ the image of $\alpha$ of the injective map $H^1(Z, \sheaf{T_Z}(nE)) \to H^1(Y, \sheaf{T_Y})$. Take any $\widetilde{\alpha} \in H^1(Y, \sheaf{T_Y}(-\log{C'}))$ such that $\psi(\widetilde{\alpha}) = \alpha$. Consider the commutative diagram \begin{equation*} \xymatrix{ H^1(Y, \sheaf{T_Y}(-\log{C'})) \ar[r]^{\psi} \ar[d]_{\phi} & H^1(Y, \sheaf{T_Y}) \ar[d] \\ H^1(C', \sheaf{T_{C'}}) \ar[r] & H^1(C', \sheaf{T_Y}|_{C'}) } \end{equation*} In order to prove that $\widetilde{\alpha}$ changes infinitesimally the complex structure of $C'$, it is enough to show that the image of $\widetilde{\alpha}$ by $\phi : H^1(Y, \sheaf{T_Y}(-\log{C'})) \to H^1(C', \sheaf{T_{C'}})$ is non-zero. In the proof of Proposition~\ref{theorem:infinitesimal_deformation}, we showed that the restriction $\widetilde{\alpha}|_C' \in H^1(C', \sheaf{T_Z}|_{C'})$ is non-zero because \begin{equation*} H^1(Z, \sheaf{T_Z}(nE)) \to H^1(C', \sheaf{T_Z}(nE)|_{C'}) \cong H^1(Z, \sheaf{T_Z}|_{C'}) \end{equation*} is injective for $n \gg 0$. Since the above diagram commutes, $\phi(\widetilde{\alpha})$ is also non-zero. \end{proof} \subsection*{Acknowledgements} The author would like to thank Heesang Park for valuable discussions during the work. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2] \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Effros and Choi gave in~\cite{ChoiEffros} an abstract characterization of the self-adjoint subspaces $S$ in $C^*$-algebras with hierarchy of cones of positive elements in $M_n(S)$. In Section 2 of the present paper we are concerned with the same question for $*$-subalgebras of $C\sp*$-algebras. More precisely, let $\mathcal{A}$ be an associative $*$-algebra with unit. In Theorem~\ref{operatorstar} we present a characterization of the collections of cones $C_n\subseteq M_n(\mathcal{A})$ for which there exist faithful $*$-representation $\pi$ of $\mathcal{A}$ on a Hilbert space $H$ such that $C_n$ coincides with the cone of positive operators contained in $\pi^{(n)}(M_n(\mathcal{A}))$. Here $\pi^{(n)}( (x_{i,j}) ) = ( \pi(x_{i,j}) )$ for every matrix $(x_{i,j})\in M_n(\mathcal{A})$. Note that we do not assume that $\mathcal{A}$ has any faithful $*$-representation. This follows from the requirements imposed on the cones. In terms close to Effros and Choi we give an abstract characterizations of matrix ordered (not necessary closed) operator $*$-algebras up to complete order $*$-isomorphism. Based on this characterization we study the question when an operator algebra is similar to a $C^*$-algebra. Let $\mathcal{B}$ be a unital (closed) operator algebra in $B(H)$. In~\cite{merdy} C. Le Merdy presented necessary and sufficient conditions for $\mathcal{B}$ to be self-adjoint. These conditions involve all completely isometric representations of $\mathcal{B}$ on Hilbert spaces. Our characterization is different in the following respect. If $S$ is a bounded invertible operator in $B(H)$ and $\mathcal{A}$ is a $C\sp*$-algebra in $B(H)$ then the operator algebra $S^{-1} \mathcal{A} S$ is not necessarily self-adjoint but only isomorphic to a $C^*$-algebra via completely bounded isomorphism with completely bounded inverse. By Haagerup's theorem every completely bounded isomorphism $\pi$ from a $C\sp*$-algebra $\mathcal{A}$ to an operator algebra $\mathcal{B}$ has the form $\pi(a)=S^{-1}\rho(a)S$, $a\in \mathcal{A}$, for some $*$-isomorphism $\rho:\mathcal{A}\rightarrow B(H)$ and invertible $S\in B(H)$. Thus the question whether an operator algebra $\mathcal{B}$ is completely boundedly isomorphic to a $C^*$-algebra via isomorphism which has completely bounded inverse, is equivalent to the question if there is bounded invertible operator $S$ such that $S \mathcal{B} S^{-1}$ is a $C^*$-algebra. We will present a criterion for an operator algebra $\mathcal{B}$ to be completely boundedly isomorphic to a $C\sp*$-algebra in terms of the existence of a collection of cones $C_n\in M_n(\mathcal{B})$ satisfying certain axioms (see def.~\ref{consdef}). The axioms are derived from the properties of the cones of positive elements of a $C^*$-algebra preserved under completely bounded isomorphisms. The main results are contained in section 2. We define a $*$-admissible sequence of cones in an operator algebra and present a criterion in Theorem~\ref{main} for an operator algebra to be completely boundedly isomorphic to a $C^*$-algebra. In the last section we consider the operator algebras and collections of cones associated with Kadison similarity problem. \section{Operator realizations of matrix-ordered $*$-algebras. } The aim of this section is to give necessary and sufficient conditions on a sequences of cones $C_n\subseteq M_n(\mathcal{A})_{sa}$ for a unital $*$-algebra $\mathcal{A}$ such that $C_n$ coincides with the cone $M_n(\mathcal{A})\cap M_n(B(H))^+$ for some realization of $\mathcal{A}$ as a $*$-subalgebra of $B(H)$, where $M_n(B(H))^+$ denotes the set of positive operators acting on $H^n = H\oplus \ldots \oplus H$. In \cite{Popovych} it was proved that a $*$-algebra $\mathcal{A}$ with unit $e$ is a $*$-subalgebra of $B(H)$ if and only if there is an algebraically admissible cone on $\mathcal{A}$ such that $e$ is an Archimedean order unit. Applying this result to some inductive limit of $M_{2^n}(\mathcal{A})$ we obtain the desired characterization in Theorem \ref{operatorstar}. First we give necessary definitions and fix notations. Let $\mathcal{A}_{sa}$ denote the set of self-adjoint elements in $\mathcal{A}$. A subset $C\subset \mathcal{A}_{sa}$ containing unit $e$ of $\mathcal{A}$ is \textit{algebraically admissible} cone (see \cite{powers}) provided that \begin{enumerate}[(i)] \item $C$ is a cone in $\mathcal{A}_{sa}$, i.e. $\lambda x+\beta y\in C$ for all $x$, $y\in C$ and $\lambda\geq 0$, $\beta\geq 0$, $\lambda,\beta\in\mathbb{R}$; \item $C \cap (-C) =\{0\}$; \item $x C x^* \subseteq C$ for every $x\in \mathcal{A}$; \end{enumerate} We call $e\in \mathcal{A}_{sa}$ an \textit{order unit} if for every $x\in \mathcal{A}_{sa}$ there exists $r>0$ such that $re+x\in C$. An order unit $e$ is \textit{Archimedean} if $re+x\in C$ for all $r>0$ implies that $x\in C$ In what follows we will need the following. \begin{theorem}\label{onecone} Let $\mathcal{A}$ be a $*$-algebra with unit $e$ and $C\subseteq \mathcal{A}_{sa}$ be a cone containing $e$. If $x C x^* \subseteq C$ for every $x\in \mathcal{A}$ and $e$ is an Archimedean order unit then there is a unital $*$-representation $\pi: \mathcal{A} \to B(H)$ such that $\pi(C) = \pi(\mathcal{A}_{sa})\cap B(H)^+$. Moreover \begin{enumerate} \item\label{onecone1} $\norm{\pi(x)} = \inf \{r>0 : r^2 \pm x^*x\in C\}$. \item\label{onecone2} $\ker \pi =\{ x : x^*x \in C \cap (-C)\}$. \item\label{onecone3} If $C \cap (-C) = \{0\}$ then $\ker \pi =\{ 0\}$, $\norm{\pi(a)} = \inf \{r>0 : r \pm a\in C\}$ for all $a=a^*\in \mathcal{A}$ and $\pi(C) = \pi(\mathcal{A})\cap B(H)^+$ \end{enumerate} \end{theorem} \begin{proof} Following the same lines as in \cite{Popovych} one obtains that the function $\| \cdot\|: \mathcal{A}_{sa}\to \mathbb{R}_+$ defined as $$ \| a \| = \inf \{ r> 0: re \pm a \in C \} $$ is a seminorm on $\mathbb{R}$-space $\mathcal{A}_{sa}$ and $|x| = \sqrt{\|x^*x\|}$ for $x\in \mathcal{A}$ defines a pre-$C\sp*$-norm on $\mathcal{A}$. If $N$ denote the null-space of $|\cdot|$ then the completion $\mathcal{B} = \overline{\mathcal{A}/ N}$ with respect to this norm is a $C\sp*$-algebra and canonical epimorphism $\pi: \mathcal{A} \to \mathcal{A}/ N$ extends to a unital $*$-homomorphism $\pi: \mathcal{A}\to \mathcal{B}$. We can assume without loss of generality that $\mathcal{B}$ is a concrete $C\sp*$-algebra in $B(H)$ for some Hilbert space $H$. Thus $\pi:\mathcal{A} \to B(H)$ can be regarded as a unital $*$-representation. Clearly, $$\|\pi(x)\|= | x |\text{ for all } x\in \mathcal{A}.$$ This implies~\ref{onecone1}. To show \ref{onecone2} take $x\in \ker\pi$ then $\|\pi(x)\|=0$ and $re\pm x^*x\in C$ for all $r>0$. Since $e$ is an Archimedean unit we have $x^*x\in C\cap(-C)$. Conversely if $x^*x\in C\cap(-C)$ then $re\pm x^*x\in C$, for all $r>0$, hence $\|\pi(x)\|=0$ and \ref{onecone2} holds. Let us prove that $\pi(C) = \pi(\mathcal{A}_{sa})\cap B(H)^+$. Let $x\in \mathcal{A}_{sa}$ and $\pi(x)\geq 0$. Then there exists a constant $\lambda>0$ such that $\|\lambda I_{H}-\pi(x)\|\leq \lambda$, hence $|\lambda e-x|\leq \lambda$. Since $\|a\|\leq |a|$ for all self-adjoint $a\in \mathcal{A}$, see Lemma 3.3 of \cite{Popovych}, we have $\|\lambda e-x\|\leq \lambda$. Thus given $\varepsilon>0$ we have $(\lambda+\varepsilon)e\pm (\lambda e-x)\in C$. Hence $\varepsilon e+x\in {C}$. Since $e$ is Archimedean $x\in {C}$. Conversely, let $x\in {C}$. To show that $\pi(x)\geq 0$ it is sufficient to find $\lambda>0$ such that $\|\lambda I_{H}-\pi(x)\|\leq \lambda$. Since $\|\lambda I_{H}-\pi(x)\|=|\lambda e-x|$ we will prove that $|\lambda e-x|\leq \lambda$ for some $\lambda>0$. From the definition of norm $|\cdot|$ we have the following equivalences: \begin{eqnarray}|\lambda e-x|\leq \lambda &\Leftrightarrow& (\lambda+\varepsilon)^2e-(\lambda e-x)^2\in C\text{ for all }\varepsilon>0\\\label{ineq1} &\Leftrightarrow& \varepsilon_1 e+x(2\lambda e-x)\geq 0,\text{ for all }\varepsilon_1>0. \end{eqnarray} By condition (iii) in the definition of algebraically admissible cone we have that $xyx\in C$ and $yxy\in C$ for every $x,y\in C$. If $xy=yx$ then $xy(x+y)\in C$. Since $e$ is an order unit we can choose $r>0$ such that $r e-x\in C$. Put $y=r e-x$ to obtain $rx(r e -x)\in C$. Hence (\ref{ineq1}) is satisfied with $\lambda=\frac{r}{2}$. Thus $\|\lambda e-\pi(x)\|\leq \lambda$ and $\pi(x)\geq 0$, which proves $\pi(C) = \pi(\mathcal{A}_{sa})\cap B(H)^+$. In particular, for $a=a^*$ we have \begin{gather}\label{form} \norm{\pi(a)} = \inf \{r>0 : r I_H \pm \pi(a) \in \pi(C)\}. \end{gather} We now in a position to prove \ref{onecone3}. Suppose that $C\cap (-C) = 0$. Then $\ker \pi$ is a $*$-ideal and $\ker \pi \not= 0$ implies that there exists a self-adjoint $0\not= a\in \ker\pi$, i.e. $\abs{a} = 0$. Inequality $\| a \| \le |a|$ implies $r e \pm a \in C$ for all $r>0$. Since $e$ is Archimedean, $\pm a \in C$, i.e. $a\in C\cap (-C)$ and, consequently, $a = 0$. Since $\ker \pi = 0$ the inclusion $r I_H \pm \pi(a) \in \pi(C)$ is equivalent to $r e \pm a \in C$, and by~(\ref{form}), $\norm{\pi(a)} = \inf \{r>0 : r e \pm a \in C\}$. Moreover if $\pi(a) = \pi(a)^*$ then $a = a^*$. Thus we have $\pi(C) = \pi(A)\cap B(H)^+$. \end{proof} We say that a $*$-algebra $\mathcal{A}$ with unit $e$ is a \textit{matrix ordered} if the following conditions hold: \begin{enumerate}[(a)] \item for each $n\geq 1$ we are given a cone $C_n$ in $M_n(\mathcal{A})_{sa}$ and $e\in C_1$, \item $C_n\cap(-C_n)=\{0\}$ for all $n$, \item for all $n$ and $m$ and all $A\in M_{n\times m}(\mathcal{A})$, we have that $A^*C_n A \subseteq C_m$, \end{enumerate} We call $e\in \mathcal{A}_{sa}$ a \textit{matrix order unit} provided that for every $n\in \mathbb{N}$ and every $x\in M_n(\mathcal{A})_{sa}$ there exists $r>0$ such that $re_n+x\in C_n$, where $e_n=e\otimes I_n$. A matrix order unit is called \textit{Archimedean matrix order unit} provided that for all $n\in \mathbb{N}$ inclusion $re_n+x\in C_n$ for all $r>0$ implies that $x\in C_n$. Let $\pi:\mathcal{A}\rightarrow B(H)$ be a $*$-representation. Define $\pi^{(n)}:M_{n}(\mathcal{A})\rightarrow M_n(B(H))$ by $\pi^{(n)}((a_{ij}))=(\pi(a_{ij}))$. \begin{theorem}\label{operatorstar} If $\mathcal{A}$ is a matrix-ordered $*$-algebra with a unit e which is Archime\-dean matrix order unit then there exists a Hilbert space $H$ and a faithful unital $*$-representation $\tau:\mathcal{A}\rightarrow B(H)$, such that $\tau^{(n)}(C_n)=M_n(\tau(\mathcal{A}))^+$ for all $n$. Conversely, every unital $*$-subalgebra $\mathcal{D}$ of $B(H)$ is matrix-ordered by cones $M_n(\mathcal{D})^+=M_n(\mathcal{D})\cap B(H)^+$ and the unit of this algebra is an Archimedean order unit. \end{theorem} \begin{proof} Consider an inductive system of $*$-algebras and unital injective $*$-homomorphisms: $$\phi_n:M_{2^n}(\mathcal{A})\rightarrow M_{2^{n+1}}(\mathcal{A}),\quad \phi_n(a)=\left(% \begin{array}{cc} a & 0 \\ 0 & a \\ \end{array}% \right)\text{ for all } n\geq 0, a\in M_{2^n}(\mathcal{A}).$$ Let $\mathcal{B}=\underrightarrow{\lim} M_{2^n}(\mathcal{A})$ be the inductive limit of this system. By $(c)$ in the definition of the matrix ordered algebra we have $\phi_n(C_{2^n})\subseteq C_{2^{n+1}}$. We will identify $M_{2^n}(\mathcal{A})$ with a subalgebra of $\mathcal{B}$ via canonical inclusions. Let $C=\bigcup\limits_{n\ge 1} C_{2^n}\subseteq\mathcal{B}_{sa}$ and let $e_{\infty}$ be the unit of $\mathcal{B}$. Let us prove that $C$ is an algebraically admissible cone. Clearly, $C$ satisfies conditions (i) and (ii) of definition of algebraically admissible cone. To prove (iii) suppose that $x\in \mathcal{B}$ and $a\in C$, then for sufficiently large $n$ we have $a\in C_{2^n}$ and $x\in M_{2^n}(\mathcal{A})$. Therefore, by $(c)$, $x^*ax\in C$. Thus (iii) is proved. Since $e$ is an Archimedean matrix order unit we obviously have that $e_{\infty}$ is also an Archimedean order unit. Thus $*$-algebra $\mathcal{B}$ satisfies assumptions of Theorem~\ref{onecone} and there is a faithful $*$-representation $\pi:\mathcal{B}\rightarrow B(H)$ such that $\pi(C) = \pi(\mathcal{B}) \cap B(H)^+$. Let $\xi_n:M_{2^n}(\mathcal{A})\rightarrow \mathcal{B}$ be canonical injections ($n\geq 0$). Then $\tau=\pi\circ\xi_0:\mathcal{A}\rightarrow B(H)$ is an injective $*$-homomorphism. We claim that $\tau^{(2^n)}$ is unitary equivalent to $\pi\circ \xi_n$. By replacing $\pi$ with $\pi^{\alpha}$, where $\alpha$ is an infinite cardinal, we can assume that $\pi^{\alpha}$ is unitary equivalent to $\pi$. Since $\pi\circ \xi_n:M_{2^n}(\mathcal{A})\rightarrow B(H)$ is a $*$-homomorphism there exist unique Hilbert space $K_n$, $*$-homomorphism $\rho_n:\mathcal{A}\rightarrow B(K_n)$ and unitary operator $U_n:K_n\otimes \mathbb{C}^{2^n}\rightarrow H$ such that $$\pi\circ \xi_n=U_n(\rho_n\otimes id_{M_{2^n}})U_n^*.$$ For $a\in \mathcal{A}$, we have \begin{eqnarray*} \pi\circ \xi_0(a)&=&\pi\circ\xi_n(a\otimes E_{2^n})\\ &=&U_n(\rho_n(a)\otimes E_{2^n})U_n^*, \end{eqnarray*} where $E_{2^n}$ is the identity matrix in $M_{2^n}(\mathbb{C})$. Thus $\tau(a)=U_0\rho_0(a)U_0^*=U_n(\rho_n(a)\otimes E_{2^n})U_n^*$. Let $\sim$ stands for the unitary equivalence of representations. Since $\pi\circ \xi_n\sim\rho_n\otimes id_{M_{2^n}}$ and $\pi^{\alpha}\sim \pi$ we have that $\rho_n^{\alpha}\otimes id_{M_{2^n}}\sim\pi^{\alpha}\circ\xi_n\sim \rho_n\otimes id_{M_{2^n}}$. Hence $\rho^\alpha_n\sim\rho_n$. Thus $\rho_n\otimes E_{2^n}\sim \rho_n^{2^n\alpha}\sim\rho_n$. Consequently $\rho_0\sim\rho_n$ and $\pi\circ\xi_n\sim\rho_0\otimes id_{M_{2^n}}\sim \tau\otimes id_{M_{2^n}}$. Therefore $\tau^{(2^n)}=\tau\otimes id_{M_{2^n}}$ is unitary equivalent to $\pi\circ \xi_n$. What is left to show is that $\tau^{(n)}(C_n)=M_{n}(\tau(\mathcal{A}))^+$. Note that $\pi\circ \xi_n(M_{2^n}(\mathcal{A}))\cap B(H)^+=\pi(C_{2^n})$. Indeed, the inclusion $\pi\circ\xi(C_{2^n})\subseteq M_{2^n}(\mathcal{A})\cap B(H)^+$ is obvious. To show the converse take $x\in M_{2^n}(\mathcal{A})$ such that $\pi(x)\geq 0$. Then $x\in C\cap M_{2^n}(\mathcal{A})$. Using $(c)$ one can easily show that $C\cap M_{2^n}(\mathcal{A})= C_{2^n}$. Hence $\pi\circ \xi_n(M_{2^n}(\mathcal{A}))\cap B(H)^+=\pi(C_{2^n})$. Since $\tau^{(2^n)}$ is unitary equivalent to $\pi\circ \xi_n$ we have that $\tau^{(2^n)}(C_{2^n})=M_{2^n}(\tau(\mathcal{A}))\cap B(H^{2^n})^+$. Let now show that $\tau^{(n)}(C_n)=M_{n}(\tau(\mathcal{A}))^+$. For $X\in M_n(\mathcal{A})$ denote $$\widetilde{X}=\left(% \begin{array}{cc} X & 0_{n\times (2^n-n)} \\ 0_{(2^n-n) \times n} & 0_{(2^n-n)\times (2^n-n)} \\ \end{array}% \right)\in M_{2^n}(\mathcal{A}).$$ Then, clearly, $\tau^{(n)}(X)\geq 0$ if and only if $\tau^{(2^n)}(\widetilde{X})\geq 0$. Thus $\tau^{(n)}(X)\geq 0$ is equivalent to $\widetilde{X}\in C_{2^n}$ which in turn is equivalent to $X\in C_n$ by $(c)$. \end{proof} \section{Operator Algebras completely boundedly isomorphic to $C^*$-algebras.} The algebra $M_{n}(B(H))$ of $n\times n$ matrices with entries in $B(H)$ has a norm $\|\cdot\|_{n}$ via the identification of $M_n(B(H))$ with $B(H^n)$, where $H^n$ is the direct sum of $n$ copies of a Hilbert space $H$. If $\mathcal{A}$ is a subalgebra of $B(H)$ then $M_n(\mathcal{A})$ inherits a norm $\|\cdot\|_n$ via natural inclusion into $M_n(B(H))$. The norms $ \|\cdot\|_n $ are called matrix norms on the operator algebra $\mathcal{A}$. In the sequel all operator algebras will be assumed to be norm closed. Operator algebras $\mathcal{A}$ and $\mathcal{B}$ are called completely boundedly isomorphic if there is a completely bounded isomorphism $\tau:\mathcal{A}\rightarrow \mathcal{B}$ with completely bounded inverse. The aim of this section is to give necessary and sufficient conditions for an operator algebra to be completely boundedly isomorphic to a $C^*$-algebra. To do this we introduce a concept of $*$-admissible cones which reflect the properties of the cones of positive elements of a $C^*$-algebra preserved under completely bounded isomorphism. \begin{definition}\label{consdef} Let $\mathcal{B}$ be an operator algebra with unit $e$. A sequence $C_n\subseteq M_n(\mathcal{B})$ of closed (in the norm $\|\cdot\|_n$) cones will be called \textit{$*$-admissible} if it satisfies the following conditions: \begin{enumerate} \item $e\in C_1$; \item \begin{enumerate}[(i)]\item $M_n(\mathcal{B})=(C_n-C_n)+i(C_n-C_n)$, for all $n\in \mathbb{N}$, \item $C_n\cap (-C_n)=\{0\}$, for all $n\in \mathbb{N}$, \item $(C_n-C_n)\cap i(C_n-C_n)=\{0\}$, for all $n\in \mathbb{N}$; \end{enumerate} \item \begin{enumerate}[(i)]\item for all $c_1$, $c_2\in C_n$ and $c\in C_n$, we have that $(c_1-c_2)c(c_1-c_2)\in C_n$, \item for all $n$, $m$ and $B\in M_{n\times m}(\mathbb{C})$ we have that $B^*C_n B\subseteq C_m$; \end{enumerate} \item there is $r>0$ such that for every positive integer $n$ and $c\in C_{n}-C_{n}$ we have $r \|c\|e_n +c \in C_n$, \item there exists a constant $K>0$ such that for all $n\in\mathbb{N}$ and $a$, $b\in C_n-C_n$ we have $\|a\|_n\leq K\cdot\|a+ib\|_n$. \end{enumerate} \end{definition} \begin{theorem}\label{main} If an operator algebra $\mathcal{B}$ has a $*$-admissible sequence of cones then there is a completely bounded isomorphism $\tau$ from $\mathcal{B}$ onto a $C^*$-algebra $\mathcal{A}$. If, in addition, one of the following conditions holds \begin{enumerate}[(1)] \item\label{cond1} there exists $r>0$ such that for every $n\ge 1$ and $c, d \in C_{n}$ we have $ \|c + d \| \ge r \|c\|$. \item $\| (x- i y) (x+i y)\| \ge \alpha \| x - i y \| \| x+i y \|$ for all $x, y \in C_n - C_n$ \end{enumerate} then the inverse $\tau^{-1}: \mathcal{A} \to \mathcal{B}$ is also completely bounded. Conversely, if such isomorphism $\tau$ exists then $\mathcal{B}$ possesses a $*$-admissible sequence of cones and conditions $(1)$ and $(2)$ are satisfied. \end{theorem} The proof will be divided into 4 lemmas.\\ Let $\{C_n\}_{n\geq 1}$ be a $*$-admissible sequence of cones of $\mathcal{B}$. Let $\mathcal{B}_{2^n}=M_{2^n}(\mathcal{B})$, $\phi_n:\mathcal{B}_{2^n}\rightarrow \mathcal{B}_{2^{n+1}}$ be unital homomorphisms given by $\phi_n(x)=\left(% \begin{array}{cc} x & 0 \\ 0 & x \\ \end{array}% \right)$, $x\in \mathcal{B}_{2^n}$. Denote by $\mathcal{B}_{\infty}=\underrightarrow{\lim}\mathcal{B}_{2^n}$ the inductive limit of the system $(\mathcal{B}_{2^n},\phi_n)$. As all inclusions $\phi_n$ are unital $\mathcal{B}_{\infty}$ has a unit, denoted by $e_{\infty}$. Since $\mathcal{B}_{\infty}$ can be considered as a subalgebra of a $C^*$-algebra of the corresponding inductive limit of $M_{2^n}(B(H))$ we can define the closure of $\mathcal{B}_{\infty}$ in this $C^*$-algebra denoted by $\overline{\mathcal{B}}_{\infty}$. Now we will define an involution on $\mathcal{B}_{\infty}$. Let $\xi_n : M_{2^n} ( \mathcal{B} )\to \mathcal{B}_{\infty}$ be the canonical morphisms. By $(3ii)$, $\phi_n(C_{2^n})\subseteq C_{2^{n+1}}$. Hence $C= \bigcup\limits_{n} \xi_n( C_{2^n})$ is a well defined cone in $\mathcal{B}_{\infty}$. Denote by $\overline{C}$ its completion. By $(2i)$ and $(2iii)$, for every $x\in \mathcal{B}_{2^n}$, we have $x=x_1+ix_2$ with unique $x_1$, $x_2\in C_{2^n}-C_{2^n}$. By $(3ii)$ we have $\left(% \begin{array}{cc} x_i & 0 \\ 0 & x_i \\ \end{array}% \right)\in C_{2^{n+1}}-C_{2^{n+1}}$, $i=1,2$. Thus for every $x\in B_{\infty}$ we have unique decomposition $x=x_1+ix_2$, $x_1\in C-C$, $x_2\in C-C$. Hence the mapping $x\mapsto x^{\sharp}=x_1-ix_2$ is a well defined involution on $\mathcal{B}_{\infty}$. In particular, we have an involution on $\mathcal{B}$ which depends only on the cone $C_1$. \begin{lemma}\label{involution} Involution on $\mathcal{B}_{\infty}$ is defined by the involution on $\mathcal{B}$, i.e. for all $A=(a_{ij})_{i,j}\in M_{2^n}(\mathcal{B})$ $$A^{\sharp} = (a_{ji}^{\sharp})_{i,j}.$$ \end{lemma} \begin{proof} Assignment $A^\circ = (a_{ji}^{\sharp})_{i,j}$, clearly, defines an involution on $M_{2^n}(\mathcal{B})$. We need to prove that $A^{\sharp}=A^{\circ}$. Let $A=(a_{ij})_{i,j}\in M_{2^n}(\mathcal{B})$ be self-adjoint $A^{\circ}=A$. Then $A=\sum\limits_{i}a_{ii}\otimes E_{ii}+\sum\limits_{i< j}(a_{ij}\otimes E_{ij}+a_{ij}^{\sharp}\otimes E_{ji})$ and $a_{ii}^{\sharp}=a_{ii}$, for all $i$. By $(3ii)$ we have $\sum\limits_{i}a_{ii}\otimes E_{ii}\in C_{2^n}-C_{2^n}$. Since $a_{ij}=a_{ij}'+ia_{ij}''$ for some $a_{ij}'$, $a_{ij}''\in C_{2^n}-C_{2^n}$ we have \begin{eqnarray*}a_{ij}\otimes E_{ij}+a_{ij}^{\sharp}\otimes E_{ji}&=&(a_{ij}'+ia_{ij}'')\otimes E_{ij}+(a_{ij}'-ia_{ij}'')\otimes E_{ji}\\ &=&(a_{ij}'\otimes E_{ij}+a_{ij}'\otimes E_{ji})+(ia_{ij}''\otimes E_{ij}-ia_{ij}''\otimes E_{ji})\\ &=&(E_{ii}+E_{ji})(a_{ij}'\otimes E_{ii}+a_{ij}'\otimes E_{jj})(E_{ii}+E_{ij})\\ &-&(a_{ij}'\otimes E_{ii}+a_{ij}'\otimes E_{jj})\\ &+&(E_{ii}-iE_{ji})(a_{ij}''\otimes E_{ii}+a_{ij}''\otimes E_{jj})(E_{ii}+iE_{ij})\\ &-&(a_{ij}''\otimes E_{ii}+a_{ij}''\otimes E_{jj})\in C_{2^n}-C_{2^n}. \end{eqnarray*} Thus $A\in C_{2^n}-C_{2^n}$ and $A^{\sharp}=A$. Since for every $x\in M_{2^n}(\mathcal{B})$ there exist unique $x_1=x_1^{\circ}$ and $x_2=x_2^{\circ}$ in $M_{2^n}(\mathcal{B})$, such that $x=x_1+ix_2$, and unique $x_1'=x_1'^{\sharp}$ and $x_2'=x_2'^{\sharp}$, such that $x=x_1'+ix_2'$, we have that $x_1=x_1^{\sharp}=x_1'$, $x_2=x_2^{\sharp}=x_2'$ and involutions $\sharp$ and $\circ$ coincide. \end{proof} \begin{lemma}\label{invandcone} Involution $x\to x^\sharp$ is continuous on $\mathcal{B}_{\infty}$ and extends to the involution on $\overline{\mathcal{B}}_{\infty}$. With respect to this involution $\overline{C}\subseteq (\overline{\mathcal{B}}_{\infty})_{sa}$ and $x^{\sharp}\overline{C}x\subseteq\overline{C}$ for every $x\in \overline{\mathcal{B}}_{\infty}$. \end{lemma} \begin{proof} Consider a convergent net $\{x_i\}\subseteq\mathcal{B}_{\infty}$ with the limit $x\in \mathcal{B}_{\infty}$. Decompose $x_i=x_i'+ix_i''$ with $x_i', x_i'' \in C-C$. By (5), the nets $\{x_i'\}$ and $\{x_i''\}$ are also convergent. Thus $x=a+ib$, where $a=\lim x_i'\in \overline{C-C}$, $b=\lim x_i''\in \overline{C-C}$ and $\lim x_i^\sharp = a -i b$. Therefore the involution defined on $\mathcal{B}_{\infty}$ can be extended by continuity to $\overline{\mathcal{B}}_{\infty}$ by setting $x^\sharp= a- ib$. Under this involution $\overline{C}\subseteq (\overline{\mathcal{B}}_{\infty})_{sa}=\{x\in\overline{\mathcal{B}}_{\infty}:x=x^{\sharp}\}$. Let us show that $x^{\sharp}cx\in\overline{C}$ for every $x\in\overline{\mathcal{B}}_{\infty}$ and $c\in\overline{C}$. Take firstly $c\in C_{2^n}$ and $x\in \mathcal{B}_{2^n}$. Then $x=x_1+ix_2$ for some $x_1$, $x_2\in C_{2^n}-C_{2^n}$ and \begin{gather*} (x_1+ix_2)^{\sharp}c(x_1+ix_2)=(x_1-ix_2)c(x_1+ix_2)\\ = \frac{1}{2}\left(% \begin{array}{cc} 1 & 1\\ \end{array}% \right)\left(% \begin{array}{cc} -x_1 & -ix_2 \\ ix_2 & x_1 \\ \end{array}% \right)\left(% \begin{array}{cc} c & 0 \\ 0 & c\\ \end{array}% \right)\left(% \begin{array}{cc} -x_1 & -ix_2 \\ ix_2 & x_1 \\ \end{array}% \right)\left(% \begin{array}{c} 1 \\ 1 \\ \end{array}% \right) \end{gather*} By (3i), Lemma \ref{involution} and (3ii) $x^{\sharp}cx\in C_{2^n}$. Let now $c\in \overline{C}$ and $x\in\overline{\mathcal{B}}_{\infty}$. Suppose that $c_i\rightarrow c$ and $x_i\rightarrow x$, where $c_i\in C$, $x_i\in \mathcal{B}_{\infty}$. We can assume that $c_i$, $x_i\in B_{2^{n_i}}$. Then $x_i^{\sharp}c_ix_{i}\in C_{2^{n_i}}$ for all $i$ and since it is convergent we have $x^{\sharp}cx\in \overline{C}$. \end{proof} \begin{lemma}\label{ArchOrder} The unit of $\overline{\mathcal{B}}_{\infty}$ is an Archimedean order unit and $(\overline{\mathcal{B}}_{\infty})_{sa}=\overline{C}-\overline{C}$. \end{lemma} \begin{proof} Firstly let us show that $e_{\infty}$ is an order unit. Clearly, $(\overline{\mathcal{B}}_{\infty})_{sa}=\overline{C-C}$. For every $a\in \overline{C-C}$, there is a net $a_i\in C_{2^{n_i}}-C_{2^{n_i}}$ convergent to $a$. Since $\sup\limits_{i}\|a_i\|<\infty$ there exists $r_1>0$ such that $r_1e_{n_i}-a_i\in C_{2^{n_i}}$, i.e. $r_1e_{\infty}-a_i\in C$. Passing to the limit we get $r_1e_{\infty}-a\in \overline{C}$. Replacing $a$ by $-a$ we can find $r_2>0$ such that $r_2e_{\infty}+a\in\overline{C}$. If $r=\max(r_1,r_2)$ then $re_{\infty}\pm a\in \overline{C}$. This proves that $e_{\infty}$ is an order unit and that for all $a\in\overline{C-C}$ we have $a=re_{\infty}-c$ for some $c\in\overline{C}$. Thus $\overline{C-C}\in \overline{C}-\overline{C}$. The converse inclusion, clearly, holds. Thus $\overline{C-C}=\overline{C}-\overline{C}$. If $x\in(\overline{\mathcal{B}}_{\infty})_{sa}$ such that for every $r>0$ we have $r+x\in \overline{C}$ then $x\in\overline{C}$ since $\overline{C}$ is closed. Hence $e_{\infty}$ is an Archimedean order unit. \end{proof} \begin{lemma}\label{ker} $\mathcal{B}_{\infty}\cap \overline{C}=C$. \end{lemma} \begin{proof} Denote by $\mathcal{D}=\underrightarrow{\lim} M_{2^n}(B(H))$ the $C\sp*$-algebra inductive limit corresponding to the inductive system $\phi_n$ and denote $\phi_{n,m}=\phi_{m-1}\circ\ldots\circ\phi_n:M_{2^n}(B(H))\rightarrow M_{2^m}(B(H))$. For $n<m$ we identify $M_{2^{m-n}}(M_{2^n}(B(H)))$ with $M_{2^m}(B(H))$ by omitting superfluous parentheses in a block matrix $B=[B_{ij}]_{ij}$ with $B_{ij}\in M_{2^n}(B(H))$. Denote by $P_{n,m}$ the operator $diag (I, 0,\ldots, 0) \in M_{2^{m-n}}(M_{2^n}(B(H)))$ and set $V_{n,m} = \sum_{k=1}^{2^{m-n}} E_{k,k-1}$. Here $I$ is the identity matrix in $M_{2^n}(B(H))$ and $E_{k,k-1}$ is $2^n \times 2^n$ block matrix with identity operator at $(k,k-1)$-entry and all other entries being zero. Define an operator $\psi_{n,m}([B_{ij}]) = diag (B_{11}, \ldots, B_{11})$. It is easy to see that $$\psi_{n,m}([B_{ij}]) = \sum_{k=0}^{2^{m-n}-1} (V_{n,m}^k P_{n,m}) B (V_{n,m}^k P_{n,m})^*.$$ Hence by $(3ii)$ \begin{gather}\label{inc} \psi_{n,m} (C_{2^m}) \subseteq \phi(C_{2^n})\subseteq C_{2^m}. \end{gather} Clearly, $\psi_{n,m}$ is a linear contraction and $$\psi_{n,m+k}\circ \phi_{m,m+k}=\phi_{m,m+k}\circ\psi_{n,m}$$ Hence there is a well defined contraction $\psi_n=\lim\limits_{m}\psi_{n,m}:\mathcal{D}\rightarrow \mathcal{D}$ such that $$\psi_n|_{M_{2^n}(B(H))}=id_{M_{2^n}(B(H))},$$ where $M_{2^n}(B(H))$ is considered as a subalgebra in $\mathcal{D}$. Clearly, $\psi_n(\overline{\mathcal{B}}_{\infty})\subseteq \overline{\mathcal{B}}_{\infty}$ and $\psi_n|_{\mathcal{B}_{2^n}}=id$. Consider $C$ and $C_{2^n}$ as subalgebras in $\mathcal{B}_{\infty}$, by~(\ref{inc}) we have $\psi_n:C\to C_{2^n}$. To prove that $\mathcal{B}_{\infty}\cap \overline{C} =C$ take $c\in \mathcal{B}_{\infty}\cap \overline{C}$. Then there is a net $c_j$ in $C$ such that $\|c_j-c\|\to 0$. Since $c\in \mathcal{B}_{\infty}$, $c\in \mathcal{B}_{2^n}$ for some $n$, and consequently $\psi_n(c)=c$. Thus $$ \| \psi_n(c_j) - c \| = \| \psi_n(c_j-c)\|\le \| c_j-c \|. $$ Hence $\psi_n(c_j)\to c$. But $\psi_n(c_j)\in C_{2^n}$ and the latter is closed. Thus $c\in C$. The converse inclusion is obvious. \end{proof} \begin{remark}\label{rem1} Note that for every $x\in \mathcal{D}$ \begin{gather}\label{cutdiag} \lim_n \psi_n(x) = x. \end{gather} Indeed, for every $\varepsilon > 0$ there is $x\in M_{2^n}(B(H))$ such that $\| x-x_n \|< \varepsilon$. Since $\psi_n$ is a contraction and $\psi_n(x_n) = x_n$ we have \begin{eqnarray*} \norm{\psi_n(x) - x} &\le& \norm{\psi_n(x)-x_n}+ \norm{x_n-x}\\ &=&\norm{\psi_n(x-x_n)}+\norm{x_n-x}\le 2 \varepsilon. \end{eqnarray*} Since $x_n\in M_{2^n}(B(H))$ also belong to $M_{2^m}(B(H))$ for all $m\ge n$, we have that $\norm{\psi_m(x) - x}\le 2 \varepsilon$. Thus $\lim\limits_{n} \psi_n(x) = x$. \end{remark} \noindent {\bf Proof of Theorem~\ref{main}. } By Lemma \ref{invandcone} and \ref{ArchOrder} the cone $\overline{C}$ and the unit $e_{\infty}$ satisfies all assumptions of Theorem \ref{onecone}. Thus there is a homomorphism $\tau:\overline{\mathcal{B}}_{\infty}\rightarrow B(\widetilde{H})$ such that $\tau(a^{\sharp})=\tau(a)^*$ for all $a\in\overline{\mathcal{B}}_{\infty}$. Since the image of $\tau$ is a $*$-subalgebra of $B(\widetilde{H})$ we have that $\tau$ is bounded by ~\cite[(23.11), p. 81]{DoranBelfi}. The arguments at the end of the proof of Theorem \ref{operatorstar} show that the restriction of $\tau$ to ${\mathcal{B}_{2^n}}$ is unitary equivalent to the $2^n$-amplification of $\tau|_\mathcal{B}$. Thus $\tau|_\mathcal{B}$ is completely bounded. Let us prove that $\ker(\tau)=\{0\}$. By Theorem \ref{operatorstar}.\ref{onecone3} it is sufficient to show that $\overline{C}\cap(-\overline{C})=0$. If $c, d\in \overline{C}$ such that $c+d=0$ then $c=d=0$. Indeed, for every $n\ge 1$, $\psi_n(c) +\psi_n(d) = 0$. By Lemma \ref{ker}, we have $$ \psi_n(\overline{C}) \subseteq \overline{C}\cap \mathcal{B}_{2^n} = C_{2^n}.$$ Therefore $\psi_n(c)$, $\psi_n(d)\in C_{2^n}$. Hence $\psi_n(c)= -\psi_n(d)\in C_{2^n} \cap (- C_{2^n})$ and, consequently, $\psi_n(c)= \psi_n(d) = 0$. Since $\norm{\psi_n(c)-c}\to 0$ and $\norm{\psi_n(d)-d}\to 0$ by Remark \ref{rem1}, we have that $c=d=0$. If $x\in \overline{C}\cap(-\overline{C})$ then $x+(-x)=0$, $x, -x\in \overline{C}$ and $x=0$. Thus $\tau$ is injective. We will show that the image of $\tau$ is closed if one of the conditions $(1)$ or $(2)$ of the statement holds. Assume firstly that operator algebra $\mathcal{B}$ satisfies the first condition. Since $\tau(\overline{\mathcal{B}}_{\infty}) = \tau(\overline{C}) -\tau(\overline{C}) +i (\tau(\overline{C})-\tau(\overline{C}))$ and $\tau(\overline{C})$ is exactly the set of positive operators in the image of $\tau$, it is suffices to prove that $\tau(\overline{C})$ is closed. By Theorem \ref{onecone}.\ref{onecone3}, for self-adjoint (under involution $\sharp$) $x\in \overline{\mathcal{B}}_{\infty}$ we have $$ \|\tau(x)\|_{B(\widetilde{H})} = \inf\{ r>0: re_{\infty}\pm x\in \overline{C}\}. $$ If $\tau(c_\alpha)\in \tau(C)$ is a Cauchy net in $B(\widetilde{H})$ then for every $\varepsilon>0$ there is $\gamma$ such that $\varepsilon\pm (c_\alpha-c_\beta)\in \overline{C}$ when $\alpha\ge \gamma$ and $\beta\ge \gamma$. Since $\overline{C}\cap \mathcal{B}_{\infty} = C$, $\varepsilon\pm (c_\alpha-c_\beta)\in C$. Denote $c_{\alpha \beta} = \varepsilon+ (c_\alpha-c_\beta)$ and $d_{\alpha \beta} = \varepsilon- (c_\alpha-c_\beta)$. The set of pairs $(\alpha,\beta)$ is directed if $(\alpha,\beta)\ge (\alpha_1,\beta_1)$ iff $\alpha\ge \alpha_1$ and $\beta\ge \beta_1$. Since $c_{\alpha \beta} + d_{\alpha \beta} = 2 \varepsilon$ this net converges to zero in the norm of $\overline{\mathcal{B}}_{\infty}$. Thus by assumption $4$ in the definition of $*$-admissible sequence of cones, $\|c_{\alpha \beta}\|_{\overline{\mathcal{B}}_{\infty}}\to 0$. This implies that $c_\alpha$ is a Cauchy net in $\overline{\mathcal{B}}_{\infty}$. Let $c=\lim c_\alpha$. Clearly, $c\in \overline{C}$. Since $\tau$ is continuous $\| \tau(c_\alpha) - \tau(c) \|_{\overline{\mathcal{B}}_{\infty}} \to 0$. Hence the closure $\overline{\tau(C)}$ is contained in $\tau(\overline{C})$. By continuity of $\tau$ we have $\tau(\overline{C})\subseteq \overline{\tau(C)}$. Hence $\tau(\overline{C})= \overline{\tau(C)}$, $\tau(\overline{C})$ is closed. Let now $\mathcal{B}$ satisfy condition $(2)$ of the Theorem. Then for every $x\in \overline{\mathcal{B}}_{\infty}$ we have $\| x^\sharp x \| \ge \alpha \| x \| \| x^\sharp \|$. By~\cite[theorem 34.3]{DoranBelfi} $\overline{\mathcal{B}}_{\infty}$ admits an equivalent $C^*$-norm $\abs{\cdot}$. Since $\tau$ is a faithful $*$-representation of the $C^*$-algebra $(\overline{\mathcal{B}}_{\infty}, \abs{\cdot})$ it is isometric. Therefore $\tau(\overline{\mathcal{B}}_{\infty})$ is closed. Let us show that $(\tau|_\mathcal{B})^{-1}:\tau(\mathcal{B})\rightarrow \mathcal{B}$ is completely bounded. The image $\mathcal{A}=\tau(\overline{\mathcal{B}}_{\infty})$ is a $C^*$-algebra in $B(\widetilde{H})$ isomorphic to $\overline{\mathcal{B}}_{\infty}$. By Johnson's theorem (see~\cite{Jo}), two Banach algebra norms on a semi-simple algebra are equivalent, hence, $\tau^{-1}:\mathcal{A}\to \overline{\mathcal{B}}_{\infty}$ is bounded homomorphism, say $\|\tau^{-1}\| = R$. Let us show that $\|(\tau|_\mathcal{B})^{-1}\|_{cb} = R$. Since $$\tau|_{\mathcal{B}_{2^n}} = U_n (\tau|_{\mathcal{B}} \otimes id_{M_{2^n}} )U_n^*, $$ for some unitary $U_n: K\otimes \mathbb{C}^{2^n}\to \widetilde{H}$ we have for any $B= [b_{ij}]\in M_{2^n}(\mathcal{B})$ \begin{eqnarray*} \| \sum b_{ij}\otimes E_{ij} \| &\le& R \|\tau(\sum b_{ij}\otimes E_{ij})\| \\ &=& R \| U_n( \sum \tau(b_{ij}) \otimes E_{ij} )U_n^* \|\\&=& R \| \sum \tau(b_{ij}) \otimes E_{ij} \|. \end{eqnarray*}This is equivalent to $$ \| \sum \tau^{-1} (b_{ij})\otimes e_{ij} \| \le R \| \sum b_{ij}\otimes E_{ij} \|,$$ hence $\| (\tau^{-1})^{2^n} (B) \|\le R \| B \|.$ This proves that $\|(\tau|_\mathcal{B})^{-1}\|_{cb} = R$. The converse statement evidently holds with $*$-admissible sequence of cones given by $(\tau^{(n)})^{-1}(M_n(\mathcal{A})^+)$.$\Box$ Conditions (1) and (2) were used to prove that the image of isomorphism $\tau$ is closed. The natural question one can ask is wether there exists an operator algebra $\mathcal{B}$ and isomorphism $\rho:\mathcal{B}\to B(H)$ with non-closed self-adjoint image. The following example gives the affirmative answer. \begin{example} Consider the algebra $\mathcal{B} = C^1([0,1])$ as an operator algebra in $C\sp*$-algebra $\bigoplus\limits_{q\in \mathbb{Q}} M_2(C([0,1]))$ via inclusion $$f(\cdot)\mapsto \oplus_{q\in \mathbb{Q}}\left( \begin{array}{cc} f(q) & f'(q) \\ 0 & f(q) \\ \end{array} \right). $$ The induced norm $$\norm{f} = \sup\limits_{q\in \mathbb{Q}} \left[ \frac{1}{2}( 2\abs{f(q)}^2+ \abs{f'(q)}^2 + \abs{f'(q)}\sqrt{4 \abs{f(q)}^2+\abs{f'(q)}^2}) \right]^{\frac{1}{2}}$$ satisfies the inequality $ \norm{f}\ge \frac{1}{\sqrt{2}}\max\{\norm{f}_\infty,\norm{f'}_\infty\}\ge \frac{1}{2\sqrt{2}}\norm{f}_1$ where $\norm{f}_1 = \norm{f}_\infty+\norm{f'}_\infty$ is the standard Banach norm on $C^1([0,1])$. Thus $\mathcal{B}$ is a closed operator algebra with isometric involution $f^\sharp (x) = \overline{f(x)}$, ($x\in [0,1]$). The identity map $C^1([0,1]) \to C([0,1])$, $f\mapsto f$ is a $*$-isomorphism of $\mathcal{B}$ into $C\sp*$-algebra with non-closed self-adjoint image. \end{example} \section{Operator Algebra associated with Kadison's similarity problem. } In 1955 R. Kadison raised the following problem. Is any bounded homomorphism $\pi$ of a $C^*$-algebra $\mathcal{A}$ into $B(H)$ similar to a $*$-representation? The similarity above means that there exists invertible operator $S\in B(H)$ such that $x\to S^{-1}\pi(x) S$ is a $*$-representation of $\mathcal{A}$. The following criterion due to Haagerup (see~\cite{Haagerup}) is widely used in reformulations of Kadison's problem: non-degenerate homomorphism $\pi$ is similar to a $*$-representation iff $\pi$ is completely bounded. Moreover the similarity $S$ can be chosen in such a way that $\|S^{-1}\| \|S\| = \|\pi\|_{cb}$. The affirmative answer to the Kadison's problem is obtained in many important cases. In particular, for nuclear $\mathcal{A}$, $\pi$ is automatically completely bounded with $\|\pi\|_{cb}\le \|\pi\|^2$ (see~\cite{Bunce}). About recent state of the problem we refer the reader to \cite{Pisier, Paulsen}. We can associate an operator algebra $\pi(B)$ to every bounded injective homomorphism $\pi$ of a $C^*$-algebra $\mathcal{A}$. The fact that $\pi(B)$ is closed can be seen by restricting $\pi$ to a nuclear $C^*$-algebra $C^*(x^*x)$. This restriction is similar to $*$-homomorphism for every $x\in \mathcal{A}$ which gives the estimate $\| x \|\le \|\pi\|^3 \|\pi(x)\|$ (for details see~\cite[p. 4]{pitts}). Denote $C_n= \pi^{(n)}( M_n(\mathcal{A})^+)$. Let $J$ be an involution in $B(H)$, i.e. self-adjoint operator such that $J^2 = I$. Clearly, $J$ is also a unitary operator. A representation $\pi: \mathcal{A} \to B(H)$ of a $*$-algebra $\mathcal{A}$ is called $J$-symmetric if $\pi(a^*) = J \pi(a)^* J$. Such representations are natural analogs of $*$-representations for Krein space with indefinite metric $[x,y] = \langle J x,y \rangle $. We will need the following observation due to V. Shulman~\cite{Shulman} (see also ~\cite[lemma 9.3, p.131]{KissinShulman}). If $\pi$ is an arbitrary representation of $\mathcal{A}$ in $B(H)$ then the representation $\rho: \mathcal{A} \to B(H\oplus H)$, $a \mapsto \pi(a) \oplus \pi(a^*)^*$ is $J$-symmetric with $J(x\oplus y) = y \oplus x$ and representation $\pi$ is a restriction $\rho|_{K\oplus \{0\}}$. Moreover, if $\rho$ is similar to $*$-representation then so is $\pi$. Clearly the converse is also true, thus $\pi$ and $\rho$ are simultaneously similar to $*$-representations or not. In sequel for an operator algebra $\mathcal{D} \in B(H)$ we denote by $\overline{\underrightarrow{\lim} M_{2^n}(\mathcal{D})}$ the closure of the algebraic direct limit of of $M_{2^n}(\mathcal{D})$ in the $C\sp*$-algebra direct limit of inductive system $M_{2^n}(B(H))$ with standard inclusions $x \to \left( \begin{array}{cc} x & 0 \\ 0 & x \\ \end{array} \right)$. \begin{theorem}\label{kadison} Let $\pi:\mathcal{A}\to B(H)$ be a bounded unital $J$-symmmetric injective homomorphism of a $C^*$-algebra $\mathcal{A}$ and let $\mathcal{B}=\pi(\mathcal{A})$. Then $\pi^{-1}$ is a completely bounded homomorphism. Its extension $\widetilde{\pi^{-1}}$ to the homomorphism between the inductive limits $\overline{\mathcal{B}}_{\infty} = \overline{\underrightarrow{\lim} M_{2^n}(\mathcal{B})}$ and $\overline{\mathcal{A}}_{\infty} = \overline{ \underrightarrow{\lim} M_{2^n}(\mathcal{A})}$ is injective. \end{theorem} \begin{proof} Let us show that $\{C_n\}_{n\ge 1}$ is a $*$-admissible sequence of cones. It is routine to verify that conditions (1)-(3) in the definition of $*$-admissible cones are satisfied for $\{C_n\}$. To see that condition $(4)$ also holds take $B\in C_{n} - C_{n}$ and denote $r = \norm{B}$. Let $D \in M_{n}(A)_{sa}$ be such that $B = \pi^{(n)}(D)$. Since $\pi^{(n)}: M_n(\mathcal{A}) \to M_n(\mathcal{B})$ is algebraic isomorphism it preserves spectra $\sigma_{ M_n(\mathcal{A})}(x) = \sigma_{M_n(\mathcal{B})}(\pi^{(n)}(x))$. Since the spectral radius $\spr(B)\le r$ we have $\spr(D)\le r$. Hence $r e_{n} + D \in M_{n} (A)^+$ because $D$ is self-adjoint. Applying $\pi^{(n)}$ we get $r e_{n} + B\in C_{n}$ which proves condition (4). Since $\pi$ is $J$-symmetric $$\|\pi^{(n)}(a)\| =\|(J\otimes E_n) \pi^{(n)}(a)^* (J\otimes E_n) \|= \|\pi^{(n)}(a^*)\|$$ for every $a\in M_n(\mathcal{A})$, and \begin{eqnarray*} \| \pi^{(n)}(h_1) \| &\le& 1/2 ( \| \pi^{(n)}(h_1) + i \pi^{(n)}(h_2) \| + \| \pi^{(n)}(h_1) - i \pi^{(n)}(h_2) \|) \\ &=& \| \pi^{(n)}(h_1) + i \pi^{(n)}(h_2) \| \end{eqnarray*} for all $h_1, h_2 \in C_n-C_n$. Thus condition (5) is satisfied and $\{C_n\}$ is $*$-admissible. By Theorem~\ref{main}, there is an injective bounded homomorphism $\tau: \overline{\mathcal{B}}_\infty \to B(\widetilde{H})$ such that its restriction to $\mathcal{B}$ is completely bounded, $\tau(b^\sharp)=\tau(b)^*$ and $\tau_n(C_n)= \tau_n(M_n(\mathcal{B}))^+$. Denote $\rho = \tau \circ \pi :\mathcal{A} \to B(\widetilde{H})$. Since $\rho$ is a positive homomorphism, it is a $*$-representation. Moreover, $\ker \rho = \{0\}$ because both $\pi$ and $\tau$ are injective. Therefore $\rho^{-1}$ is $*$-isomorphism. Since $\tau: \mathcal{B}\to B(\widetilde{H})$ extends to an injective homomorphism of inductive limit $\overline{\mathcal{B}}_\infty$ and $\rho^{-1}$ is completely isometric, we have that $\pi^{-1} = \rho^{-1}\circ \tau$ extends to injective homomorphism of $\overline{\mathcal{B}}_\infty$. It is also clear that $\pi^{-1}$ is completely bounded as a superposition of two completely bounded maps. \end{proof} \begin{remark} The first statement of Theorem~\ref{kadison} can be deduced also from~\cite[Theorem 2.6]{pitts}. \end{remark} \begin{remark} Note that condition (1) and (2) in Theorem~\ref{main} for cones $C_n$ from the proof of Theorem~\ref{kadison} is obviously equivalent to $\pi$ being completely bounded. \end{remark} \begin{center} {\bf Acknowledgments.} \end{center} The authors wish to express their thanks to Victor Shulman for helpful comments and providing the reference \cite{Shulman}. The work was written when the second author was visiting Chalmers University of Technology in G\"oteborg, Sweden. The second author was supported by the Swedish Institute.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} While it has been established that neutrinos are massive due to the discovery of neutrino oscillations \cite{Fukuda:1998mi,Ahmad:2002jz}, their precise properties, however, are still under active investigation. An analogous (and even more perplexing) story applies to dark matter (DM) whose nature remains unknown despite the ever-growing evidence for its existence from the astrophysical observables. An intriguing possibility regarding these mysteries would be to introduce right-handed neutrinos (RHNs), which can address not only the neutrino mass and DM but also their potential roles in the inflation and baryon asymmetry production \cite{seesaw,Asaka:2005an,Fukugita:1986hr,Asaka:2005pn,Canetti:2012kh,Ibe:2015nfa,kk2}. We, in this article, seek a possibility for a sterile RHN to make up the whole DM in the Universe and, in particular, propose the new production mechanism of sterile RHN DM through the mixing among RHNs. This is in contrast to the conventional mechanisms requiring the sterile RHN DM to couple to left-handed neutrinos which suffer from the severe tension between the bounds from the X-ray observation and the small-scale structure data \cite{Dodelson:1993je,Shi:1998km,Tremaine:1979we,Boyarsky:2005us,Horiuchi:2013noa,ir2017}. These constraints, however, heavily depend on their production mechanisms and many possibilities have been explored to produce the desired DM abundance in addition to the conventional nonresonant/resonant active-sterile neutrino conversion mechanisms \cite{Asaka:2006ek,Bezrukov:2009th,kenji,Anisimov:2008gg,Adhikari:2016bei,Asaka:2005pn,Canetti:2012kh,Ibe:2015nfa,kk2}. Our scenario is distinguishable from such alternative scenarios in that it still uses a simple oscillation between the thermal heavy RHN and DM, and yet it demonstrates the totally different features from the Dodelson-Widrow scenario such as the occurrence of the production peak above/around the electroweak which is of great advantage in circumventing the Lyman-$\alpha$ bounds due to the redshifting of DM momentum. After outlining our setup in Sec. II, we illustrate our scenario in Sec. III for a simple example of two RHNs. Section IV then demonstrates the concrete realization where we introduce a RHN mass matrix whose off-diagonal term can arise from the scalar field vacuum expectation value so that we can explain the light neutrino masses by the seesaw mechanism while avoiding the tight X-ray bounds. Section V is devoted to the discussion/conclusion. \section{Setup} \label{sec:setup} The Lagrangian we study is the standard model (SM) with three Majorana RHNs, given by ${\cal L} = {\cal L}_{\rm SM} + {\cal L}_N$ where ${\cal L}_{\rm SM}$ is the SM Lagrangian and ${\cal L}_N$ reads \begin{eqnarray} \overline \nu_R i\slashed{\partial} \nu_R - \left[ \nu_R^{c}{}^T y_\nu L H - \frac{1}{2}\nu_R^{c}{}^T{\cal M}_N\nu_R^c + {\rm H.c.} \right], \label{eq:lagrangian} \end{eqnarray} where $H, L$, and $\nu_R$ are, respectively, the Higgs doublet, lepton doublet and RHN. For simplicity, we concentrate on the case of three RHNs. We begin with the field basis where $y_\nu y_\nu^\dagger$ is diagonal, denoted as $y_\nu^{\rm diag}$ so that $y_\nu^{\rm diag} y_\nu^{{\rm diag}\dagger}$ becomes a $3\times 3$ diagonal matrix. ${\cal M}_N$ is, in general, a nondiagonal matrix, which we call the interaction basis. A familiar seesaw mechanism for the mass of left-handed neutrino $\nu_L$ reads, in terms of its Dirac mass $m_D^{\rm diag} = y_\nu^{\rm diag} v$ with $v=\langle H\rangle$, ${\cal M}_\nu = m_D^{\rm diag} {}^T {\cal M}_N^{-1} m_D^{\rm diag}$ which can be diagonalized as ${\cal M}_\nu^{\rm diag} = U_L^T {\cal M}_\nu U_L$ ($U_L$ is the Pontecorvo-Maki-Nakagawa-Sakata matrix.\footnote{Throughout this article, we take the charged lepton Yukawa coupling to be diagonal.}). The neutrino mass eigenstates are \begin{eqnarray} \left[ \begin{array}{c} \nu_L\\ \nu_R^c \end{array} \right] &=& U \left[ \begin{array}{c} \nu\\ N^c \end{array} \right], \quad U \simeq \left[ \begin{array}{cc} 1 & \theta^\dagger \\ -\theta & 1 \end{array} \right] \left[ \begin{array}{cc} U_L & \\ & U_R^* \end{array} \right], \label{eq:U} \end{eqnarray} where $\theta\equiv {\cal M}_N^{-1}m_D^{\rm diag}$ and $U_R$ is a unitary matrix defined to diagonalize ${\cal M}_N$ as ${\cal M}_N^{\rm diag}=U_R^\dagger {\cal M}_N U_R^*$. By taking the rotation of Eq.~(\ref{eq:U}), the Yukawa coupling $y_\nu$ is in general a nondiagonal matrix while the neutrino masses, ${\cal M}_\nu$ and ${\cal M}_N$, are simultaneously diagonalized. We call this field basis the mass basis. Thus, we obtain \begin{eqnarray} y_\nu^{\rm diag}y_\nu^{\rm diag}{}^\dagger &=& v^{-2} \left[ U_R({\cal M}_N^{\rm diag})^{1/2}R({\cal M}_\nu^{\rm diag})^{1/2} \right]\nonumber\\ &&\times \left[ U_R({\cal M}_N^{\rm diag})^{1/2}R({\cal M}_\nu^{\rm diag})^{1/2} \right]^\dagger, \label{eq:yuk} \end{eqnarray} where $R$ is an arbitrary $3\times 3$ complex orthogonal matrix satisfying $R^TR=1$~\cite{Casas:2001sr}. The mixing between $\nu_L$ and $N$ is then parametrized by $\Theta=\theta^\dagger U_R^*$, and \begin{eqnarray} \Theta^2 &\equiv& \Theta^\dagger\Theta = ({\cal M}_N^{\rm diag})^{-1/2}R{\cal M}_\nu^{\rm diag}R^\dagger({\cal M}_N^{\rm diag})^{-1/2}. \label{eq:active-sterile mixing} \end{eqnarray} The oscillations among RHNs can take place when their mass and interaction bases differ. We, in the following discussions, consider three RHNs with their masses ${\cal M}_N^{\rm diag}={\rm diag}\{M_1,M_2,M_3\}$ and take $N_1$ as the lightest one so that it can play the role of DM. For the active neutrino masses, we parametrize ${\cal M}_\nu^{\rm diag} = {\rm diag}\{m_1,m_2,m_3\}$ for the normal hierarchy (NH), where $\Delta m_{21}^2 \equiv m_2^2-m_1^2 = (7.50^{+0.19}_{-0.17})\times 10^{-5}~{\rm eV}^2, \Delta m_{31}^2 \equiv m_3^2-m_1^2 = (2.457^{+0.047}_{-0.047})\times 10^{-3}~{\rm eV}^2$ \cite{Gonzalez-Garcia:2015qrr}. For the inverted hierarchy (IH), we take ${\cal M}_\nu^{\rm diag} = {\rm diag}\{m_3,m_1,m_2\}$ and $\Delta m_{32}^2 \equiv m_3^2-m_2^2 = (-2.449^{+0.048}_{-0.047})\times 10^{-3}~{\rm eV}^2$. The lightest neutrino mass ($m_1$ for the NH case, and $m_3$ for the IH case) is taken as a free parameter. In our discussions below, whenever it is not necessary to distinguish the mass orderings, $m_1$ refers to the lightest mass for brevity. \section{DM production through RHN oscillation} We now check if enough abundance of RHN DM $\nu_{R1}$ can be produced from the RHN oscillations. In our scenario, the heavy RHNs $\nu_{R2}$ and $\nu_{R3}$ explain the left-handed neutrino masses by the seesaw mechanism and they can have sizable neutrino Yukawa couplings to be in the thermal equilibrium at a sufficiently high temperature. $\nu_{R1}$, on the other hand, has a sufficiently small coupling to the SM species, so that its production is dominated by the conversion from heavier RHNs. For clarity of the following quantitative discussion, we focus on the $\nu_{R1}$ abundance produced only from its mixing with $\nu_{R2}$ because $\nu_{R3}$ plays the same role as $\nu_{R2}$ in producing $\nu_{R1}$. The relevant reactions for the $\nu_{R2}$ thermalization are the scatterings caused by Yukawa interaction, $\nu_{R2} L \leftrightarrow t Q_3,~\nu_{R2} t \leftrightarrow L Q_3,~\nu_{R2} Q_3 \leftrightarrow L t$, those involving the gauge bosons, $\nu_{R2} V \leftrightarrow H L,~\nu_{R2} L \leftrightarrow H V,~\nu_{R2} H \leftrightarrow L V$ and the decay and inverse decay $\nu_{R2}\leftrightarrow LH$ [$Q_3 (t)$ is the left (right) handed top quark, and $V$ represents the $SU(2)_L$ and $U(1)_Y$ gauge bosons]. The Boltzmann equation for $\nu_{R1}$ \cite{Dolgov:2000ew} reads \begin{eqnarray} \frac{dn_{\nu_{R1}}}{dt} + 3Hn_{\nu_{R1}} = C_{\nu_{R1}} \end{eqnarray} where $C_{\nu_{R1}}$ represents the collision term integrated over the $\nu_{R1}$ momentum given by \begin{eqnarray} C_{\nu_{R1}} &\simeq& {\cal P}(\nu_{R2}\to\nu_{R1})(\gamma_{\nu_{R2}}^{\rm col}+\gamma_{\nu_{R2}}^{\rm ID}),\\ \gamma_{\nu_{R2}}^{\rm col} &=& \frac{T}{64\pi^4}\int^\infty_{s_{\rm min}} ds\hat\sigma\sqrt{s}K_1(\sqrt{s}/T),\\ \gamma_{\nu_{R2}}^{\rm ID} &=& \frac{M_2^2T}{\pi^2}\Gamma(\nu_{R2}\to LH)K_1(M_2/T). \end{eqnarray} Here ${\cal P}$ is the oscillation probability given by ${\cal P}(\nu_{R2}\to\nu_{R1}) = \frac{1}{2}\sin^22\theta_N$ ($\theta_N$ is the mixing angle between $\nu_{R1}$ and $\nu_{R2}$), $\Gamma(\nu_{R2}\to LH)\simeq (y_\nu y_\nu^\dagger)_{22}M_2/(8\pi)$ is the decay width, and $\hat\sigma$ is the reduced cross section for the $\nu_{R2}$ collisions with the kinematical cut $s_{\rm min}$ of the Mandelstam variable $s$, and $K_1$ is the modified Bessel function of the first kind. \footnote{A factor 1/2 in ${\cal P}$ comes from averaging out the RHN oscillation because the oscillation timescale is much shorter than the collision timescale involving $\nu_{R2}$. More quantitatively, this averaging is justified for $T\lesssim 10^6$ GeV and/or $\Delta M^2\equiv M_2^2-M_1^2 \gtrsim 1~{\rm GeV}^2$ because $t_{\rm osc}/t_{\rm col} \sim({y_\nu^2}/{10^{-14}})({g^2}/{10^{-2}})({\rm GeV}^2/{\Delta M^2})({T}/{10^6~{\rm GeV}})^2$ where $g$ represents a gauge coupling for a relevant gauge interaction. As we will discuss later, $y_\nu^2$ of order $10^{-14}$ is required for GeV-scale RHN to reach the thermal equilibrium and it is automatically realized by enforcing the seesaw mechanism. The finite temperature effects on the RHN mixing angle $\theta_N$ are suppressed by the neutrino Yukawa couplings in our scenario and we simply consider a constant $\theta_N$ in our estimation. The cases when these approximations are not applicable are left for the future work.} $\nu_{R1}$ is efficiently produced when the collision terms are large.\footnote{Some of collision terms, such as $\nu_R H\to LV$, possess the infrared divergences, which are regulated by the thermal mass of the propagator in our analysis for $T>T_C$ ($T_C$ is the critical temperature of the electroweak phase transition and we take $T_C = 160$ GeV) \cite{Pilaftsis:2003gt,Besak:2012qm,DOnofrio:2014rug,DOnofrio:2015gop}.} Figure \ref{fig:rate} shows $\Gamma_i/H$ where $\Gamma_i$ represents the rescaled reaction rates for the process $i$ by taking the neutrino Yukawa coupling as unity (so that the curves can be easily scaled by multiplying the Yukawa coupling of interest). For illustration purpose, we define the reaction rates $\Gamma_i=\gamma^{\rm col}_{\nu_{R2}}(i)/n_\gamma$, where $n_\gamma=2T^3/\pi^2$ is the radiation number density and $\gamma^{\rm col}_{\nu_{R2}}(i)$ are the collision terms involving the gauge bosons [$\gamma^{\rm col}_{\nu_{R2}}({\rm gauge})$] and the top quarks [$\gamma^{\rm col}_{\nu_{R2}}({\rm top})$]. The inverse decay rate is given by $\Gamma_{\rm ID}=\gamma^{\rm ID}_{\nu_{R2}}/n_\gamma$. The figure shows the plots for $M_2=1$ GeV (solid) and for $M_2=1$ TeV (dashed), and we note that the inverse decay takes place only for the latter because of the kinematics, namely, the (inverse) decay is available only for $M_2\gtrsim M_h$ with $M_h$ being the Higgs mass. The actual reaction rates can be obtained by multiplying these rescaled reaction rates by $(y_\nu y_\nu^\dagger)_{22}$. We can see, from Fig. \ref{fig:rate}, that $N_2$ can reach the thermal equilibrium ($\Gamma_i/H \gtrsim 1$) when $(y_\nu y_\nu^\dagger)_{22}$ is larger than ${\cal O}(10^{-13})$ for $M_2=1 - 10^3$ GeV, which is also in the desired numerical range to explain the neutrino masses by the seesaw mechanism. \begin{center} \begin{figure}[t] \includegraphics[width=0.45\textwidth]{fig_rate.pdf} \caption{ The ratios between the rescaled (i.e., divided by the Yukawa couplings) reaction rates and the Hubble parameter are shown (the actual reaction rates are obtained by multiplying the Yukawa couplings). The solid curves are for $M_2=1$ GeV and the dashed curves are for $M_2=1$ TeV. } \label{fig:rate} \end{figure} \end{center} \vspace*{-7mm} The produced $\nu_{R1}$ (interaction state) constitutes the DM $N_1$ (mass eigenstate), \footnote{The produced $\nu_{R1}$ is composed of $N_1$ and $N_2$ which propagate with different velocities. As the $\nu_{R1}$ energy gets redshifted, these two mass states are eventually well separated and thus $\nu_{R1}$ is expected to mostly develop the $N_1$ component as long as $M_1\ll M_2$, although the oscillation property may call for a careful study \cite{Akhmedov:2012uu}.} and the current $N_1$ relic number density can be estimated, in terms of the yield parameter $Y_{N_1}\equiv n_{N_1}/s$ ($s$ is the entropy density), by integrating the Boltzmann equation from $T_{\rm RH}$, the reheating temperature, to the current temperature $T=T_0$ \begin{eqnarray} Y_{N_1}^0\equiv Y_{N_1}(T=0) = \int^\infty_0dT {\cal P}(\nu_{R2}\to\nu_{R1})\frac{\gamma_{\nu_{R2}}}{sHT}, \end{eqnarray} where we have taken the limits $T_{\rm RH}\to\infty, T_0\to0$, and $\gamma_{\nu_{R2}}\equiv\gamma_{\nu_{R2}}^{\rm col}+\gamma_{\nu_{R2}}^{\rm ID}$. The corresponding DM density can then be estimated in terms of the yield parameter \begin{eqnarray} \Omega_{N_1}h^2 &\simeq& 0.12 \left[ \frac{\sin^22\theta_N}{8.8\times10^{-3}} \right] \left[ \frac{|y_\nu^{\rm diag}|^2_{22}}{10^{-13}} \right] \left[ \frac{M_1}{\rm keV} \right] \left[ \frac{\tilde Y_{N_1}^0}{10^{12}} \right],\nonumber\\ \end{eqnarray} where $\tilde Y^0_{N_1}$ is the rescaled yield parameter, defined by factoring out the oscillation probability and the Yukawa coupling, $\tilde Y^0_{N_1} \equiv Y^0_{N_1}/({\cal P}(\nu_{R2}\to\nu_{R1}) (y_\nu y_\nu^\dagger)_{22})$. We found the following simple fitting formula to grasp the characteristic features of the DM abundance in our scenario \begin{eqnarray} \log_{10} \tilde Y^0_{N_1} &\simeq& 12.8 \quad (M_2\lesssim M_h)\nonumber\\ &\simeq& 13.3-(1/2)\log_{10}(M_2/M_h) \quad (M_2\gtrsim M_h).\nonumber\\ \end{eqnarray} This behavior matches our expectation because, as emphasized in referring to Fig. \ref{fig:rate}, the most efficient production occurs when the production rate reaches maximal with respect to the Hubble expansion rate. $\tilde Y^0_{N_1}$ is hence little dependent on $M_2$ when $M_2$ is smaller than $M_h$, because $N_2$ is dominantly produced via the inverse decay in this case, and thus the temperature at which the production rate becomes maximal is at $T\simeq M_h$. For $M_2\gtrsim M_h$, on the other hand, the SM particles possess the thermal mass and the production rate becomes maximal around $T\sim M_2$, which leads to some power dependence of the yield parameter on $M_2$. This is illustrated through a concrete example in the next section. \section{Benchmark model} \label{bm} \begin{center} \begin{figure}[t] \includegraphics[width=0.45\textwidth]{fig_BM.pdf} \caption{ The $N_1$ relic abundance is shown as a function of $M_2$ by varying $M_1$ from 100 keV to 100 MeV. The solid and dashed curves show the NH and IH cases, respectively. } \label{fig:BM} \end{figure} \end{center} We here discuss a possible realization of our scenario. Let us begin with a simple mass matrix given by \begin{eqnarray} {\cal M}_N &=& \left[ \begin{array}{ccc} M_0&m&\\ m&M_2&\\ &&M_3 \end{array} \right], \end{eqnarray} where $m$ and $M_0$ are taken to be $M_0\lesssim m \ll M_2, M_3$. ${\cal M}_N$ is then diagonalized as ${\cal M}_N^{\rm diag}={\rm diag}\{M_1,M_2,M_3\}$ with $M_1 \simeq M_0-m^2/M_2$ by using $U_R$ which reads \begin{eqnarray} U_R^* &\simeq& \left[ \begin{array}{ccc} 1&\theta_N&\\ -\theta_N&1&\\ &&1 \end{array} \right] , \theta_N=m/M_2. \label{eq:UR1} \end{eqnarray} The resultant $N_1$ abundance in the NH case is then given by \begin{eqnarray} \Omega_{N_1}h^2 &\simeq& 0.12 \left[ \frac{m_2}{0.01~{\rm eV}} \right] \left[ \frac{M_1}{\rm keV} \right] \left[ \frac{(m/5~{\rm GeV})^2}{M_2/100~{\rm GeV}} \right] \left[ \frac{\tilde Y_{N_1}^0}{10^{13}} \right],\nonumber\\ \end{eqnarray} while, in the IH case, $m_2$ should be replaced by $m_1$. In the case of $M_0 \ll m^2/M_2$, we can take $\theta_N \simeq (M_1/M_2)^{1/2}$ due to $M_1\simeq m^2/M_2$, and thus we obtain \begin{eqnarray} \Omega_{N_1}h^2 &\simeq& 0.12 \left[ \frac{m_2}{0.01~{\rm eV}} \right] \left[ \frac{M_1}{0.52~{\rm MeV}} \right]^2 \left[ \frac{\tilde Y_{N_1}^0}{10^{13}} \right]. \label{eq:Oh2_NH} \end{eqnarray} For this simplified case, Fig. \ref{fig:BM} shows $\Omega_{N_1}h^2$ as a function of $M_2$ for various $M_1$ taken from 100 keV to 100 MeV in both the NH and IH cases which are depicted by solid and dashed curves, respectively. \footnote{It should be noted that, in Fig.~\ref{fig:BM}, $t_{\rm osc}/t_{\rm col} \ll 1$ is achieved for $T\lesssim 10^6\times M_2$ even in the large $M_2$ region, so that a factor 1/2 in ${\cal P}$ by averaging out the RHN oscillation is justified.} The green band in the figure indicates the observed value of the DM abundance given by $\Omega_{\rm DM}h^2=0.1197\pm0.0022$ \cite{Ade:2015xua}. On the other hand, since $\Theta^2_{11}$ depends on $\theta_N$ and we need a relatively large $\theta_N$ for our scenario to work, the $N_1$ is subject to the X-ray constraint given by $\Theta^2_{11} \lesssim 10^{-5}({\rm keV}/M_1)^5$ \cite{Boyarsky:2005us}. One may simply expect that the X-ray bound is easily circumvented because the Yukawa coupling of $\nu_{R1}$ can be negligibly small. We, however, point out that the light RHN can decay into the SM particles through its oscillation to a heavier RHN. In our current setup, we obtain $\Theta^2_{11} = M_1^{-1} (m_1|R_{11}|^2 + m_2|R_{12}|^2 + m_3|R_{13}|^2),$ where $R_{ij}$ represents the $(i,j)$ entry of the $R$ matrix. We can now take $R_{13} = 0$, since there is no mixing in this component, and $m_1=0$ is experimentally allowed. However, since we have $|R_{12}|^2 = 1/(1 + (M_1/M_2)\cot^2\theta_N) \sim 1/2$ in our setup with $M_1 / M_2 \ll 1$, large $M_1$ is not allowed because of the X-ray constraint $\Theta^2_{11} \simeq m_2/(2M_1)\lesssim 10^{-5}({\rm keV}/M_1)^5$, where $m_2\simeq \sqrt{\Delta m^2_{21}}$ in the NH case, and $m_2$ is replaced by $m_1\simeq\sqrt{|\Delta m^2_{32}|}$ in the IH case. One may naively expect that this decay of light RHN through a heavier RHN is suppressed by the hierarchically large mass ratio $M_1/M_2\ll 1$. If we did not enforce the simple seesaw mechanism to obtain the desirable light neutrino masses, this would be the case and the X-ray bound could be circumvented. We, however, in our model construction stick to the seesaw mechanism to account for the observed neutrino masses, which then inevitably increase $y_2$ if we choose a bigger value of $M_2$ to result in too big an X-ray decay rate. To keep the virtue of explaining the observed neutrino masses by the simple type-I seesaw mechanism and yet not to lose the attractive feature of simple RHN oscillation production, we now discuss a time-dependent RHN mixing to evade the X-ray constraint mentioned above. Such a time-dependent RHN mixing can be achieved by utilizing the dynamics of a real scalar filed $\phi$. Let us here consider the two flavor case for simplicity, but the extension to the three flavor system is straightforward. In the two flavor case, we impose $Z_2$ symmetry under which $\nu_{R2}$ is even, while $\nu_{R1}$ and $\phi$ are odd. \footnote{ Although our setup is similar to the idea discussed in Ref.~\cite{Berlin:2016bdv}, the DM production scenario is quite different, since our scenario does not rely on the oscillation between active and sterile neutrinos, and thus the temperature at which the production efficiently occurs takes rather a wide range, which can imprint an observable signature on the structure formation. } Now the mass matrix ${\cal M}_N$ in Eq. (\ref{eq:lagrangian}) is given by \begin{eqnarray} {\cal M}_N(\phi) = \left[ \begin{array}{cc} M_1 & \kappa \phi\\ \kappa \phi & M_2 \end{array} \right] \end{eqnarray} in the interaction basis. The dynamics of $\phi$ is governed by the equation of motion $\ddot{\phi} + 3H\dot{\phi} + V'(\phi) = 0$, where $V(\phi)$ is the potential that we take $V(\phi) \simeq (1/2)m_\phi^2\phi^2$. For $m_\phi\ll 3H$, $\phi$ is almost constant, namely, $\phi \simeq \sqrt{2\rho_\phi}/m_\phi$ with $\rho_\phi$ the energy density of $\phi$, and when $H$ drops below $m_\phi$, $\phi$ starts to oscillate. As we will see below, $m_\phi\ll 3H$ is always satisfied when the $N_1$ production rate is maximal, and thus we take $\phi$ as a constant in this regime. The mixing angle between $\nu_{R1}$ and $\nu_{R2}$ is given by $\sin\theta_N \simeq \kappa \phi / M_2$ in the case that $M_1 \ll M_2$, and thus in the constant $\phi$ regime we obtain $\sin^22\theta_N \simeq 4\kappa^2 \rho_\phi/(m_\phi^2M_2^2)$, where the relevant $\theta_N$ is determined by $\rho_\phi(T_{\rm max})$ with $T_{\max}$ being the temperature at which the production rate becomes maximal, namely, $T_{\rm max} \sim T_c$ for $M_2\lesssim T_c$ and otherwise $T_{\rm max}\sim M_2$. As mentioned above, $\phi$ is constant until it starts to oscillate, so we can take $\rho_\phi(T_{\rm max}) \simeq \rho_\phi(T_{\rm osc})$ with $T_{\rm osc}$ given by $m_\phi = 3H(T_{\rm osc})$. Then, we obtain \begin{eqnarray} \sin^22\theta_N &\simeq& 0.3\times \left[ \frac{r_g}{30} \right]^{1/4} \left[ \frac{\kappa}{10^{-9}} \right]^2 \left[ \frac{m_\phi}{10^{-4}~{\rm eV}} \right]^{-1/2}\nonumber\\ && \times \left[ \frac{M_2}{100~{\rm GeV}} \right]^{-2} \left[ \frac{r}{10^{-4}} \right],\label{eq:phi-mixing} \end{eqnarray} with $r_g= g_*(T_{\rm osc})/g_*(T_0)$, and $r=\rho_\phi^0/\rho_{\rm DM}$ with $\rho_\phi^0$ and $\rho_{\rm DM}$ being the energy density of $\phi$ and dark matter at the present. Here we have used $g_*(T_0)\simeq3.36$. We also require that $\phi$ never thermalizes by taking a sufficiently small $\kappa$ not to affect the big bang nucleosynthesis, which results in $\kappa^2 \lesssim M_2/M_{\rm Pl}$. In addition, $m_\phi$ should be smaller than $H(T_{\rm max})$ in order for $\phi$ at $T_{\rm max}$ to be constant, where $H(T_{\rm max})\simeq 10^{-5}$ eV for $M_2 < T_c$ and $H(T_{\rm max})\simeq 10^{-5}\times (M_2/T_c)^2$ for $M_2 > T_c$. It is worth mentioning that the dynamics of $\phi$ may be tied to inflationary models. In particular, the condition of $\rho_\phi^0\ll\rho_{\rm DM}$ implies that the initial amplitude of $\phi$ is bounded \begin{eqnarray} \phi\lesssim 4\times10^{11}~{\rm GeV}\left(\frac{r_g}{30}\right)^{1/2}\left(\frac{r}{10^{-4}}\right)^{1/2}\left(\frac{10^{-4}~{\rm eV}}{m_\phi}\right)^{1/4}. \end{eqnarray} On the other hand, $\phi$ could be largely displaced from the origin during inflation and its oscillation at a later time possibly dominates the dark matter energy density, in an analogous manner to the Polonyi/moduli problem \cite{Coughlan:1983ci,Ellis:1986zt,Goncharov:1984qm}. To suppress $\phi$ in our case, we may utilize a relatively strong coupling between $\phi$ and inflaton, which renders the adiabatic suppression of the amplitude of the coherent oscillations \cite{Linde:1996cx}. Its actual dynamics, however, depends on the inflationary models and how $\phi$ couples to the inflaton, which we leave unspecified for the future work. Finally let us comment on the $\theta_N$ at the present, which is relevant for the decay of $N_1$. Below $T_{\rm osc}$, since $\rho_\phi$ drops as a matter energy density, we obtain \begin{eqnarray} \frac{\sin^22\theta_N(T_0)}{\sin^22\theta_N(T_{\rm osc})} \simeq 1.2\times 10^{-46} \left[ \frac{r_g}{30} \right]^{-1/4} \left[ \frac{m_\phi}{10^{-4}~{\rm eV}} \right]^{-3/2}, \end{eqnarray} and therefore a sufficiently small mixing to avoid the X-ray constraint can be achieved. \section{Discussion/Conclusion} Before concluding our discussions, let us briefly point out another potentially interesting production mechanism: the production of $N_1$ from a heavier RHN decay. We can consider the decay of $N_2$ (and/or $N_3$) which is thermally decoupled while it is relativistic (otherwise $N_2$ number density would be too small due to the Boltzmann suppression). $N_1$ abundance then can be estimated as \begin{eqnarray} \Omega_{N_1}h^2 &\simeq& 10^{-10} \left[ \frac{\Theta^2_{11}}{10^{-12}} \right] \left[ \frac{M_1}{10~{\rm keV}} \right] \left[ \frac{g_*(T_0)}{g_*(T_{\rm FO})} \right] \end{eqnarray} where we used the branching fraction of $N_2$ decay for the process $N_2\to N_1+$(mesons, leptons), ${\rm Br}(N_2\to N_1)\simeq \Gamma(N_2 \to N_1)/\Gamma(N_2 \to SM) \simeq M_2 \Theta^2_{11}\Theta^2_{22}/ M_2 \Theta^2_{22} \simeq \Theta^2_{11}$, and the ratio of $g_*$ accounts for the change in the effective degrees of freedom from the $N_2$ freeze-out epoch to the present time. This production contribution is hence subdominant compared with RHN oscillation production in the parameter region of our interest. Let us next mention the small-scale structure constraints applicable to our scenario. We here discuss the Lyman-$\alpha$ forest constraints which can give the lower limit on the DM mass from the DM free streaming scale $\lambda_{FS}\sim 1 ~\mbox{Mpc} ( {\rm keV}/{M_1}) ({\langle p/T \rangle}/{3.15})$ \cite{kev01}. Too large a free streaming scale can be excluded due to the suppression of small-scale structure formation. The average momentum of $N_1$ produced by the nonresonant oscillation of thermalized $N_2$ can be estimated as $\langle p_1\rangle \sim 2.8T$, analogous to the conventional (nonresonant) active-sterile oscillation scenario. Taking account of momentum redshifting by a factor $(g_*(T_{N_2\rightarrow N_1})/g_*(T\ll {\rm MeV}))^{-1/3}$ due to the change in the effective degrees of freedom, Lyman-$\alpha$ data leads to the RHN DM mass bound $M_1 \gtrsim 10 $ keV for our scenario \cite{ir2017} (when $N_2\rightarrow N_1$ occurs most efficiently before the QCD phase transition which is the case for the parameter range discussed so far). Such a DM mass range can be realized in our scenario as explicitly demonstrated through the concrete examples in the last section while being compatible with both the right relic abundance and seesaw mechanism. Among the possible extensions of our DM scenarios, we plan to study the leptogenesis as well as the neutrino observables such as the neutrinoless double beta decay in our future work. For instance, even though we have focused on the DM production in this article, the neutrino Yukawa couplings in our model can be further constrained by seeking the production of desirable baryon asymmetry in the Universe. The realization of leptogenesis when $N_2$ and $N_3$ are heavy enough and/or are degenerate in their masses with sufficient $CP$ violations \cite{Fukugita:1986hr,Asaka:2005pn,Pilaftsis:2003gt} will be explored in our forthcoming paper. The $CP$ phases in the neutrino Yukawa couplings are of great importance not only for the leptogenesis but also for the DM production in our scenario, and the presented production mechanism for the RHN DM could uncover a new connection between DM and leptogenesis to bring considerable opportunities for subsequent studies. \begin{acknowledgments} This work was supported by IBS under the project code IBS-R018-D1. We thank A. Kamada, A. Merle and T. Asaka for useful discussions and, in particular, the anonymous referee for the constructive suggestions. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The evolution of short-period exoplanets is thought to be dictated by atmospheric escape. This conclusion is supported by two different approaches: i) the detection of planetary outflows and large escape rates in hot exoplanets \citep[e.g.,][]{2003Natur.422..143V, 2015Natur.522..459E, 2018A&A...620A.147B} and ii) the observation of demographic features possibly carved by atmospheric escape in the population of Neptunes and super-Earths \citep{2013ApJ...763...12B, 2013ApJ...775..105O, 2016A&A...589A..75M, 2017AJ....154..109F, 2018AJ....156..264F, 2020ApJS..247...28H}. These discoveries have led the community to attempt to combine the theoretical descriptions of escape based on demographic features to predict observable atmospheric signatures in transiting exoplanets \citep[e.g.,][]{2007A&A...461.1185L, 2016A&A...586A..75S, 2019MNRAS.484L..49K, 2020MNRAS.498L..53C}. This experiment has been challenging, mainly because of limitations in our instruments and our theories \citep[e.g.,][]{2017MNRAS.466.1868C, 2020AJ....160..258K, 2020MNRAS.498L.119G, 2021JGRE..12606639B}. There are four known spectroscopic windows for observing atmospheric escape: the Lyman-$\alpha$ line at 121.57~nm \citep{2003Natur.422..143V}, metallic chromospheric lines and continuum in the ultraviolet \citep{2010ApJ...714L.222F, 2019AJ....158...91S}, the metastable helium triplet at 1\,083~nm \citep{2000ApJ...537..916S, 2018ApJ...855L..11O}, and the Balmer series of H lines in the blue optical \citep{2012ApJ...751...86J, 2020A&A...638A..87W}. Each one of them has its own set of challenges. While UV observations have classically been used to this end with a variable degree of success \citep[e.g.,][]{2010A&A...514A..72L, 2010ApJ...714L.222F, 2013A&A...560A..54V, 2019AJ....158...50W, 2020A&A...634L...4D, 2021A&A...649A..40D, 2021A&A...650A..73B, 2021ApJ...907L..36G}, they are particularly complicated because only the {\it Hubble Space Telescope} ({\it HST}) can access this wavelength range at high spectral resolution; in addition, cool stars usually do not have UV continuum, limiting transmission spectroscopy only to chromospheric or transition-region emission lines whose count rates are very low \citep{2017A&A...599L...3B, 2019A&A...629A..47D}. One of these techniques, He transmission spectroscopy, has been shown to be reliable and attainable using ground- and space-based instruments \citep{2018Natur.557...68S, 2018Sci...362.1384A}. This spectral channel is not photon-starved and is devoid of interstellar medium absorption \citep{2009ApJ...703.2131I}, the main limitations of Lyman-$\alpha$ spectroscopy. The disadvantage is that the formation of metastable He in the upper atmospheres of exoplanets depends on a specific level of irradiation arriving at the planet \citep[e.g.,][]{2018Sci...362.1388N, 2019ApJ...881..133O, 2020A&A...640A..29D}. Nevertheless, He spectroscopy has the potential to become the main technique of atmospheric escape observations \citep{2019A&A...623A..58A, 2019A&A...629A.110A, 2020AJ....159..115K, 2020AJ....159..278V, 2021A&A...647A.129L, 2021ApJ...909L..10P}. Upper atmospheres extend to several planetary radii and can dwarf the size of planet-hosting stars depending on the properties of the system \citep[e.g.,][]{1963P&SS...11..901C, 2015GeoRL..42.9001C, 2017A&A...605L...7L, 2017GeoRL..4411706K, 2018A&A...620A.147B}. For this reason, when observing the upper atmospheres of exoplanets, the transit geometry can have important effects on the interpretation of the data. For example, if a transiting planet has a nonzero impact parameter, a large portion of its exosphere may not transit and thus not contribute to the observed in-transit absorption. Furthermore, a subtler effect in time-series analyses of transmission spectroscopy is the dilution of a planetary absorption signal when the data are co-added in phase space. Since upper atmospheres are extended, the in-transit absorption is variable with time. This variability dilutes the in-transit absorption because time series of transmission spectra are co-added in phase space to improve the signal-to-noise ratio of the combined transmission spectrum \citep[e.g.,][]{2015A&A...577A..62W}. There are currently no publicly available tools to predict and interpret metastable He transmission spectroscopy. Considering that there is a broad community interest in these observations, we developed {\tt p-winds}, an open-source, fully documented Python implementation of the one-dimensional, isothermal Parker wind\footnote{\footnotesize{In this manuscript, we use the terms "wind" and "outflow" interchangeably, but the first should not be confused with horizontal winds in the lower atmosphere.}} description \citep{1958ApJ...128..664P}, to model exoplanet atmospheres. This code is timely because many data sets used to study metastable He spectroscopy have recently become public. Furthermore, an open-source implementation allows for an independent verification of results as well as community contributions to the code. {\tt p-winds} implements limb darkening and a ray-tracing algorithm that allows the user to change the transit geometry (namely the transit impact parameter and phase in relation to mid-transit). In this manuscript we describe the overarching implementation of {\tt p-winds}, discuss the design decisions, and illustrate the usage of the code. In Sect. \ref{sect:methods} we describe the several modules implemented in the code to forward model the metastable He signature in a transiting exoplanet. In Sect. \ref{sect:results} we present case studies of the warm Neptunes HAT-P-11~b and GJ~436~b and their corresponding atmospheric escape rates retrieved by fitting {\tt p-winds} models to observations.\ Finally, in Sect. \ref{sect:conclusions} we discuss the conclusions of this work. \section{Methods}\label{sect:methods} The code {\tt p-winds} is largely based on the formulations of \citet{2018ApJ...855L..11O} and \citet{2020A&A...636A..13L}. In its current version, the code has four core modules (and two support modules) to model the upper atmospheres and ionization balance of H and He around planetary bodies, which we describe below. In principle, these modules can be used independently of one another depending on the objective of the user. The code to reproduce the examples shown in this section can be obtained via the {\tt p-winds} documentation. \subsection{The {\tt parker} module}\label{sect:parker} The {\tt parker} module calculates the structure of the upper atmosphere following the theoretical description of the solar wind by \citet{1958ApJ...128..664P}. In this model, a steady-state, spherically symmetric outflow follows the equation of mass conservation: \begin{equation} \dot{m} = 4 \pi r^2 \rho(r)v(r) \mathrm{,} \end{equation}where $\dot{m}$ is the mass loss rate, $r$ is the radius, $\rho$ is the gas density, and $v$ is the outflow velocity. This model also follows the steady-state momentum equation: \begin{equation} v \frac{dv}{dr} + \frac{1}{\rho}\frac{dp}{dr} + \frac{G M_\mathrm{pl}}{r^2} = 0 \mathrm{,} \end{equation}where $G$ is the gravitational constant, $p$ is the thermal pressure and $M_\mathrm{pl}$ is the planetary mass. The isothermal Parker solar wind model assumes that the outflow is completely ionized, yielding a constant mean molecular weight, $\mu$, and consequently a constant sound speed, $v_\mathrm{s}$, as a function of radial distance. This allows for a significant simplification of the problem. However, as pointed out by \citet{2020A&A...636A..13L}, the upper atmosphere of a hot planet differs from the solar wind in that $\mu(r)$ is not necessarily constant with radial distance. But if we assume that the ratio $T(r) / \mu(r)$ is constant over $r$, then the assumption of a constant sound speed profile still holds. If we let $\bar{\mu}$ be the value of mean molecular weight corresponding to a given temperature $T_0$, then the constant sound speed is calculated as \begin{equation} v_\mathrm{s} = \sqrt{\frac{k T_0}{\bar{\mu}}}\mathrm{.} \end{equation}According to \citet{2020A&A...636A..13L}, the temperature $T_0$ in this approach corresponds to roughly the maximum of the temperature profile obtained by more comprehensive, self-consistent models (see Sect. 3.1 in their manuscript). As we see in the following paragraphs, we arrive at a similar conclusion in our calculations as well. We calculate $\mu(r)$ as \begin{equation} \mu(r) = m_{\rm p}\,\frac{1 + 4\,n_{\rm He}/n_{\rm H}}{1 + n_{\rm He} / n_{\rm H} + f_{\rm ion}(r)} \mathrm{, with}\ f_{\rm ion} = \frac{n_{\rm H^+}}{n_{\rm H}} \mathrm{,} \end{equation}where $m_\mathrm{p}$ is the mass of a proton and $n_\mathrm{X}$ is the number density of species X. For clarity, we note that $n_{\rm H} = n_{\rm H^0} + n_{\rm H^+}$ and $n_{\rm He} = n_{\rm He\,1^1S}~{\rm (singlet)} + n_{\rm He\,2^3S}~{\rm (triplet)} + n_{\rm He^+}$. We assume that the electrons coming from He ionization do not significantly contribute to changes in $\mu$. According to \citet{2018ApJ...855L..11O}, who also make this same assumption, including electrons from He ionization increases their number density by up to $\sim$10\%. For a given temperature $T_0$, the corresponding average mean molecular weight, $\bar{\mu}$, is calculated as in Eq. A.3 of \citet{2020A&A...636A..13L}: \begin{equation} \bar{\mu} = \frac{GM_{\rm pl} \int \mu(r)\frac{dr}{r^2} + \int \mu(r) v(r) dv + kT_0 \int \mu(r)d(1/\mu)}{GM_{\rm pl} \int \frac{dr}{r^2} + \int v(r) dv + kT_0 \int d(1/\mu)} \mathrm{.} \end{equation} It is convenient to normalize the radii, velocities and densities to, respectively, the radius at the sonic point ($r_\mathrm{s}$), the constant speed of sound, and the density at the sonic point ($\rho_\mathrm{s}$). Based on the formulation of \citet{1999isw..book.....L}, the resulting equations describing the radial profiles of velocity and density are \begin{equation}\label{v_eq} \tilde{v}(r) \exp \left[ \frac{-\tilde{v}(r)^2}{2} \right] = \left( \frac{1}{\tilde{r}} \right)^2 \exp \left[ -\frac{2}{\tilde{r}} + \frac{3}{2} \right]\ \mathrm{and} \end{equation} \begin{equation}\label{rho_eq} \tilde{\rho}(r) = \exp \left[ \frac{2}{\tilde{r}} - \frac{3}{2} - \frac{\tilde{v}^2}{2} \right] \mathrm{,} \end{equation}where $\tilde{r}$, $\tilde{v}$, and $\tilde{\rho}$ are the normalized radial distance, velocity, and density, respectively. Calculating the structure of the upper atmosphere requires as input: the planetary parameters and the stellar spectrum from X-rays to ultraviolet (XUV) impinging at the top of the atmosphere, as well as values for the atmospheric temperature and escape rate; the latter two are free parameters in the Parker wind model. Equation \ref{v_eq} is transcendental and requires a numerical approach to determine its solutions. To this end, we utilize a Newton-Raphson method implemented in {\tt scipy.optimize}, which requires an initial guess for the optimization. Equation \ref{v_eq} has many solutions, but we are only interested in the solution that represents an escaping atmosphere (i.e., a transonic solution). In order to guarantee we converge to the correct solution, we enforce that the initial guess is below (above) the speed of sound when calculating the velocities below (above) the sonic point. The end product of the {\tt parker} module is the structure of the upper atmosphere from Eqs. \ref{v_eq} and \ref{rho_eq}. Since the structure of the upper atmosphere (densities and velocities) and the H ionization fraction are interdependent of each other, the code performs a loop that iteratively calculates all of these one-dimensional profiles until convergence is achieved (see Sect. \ref{sect:hydrogen}). As an example, we calculated the structure the hot Jupiter HD~209458~b and the result is shown in Fig. \ref{fig:structure_hd209}. We compare the structure computed with {\tt p-winds} (continuous curves) with a one-dimensional model of the same planet calculated self-consistently using the formulation of \citet[][dashed curves, assuming 90\% H and 10\% He]{2019MNRAS.490.3760A}. In order to be comparable, the {\tt p-winds} model was computed using the same mass loss rate, composition, and a monochromatic ionizing flux (energy $> 13.6$~eV) of 450~erg\,s$^{-1}$\,cm$^{-2}$ with an average photon energy of 20 eV. The isothermal Parker wind predicts a similar structure as the self-consistent model when we assume a $T_0$ corresponding to the maximum temperature from the latter. This is the same result that \citet{2020A&A...636A..13L} obtains in their description. \begin{figure} \centering \includegraphics[width=0.9\hsize]{structure_hd209458b_comparison.pdf} \caption{One-dimensional structure of the upper atmosphere of the hot Jupiter HD~209458~b computed with {\tt p-winds} (continuous curves). Velocities are shown in blue and densities in orange. For comparison, we also plot a model for the same planet computed self-consistently using the formulation of \citet{2019MNRAS.490.3760A} as dashed curves. The circles mark the sonic point.} \label{fig:structure_hd209} \end{figure} Naturally, a one-dimensional model does not capture outflow asymmetries that are sometimes observed in Lyman-$\alpha$ transit spectroscopy \citep[e.g.,][]{2010A&A...514A..72L, 2017A&A...605L...7L, 2018A&A...620A.147B}. More complex, three-dimensional models are necessary to completely describe these features \citep[e.g.,][]{2016A&A...591A.121B, 2021MNRAS.501.4383V, 2021ApJ...914...98W, 2021ApJ...914...99W, 2019A&A...623A..58A, 2021arXiv210707534M}. Simple one-dimensional models are nevertheless capable of retrieving atmospheric escape parameters with the assumption that the mass loss process takes place spherically and homogeneously throughout the surface of the planet \citep[e.g.,][]{2020A&A...636A..13L, 2021A&A...647A.129L}. Models that are faster to calculate are also useful when there is a need to explore a large parameter space, which is what we discuss in Sect. \ref{sect:results} and in an upcoming manuscript \citep{Vissapragada2021}. \subsection{The {\tt hydrogen} module}\label{sect:hydrogen} The {\tt hydrogen} module calculates the steady-state distribution of neutral and ionized H in the upper atmosphere. The quantity of interest here is $f_\mathrm{ion}$, whose radial profile is obtained by calculating the steady-state balance between advection and source-sink terms for H ions. In this case, the source is (photo-) ionization by high-energy photons, and the sink is recombination into neutral atoms. This radial distribution can be calculated with the following differential equation \citep[see Sect. 3.2 in][]{2018ApJ...855L..11O}: \begin{equation}\label{ss_H} v(r)\,\frac{d f_\mathrm{ion}}{d r} = (1 - f_\mathrm{ion})\,\Phi(r) - n_{\rm H}(r)\,f_\mathrm{ion}^2\,\alpha_\mathrm{rec} \mathrm{,} \end{equation}where $n_{\rm H}(r) = x\,\rho(r) / [(x + 4y)\,m_p]$, with $x$ being the H atoms number fraction in the outflow, $y = 1 - x$ is the He atoms number fraction, and $\Phi$ is the photoionization rate: \begin{equation} \Phi(r) = \int^{\lambda_0}_{0} \frac{\lambda}{hc}\,f_\lambda\,\sigma_\lambda\,e^{-\tau_\lambda(r)}\,d\lambda \mathrm{,} \end{equation}where $\lambda_0$ is the wavelength corresponding to the ionization energy of H (911.65 \AA) and $f_\lambda$ is the flux density (in units of energy $\cdot$ time$^{-1} \cdot$ area$^{-1} \cdot$ wavelength$^{-1}$) arriving at the top of the atmosphere. $\sigma_\lambda$ is the photoionization cross section, which we calculate in the support module {\tt microphysics}, following Eq. 10 in \citet{2018ApJ...855L..11O}, which is based on \citet{2006agna.book.....O}. The optical depth of neutral H is given by \begin{equation}\label{tau_eq} \tau_{\lambda,\,H^0}(r) = \int_{r}^{\infty} \sigma_\lambda\,n_{H^0}(r)\,dr = \frac{x\,\sigma_{\lambda}}{(x + 4y)\,m_p}\int_{r}^{\infty} (1 - f_{\rm ion})\,\rho(r)\,dr .\end{equation} The velocities $v$ and densities $\rho$ are calculated using the module {\tt parker}. $\alpha_{\rm rec}$ is the case-B H recombination rate at a given temperature \citep{2006agna.book.....O, 2015ApJ...808..173T}, calculated as \begin{equation} \alpha_{\rm rec} = 2.59 \times 10^{-13} \left(\frac{T_0}{10^4}\right)^{-0.7}~\mathrm{cm}^3~\mathrm{s}^{-1}\mathrm{.} \end{equation} As seen in Eq. \ref{tau_eq}, $\tau_\lambda$ depends on $f_\mathrm{ion}$, which is what we want to calculate in the first place. However, the optical depth depends more strongly on the densities of H than the ion fraction. So instead of solving a system of coupled nonlinear differential equations, a first solution can be achieved by assuming that the whole atmosphere is neutral at first. Later, we relax this assumption by recalculating the $\tau_\lambda$ and $f_{\rm ion}$ profiles iteratively until convergence is achieved (the user can define the convergence criterion). We solve Eq. \ref{ss_H} using {\tt solve\_ivp}, an explicit Runge-Kutta integrator of hybrid 4th and 5th orders implemented in \texttt{scipy.integrate}. The user inputs an initial guess for $f_{\rm ion}$ at the innermost layer of the upper atmosphere. The code also takes as input the stellar host spectrum arriving at the planet, or the monochromatic flux between 0 and 911.65 \AA, and the planetary parameters. The solution for the H distribution in 500 points including the relaxation takes approximately 400~ms on a CPU with frequency 3.1~GHz and four computing threads. Continuing the example for HD~209458~b from Sect. \ref{sect:parker}, we calculated the ion and neutral fraction of H in the upper atmosphere, and the resulting distribution is shown in Fig. \ref{fig:hion_hd209} (continuous curve). We compare this result with the ion fraction calculated with the self-consistent escape model from Sect. \ref{sect:parker} (dashed curve); in order to be comparable, both models are calculated assuming an impinging XUV monochromatic flux of 450~erg~s$^{-1}$~cm$^{-2}$. The {\tt p-winds} model overpredicts the ion fraction by a factor of a few when compared to the self-consistent model, likely because of the larger densities (see Fig. \ref{fig:structure_hd209}), which increase the optical depth of the atmosphere to ionizing irradiation. \begin{figure} \centering \includegraphics[width=0.9\hsize]{f_ion_comparison.pdf} \caption{Neutral H atom fraction in the upper atmosphere of the hot Jupiter HD~209458~b computed with {\tt p-winds} for the same setup from Sect. \ref{sect:parker} (continuous curve). We also show the neutral fraction calculated with a self-consistent escape model for comparison (dashed curve).} \label{fig:hion_hd209} \end{figure} \subsection{The {\tt helium} module} The {\tt helium} module calculates the steady-state distribution of neutral singlet, neutral triplet, and ionized He in the upper atmosphere. The quantities of interest here are $f_1 = n_{\rm He\,1^1S} / n_{\rm He}$ and $f_3 = n_{\rm He\,2^3S} / n_{\rm He}$. The radial profiles $df_1/dr$ and $df_3/dr$ are described by a coupled system of differential equations with source and sink terms: \begin{equation}\label{eq:he_dist} \begin{cases} v(r)\,d f_1 / d r = \mathrm{sources}_1 + \mathrm{sinks}_1 \\ v(r)\,d f_3 / d r = \mathrm{sources}_3 + \mathrm{sinks}_3 \end{cases}\mathrm{.} \end{equation}We refer the reader to \citet{2018ApJ...855L..11O} and Table 2 of \citet{2020A&A...636A..13L} for detailed equations of all the source and sink terms for He\footnote{\footnotesize{We note that, in Table 2 of \citet{2020A&A...636A..13L}, the units for the recombination and collisional processes are cm$^3$\,s$^{-1}$, and not cm$^{-3}$\,s$^{-1}$ as the authors list in their manuscript.}}. In our code we do include the He charge exchange terms pointed out by \citet{2020A&A...636A..13L}. We assume that the He ionization and the excited helium triplet do not significantly change the structure of the upper atmosphere. This allows us to decouple the {\tt helium} module from the {\tt parker} and {\tt hydrogen} modules. This is advantageous because the user can enter as input a H structure that was calculated by more complex and self-consistent models than isothermal Parker winds ones. It is important, however, that these models do include He in their calculation of the structure in order to produce consistent results for the metastable He distribution. The procedure to solve the distribution of He (Eq. \ref{eq:he_dist}) is similar to that for H. The user inputs an initial guess for $f_1$ and $f_3$ at the innermost atmospheric layer, the stellar host spectrum from 0 to 2600~\AA\ (or monochromatic fluxes in the bands 0-1200~\AA\ and 1200-2600~\AA), the structure of the upper atmosphere (profiles of density and velocity), and the planetary parameters. It is important to emphasize that neutral H also contributes to the optical depth between wavelengths 0-911~\AA, attenuating the amount of high-energy flux that ionizes and populates the He levels. The code takes this contribution into account, as in \citet{2018ApJ...855L..11O}. Initially, the code assumes that the entire upper atmosphere has constant $f_1$ and $f_3$, and then a first solution is obtained using {\tt odeint}\footnote{\footnotesize{When calculating the steady-state distribution of He, we opted to use {\tt odeint} instead of {\tt solve\_ivp} in this case because the first is more stable and 2.6 times faster than the second. {\tt solve\_ivp} is faster than {\tt odeint} when solving the distribution of H.}}, a Python wrapper for the {\tt LSODA} solver from the Fortran library {\tt odepack} implemented in {\tt scipy.integrate}. This solution is then relaxed by updating the optical depths, $f_1$ and $f_3$ iteratively until convergence is achieved. The solution can, however, become numerically unstable for large density gradients, which can sometimes happen near the $R = 1$~R$_{\rm pl}$. A practical work-around is to establish a cutoff near $1$~R$_{\rm pl}$ that removes this large density gradient and ignore this layer of the atmosphere in the modeling. However, the user should be aware that this solution could affect the interpretation of more compressed thermospheres, such as that of HD~189733~b \citep{2021A&A...647A.129L}; we have not yet attempted to model this planet with \texttt{p-winds}, and leave this for future work. We show the distribution of He in the upper atmosphere of HD~209458~b calculated as described above in Fig. \ref{fig:he_dist_hd209}. For comparison purposes, this time we assumed a model with the same input parameters as the one described in \citet[][namely an escape rate of $8 \times 10^{10}$~g~s$^{-1}$, a temperature of 9000~K, a H fraction of 0.9, and a solar irradiating spectrum]{2018ApJ...855L..11O}. Our results match the models of \citeauthor{2018ApJ...855L..11O}, as seen in Fig. 3 of their publication. The solution for the He distribution in 500 points including the relaxation takes approximately 2.5~s on a CPU with frequency 3.1~GHz and four computing threads. This is the main computational bottleneck of the {\tt p-winds} code. \begin{figure} \centering \includegraphics[width=0.9\hsize]{he_dist_hd209458b.pdf} \caption{Distribution of He in the upper atmosphere of HD~209458~b calculated with {\tt p-winds} assuming the same input parameters as \citet{2018ApJ...855L..11O}.} \label{fig:he_dist_hd209} \end{figure} \subsection{The {\tt transit} module} The {\tt transit} module has two independent functions that can be used to calculate the spectral signatures of the upper atmosphere in transmission. The first function, {\tt draw\_transit}, calculates two-dimensional intensity maps containing the host star and a transiting planet at a user-defined phase and impact parameter. The one-dimensional profiles of metastable He volumetric densities are required to calculate the two-dimensional array of column densities mapped to the same geometry as the transit. The output intensity map is normalized in a way that the disk-averaged stellar intensity is 1.0 when the planet is out of transit. Optionally, the user can also input a limb-darkening law. The most important function in this module is {\tt radiative\_transfer\_2d}, which, as the name implies, calculates the in-transit absorption spectrum caused by the opaque disk of the planet and its upper atmosphere. In each cell of the two-dimensional transit mapped by the $ij$ indexes, the resulting attenuated intensity $I_{ij}(\nu)$\footnote{\footnotesize{The radiative transfer routine uses input in wavelength space, but the actual calculations are performed in frequency space for code clarity and brevity. The grid size is defined by the user.}} of the stellar light caused by absorption of He in the upper atmosphere is given by \begin{equation}\label{eq:rad_transf} I_{ij}(\nu) = I_{ij,\,0}(\nu) \exp{(-\tau_{ij,\,{\rm He}})} \mathrm{,} \end{equation}where $I_{ij,\,0}(\nu)$ is the intensity emerging from the host star before filtering through the atmosphere and $\tau_{ij,\,{\rm He}}$ is the optical depth due to metastable He. $I_{ij}(\nu)$ is set to zero in the cells corresponding to the opaque disk of the planet. From here onward, we drop the $ij$ indexes for the sake of brevity, but the reader should implicitly assume that the radiative transfer is carried out cell-by-cell in the transit map. Formally, the optical depth is given by \begin{equation}\label{eq:opt_depth} \tau_{\rm He} = \int_{-R_{\rm atm}}^{R_{\rm atm}} \varphi_\nu(z)\,\sigma_{\rm He}\,n_{\rm He}(z)\, dz \mathrm{,} \end{equation}where $\varphi_\nu$ is the Voigt profile and $\sigma_{\rm He}$ is the cross section of metastable helium lines near 1.083~$\mu$m. Following \citet{2018ApJ...855L..11O} \citep[see also, e.g.,][]{2019MNRAS.490.3760A}, the He cross section is calculated as \begin{equation} \sigma_{\rm He} = \frac{\pi e^2}{m_e c} f \mathrm{,} \end{equation}where $f$ is the oscillator strength of the transition, $e$ is the electron charge, and $m_e$ is the electron mass. This formula is only valid in the Gaussian-cgs unit system, where $e$ is given in units of esu or statC (see, for example, \citealt{2010ApJ...723..116K} for a formula that can be used in other unit systems). The Voigt profile $\varphi_\nu$ is calculated using the {\tt voigt\_profile} implementation of {\tt scipy.special}, which takes three parameters: the bulk velocity shift $v_{\rm bulk}$ of the profile in relation to the rest wavelength, the standard deviation $\alpha$ of the Gaussian (in our case Doppler) term, and the Lorentzian half width at half maximum (HWHM). Similar to \citet{2020A&A...636A..13L}, the Gaussian width $\alpha$ is calculated as \begin{equation}\label{eq:gaussian_broadening} \alpha = \frac{\nu_0}{c} \sqrt{\frac{2\,k_B T}{m_{\rm He}}} \mathrm{,} \end{equation}where $m_{\rm He}$ is the mass of a He atom, $T$ is the temperature of the gas, $\nu_0$ is the central frequency of the transition. The Lorentzian HWHM is $\gamma = A_{ij} / 4\pi$, where $A_{ij}$ is the Einstein coefficient of the transition. We took the properties of the metastable He transitions near 1.083~$\mu$m from the National Institute of Standards and Technology (NIST) database\footnote{\footnotesize{\url{https://www.nist.gov/pml/atomic-spectra-database}.}}, and list them in Table \ref{triplet_properties}. \begin{table} \caption{Spectral line properties of the metastable He triplet in the near-infrared.} \label{triplet_properties} \centering \begin{tabular}{cccc} \hline\hline \multirow{2}{*}{Upper level $J$} & $\lambda_0$ & $A_{ij}$ & \multirow{2}{*}{$f$} \\ & (nm, in air) & (s$^{-1}$) & \\ \hline $0$ & $1\,082.909$ & $1.0216 \times 10^7$ & $5.9902 \times 10^{-2}$ \\ $1$ & $1\,083.025$ & $1.0216 \times 10^7$ & $1.7974 \times 10^{-1}$ \\ $2$ & $1\,083.034$ & $1.0216 \times 10^7$ & $2.9958 \times 10^{-1}$ \\ \hline \end{tabular} \end{table} \subsubsection{Line broadening by the planetary outflow} In reality, $\varphi_\nu$ depends on the three-dimensional position in relation to the planet because each position has a different line-of-sight velocity, which broadens the absorption line. Thus, the formal calculation of the Voigt profile is performed for each pencil of light between the star and the observer. In a given position $z$ along the pencil, the line-of-sight velocity, $v_{\rm LOS}$, as a function of distance, $r$, from the planet center is calculated using the formulation of \citet{2020A&A...633A..86S}: \begin{equation} |v_{\rm LOS}(r)| = |v_{\rm ver}| \frac{z^2}{\sqrt{r^2 + z^2}} \mathrm{,} \end{equation}where $v_{\rm ver}$ is the outflow velocity obtained from the Parker wind model. This calculation has to be performed for three spectral lines, and it adds an extra dimension for wavelength. For these reasons, the formal calculation of $\varphi_\nu$ taking into account all the four dimensions is computationally costly. In order to accelerate the radiative transfer, instead of calculating the Parker wind broadening in full dimensionality, we can optionally assume that it contributes to the Gaussian broadening term of the Voigt profile uniformly through the line of sight. With the dependence on the $z$ axis dropped, we can remove $\varphi_\nu$ from the integrand in Eq. \ref{eq:opt_depth}, yielding the approximation \begin{equation}\label{eq:rad_transf_approx} \tau_{\rm He} \simeq \varphi_\nu\,\sigma_{\rm He} \int_{-R_{\rm atm}}^{R_{\rm atm}} n_{\rm He}(z)\, dz = \varphi_\nu\,\sigma_{\rm He}\,\eta_{\rm He} \mathrm{,} \end{equation}where $\eta_{\rm He}$ is the column density of He. In order to validate this approximation, we need to assume that the Gaussian wind broadening has a constant velocity $v_{w}$ in the line of sight, and add it in quadrature to the square-velocity term of Eq. \ref{eq:gaussian_broadening}, yielding \begin{equation}\label{eq:alpha_approx} \alpha_{\rm approx} = \frac{\nu_0}{c} \sqrt{\frac{2\,k_B T}{m_{\rm He}} + v_{w}^2} \mathrm{.} \end{equation}The wind-broadening velocity term $v_{w}$ is calculated as the average of $v_{\rm LOS}$ weighted by the metastable He number density: \begin{equation}\label{eq:vw_average} v_{w} = \frac{\int_0^{R_{\rm sim}} v_{\rm LOS}(r)\,n_{\rm He\,2^3S}(r)\,dr}{\int_0^{R_{\rm sim}} n_{\rm He\,2^3S}(r)\,dr} \mathrm{.} \end{equation} In this approximation, the user can, optionally, include an additional source of broadening: microturbulence. We implement the same formulation of \citet{2020A&A...636A..13L}: \begin{equation}\label{eq:vw_turb} v_{\rm turb} = \sqrt{5k_B T / (3m_{\rm He})} \mathrm{.} \end{equation}The turbulence velocity term is added quadratically in Eq. \ref{eq:alpha_approx}. {\tt p-winds} allows the user to decide on which method to use to calculate the Voigt profile: the formal calculation (Eqs. \ref{eq:rad_transf} and \ref{eq:opt_depth}) or the density-weighted average broadening parameter (Eqs. \ref{eq:rad_transf_approx}, \ref{eq:alpha_approx}, and \ref{eq:vw_average}). The turbulent broadening (Eq. \ref{eq:vw_turb}) can optionally be included at the discretion of the user. This is done with the optional parameters {\tt wind\_broadening\_method} and {\tt turbulence\_broadening} when calling the {\tt radiative\_transfer\_2d} function; the default is the density-weighted average-velocity implementation, which is a good compromise of speed and accuracy. We assess the validity of the assumption we made above by calculating the formal and the average-velocity broadening methods for HD~209458~b and HAT-P-11~b. The first planet has a more compact upper atmosphere and lower outflow velocities than the second by a factor of $\sim$2. In the case of HD~209458~b, the average-velocity method produces an accurate approximation to the formal calculation (see the left panel of Fig. \ref{fig:profile_comparison}). In the case of the more extended atmosphere of HAT-P-11~b, the average-velocity method is accurate when turbulence broadening is also included (right panel of Fig. \ref{fig:profile_comparison}). In both cases, the average method is one order of magnitude faster in computation time than the formal method. \begin{figure*} \centering \begin{tabular}{cc} \includegraphics[width=0.47\hsize]{broadening_HD209458b.pdf} & \includegraphics[width=0.47\hsize]{broadening_HATP11b.pdf} \end{tabular} \caption{Comparison of the spectral line broadening in the He triplet lines using two different methods: formal definition of the optical depth (black) and the density-weighted average-velocity broadening (red). In the case of a more extended atmosphere of HAT-P-11~b, a better match between the formal and average methods is obtained when we include turbulence broadening (orange). These are forward models, and we are not yet attempting to fit them to observed signatures.} \label{fig:profile_comparison} \end{figure*} We emphasize that, at this point, we are only producing a forward model and not making an attempt to fit it to actual existing observations of this planet \citep[e.g.,][]{2020A&A...636A..13L}. The results we obtain from the example of HD~209458~b throughout this section, from the Parker wind structure to the predicted metastable He transmission spectrum, are consistent with those obtained by \citet{2018ApJ...855L..11O}. \subsubsection{Dilution of the transit signature} Usually, the absorption of light by upper atmospheres in transiting exoplanets is in the order of several percent or less in a narrow bandpass. Thus, transmission spectroscopy observations sometimes rely on averaging time series in phase-space to build enough signal to noise and produce a detectable signal. However, upper atmospheres are so extended that the in-transit absorption signature is variable with phase, and phase-averaging them dilutes the observed signature. In addition, inhomogeneities in the stellar surface, such as limb darkening, may become important, particularly when fitting (spectro-) photometric light curves. We illustrate this effect in Fig. \ref{fig:spec_ts}, where we simulated the phase-averaging for both HD~209458~b and HAT-P-11~b. In the case of the hot-Jupiter, the mid-transit spectrum (phase $= 0.0$) is comparable to the spectrum phase-averaged between second and third contacts (T2-T3), but it differs more significantly when the phase-averaging is taken between first and fourth contacts (T1-T4). This is due to a more compact upper atmosphere than HAT-P-11~b, whose extent produces a larger difference between the phase-averaged and the mid-transit spectra. Since the planet-to-star ratio is smaller than HD~209458~b, phase averaging between T1-T4 or T2-T3 does not make as much difference as it does for the hot Jupiter. \begin{figure*} \centering \begin{tabular}{cc} \includegraphics[width=0.47\hsize]{ts_ts_hd209b.pdf} & \includegraphics[width=0.47\hsize]{ts_ts_hp11b.pdf} \\ \end{tabular} \caption{Metastable He transmission spectrum of HD~209458~b (left panel) and HAT-P-11~b (right panel) for uniformly sampled phases, transit impact parameter $b = 0.499$ \textbf{and $b = 0.132$, respectively}, and limb darkening based on the results of \citet{2007ApJ...655..564K} and \citet{2010A&A...510A..21S}. The baseline $(R_{\rm p} / R_{\rm s})^2$ was removed, as in actual ground-based observations. Phase 0.0 and 0.5 represent, respectively, the times of mid-transit and of first (or fourth) contact. The dashed red spectrum is the average of all phases between the first and fourth contact. The dot-dashed black curve is the average between second and third contact. These are forward models, and we are not yet attempting to fit them to observed signatures.} \label{fig:spec_ts} \end{figure*} Previous one-dimensional descriptions of the metastable He transmission spectrum did not take into account the transit geometry, phase-averaging, and limb darkening. The {\tt p-winds} code allows the user to set the transit impact parameter, phase in relation to the first and fourth contacts, and set a limb-darkening law. To this end, we utilize the auxiliary open-source code {\tt flatstar}\footnote{\footnotesize{The code is freely available at \url{https://flatstar.readthedocs.io}.}} to simulate transit grids (see a brief description in Appendix \ref{app:flatstar}). In the current implementation, this transit grid only allows for circular orbits and it neglects the curvature of the transit chord. \section{Atmospheric escape retrievals}\label{sect:results} We further benchmarked the code {\tt p-winds} by performing retrievals of the atmospheric escape rate, temperature, line-of-sight bulk velocity, and the H fraction ($n_{\rm H}/n_{\rm atoms}$) of the warm Neptunes HAT-P-11~b and GJ~436~b. Both planets were observed in transmission spectroscopy by the CARMENES spectrograph (Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Échelle Spectrographs), but only the first had a strong in-transit signal and the second had only a non-detection. For the latter, we attempted to fit upper or lower limits of the atmospheric escape rate and outflow temperature. The Python algorithms to reproduce our retrievals are freely available online\footnote{\footnotesize{\url{https://zenodo.org/record/4906091}.}}. \subsection{Fitting the He signature of HAT-P-11~b}\label{retrieval} The metastable He signature of HAT-P-11~b was measured with the CARMENES spectrograph installed in the 3.5~m telescope at the Calar Alto Observatory \citep{2018Sci...362.1384A}. The transmission spectrum is openly available in the DACE platform\footnote{\footnotesize{\url{https://dace.unige.ch/openData/}}}. The central wavelengths of the metastable He transitions retrieved from the NIST database are listed as measured in air, but the wavelengths of the CARMENES spectrum are in vacuum. We converted the wavelengths of the latter to in-air using the following formula \citep[][IAU standard]{2000ApJS..130..403M}: \begin{multline} \lambda_{\rm air} = \lambda_{\rm vacuum} / n\ {\rm with}\\ n = 1 + 0.0000834254 + \frac{0.02406147}{130 - s^2} + \frac{0.00015998}{38.9 - s^2}\ {\rm and}\\ s = 10^4 / \lambda_{\rm vacuum} \mathrm{.} \end{multline} In general, we fit three free parameters: the atmospheric escape rate $\dot{m}$, the upper atmosphere temperature $T$, and the bulk line-of-sight velocity $v_{\rm bulk}$ of the upper atmosphere. It is also possible to run fits with additional parameters (such as the H fraction). The fit is performed by maximizing the likelihood $\mathcal{P}$ of a given transmission spectrum model $\mathcal{F}_{\rm model}$ to represent the observed transmission spectrum $\mathcal{F}$. Such a log-likelihood is given by \begin{equation}\label{eq:likelihood} \ln{\mathcal{P}(\mathcal{F} | \lambda, \sigma, \vec{p})} = -\frac{1}{2} \sum_n \left[ \frac{\left(\mathcal{F}_n - \mathcal{F}_{\mathrm{model},\,n}\right)^2}{\sigma_n^2} + \ln{\left(2\pi \sigma_n^2 \right)} \right] \mathrm{,} \end{equation}where $n$ stands for a given bin of the spectrum, $\sigma$ is the uncertainty of the measurement, and $\vec{p}$ is the vector containing the free parameters. To determine the uncertainties of the fit, we use the Markov chain Monte Carlo (MCMC) ensemble sampler {\tt emcee} \citep{2013PASP..125..306F}. The uncertainties we report here represent the confidence interval that encompasses the 16th to the 84th percentile of the posterior distribution of the free parameters. For HAT-P-11~b, we ran in total four different models: (1) no limb darkening, H number fraction fixed to 0.90; (2) a quadratic limb-darkening law with coefficients $c_1 = 0.63$ and $c_2 = 0.09$ \citep{2010A&A...510A..21S}, H number fraction fixed to 0.90; (3) no limb darkening, H fraction as a free parameter with a uniform prior of [0.80, 0.99]; and (4) same as (1), but using the formal implementation of the radiative transfer instead of the average-velocity broadening approximation. For model (4), instead of a full MCMC, we perform only a maximum-likelihood (Eq. \ref{eq:likelihood}) using the Nelder-Mead algorithm implemented in {\tt scipy.optimize.minimize}. The reason is because we simply want to assess the accuracy of the average-velocity broadening approximation for HAT-P-11~b in comparison to the formal, more computationally costly radiative transfer. HAT-P-11 does not have a full high-energy spectrum measurement, so we use the spectrum of a similar star from the MUSCLES Treasury Survey\footnote{\footnotesize{Available at \url{https://archive.stsci.edu/prepds/muscles/}.}} \citep{2016ApJ...820...89F, 2016ApJ...824..101Y, 2016ApJ...824..102L} as a proxy. We chose the star HD~40307, which has similar effective temperature, mass, radius, and surface gravity as HAT-P-11. We ran the MCMC for 7000 steps and 10 walkers in 10 cores of a computer cluster with an average frequency of 3.0 GHz per core. The autocorrelation time $t$ of the MCMC was, on average, 45 steps when we started from a first guess of $1 \times 10^{10}$~g~s$^{-1}$, $800$~K, and $-2.0$~km~s$^{-1}$, respectively, for $\dot{m}$, $T$, and $v_{\rm bulk}$. We remove a total of $2t$ burn-in steps from the beginning of the MCMC and take a sample thinned by $t / 2$, resulting in a flat chain of approximately 3600 samples. The computation of the MCMC chains took approximately 6.5 hours of computing time. Different planets will likely yield different computing times because the numerical bottleneck (calculating the He distribution) is highly dependent on the input parameters. We show the posterior distributions of the fit parameters for Model 1 in Fig. \ref{fig:hatp11b_corner} (see also Appendix \ref{app:posteriors}), and a sample of corresponding transmission spectrum models fit to the data in Fig. \ref{fig:hatp11b_t_spec}. Table \ref{hp11b_results} contains the retrieved atmospheric escape parameters for HAT-P-11~b based on the CARMENES transmission spectrum. All models we tested yield results consistent with one another within their uncertainties, based on the marginalized posterior distribution of the retrieved parameters. A comparison between the results for Models 1 and 4 reveals that, in the case of HAT-P-11~b, the limb darkening of the star does not significantly affect the retrieved escape parameters when fitting a ground-based transmission spectrum. In the case of Model 3, where we allowed the H fraction to vary between 0.80 and 0.99, the retrieval slightly favors fractions $> 0.96$, but the 3$\sigma$ upper limit of $> 0.80$ is not constraining. We show the resulting posterior distributions of the fit to Model 3 in Fig. \ref{fig:hatp11b_corner_4p}. For fractions above $0.92$, the retrieved escape rate tends to increase by a factor of several percent. The retrieved upper atmosphere temperature $T$ is highly anticorrelated with the H fraction. This degeneracy increases the uncertainties of $T$ by a factor of at least two. We show the resulting distribution of He in the upper atmosphere of HAT-P-11~b in Fig. \ref{fig:hatp11b_He_dist} based on the best-fit model to the CARMENES data. Finally, we show that the retrieved escape parameters of Models 1-3 (average-velocity approximation for the wind broadening) are fully consistent with that of Model 4 (formal radiative transfer calculation). Hence, we demonstrate that this approximation, which saves an order of magnitude in computation time, does not significantly affect the retrieved escape parameters. \begin{table*}[t] \caption{Upper atmosphere properties retrieved for HAT-P-11~b from the CARMENES transmission spectrum.} \label{hp11b_results} \centering \begin{tabular}{lcccc} \hline\hline Model & $\dot{m}$ & $T$ & $v_{\rm bulk}$ & H fraction \\ & ($\times 10^{10}$~g~s$^{-1}$) & ($\times 10^{3}$~K) & (km~s$^{-1}$) & \\ \hline 1 & $2.5^{+0.8}_{-0.6}$ & $7.2 \pm 0.7$ & $-1.9 \pm 0.8$ & $0.90$ (fixed) \\ 2 & $2.3^{+0.7}_{-0.5}$ & $7.2^{+0.7}_{-0.6}$ & $-1.9 \pm 0.8$ & $0.90$ (fixed) \\ 3 & $2.6^{+2.6}_{-0.8}$ & $7.1^{+1.0}_{-0.9}$ & $-1.9 \pm 0.8$ & $> 0.80$ (3$\sigma$) \\ 4 & $2.1$ & $6.7$ & $-1.9$ & $0.90$ (fixed) \\ \hline \end{tabular} \end{table*} \begin{figure} \centering \includegraphics[width=0.9\hsize]{hat-p-11_b_corner_M1.pdf} \caption{Posterior distributions of the mass loss rate, upper atmospheric temperature, and line-of-sight bulk velocity of HAT-P-11~b using {\tt p-winds} models (no limb darkening included) as a retrieval tool against a CARMENES transmission spectrum (Model 1).} \label{fig:hatp11b_corner} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\hsize]{fit_t_spec_4p.pdf} \caption{Transmission spectrum of HAT-P-11~b measured with CARMENES (black) and a sample of 100 {\tt p-winds} models (Model 1) fit to the data (red).} \label{fig:hatp11b_t_spec} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\hsize]{He_dist_hp11b.pdf} \caption{Distribution of He in the upper atmosphere of HAT-P-11~b based on the best-fit solution obtained by fitting {\tt p-winds} models (no limb darkening included) to the CARMENES transmission spectrum (Model 1).} \label{fig:hatp11b_He_dist} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\hsize]{hat-p-11_b_corner_M3.pdf} \caption{Same as Fig. \ref{fig:hatp11b_corner}, but including the H fraction as a free parameter to be fit (Model 3).} \label{fig:hatp11b_corner_4p} \end{figure} In order to compare our results with those obtained by the three-dimensional model EVE \citep{2013A&A...557A.124B} used by \citet{2018Sci...362.1384A}, we need to calculate the escape of metastable He only instead of total mass loss. For a total escape rate of $2.5 \times 10^{10}$~g~s$^{-1}$ and an upper atmosphere temperature of $7200$~K obtained from the retrieval described above, we calculate an average metastable helium fraction of $4.8 \times 10^{-6}$ and a $T/\mu$ fraction of 8000~K~amu$^{-1}$. This result translates into a metastable-He escape rate of $1.2 \times 10^5$~g~s$^{-1}$, which is compatible with the upper-limit rate of $\sim 3 \times 10^5$~g~s$^{-1}$ determined by \citet{2018Sci...362.1384A}. Our retrieved $T/\mu$ fraction when assuming a H fraction of 0.9 is discrepant with the results of \citet{2018Sci...362.1384A}, who \textbf{find} $T/\mu = 24\,000$~K~amu$^{-1}$. Some of the solutions of our retrieval with the H fraction as a free parameter do allow for high values of $T/\mu$ up to $12\,000$, but they are nevertheless incompatible with \citet{2018Sci...362.1384A}; the authors, however, do propose that a high $T/\mu$ may correspond to a low mean atomic weight, which can be obtained with a large fraction of ionized gas and free electrons. In an upcoming manuscript, \citet{Vissapragada2021} will discuss how solutions with high temperatures can be ruled out because they are not energetically self-consistent, assuming that the heating comes solely from the available high-energy irradiation budget. A possible explanation for the disagreement can be due to modeling difference, since \citet{2018Sci...362.1384A} use a hydrostatic model while we use a hydrodynamic one; since a hydrostatic thermosphere is less extended, it needs a higher temperature to increase the density of He enough to be detectable at high altitudes. The bulk velocity of $-1.9 \pm 0.8$~km~s$^{-1}$ is consistent with the net blueshift of $3$~km~s$^{-1}$ reported by \citet{2018Sci...362.1384A}, which was previously interpreted as a high-altitude wind flowing from the day- to the night-side of the planet. This net blueshift is not predicted by the one-dimensional Parker wind model, which is the reason for fitting it as a free parameter in our models. More complex, tridimensional models that take into account other physical processes may be necessary to determine the exact mechanism that causes this bulk velocity shift in the metastable He absorption signature. \subsection{Escape rate upper limit for a non-detection in GJ~436~b} GJ~436~b is a high-profile case of atmospheric escape because it possesses the deepest transmission spectrum feature detected to date: a repeatable 50\% in-transit absorption in Lyman-$\alpha$ \citep{2014ApJ...786..132K, 2015Natur.522..459E, 2017A&A...605L...7L, 2019A&A...629A..47D}, which is explained by a large volume of exospheric neutral H fed by escape \citep{2016A&A...591A.121B, 2021MNRAS.501.4383V}. In fact, \citet{2018ApJ...855L..11O} predict a metastable He signature as deep as 9\% in the core of the strongest line of the triplet. However, when GJ~436~b was observed by CARMENES, the results yielded only a non-detection \citep{2018Sci...362.1388N}. In this section we \textbf{attempt} to measure an upper limit of atmospheric escape rate to the non-detection of He in GJ~436~b and compare it to the result derived from Lyman-$\alpha$ transmission spectroscopy and modeling. The metastable He transmission spectrum of GJ~436~b is rather unfortunately not publicly available, but the pipeline-reduced spectral time series from \citet{2018Sci...362.1388N} is available in the CARMENES data archive\footnote{\footnotesize{\url{http://carmenes.cab.inta-csic.es/gto/jsp/nortmannetal2018.jsp}}}. We ran an MCMC of $10\,000$ steps and 10 walkers with three free parameters: total escape rate, upper atmospheric temperature, and the H fraction. We increased the number of steps compared to the HAT-P-11~b retrieval in order to better explore the parameter space, since we are expecting to obtain only upper/lower limits. We did not include limb darkening. Based on previous theoretical predictions for GJ~436~b \citep[e.g.,][]{2016A&A...586A..75S}, we set uniform priors of [$10^7$, $10^{12}$]~g~s$^{-1}$ for the mass loss and [$1\,000$, $10\,000$]~K for the temperature, and $[0.40, 0.99]$ for the H fraction. We used the high-energy spectrum of GJ~436 measured in the MUSCLES Treasury Survey as a source of irradiation. The resulting posterior distributions of the free parameters for GJ~436~b yield, at 99.7\% (3$\sigma$) confidence, an upper limit of $4.5 \times 10^9$~g~s$^{-1}$ for the escape rate and a lower limit of 2600~K for the upper atmospheric temperature (given the uniform priors above). In broad terms, mass loss rates above this value or temperatures below the lower limit would yield a detectable metastable He signature. With the H fraction as a free parameter, these 3-$\sigma$ limits become $3.4 \times 10^{10}$~g~s$^{-1}$ and 2400~K. This result is consistent with the escape rate of $\sim 2.5 \times 10^8$~g~s$^{-1}$ inferred by \citet{2016A&A...591A.121B}, and with the mass loss rate of $(6 - 10) \times 10^9$~g~s$^{-1}$ inferred by \citet{2021MNRAS.501.4383V}, both based on the same Lyman-$\alpha$ transmission spectroscopy data set. Given the flat prior of $[0.40, 0.99]$ for the H fraction, the resulting posterior distribution of this parameter is not constraining; however, it seems to favor higher values and peaks near 0.99 (see Fig. \ref{fig:gj436b_post2}). Interestingly, this result could be seen as an agreement with the prediction of a H-rich outflow for GJ~436~b due to selective escape, leading to a He-rich lower atmosphere \citep{2015ApJ...807....8H}. We cannot, however, draw strong conclusions on this matter because GJ~436~b had a non-detection of He. More detailed descriptions that fit both H and He simultaneously in the upper atmosphere of this planet are likely going to yield more definitive answers. For example, \citet{2021A&A...647A.129L} used H densities derived from Lyman-$\alpha$ observations to inform metastable He models, and determine that the warm Neptune GJ~3470~b has $n_{\rm H}/n_{\rm atoms} = 0.985^{+0.010}_{-0.015}$. \section{Conclusions}\label{sect:conclusions} We demonstrate in this manuscript the usage of the open-source Python code {\tt p-winds} to forward model the distribution of He atoms in the upper atmospheres of exoplanets as well as their corresponding metastable He transmission spectra of exoplanets. The code also enables the retrieval of atmospheric escape rates and temperatures based on observations at high resolution when coupled to an optimization algorithm, such as a maximum likelihood estimation or an MCMC sampler. A typical retrieval takes several hours to compute, depending on the setup. As an implementation of the method originally described by \citet{2018ApJ...855L..11O}, the forward models produced by {\tt p-winds} are fully compatible with that study. We also implement changes proposed by \citet{2020A&A...636A..13L}, such as the inclusion of charge exchange of He and H particles. Our implementation includes further improvements, such as the addition of transit geometry and the limb darkening of the host star, as well as allowing the H fraction ($n_{\rm H} / n_{\rm atoms}$) as a free parameter in the retrieval. We used {\tt p-winds} to fit the escape rate, outflow temperature, and the H fraction of the warm Neptune HAT-P-11~b based on CARMENES transmission spectroscopy previously reported in \citet{2018Sci...362.1384A}. For a model without limb darkening and with $n_{\rm H}/n_{\rm atoms}$ fixed at 0.90, we find that the escape rate of HAT-P-11~b is $(2.5^{+0.8}_{-0.6}) \times 10^{10}$~g~s$^{-1}$ and the planetary outflow temperature is $7200 \pm 700$~K. This temperature is in disagreement with the value of $T/\mu$ calculated by \citet{2018Sci...362.1384A}, and it is likely caused by a key difference between our models -- theirs contains a hydrostatic thermosphere, while ours is hydrodynamic. Including limb darkening does not have a significant impact on the retrieved parameters of HAT-P-11~b. Allowing the H fraction as a free parameter has a stronger impact because it yields an anticorrelation with the retrieved outflow temperature. It also increases the uncertainty of the retrieved atmospheric escape rate. We find that the H fraction is unconstrained, but with a preference for higher values. These results are in agreement with those of \citet{2020A&A...636A..13L, 2021A&A...647A.129L}, although those authors can constrain the H fraction by analyzing He transmission spectra in conjunction with H escape using Lyman-$\alpha$ observations. Finally, we also attempted to fit limits for the escape rate, outflow temperature, and H fraction of GJ~436~b based on a non-detection with the CARMENES spectrograph reported by \citet{2018Sci...362.1388N}. We find an upper limit of $3.4 \times 10^{10}$~g~s$^{-1}$ for the first and a lower limit of 2400~K for the second at 99.7\% confidence. Our upper and lower limit determinations show a preference for high values of $n_{\rm H}/n_{\rm atoms}$, with the posterior distribution peaking near 0.99. These results are fully compatible with the escape rate of $\sim 2.5 \times 10^8$~g~s$^{-1}$ inferred by \citet{2016A&A...591A.121B} based on Lyman-$\alpha$ transmission spectroscopy. For both HAT-P-11~b and GJ~436~b, we find a slight preference for high values of H in the atomic fraction, which is in line with the results of \citet{2020A&A...636A..13L, 2021A&A...647A.129L} for other hot gas giants. The main limitations of a one-dimensional, isothermal Parker wind model are: 1) It does not capture the three-dimensional nature of very extended atmospheres, particularly when they have both a thermospheric and an exospheric contributions \citep[see the case of WASP-107~b in][]{2019A&A...623A..58A, 2021arXiv210708999S}; 2) it does not take into account the variable profile of temperature with radial distance from the planet, which is seen in self-consistent models of escape \citep[e.g.,][]{2016A&A...586A..75S, 2019MNRAS.490.3760A}; and 3) it does not self-consistently consider the sources of heating and cooling that control the atmospheric escape process. The usefulness of simple models such as {\tt p-winds} lies in an efficient exploration of the parameter space that defines atmospheric escape (scalability) and ease of use (open-source, fully documented code) when more sophisticated models are not yet warranted. As for the next steps, we aim to improve {\tt p-winds} by including the escape of heavier atomic species, such as C, N, O, Mg, Si, and Fe. This will allow us to use the code to predict and interpret observations of metals escaping hot gas giants, such as the signatures reported by \citet{2013A&A...560A..54V} and \citet{2019AJ....158...91S}. We shall also add day-to-nightside winds to the atmospheric modeling, similar to \citet{2020A&A...633A..86S}. Another avenue to explore {\tt p-winds} in the future consists in coupling it with more complex tridimensional hydrodynamic escape models. \begin{acknowledgements} LADS acknowledges the helpful input of A. Wyttenbach, M. Stalport, A. Oklop{\v{c}}i{\'c}, J. St\"urmer, and M. Zechmeister to the development of this project. The authors also thank the referee, Manuel L\'opez-Puertas, for the helpful and detailed review. SV is supported by an NSF Graduate Research Fellowship and the Paul \& Daisy Soros Fellowship for New Americans. RA is a Trottier Postdoctoral Fellow and acknowledges support from the Trottier Family Foundation, and his contribution was supported in part through a grant from {\it Fonds de recherche du Québec – Nature et technologies}. This research was enabled by the financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (projects: {\sc Four Aces} grant agreement No 724427; {\sc Spice Dune} grant agreement No 947634; {\sc ASTROFLOW} grant agreement No 817540), and it has been carried out in the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). The {\tt p-winds} code makes use of the open source software NumPy \citep{harris2020array}, SciPy \citep{2020SciPy-NMeth}, Pillow (\url{https://python-pillow.org}), and Astropy \citep{astropy:2018}. The results of this manuscript were also made possible by the open source software Matplotlib \citep{Hunter:2007}, OpenMPI (\url{https://www.open-mpi.org}), Jupyter \citep{jupyter}, MPI for Python \citep[{\tt mpi4py};][]{DALCIN20111124}, {\tt emcee} \citep{2013PASP..125..306F}, and {\tt schwimmbad} \citep{schwimmbad}. Finally, the authors also extend a special thanks to the platforms GitHub, Conda-Forge, Read the Docs, and Travis.ci for the valuable support of open-source initiatives. \end{acknowledgements} \bibliographystyle{aa}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Soft Hair on Black Holes} An infinite number of asymptotic symmetries for gravity and Abelian gauge theories were uncovered in the last few year thanks to the work of several authors, especially A. Strominger~\cite{Strominger_graviton1,Strominger_graviton,Strominger_photon,Strominger_QED,Campiglia:2015kxa}. A recent, intriguing paper \cite{Hawking} by Hawking, Perry, and Strominger argues that such new symmetries can be used to constrain the final states resulting from black hole evaporation \cite{Hawking74,Hawking75}, beyond the universal restrictions due to energy and charge conservation. {This fact is potentially relevant to the black hole information paradox~\cite{Hawking76}.} Two new ingredients enter in their discussion. The first one is the existence of the infinite-dimensional set of new symmetries mentioned above. Each symmetry generates a conserved charge. The second ingredient {involves a clever use of} such charges to create new black hole states out of old ones. The crucial claim of ref.~\cite{Hawking} is that these new states are distinguishable from the old ones. By itself, the existence of new conserved charges does not imply the existence of new black hole hair. In the specific case considered in~\cite{Hawking}, new $U(1)$ asymptotic charges are obtained by integrating a trivially conserved current, $J=\star d (\varepsilon \star F)$, over an appropriate Cauchy surface. In the absence of black holes or massive charged states, the surface can be pushed up to future null infinity $I^+=R\times S^2$. When the scalar function $\varepsilon$ is independent of the null generator of $I^+$, but has an arbitrary dependence on the angular coordinates $(z, \bar{z})$ of ${S}^2\in I^+$, the charge is \begin{equation} {\mathscr Q}=\int _{I^+} d (\varepsilon \star F) = \int_{I^+} \hat{d} \varepsilon \wedge \star F + \int_{I^+} \varepsilon d \star F \eeq{m1} The term ${\mathscr Q}_S\equiv \int_{I^+} \hat{d} \varepsilon \wedge \star F$, where $\hat{d}$ is the exterior derivative on $S^2 \in I^+$, is the ``soft charge'' of ref.~\cite{Strominger_photon}, while ${\mathscr Q}_H=\int_{I^+} \varepsilon d \star F=\int_{I^+} \varepsilon \star j$ is the hard charge. The last equality uses of course Maxwell's equations. In the presence of a classical black hole, even in the simplest case that no massive charged matter exists, $I^+$ is no longer a Cauchy surface. On the other hand, a black hole hair is an object defined on $I^+$ (and not on the horizon) that can be used to reconstruct the black hole state. It is the total derivative nature of the current $J$ that makes ${\mathscr Q}$ a potential black hole hair. Namely, as in the case of black hole electric charge and ADM mass, ${\mathscr Q}$ can be written as a surface integral over the sphere at spatial infinity, or $I^+_-$, \begin{equation} {\mathscr Q} = -\lim_{u\to -\infty} \oint \varepsilon \star F, \eeq{boundary} where $u$ is the retarded time. However, for a classical stationary black hole space-time all new charges are trivial \cite{Flanagan}, as expected from black hole no-hair theorems. \begin{figure}[h] \begin{center} \epsfig{file=vaidya.pdf, height=4in, width=3.7in} \end{center} \caption{In the presence of a classical black hole, the Cauchy surface is $I^+\cup H$, but the charge ${\mathscr Q}$ is an asymptotic quantity, as it can be written as a boundary integral at $I^+_-$. However, ${\mathscr Q}=0$ for classical stationary black holes unless $\varepsilon = {\rm constant}$, in which case it is a multiple of the black hole electric charge.} \label{vaidya} \end{figure} Consider next a quantum black hole which evaporates. Here the second ingredient in the Hawking-Perry-Strominger mechanism enters and becomes essential. As a warm-up example to the Hawking-Perry-Strominger mechanism, let's ask a simpler question:\footnote{{We thank Dan Harlow for suggesting this analogy to us.}} is the three-momentum vector a black hole hair? Hawking, Perry, and Strominger would argue that it is \cite{Hawking}. And indeed it has an implication for Hawking evaporation. If the early Hawking quanta, from an initially stationary black hole, carry away total momentum $\boldsymbol P$, then by momentum conservation the resulting black hole must have a nonzero momentum $-\boldsymbol P$ and so do the late Hawking quanta. This is a source of correlation between the early and late Hawking radiation, which makes the final state less mixed than a thermal state. However, the correlation is much too small to purify the Hawking radiation. This simple fact can be related to the existence of a symmetry operator that transforms the black hole state. Suppose that after the emission of early quanta the ADM mass of the black hole is $M$, and it is sufficiently large that we can talk about a metastable state $|M\rangle$ with some internal degrees of freedom, not explicitly shown in $|M\rangle$. A moving black hole state can be obtained from the stationary one, {described by} $|M\rangle$, by a boost $U(\Lambda)$, where $\Lambda(M,\boldsymbol 0) = (\sqrt{M^2+P^2},-\boldsymbol P)$. Lorentz symmetry implies that the S-matrix ${\mathcal{S}}$ commutes with boosts, so, if $|M\rangle$ evaporates into ${\mathcal{S}}|M\rangle\equiv |X\rangle$, then \begin{equation} {\mathcal{S}} U(\Lambda) |M\rangle = U(\Lambda) {\mathcal{S}} |M\rangle = U(\Lambda)|X\rangle. \end{equation} {The final state} $|X\rangle$ can be expanded in terms of asymptotic states \begin{equation} |X\rangle = \sum_{b} {\mathcal{S}}_{M\to b} |b\rangle, \qquad |b\rangle = \prod_{i=1}^m a_{p_i,\zeta_i}^\dagger |0\rangle \end{equation} where $b = \{({\bsb p}_1,\zeta_1),\cdots,({\bsb p}_m,\zeta_m)\}$ runs over outgoing states and $\zeta$ characterizes their discrete quantum numbers. Applying $U(\Lambda)$, the Hawking quanta which are momentum eigenstates get boosted, while the vacuum is boost invariant \begin{equation} U(\Lambda)|X\rangle = \sum_{b} {\mathcal{S}}_{M\to b} |\Lambda b\rangle, \qquad U(\Lambda) |0\rangle = |0\rangle. \end{equation} Thus the late-time observer can distinguish $|M\rangle$ from $U(\Lambda)|M\rangle$ {by measuring the $a_{p_i,\zeta_i}$ quanta. Notice that these are in general ``hard," since their momenta are generic}. One can ask if super-translation symmetries \cite{Strominger_graviton1,Strominger_graviton} and their analog in electrodynamics \cite{Strominger_photon} (hereafter denoted as {large $U(1)$} symmetries) lead to additional hair in a similar way. Naively, given that there are infinitely many conserved charges (involving energy flux and electric charge flux in every direction) then, depending on the angular distribution of early quanta, there will {exist} very non-trivial constraints on the late quanta. This would lead to much larger correlations between late and early radiation. \section{Shaving off the Soft Hair} We will now show that these conservation laws fix early (late) soft radiation in terms of early (late) hard radiation, but do not induce any cross correlation between early and late quanta. The easiest way to see this is to introduce a new basis of asymptotic states in which hard particles are dressed with soft photons and gravitons. In terms of {this new basis}, the soft part of the ${\mathcal{S}}$-matrix becomes trivial and all conservation laws are automatically satisfied. First, choose an IR cutoff $\lambda$, much {smaller} than the typical energy $E$ of the particles involved in the process. In the case of black holes, $E$ is the Hawking temperature. Write In and Out Hilbert spaces as products ${\mathcal H}^\pm = {\mathcal H}_h^\pm \otimes {\mathcal H}_s^\pm$ where ${\mathcal H}_s^+$ (${\mathcal H}_s^-$) includes soft outgoing (incoming) photons and gravitons with frequency less than $\lambda$. Any In state can be written as a superposition of states of the form $|a\rangle |\alpha\rangle$, where $a \in {\mathcal H}^-_h$ labels the momenta and quantum numbers of hard In states and $\alpha\in {\mathcal H}^-_s$ labels soft incoming photons/gravitons. Every Out state is similarly written as $|b\rangle |\beta\rangle$. The Weinberg soft theorems \cite{Weinberg,Weinberg-I} imply that, for fixed initial ($|a\rangle$) and final ($|b\rangle$) hard states, the S-matrix matrix factorizes into the product of: \begin{enumerate} \item A ``hard'' unitary matrix, $\hat{{\mathcal{S}}}$, which does not depend on soft degrees of freedom. This means that $\hat{{\mathcal{S}}}$ acts as the unit matrix on the space of soft photons/gravitons. \item Two ``soft dressing'' unitary matrices that act solely on the space of soft photons and that depend on $|a\rangle$ and $|b\rangle$. \footnote{Factorization breaks down for large number of soft quanta, when back-reaction becomes important. However, given that the total emitted energy in soft radiation is much less than $\lambda$ (by a factor of $\alpha$ in electrodynamics and $E^2/M_{\rm Pl}^2$ in gravity), the probability for large back-reaction is negligible and it vanishes in the limit $\lambda\rightarrow 0$.} \end{enumerate} {Explicitly: \begin{equation} \langle \beta|\langle b| {\mathcal{S}} |a\rangle |\alpha\rangle = \langle b| \hat {\mathcal{S}} |a\rangle \langle \beta|\Omega(b) \Omega^\dagger(a)|\alpha\rangle \eeq{factorize} where $\Omega = \Omega_{\rm ph} \Omega_{\rm gr}$; the photon soft factor is given by \begin{equation} \begin{split} \Omega_{\rm ph}(a) \equiv & \exp\Big(i\int^\lambda \frac{d^3{\bsb k}}{(2\pi)^3 2|{\bsb k}|} \\ &\sum_s a_{\rm ph}({\bsb k},s) \epsilon^*_\mu(s,{\bsb k})J^\mu(-|{\bsb k}|,-{\bsb k}) +h.c.\Big), \end{split} \end{equation} $a_{\rm ph}({\bsb k},s)$ is the ladder operator for the free photon field and \begin{equation} J^\mu(|{\bsb k}|,{\bsb k}) = -i \sum_{i\in a} \frac{Q_i p_i^\mu}{p_i\cdot k},\qquad\text{with}\quad k^\mu = (|{\bsb k}|,{\bsb k}). \end{equation} The graviton soft factor is \begin{equation} \begin{split} \Omega_{\rm gr}(a) \equiv \exp\Big(i\int^\lambda &\frac{d^3{\bsb k}}{(2\pi)^3 2|{\bsb k}|} \sum_s a_{\rm gr}({\bsb k},s) \epsilon^*_{\mu\nu}(s,{\bsb k}) \\ &T^{\mu\nu}(-|{\bsb k}|,-{\bsb k}) +h.c.\Big), \end{split} \end{equation} $a_{\rm gr}({\bsb k},s)$ is the ladder operator for the free graviton field and \begin{equation} T^{\mu\nu}(|{\bsb k}|,{\bsb k}) = -i\frac{\kappa}{2}\sum_{i\in a} \frac{p_i^\mu p_i^\nu}{p_i\cdot k}. \end{equation} To verify \eqref{factorize} note that Weinberg formula for the emission of multiple soft photons/gravitons is of the form \begin{equation}\label{s} {\mathcal{S}}_{b,\beta;a,\alpha} = F_{b,\beta;a,\alpha} {\mathcal{S}}_{b,0;a,0}, \end{equation} where \begin{equation} F_{b,\beta;a,\alpha} = \frac{\langle \beta|\Omega(b)\Omega^\dagger(a)|\alpha\rangle}{ \langle 0|\Omega(b)\Omega^\dagger(a)|0\rangle }. \end{equation} So we define \begin{equation}\label{shat} \hat {\mathcal{S}}_{b;a} \equiv \frac{{\mathcal{S}}_{b,0;a,0}}{\langle 0|\Omega(b)\Omega^\dagger(a)|0\rangle }, \end{equation} in terms of which the connected S-matrix reads as \eqref{factorize}. Note that $\hat{\mathcal{S}}$ is by construction independent of soft states. Note also that dividing by the vacuum expectation value $\langle 0|\Omega(b)\Omega^\dagger(a)|0\rangle$ in \eqref{shat} cancels the IR divergences in ${\mathcal{S}}_{b,0;a,0}$. The same techniques developed in \cite{Strominger_graviton,Strominger_photon,Strominger_QED} to establish the equivalence of super-translation and large $U(1)$ conservation laws with Weinberg soft formulas, can be used to show that in massless QED \begin{equation}\label{commut} [{\mathscr Q}_S , \Omega(a)] = \Omega(a)\sum_i Q_i \varepsilon(\hat {\bsb p}_i) , \end{equation} and \begin{equation}\label{eigen} {\mathscr Q}_H a_{p_i,\zeta_i}^\dagger|0\rangle = - Q_i \varepsilon(\hat {\bsb p}_i) a_{p_i,\zeta_i}^\dagger |0\rangle, \end{equation} and as a result \begin{equation} {\mathscr Q}^{I^+} {\mathcal{S}} = {\mathcal{S}} {\mathscr Q}^{I^-} = \sum_{a,b} |b\rangle\langle a| \langle b |\hat {\mathcal{S}}|a\rangle \Omega(b) {\mathscr Q}_S \Omega^\dagger(a) \end{equation} for all large $U(1)$ charges. Here we used the fact that after antipodal matching of $\varepsilon(z,\bar z)$ on $I^+$ and $I^-$, ${\mathscr Q}_S^{I^+}$ and ${\mathscr Q}_S^{I^-}$ are given by the same expressions in the Fock space of photons. Similar results hold for massive QED as well as gravitational scattering. Conversely, the independence of $\hat{{\mathcal{S}}}$ --defined as ${\mathcal{S}}$ modulo the soft factors $\Omega$-- from soft photon (or soft graviton) operators also follows directly from conservation of the current $J=\star d (\varepsilon \star F)$. To prove that, it suffices to consider parameters $\varepsilon$ that depend on the null coordinates $u,v$ as $\varepsilon_\omega(u,z,\bar{z})=\exp(i\omega u) \eta(z,\bar{z})$ on $I^+$ and $\varepsilon_\omega(v,z,\bar{z})=\exp(i\omega v) \eta(z,\bar{z})$ on $I^-$. Equation~\eqref{m1} becomes \begin{equation}\label{mm1} {\mathscr Q}_\omega= \int _{I^+} d (\varepsilon_\omega \star F) = \int_{I^+} \hat{d} \varepsilon_\omega \wedge \star F + \int_{I^+} \varepsilon_\omega d \star F + \int_{I^+} du \partial_u\varepsilon_\omega \wedge \star F . \end{equation} On $I^-$ a similar equation holds. The last term in eq.~\eqref{mm1} vanishes in the limits $\omega\rightarrow 0^\pm$. This can be proven using $|\int_{I^+} du \partial_u\varepsilon_\omega \wedge \star F|= |\oint_{S^2} \omega \eta \tilde{F}_{ur}|$. The Fourer transform $\tilde{F}_{ur} $ of the field strength $F_{ur}$ is $L^2$ since $\oint_{S^2}\int d\omega |\tilde{F}_{ur}|^2 =\oint_{S^2}\int dt |{F}_{ur}|^2$, by Parseval's identity, and $\oint_{S^2}\int dt |{F}_{ur}|^2 \leq \oint_{S^2}\int dt {\cal H}<\infty$. Here $\cal H$ is the EM energy density. Conservation of ${\mathscr Q}_\omega$ thus implies, after using \eqref{commut} and \eqref{eigen} and with obvious notation \begin{equation}\label{mm2} \lim_{\omega\rightarrow 0^\pm} [{\mathscr Q}_{\omega\, S} ,\hat{{\mathcal{S}}}]=0. \end{equation} Now it suffices to recall~\cite{Strominger_photon,Strominger_QED} that $\lim_{\omega\rightarrow 0^+}{\mathscr Q}_{\omega\, S}$ creates a soft photon, while $\lim_{\omega\rightarrow 0^-}{\mathscr Q}_{\omega\, S}$ annihilates it, to conclude that $\hat{{\mathcal{S}}}$ commutes with {\em all} soft photon creation and annihilation operators. By Shur's lemma this means that $\hat{{\mathcal{S}}}$ is a constant on Fock space of the soft photons, since that space is an irreducible representation of the canonical commutation relations.\footnote{Refs.~\cite{Strominger_photon,Strominger_QED} impose the weaker requirement that $\lim_{\omega\to 0^+} \frac{1}{2}({\mathscr Q}_{\omega}+{\mathscr Q}_{-\omega})$ commute with the S-matrix. However, to derive the soft theorem one has to use an additional identity, {valid on a dense subset of states}: \begin{equation} \lim_{\omega\to0} a_{\rm ph}(\omega \hat x,+) {\mathcal{S}} = - {\mathcal{S}} a_{\rm ph}^\dagger (\omega \hat x,-). \end{equation} This identity follows from the absence of monopole interaction \cite{Mirbabayi}. Combined together they imply that both $\lim_{\omega\to 0^\pm} {\mathscr Q}_{\omega}$ commute with ${\mathcal{S}}$.} We introduce now a new basis of scattering states, obtained from the old ones by dressing the hard particles as~\cite{chung} \begin{equation}\label{dressed} ||a,\alpha \rangle\rangle = \Omega(a) |a\rangle |\alpha\rangle, \quad |a\rangle \in {\mathcal H}^-_h,\quad |\alpha\rangle \in {\mathcal H}^-_s. \end{equation} In this basis the soft part $\alpha$ evolves trivially and all dynamics is in the hard part: \begin{equation}\label{SS} \langle\langle b,\beta|| {\mathcal{S}} || a,\alpha\rangle\rangle = \langle b|\hat{\mathcal{S}} |a\rangle \langle \beta|\alpha \rangle. \end{equation} Working in this basis makes it clear that during the Hawking evaporation (1) super-translation and large $U(1)$ symmetries put no constraint on the hard radiation, and (2) for a big black hole early and late hard quanta are separately accompanied by their own soft radiation $\Omega(a_{\rm early})$ and $\Omega(a_{\rm late})$. No information is carried over from the early stage of evaporation to the later period. In other words, the soft dynamics decomposes into superselection sectors that never mix during time evolution. \section{Additional Remarks} The factorized S-matrix (\ref{SS}) also explains why {neither} electromagnetic {nor} gravitational memory can be regarded as black hole hair. Imagine two black holes of equal mass $M$; one of them formed by colliding two high energy photons along the $x$ axis $|p_x,-p_x\rangle$ and the other by the same collision along the $y$ axis $|p_y,-p_y\rangle$. According to \cite{Strominger_memory} this directional information can be retrieved by looking at the soft gravitational emission $|\alpha\rangle$ from the formation process. Thus it seems that less information is needed to be stored in black hole for the whole process of black hole formation and evaporation to be unitary. However, this argument ignores the possibility of having soft incoming radiation. Once that is included, for any observed gravitational memory $|\alpha\rangle$ the kinematics of hard incoming states remains completely undetermined. In particular, the two initial states $||p_x,-p_x,\alpha\rangle\rangle$ and $||p_y,-p_y,\alpha\rangle\rangle$ produce mass-$M$ black holes with identical gravitational memories $|\alpha\rangle$. A generic state is an entangled superposition of soft and hard states \begin{equation}\label{m-ent} |V\rangle=\sum_{a\,\alpha}C(a,\alpha) ||a,\alpha\rangle\rangle, \qquad C(a,\alpha)\in \mathbb{C}, \end{equation} but any such entanglement cannot be used to extract information on the state $|V\rangle$ using operators that act only on hard modes. Specifically, a large $U(1)$ transformation is a unitary operator $U$ that, in the new basis of dressed states $\{ ||a,\alpha\rangle\rangle \}$, acts only on soft states; so, it does not affect the matrix elements of any operator $O$ that depends only on hard quanta because $U^{\dagger} O U=O$. In particular, $\langle V | O |V \rangle= \langle V' | O | V'\rangle$, $|V'\rangle=U|V\rangle$. The S-matrix, seems at first sight to mix hard and soft modes, but in the basis of dressed states $|| a,\alpha \rangle\rangle$, we have shown that it also factorizes into the product of an operator acting only on hard modes plus the identity operator acting on soft modes in the basis $\{ || a,\alpha \rangle\rangle \} $. {It is worth expanding on the last remark and come to the original analogy with Lorentz boosts, to} ask what is the fundamental difference between conservation laws associated to super-translations (and large $U(1)$'s) and momentum conservation. Note that after the emission of early quanta, the remaining black hole is not just boosted to cancel the net momentum transferred to the early radiation. Due to the soft graviton/photon radiation, it is also immersed in a vacuum with a different metric, {and a different} $A_\mu$ configuration. Inside the light cone created by the early soft radiation this is a pure gauge configuration which can be generated from the vacuum by a large gauge transformation. Let us focus for simplicity on the electromagnetic case and study the action of the generator of the large $U(1)$ {transformations} as in \cite{Hawking} \begin{equation}\label{QM} |\tilde M\rangle = {\mathscr Q} |M\rangle. \end{equation} The conservation of ${\mathscr Q}$ implies that $|\tilde M\rangle$ evaporates into ${\mathscr Q}|X\rangle$. However, we should now include the soft radiation in $|X\rangle$: \begin{equation} {\mathscr Q} |X\rangle = \sum_b \hat {\mathcal{S}}_{M\to b} {\mathscr Q} ||b,0\rangle\rangle = \sum_b \hat {\mathcal{S}}_{M\to b} ||b,\alpha\rangle\rangle \end{equation} where we used \eqref{commut}, \eqref{eigen}, and their analogs, and we defined: \begin{equation} |\alpha\rangle = {\mathscr Q}_S|0\rangle. \end{equation} This is an exactly zero-frequency photon. In reality ${\mathscr Q}_S$ is IR regulated by the distance that the early radiation has traveled until the detection of late quanta.\footnote{Continuation of asymptotic charges to finite distance was discussed in \cite{Mirbabayi}.} This distance is much larger than the box over which the late-time detector makes measurements. Hence, the late-time observer has no way of distinguishing $|\alpha\rangle$ from $|0\rangle$. Therefore, unlike a boost, which transforms late Hawking quanta but leaves the vacuum invariant, spontaneously broken super-translations and large $U(1)$'s leave measurable Hawking quanta invariant and {merely} unobservably transform the vacuum. It is amusing to notice that here the factorization of soft photons into superselection sectors {is crucial to explain why the information paradox persists}, while in the context of the ``baby universe'' picture of black hole evaporation, advocated in~\cite{polchinski-strominger,strominger-lh}, a superselection-sector factorization was crucial to that proposal for solving the puzzle. {Various constructions of different types of soft hair have been proposed in the literature. In particular, the hair defined in~\cite{cd1,*cd2,*cd3} are effects due to the finite number of quanta involved in a ``corpuscular'' description of black holes. The hair studied in~\cite{cd5,*cd6,*cd7}, as well as those in~\cite{s-j}, are associated to symmetries of the horizon. The soft hair considered in this paper are instead specifically only those connected with the large asymptotic $U(1)$ (or BMS) symmetries considered in~\cite{Hawking}.} {A deep, extensive analysis of QED, which uses dressed states similar to~(\ref{dressed}) and includes a construction of the S-matrix, was given in a remarkable series of paper by Kibble~\cite{kibble1,*kibble2,*kibble3,*kibble4}. We thank A. Schwimmer for making us aware of those paper. A similar construction of the S-matrix using coherent states was proposed also in~\cite{fk}. After this paper was posted to the archives, we received the draft of a manuscript~\cite{Gabai} that independently arrives to conclusions similar to ours. We thank the authors for sending it to us prior to publication.} \section*{Acknowledgements} It is a pleasure to thank D. Harlow, M. Kleban, J. Maldacena, A. Maloney, A. Schwimmer, A. Strominger, S. Yankielowicz and A. Zhiboedov for useful discussions and comments on the paper. MM is supported in part by NSF grants PHY-1314311 and PHY-0855425. MP would like to thank Kavli IPMU, Tokyo, for its kind hospitality during completion of this paper. MP is supported in part by NSF grant PHY-1316452.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The nuclear symmetry energy is the energy needed per nucleon to convert all protons in a symmetric nuclear matter to neutrons. Knowledge on the density dependence of nuclear symmetry energy is important for understanding the dynamics of heavy ion collisions induced by radioactive beams, the structure of exotic nuclei with large neutron or proton excess, and many important issues in nuclear astrophysics~\cite{LCK08,ireview98,ibook,baran05}. At normal nuclear matter density, the nuclear symmetry energy has long been known to have a value of about $30$~MeV from fitting the binding energies of atomic nuclei with the liquid-drop mass formula. Somewhat stringent constraints on the nuclear symmetry energy below the normal nuclear density have also been obtained during past few years from studies of the isospin diffusion~\cite{Tsa04,Liu07,Che05a,LiBA05c} and isoscaling~\cite{She07} in heavy-ion reactions, the size of neutron skins in heavy nuclei~\cite{Ste05b}, and the isotope dependence of giant monopole resonances in even-A Sn isotopes~\cite{Gar07}. For nuclear symmetry energy at high densities, transport model studies have shown that the ratio of negatively to positively charged pions produced in heavy ion collisions with neutron-rich nuclei is sensitive to its stiffness~\cite{bali02,Ferini:2006je}. Comparison of this ratio from an isospin-dependent Boltzmann-Uehling-Ulenbeck (IBUU) transport model based on the non-relativistic momentum-dependent (MDI) nuclear effective interactions~\cite{xiao09} with measured data from heavy ion collisions by the FOPI Collaboration~\cite{FOPI} at GSI seems to indicate that the nuclear symmetry energy at high density might be very soft. Although this study does not include the relativistic effects, which may affect the charged pion ratio as shown in Ref~\cite{Ferrini:2005jw}, it provides an important step in the determination of the nuclear symmetry energy at high densities. The transport model used in Ref.~\cite{xiao09} neglects, however, medium effects on pions, although it includes those on nucleons and produced $\Delta$ resonances through their isospin-dependent mean-field potentials and scattering cross sections. It is well-known that pions interact strongly in nuclear medium as a result of their $p$-wave couplings to the nucleon-particle--nucleon-hole and delta-particle--nucleon-hole ($\Delta$-hole) excitations, leading to the softening of their dispersion relations or an increased strength of their spectral functions at low energies~\cite{weise75,friedmann81,oset82,xia94,hees05,korpa08}. Including pion medium effects in the transport model has previously been shown to enhance the production of low energy pions in high energy heavy ion collisions, although it does not affect the total pion yield~\cite{xiong93}. Since pions of different charges are modified differently in asymmetric nuclear matter that has unequal proton and neutron fractions~\cite{korpa99}, including such isospin-dependent medium effects is expected to affect the ratio of negatively to positively charged pions produced in heavy ion collisions. \section{Pion $p$-wave interactions in nuclear medium} Considering only the dominant $\Delta$-hole excitations as in Ref.~\cite{ko89}, as the contribution from the nucleon particle-hole excitations is known to be small, the self-energy of a pion of isospin state $m_t$, energy $\omega$, and momentum $k$ in a hot nuclear medium due to its $p$-wave interaction is given by \begin{eqnarray}\label{pi} \Pi_0^{m_t} &\approx& \frac{4}{3} \left( \frac{f_\Delta^{}}{m_\pi^{}} \right)^2 k^2 F_\pi^2(k) \sum_{m_\tau,m_T^{}} \left|\left\langle {\textstyle\frac{3}{2}} \, m_T^{} | 1\, m_t\, {\textstyle\frac{1}{2}} \, m_\tau \right\rangle\right|^2\notag\\ &\times& \int \frac{d^3p}{(2\pi)^3}\frac{1}{e^{(m_N^{} + p^2/2m_N^{} +U_N^{m_\tau}-\mu_B^{}-2m_\tau\mu_Q^{})/T}+1} \left(\frac{1}{\omega-\omega_{m_T^{}}^{+}}+\frac{1}{-\omega-\omega_{m_T^{}}^{-}}\right),\notag\\ \end{eqnarray} with $\omega_{m_T^{}}^{\pm} \approx m_\Delta^{} +U_\Delta^{m_T^{}} + (\vec{k} \pm \vec{p})^2/2m_\Delta^{}-i\Gamma_\Delta^{m_T^{}}/2 -m_N^{}-U_N^{m_\tau} - p^2/2 m_N^{}$. In the above, $m_\pi \simeq 138$~MeV, $m_N^{} \simeq 939$~MeV, and $m_\Delta^{} \simeq 1232$~MeV are the masses of pion, nucleon, and $\Delta$ resonance, respectively; $f_\Delta^{} \simeq 3.5$ is the $\pi N\Delta$ coupling constant and $F_\pi(k) = [1+0.6(k^2/m^2_\pi)]^{-1/2}$~\cite{art} is the $\pi N\Delta$ form factor determined by fitting the decay width $\Gamma_\Delta\simeq 118$~MeV of $\Delta$ resonance in free space. The summation in Eq.~(\ref{pi}) is over the nucleon isospin state $m_\tau$, and the $\Delta$ resonance isospin state $m_T^{}$; and the factor $\langle {\textstyle\frac{3}{2}} \, m_T^{} | 1\, m_t\,{\textstyle\frac{1}{2}} \, m_\tau \rangle$ is the Clebsch-Gordan coefficient from the isospin coupling of pion with nucleon and $\Delta$ resonance. The momentum integration is over that of nucleons in the nuclear matter given by a Fermi-Dirac distribution with $\mu_B$ and $\mu_Q$ being, respectively, the baryon and charge chemical potentials determined by charge and baryon number conservations; $\rho_N^{m_\tau}$ and $U_N^{m_\tau}$ are, respectively, the density and mean-field potential of nucleons of isospin state $m_\tau$ in asymmetric nuclear matter; and $\Gamma_\Delta^{m_T^{}}$ and $U_\Delta^{m_T^{}}$ are, respectively, the width and mean-field potential of $\Delta$ resonance of isospin state $m_T^{}$. For the nucleon mean-field potential $U_N^{m_\tau}$, we have used the one obtained from the momentum-independent (MID) interaction~\cite{LCK08}, i.e., $U_N^{m_\tau}(\rho_B^{},\delta_{\rm like}) = \alpha(\rho_B^{}/\rho_0^{}) + \beta(\rho_B^{}/\rho_0^{})^\gamma + U_{\text{asy}}^{m_\tau}(\rho_B^{} ,\delta_{\rm like})$, with $U_{\text{asy}}^{m_\tau}(\rho_B^{} ,\delta_{\rm like})= -4\{ F(x)(\rho_B^{}/\rho_0^{}) + [18.6 - F(x)] (\rho_B^{}/\rho_0^{})^{G(x)}\} m_\tau \delta_{\rm like} + [18.6 - F(x)][G(x) - 1] (\rho_B^{}/\rho_0^{})^{G(x)} {\delta_{\rm like}}^{2}$ being the nucleon symmetry potential. The parameters $\alpha=-293.4$~MeV, $\beta=240.1$~MeV, and $\gamma=1.216$ are chosen to give a compressibility of $212$~MeV and a binding energy per nucleon of $-16$~MeV for symmetric nuclear matter at the saturation or normal nuclear density $\rho_0^{} = 0.16~{\rm fm}^{-3}$. The nucleon symmetry potential $U_{\rm asy}^{m_\tau}(\rho_B^{} ,\delta_{\rm like})$ depends on the baryon density $\rho_B^{} = \rho_n^{} + \rho_p^{} + \rho_{\Delta^-}^{} + \rho_{\Delta^0}^{} + \rho_{\Delta^+}^{} + \rho_{\Delta^{++}}^{}$ and the isospin asymmetry $\delta_{\rm like}=(\rho_n^{} - \rho_p^{} + \rho_{\Delta^-}^{} - \rho_{\Delta^{++}}^{}+ \rho_{\Delta^0}^{}/3 - \rho_{\Delta^+}^{}/3)/\rho_B^{}$ of the asymmetric hadronic matter, which is a generalization of the isospin asymmetry $\delta = (\rho_n^{} - \rho_p^{})/(\rho_n^{} + \rho_p^{})$ usually defined for asymmetric nuclear matter without $\Delta$ resonances~\cite{bali02}. The nucleon mean-field potential also depends on the stiffness of nuclear symmetry energy through the parameter $x$ via the functions $F(x)$ and $G(x)$. We consider the three cases of $x=0$, $x=0.5$, and $x=1$ with corresponding values $F(x=0) = 129.98$ and $G(x=0) = 1.059$, $F(x=0.5) = 85.54$ and $G(x=0.5) = 1.212$, and $F(x=1) = 107.23$ and $G(x=1) = 1.246$. The resulting nuclear symmetry energy becomes increasingly softer as the value of $x$ increases, with $x=1$ giving a nuclear symmetry energy that becomes negative at about 3 times the normal nuclear matter density. These symmetry energies reflect the uncertainties in the theoretical predictions on the stiffness of nuclear symmetry energy at high densities. For the mean-field potentials of $\Delta$ resonances, their isoscalar potentials are assumed to be the same as those of nucleons, and their symmetry potentials are taken to be the average of those for neutrons and protons with weighting factors depending on the charge state of $\Delta$ resonance~\cite{art}, i.e., $U_{\rm asy}^{\Delta^{++}} = U_{\rm asy}^p$, $U_{\rm asy}^{\Delta^+} = {\textstyle\frac{2}{3}} U_{\rm asy}^p + {\textstyle\frac{1}{3}} U_{\rm asy}^n$, $U_{\rm asy}^{\Delta^0} = {\textstyle\frac{1}{3}} U_{\rm asy}^p + {\textstyle\frac{2}{3}} U_{\rm asy}^n$, and $U_{\rm asy}^{\Delta^-} = U_{\rm asy}^n$. Including the short-range $\Delta$-hole repulsive interaction via the Migdal parameter $g^\prime$, which has values $1/3 \le g^\prime \le 0.6$~\cite{weise75,friedmann81,oset82,xia94,hees05,korpa08}, modifies the pion self-energy to $\Pi^{m_t} = \Pi_0^{m_t}/(1-g^\prime\Pi_0^{m_t}/k^2)$. The pion spectral function $S_\pi^{m_t}(\omega,k)$ is then related to the imaginary part of its in-medium propagator $D^{m_t}(\omega,k) = 1/[\omega^2-k^2-m_\pi^2-\Pi^{m_t}(\omega,k)]$ via $S_\pi^{m_t}(\omega,k) = -(1/\pi)\,\mbox{Im}\,D^{m_t}(\omega,k)$. The modification of the pion properties in nuclear medium affects the decay width and mass distribution of $\Delta$ resonance. For a $\Delta$ resonance of isospin state $m_T^{}$ and mass $M$ and at rest in nuclear matter, its decay width is then given by~\cite{ko89} \begin{eqnarray}\label{gamma} &&\Gamma_\Delta^{m_T^{}}(M)\approx -2 \sum_{m_\tau,m_t} |\langle {\textstyle\frac{3}{2}} \, m_T^{} | 1\, m_t\, {\textstyle\frac{1}{2}} \, m_\tau \rangle|^2 \int \frac{d^3{\bf k}}{(2\pi)^3} \left( \frac{f_\Delta^{}}{m_\pi^{}} \right)^2 F_\pi^2(k)\nonumber\\ &&\times\left[\frac{1}{z_\pi^{-1}e^{(\omega-m_t\mu_Q^{})/T}-1}+1\right]\left[1-\frac{1}{e^{(m_N^{} + k^2/2m_N^{} +U_N^{m_\tau}-\mu_B^{}-2m_\tau\mu_Q^{})/T}+1}\right]\\ &&\times\mbox{Im}\, \left[\frac{k^2}{3}\frac{D^{m_t}(\omega,k)} {(1-g^\prime\Pi_0^{m_t}(\omega,k)/k^2)^2} + {g^\prime}^2\frac{\Pi^{m_t}(\omega,k)}{k^2} \right].\nonumber \end{eqnarray} In the above, the first term in the last line is due to the decay of the $\Delta$ resonance to pion but corrected by the contact interaction at the $\pi N\Delta$ vertex, while the second term contains the contribution from its decay to the $\Delta$-hole state without coupling to pion. The first two factors in the momentum integral take into account, respectively, the Bose enhancement for the pion and the Pauli blocking of the nucleon. To include possible chemical non-equilibrium effect, a fugacity parameter $z_\pi^{}$ is introduced for pions. The pion energy $\omega$ is determined from energy conservation, i.e., $M + U_\Delta^{m_T^{}} = \omega + m_N^{} + k^2/2m_N^{} + U_N^{m_\tau}$. The resulting mass distribution of $\Delta$ resonances is then given by $P_\Delta(M) = A[\Gamma_\Delta^{m_T^{}}(M)/2]/[(M-m_\Delta^{})^2 +{\Gamma_\Delta^{m_T^{}}}^2(M)/4]$, where $A$ is a normalization constant to ensure the integration of $P_\Delta(M)$ over $M$ is one. \begin{figure}[h] \centerline{\includegraphics[width=4.5in,height=4.5in,angle=0]{fig1.EPS}} \caption{(Color online) Spectral functions of pions in asymmetric nuclear matter of density $2\rho_0^{}$ and isospin asymmetry $\delta_{\rm like}=0.133$ as functions of pion energy for different pion momenta of (a) $m_\pi$, (b) $2 m_\pi$, (c) $3 m_\pi$, and (d) $4 m_\pi$. All are calculated with the Migdal parameter $g^\prime=1/3$.}\label{pion} \end{figure} \begin{figure}[h] \centerline{\includegraphics[width=3.5in,height=3.5in,angle=0]{fig2.EPS}} \caption{(Color online) Mass distributions of $\Delta$ resonances at rest in asymmetric nuclear matter of density $2\rho_0^{}$ and isospin asymmetry $\delta_{\rm like}=0.133$. The solid line corresponds to that in free space. The distributions near the threshold and at the peak are enlarged in the insets.} \label{delta} \end{figure} We have solved Eqs.~(\ref{pi}) and (\ref{gamma}) self-consistently to obtain the pion spectral functions and the mass distributions of $\Delta$ resonances in asymmetric nuclear matter. The results obtained with the Migdal parameter $g^\prime = 1/3$ are illustrated in Fig.~\ref{pion} and Fig.~\ref{delta} for an asymmetric nuclear matter of isospin asymmetry $\delta_{\rm like}\simeq 0.133$, twice the normal nuclear matter density $\rho_B^{} = 2\rho_0^{}$, temperature $T\simeq 43.6~{\rm MeV}$, and chemical potentials $\mu_B^{} \simeq 941.89~{\rm MeV}$ and $\mu_Q^{} \simeq -18.26~{\rm MeV}$, corresponding to those to be used in our thermal model and also similar to those reached in the transport model with the nuclear symmetry energy $x=1$ for central Au+Au collisions at the beam energy of $0.4~{\rm AGeV}$~\cite{xiao09}. Shown in Fig.~\ref{pion} are the pion spectral functions as functions of pion energy for different values of pion momentum. It is seen that for low pion momenta the spectral function at low energies has a larger strength for $\pi^-$ (dotted line) than for $\pi^0$ (solid line), which has a strength larger than that for $\pi^+$ (dashed line). This behavior is reversed for high pion energies. Fig.~\ref{delta} shows the mass distributions of $\Delta$ resonances at rest in asymmetric nuclear matter as functions of mass. One sees that they are similar to that in free space (solid line) as a result of the cancelation between the pion in-medium effects, which enhance the strength at low masses, and the Pauli-blocking of the nucleon from delta decay, which reduces the strength at low masses. This is consistent with the observed similar energy dependence of the photo-proton and photo-nucleus absorption cross sections around the $\Delta$ resonance mass~\cite{vanPee:2007tw}. Furthermore, the strength around the peak and near the threshold of the $\Delta$ resonance mass distribution slightly decreases with increasing charge of the $\Delta$ resonance due to nonzero isospin asymmetry of the nuclear medium. \section{Charged pion ratio in hot dense asymmetric nuclear matter} To see the above isospin-dependent pion in-medium effects on the $\pi^-/\pi^+$ ratio in heavy ion collisions, we have used a thermal model which assumes that pions are in thermal equilibrium with nucleons and $\Delta$ resonances~\cite{bertsch}. In terms of the spectral function $S_i(\omega,k)$, the density of a particle species $i$ is then given by \begin{eqnarray}\label{density} \rho_i^{} \approx g_i^{} \int \frac{d^3{\bf k}}{(2\pi)^3} d\omega^{n_i^{}} S_i(\omega,k)\frac{1}{z_i^{-1} e^{(\omega - B_i\mu_B^{}-Q_i\mu_Q^{} )/T} \pm 1}. \end{eqnarray} In the above, $g_i^{}$, $B_i$, and $Q_i$ are the degeneracy, baryon number, and charge of the particle. The fugacity parameter $z_i^{}$ is introduced to take into account possible chemical non-equilibrium effect. The exponent $n_i^{}$ is $2$ for pions and $1$ for nucleons and $\Delta$ resonances. For the spectral functions of $\Delta$ resonances, we neglect their momentum dependence and replace the integration over the energy $\omega$ by that over mass. The $\omega$ in the Fermi-Dirac distribution for $\Delta$ resonances is then simply $\omega=M+k^2/2M+U_\Delta^{m_T}$. For nucleons, their spectral functions are taken to be delta functions if we neglect the imaginary part of their self-energies, i.e., $S_N^{m_\tau}(\omega,k) = \delta ( \omega^{} - m_N^{} - k^2/2m_N^{} - U_N^{m_\tau} ).$ According to studies based on the transport model~\cite{bali02,xiao09,xiong93}, the total number of pions and $\Delta$ resonances in heavy ion collisions reaches a maximum value when the colliding matter achieves the maximum density, and remains essentially constant during the expansion of the matter. For Au+Au collisions at the beam energy of $0.4~{\rm AGeV}$, for which the $\pi^-/\pi^+$ ratio has been measured by the FOPI Collaboration at GSI~\cite{FOPI}, the IBUU transport model gives a maximum density that is about twice the normal nuclear matter density and is insensitive to the stiffness of the nuclear symmetry energy, as it is mainly determined by the isoscalar part of the nuclear equation of state~\cite{xiao09}. This density is thus used in the thermal model. The temperature in the thermal model is determined by fitting the measured pion to nucleon ratio, which is about $0.014$ including pions and nucleons from the decay of $\Delta$ resonances~\cite{FOPI}, without medium effects and with unity fugacity parameters for all particles, and the value is $T \simeq 43.6$~MeV. The assumption that pions and $\Delta$ resonances are in chemical equilibrium is consistent with the short chemical equilibration times estimated from the pion and $\Delta$ resonance production rates. The isospin asymmetry of the hadronic matter is then taken to be $\delta_{\rm like} \simeq 0.080$, $0.106$, and $0.143$, corresponding to net charge densities of $0.920\rho_0^{}$, $0.894\rho_0^{}$ and $0.857\rho_0^{}$, for the three symmetry energies given by $x=0$, $0.5$, and $1$, respectively, in order to reproduce the $\pi^-/\pi^+$ ratios of $2.20$, $2.40$, and $2.60$ predicted by the IBUU transport model of Ref.~\cite{xiao09} using corresponding symmetry energy parameters without pion in-medium effects. Since the medium effects enhance the pion and $\Delta$ resonance densities, to maintain the same pion to nucleon ratio as the measured one requires the fugacity parameters for pions and $\Delta$ resonances to be less than one. Also, the pion in-medium effects have been shown to affect only slightly the pion and the $\Delta$ resonance abundance~\cite{xiong93}, indicating that both pions and $\Delta$ resonances are out of chemical equilibrium with nucleons when medium effects are included, as expected from the estimated increasing pion and $\Delta$ resonance chemical equilibration times as a result of the medium effects. Because of the small number of pions (about 0.3\%) and $\Delta$ resonances (about 1.1\%) in the matter, the density, temperature, and net charge density of the hadronic matter are expected to remain unchanged when the pion in-medium effects are introduced. They lead to, however, a slight reduction of the isospin asymmetry to $\delta_{\rm like} \simeq 0.073$, $0.097$, and $0.133$ for the three symmetry energies, respectively. We note that with the fugacity of nucleons kept at $z_N=1$, the fugacity parameters of about $z_\pi^{} = 0.061$ and $z_\Delta^{}= 0.373$ are needed to maintain same total number of pions and $\Delta$ resonances as in the case without pion in-medium effects, and that the required values for the fugacity parameters increase only slightly for the other two symmetry energies considered here. \begin{figure}[h] \centerline{\includegraphics[width=3.5in,height=3.5in,angle=0]{fig3.EPS}} \caption{(Color online) The $\pi^-/\pi^+$ ratio in Au+Au collisions at the beam energy of $0.4~{\rm AGeV}$ for different values of nuclear symmetry energy ($x=0$, $0.5$, and $1$) and the Migdal parameter $g^\prime=1/3$, $0.4$, $0.5$, and $0.6$ in the $\Delta$-hole model for the pion $p$-wave interaction. Results for $g^\prime=\infty$ correspond to the case without the pion in-medium effects.} \label{ratio} \end{figure} Results on the $\pi^-/\pi^+$ ratio in Au+Au collisions at the beam energy of $0.4~{\rm AGeV}$ are shown in Fig.~\ref{ratio}. With the value $g^\prime = 1/3$ for the Migdal parameter, values for the $\pi^-/\pi^+$ ratio are $2.32$, $2.60$, and $2.94$ for the three symmetry energy parameters $x=0$, $0.5$, and $1$, respectively, which are larger than corresponding values for the case without including the pion in-medium effects as shown by those for $g^\prime = \infty$ in Fig.~\ref{ratio}. These results indicate that the isospin-dependent pion in-medium effects on the charged pion ratio are comparable to those due to the uncertainties in the theoretically predicted stiffness of the nuclear symmetry energy. The measured $\pi^-/\pi^+$ ratio of about $3$ by the FOPI Collaboration, shown in Fig.~\ref{ratio} by the dash-dotted line together with the error bar, which without the pion in-medium effects favors a nuclear symmetry energy softer than the one given by $x=1$, is now best described by a less softer one. Fig.~\ref{ratio} further shows the results obtained with larger values of $g^\prime=0.4$, $0.5$ and $0.6$ for the Migdal parameter. It is seen that the isospin-dependent pion in-medium effects are reduced in these cases compared to the case of $g^\prime = 1/3$ as the repulsive interaction between $\Delta$-hole states becomes stronger, thus reducing the pion in-medium effects. With these larger values of $g^\prime$, symmetry energies softer than that given by $x=1$ are then needed to describe the measured $\pi^-/\pi^+$ ratio. \section{Pion $s$-wave interactions in nuclear medium} The above study does not include the $s$-wave interactions of pions with nucleons. Calculations based on the chiral perturbation theory have shown that the pion $s$-wave interaction modifies the mass of a pion in nuclear medium, and for asymmetric nuclear matter this effect depends on the charge of the pion~\cite{Kaiser:2001bx}. Up to the two-loop approximation in chiral perturbation theory~\cite{Kaiser:2001bx}, the self energies of $\pi^-$, $\pi^+$, and $\pi^0$ in asymmetric nuclear matter of proton density $\rho_p$ and neutron density $\rho_n$ are given, respectively, by \begin{eqnarray} \Pi^-(\rho_p,\rho_n)&=&\rho_n[T^-_{\pi N}-T^+_{\pi N}]-\rho_p[T^-_{\pi N}+T^+_{\pi N}]+\Pi^-_{\rm rel}(\rho_p,\rho_n)+\Pi^-_{\rm cor}(\rho_p,\rho_n)\notag\\ \Pi^+(\rho_p,\rho_n)&=&\Pi^-(\rho_n,\rho_p)\notag\\ \Pi^0(\rho_p,\rho_n)&=&-(\rho_p+\rho_n)T^+_{\pi N}+\Pi^0_{\rm cor}(\rho_p,\rho_n). \end{eqnarray} In the above, $T^\pm$ are the isospin-even and isospin-odd $\pi N$-scattering $T$-matrices which have the empirical values $T^-_{\rm \pi N}\approx 1.847~{\rm fm}$ and $T^+_{\rm \pi N}\approx -0.045~{\rm fm}$ extracted from the energy shift and width of the 1s level in pionic hydrogen atom. The term $\Pi^-_{\rm rel}$ is due to the relativistic correction, whereas the terms $\Pi^-_{\rm cor}$ and $\Pi^0_{\rm cor}$ are the contributions from the two-loop order in chiral perturbation theory. Numerically, it was found in Ref.~\cite{Kaiser:2001bx} that changes of pion masses in asymmetric nuclear matter of density $\rho=0.165~{\rm fm}^{-3}$ and isospin asymmetry $\delta=0.2$ are $\Delta m_{\pi^-}=13.8~{\rm MeV}$, $\Delta m_{\pi^+}=-1.2~{\rm MeV}$, and $\Delta m_{\pi^0}=6.1~{\rm MeV}$. \begin{figure}[h] \centerline{\includegraphics[width=3.5in,height=3.5in,angle=0]{fig4.EPS}} \caption{(Color online) Similar to Fig.~\ref{ratio} with both pion $s$-wave and $p$-wave interactions included.}\label{ratiosp} \end{figure} Taking into account the isospin-dependent pion self energies due to pion $s$-wave interactions in asymmetric nuclear matter changes the results shown in Figs.~\ref{pion} and \ref{delta}. For the pion spectral function, the one for $\pi^+$ now has a larger strength at low energies than that for $\pi^-$. Similarly, the strength near the threshold of the $\Delta$ resonance mass distribution now increases with increasing charge of the $\Delta$ resonance, although that around the peak still decreases with increasing $\Delta$ resonance charge. As a result, the $\pi^-/\pi^+$ ratio in Au+Au collisions at the beam energy of $0.4~{\rm AGeV}$ is slightly reduced after the inclusion of both pion $s$-wave and $p$-wave interactions in asymmetric nuclear matter as shown in Fig.~\ref{ratiosp}. \section{Summary} The pion spectral function in asymmetric nuclear matter becomes dependent on the charge of a pion. For the $p$-wave interaction of the pion, modeled by its couplings to the $\Delta$-hole excitations in nuclear medium, it leads to an increased strength of the $\pi^-$ spectral function at low energies relative to that of the $\pi^+$ spectral function in dense asymmetric nuclear matter. In a thermal model, this isospin-dependent effect increases the $\pi^-/\pi^+$ ratio from heavy ion collisions, and the effect is comparable to that due to the uncertainties in the theoretically predicted stiffness of the nuclear symmetry energy at high densities. However, including also the pion $s$-wave interaction based on results from the chiral perturbation theory reverses the isospin-dependent pion in-medium effects, leading instead to a slightly reduced $\pi^-/\pi^+$ ratio in neutron-rich nuclear matter. Taking into consideration of the isospin-dependent pion in-medium effects in the transport model thus would have some influence on the extraction of the nuclear symmetry energy from measured $\pi^-/\pi^+$ ratio. \section*{Acknowledgments} This talk was based on work supported in part by the US National Science Foundation under Grant No. PHY-0758115 and the Welch Foundation under Grant No. A-1358.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The interpolation property has been studied intensively over the years for both predicate and propositional logics. Indeed, interpolation was proved for classical predicate logic by Craig \cite{Cr57} in 1957 and for intuitionistic predicate logic by Schütte \cite{Sc62}. In 1977 Maksimova \cite{Ma77} completely solved the interpolation problem for intermediate propositional logics by showing that exactly 7 of these logics have the interpolation property. Her work uses the algebraic semantics available for these propositional logics. In the setting of predicate logics, algebraic semantics are not as well understood and the question still remains open for many intermediate predicate logics despite the fact that the question has been actively pursued. Recent advances on the subject include the 2013 paper by Mints, Olkhovikov, and Urquhart \cite{MOU13} in which they show that constant domain intuitionistic logic does not have the interpolation property and the very recent contribution \cite{BaLo17} showing, among other, that constant domain intermediate logics based on finite algebras of truth values as well as some fragments of G\"odel logic do have the interpolation property. However, the question remains open for full predicate G\"odel logic, the logic of all linearly ordered constant domain Kripke models. Consider the following two formulas: \begin{align*} \Gamma &: \forall x \exists y (Py \wedge (Qy \to Rx)) \wedge \neg \forall x Rx, \\ \Delta &: \forall x (Px \to (Qx \vee S)) \to S. \end{align*} It was proved in \cite{MOU13} that $\Gamma \to \Delta$ is valid in constant domain intuitionistic predicate logic ($\mathbf{CD}$), but that there does not exist an interpolant for $\Gamma \to \Delta$ in the common language containing only the predicate symbols $P$ and $Q$. We will show that the example from \cite{MOU13} does not provide a counterexample to interpolation for predicate Gödel logic, $\mathbf{G}$. In particular, we show that \begin{align*} \Theta &: \forall x (\neg Px \vee \exists y (Py \wedge (Qy \to Px))) \wedge \neg \forall x (\neg Px \vee Qx) \end{align*} is an interpolant for $\Gamma \to \Delta$ in $\mathbf{G}$. That is, we will show (Theorem~\ref{thm:main}) that $\Gamma \to \Theta$ and $\Theta \to \Delta$ are both valid in $\mathbf{G}$. (In fact, we will show that $\Theta \to \Delta$ is even valid in $\mathbf{CD}$.) Our interpolant came about by extracting a formula from our analysis of the proof of the counterexample in \cite{MOU13}, showing that a pair of countermodels such as those exhibited in \cite{MOU13} necessarily involve models which are \emph{not} linearly ordered. One may hope that a generalisation of these ideas, along with a strengthening of the completeness result for the semantic tools of Olkhovikov \cite{Ol14} may provide a route to a resolution of this longstanding question. \section{Semantics} We recall the constant domain semantics for $\mathbf{CD}$ and $\mathbf{G}$ \begin{definition} Let $\mathcal{L}$ be a finite set of unary predicate symbols.\footnote{We only need to consider unary predicates in this note.} A \emph{model} (over $\mathcal{L}$) is a tuple $M = (W,\leq,w_0,A,(P^W)_{P \in \mathcal{L}})$, where $(W,\leq)$ is a quasi-order, the \emph{base point} $w_0$ is an element of $W$ such that $w_0 \leq w$ for all $w \in W$, $A$ is a set, and for each $P \in \mathcal{L}$, $P^W$ is an order-preserving function from $W$ to $\mathcal{P}(A)$, i.e., if $w \leq w'$ in $W$, then $P^W(w) \subseteq P^W(w')$. We will often suppress the superscript $W$ in $P^W$ when no confusion can arise. For any model $M = (W,\leq,w_0,A,(P^W))$ and finite set of variables $X$, the \emph{forcing relation}, $\Vdash$, at a world $w$ and an assignment $\a \colon X \to A$, is defined by induction on the complexity of formulas $\phi$ all of whose free variables lie in $X$, as follows: \begin{itemize} \item $w, \a \Vdash Px$ iff $\a(x) \in P^W(w)$; \item $w, \a \Vdash \phi \vee \psi$ iff $w, \a \Vdash \phi$ or $w, \a \Vdash \psi$; \item $w, \a \Vdash \phi \wedge \psi$ iff $w, \a \Vdash \phi$ and $w, \a \Vdash \psi$; \item $w, \a \Vdash \phi \to \psi$ iff for all $w' \geq w$, if $w', \a \Vdash \phi$ then $w', \a \Vdash \psi$; \item $w, \a \not\Vdash \bot$; \item $w, \a \Vdash \exists y \phi$, where $y\not\in X$, if and only if there exists $b \in A$ such that $w, \a \cup \{(y,b)\} \Vdash \phi$; \item $w, \a \Vdash \forall y \phi$, where $y\not\in X$, if and only if for all $b \in A$, we have $w, \a \cup \{(y,b)\} \Vdash \phi$. \end{itemize} As usual, $\neg \phi$ is regarded as an abbreviation of $\phi \to \bot$, so that the semantic clause becomes: \begin{itemize} \item $w, \a \Vdash \neg \phi$ iff for all $w' \geq w$, $w', \a \not\Vdash \phi$. \end{itemize} By a slight abuse of notation, if $\phi(\overline{x})$ is a formula, and $\a \colon \overline{x} \to A$ is an assignment, then we also write $w \Vdash \phi(\a)$ instead of $w, \a \Vdash \phi(\overline{x})$. For example, if $a \in A$ and $w \in W$, then $w \Vdash Pa$ means $a \in P^W(w)$. A \emph{linear model} is a model such that, for any two formulas $\phi$, $\psi$, the instance of the scheme $(\phi \to \psi) \vee (\psi \to \phi)$ is forced in the base point $w_0$ under every assignment. Up to logical equivalence of models, this means that we may assume that the quasi-order $\leq$ is total. \end{definition} It is straight forward to establish that any model is \emph{persistent}: for any formula $\phi(\overline{x})$ and assignment $\a \colon \overline{x} \to A$, if $w \leq w'$ and $w,\a \Vdash \phi$ then $w',\a \Vdash \phi$. Therefore, if a formula is forced in the base point, it is forced everywhere in the model. We will use the following completeness theorem for $\mathbf{G}$ \cite{Co92}. \begin{theorem}\label{thm:completeness} For any sentence $\phi$ using only predicate symbols from $\mathcal{L}$, the following are equivalent: \begin{enumerate} \item the sentence $\phi$ is valid in $\mathbf{G}$; \item for any linear model $M$ with base point $w_0$, we have $w_0 \Vdash \phi$. \end{enumerate} \end{theorem} \begin{corollary}\label{cor:completeness} Suppose that, for any linear model $M$ with base point $w_0$ such that $w_0 \Vdash \phi$, we have $w_0 \Vdash \psi$. Then $\phi \to \psi$ is valid in $\mathbf{G}$. \end{corollary} \begin{proof} Let $M$ be a linear model with base point $w_0$. Let $w \geq w_0$ be an arbitrary point in which $\phi$ is forced. Then the restriction of $M$ to a model $M'$ on the set ${\uparrow} w$ is a linear model with base point $w$, and $\phi$ is still forced in $w$ in $M'$. By assumption, $w \Vdash \psi$. Thus, by definition of the semantic clause for $\to$, $w_0 \Vdash \phi \to \psi$. Since $M$ was arbitrary, $\phi \to \psi$ is valid in $\mathbf{G}$ by Theorem~\ref{thm:completeness}. \end{proof} \section{Interpolant} Given this completeness theorem, we will now establish that $\Gamma \to \Theta$ and $\Theta \to \Delta$ are valid in $\mathbf{G}$ by checking that these two formulas hold in all linear models. In fact, we will see that $\Theta \to \Delta$ holds in all models, and is hence valid in $\mathbf{CD}$. By the main result of \cite{MOU13}, $\Gamma \to \Theta$ is not valid in $\mathbf{CD}$. We recall an important lemma from \cite{MOU13}. It characterises the validity of second-order formulas $\exists R \Gamma$ and $\forall S \Delta$ on a model in the language $\{P,Q\}$ as first-order properties of that model. In what follows, if $M = (W,\leq,w_0,A,(P^W)_{P \in \mathcal{L}})$ is a model over a language $\mathcal{L}$, $R$ is a symbol not in $\mathcal{L}$, and $R^W \colon W \to \mathcal{P}(A)$ is order-preserving, then we will write $(M,R^W)$ for the expanded model $M' = (W,\leq,w_0,A,(P^W)_{P \in \mathcal{L} \cup \{R\}})$. \begin{lemma}\label{lem:fo-char} Let $M = (W,\leq,w_0,A,P^W,Q^W)$ be a model over $\mathcal{L} = \{P,Q\}$ with base point $w_0$. \begin{enumerate} \item The following are equivalent: \begin{enumerate} \item There exists order-preserving $R^W \colon W \to \mathcal{P}(A)$ such that, in the expanded model $(M,R^W)$, we have $w_0 \Vdash \Gamma$; \item For every $w \in W$, there exists $a \in A$ such that $w_0 \Vdash Pa$ and $w \not\Vdash Qa$. \end{enumerate} \item The following are equivalent: \begin{enumerate} \item For every order preserving $S^W \colon W \to \mathcal{P}(A)$, in the expanded model $(M,S^W)$, we have $w_0 \Vdash \Delta$; \item For every $w \in W$, there exists $a \in A$ such that $w \Vdash Pa$ and $w\not\Vdash Qa$. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} See \cite[Lemma 4.2]{MOU13}. \end{proof} We are now ready to prove that $\Theta$ is an interpolant. \begin{theorem}\label{thm:main} Let $\Gamma, \Theta$ and $\Delta$ be the formulas defined above. \begin{enumerate} \item The implication $\Gamma \to \Theta$ is valid in $\mathbf{G}$. \item The implication $\Theta \to \Delta$ is valid in $\mathbf{CD}$. \end{enumerate} \end{theorem} \begin{proof} 1. We establish the sufficient condition from Corollary~\ref{cor:completeness}. Let $M$ be a linear model over $\{P,Q,R\}$ with base point $w_0$ such that $w_0 \Vdash \Gamma$. We need to show that $w_0 \Vdash \Theta$, that is: (a) $w_0 \Vdash \forall x (\neg Px \vee \exists y (Py \wedge (Qy \to Px)))$ and (b) $w_0 \Vdash \neg \forall x (\neg Px \vee Qx)$. \begin{enumerate} \item[(a)] Let $a \in A$ be arbitrary. We show that $w_0 \Vdash \neg Pa \vee \exists y (Py \wedge (Qy \to Pa))$. If $w_0 \Vdash \neg Pa$, we are done immediately. Suppose that $w_0 \not\Vdash \neg Pa$. We show that $w_0 \Vdash \exists y(Py \wedge (Qy \to Pa))$. Since $w_0 \not\Vdash \neg Pa$, pick $w_1 \geq w_0$ such that $w_1 \Vdash Pa$. By Lemma~\ref{lem:fo-char}.1, pick $b$ such that $w_0 \Vdash Pb$ and $w_1 \not\Vdash Qb$. Then $w_0 \not\Vdash Pa \to Qb$, since $w_1 \geq w_0$ and $w_1 \Vdash Pa$, but $w_1 \not\Vdash Qb$. Since $M$ is a linear model $w_0 \Vdash (Pa \to Qb) \vee (Qb \to Pa)$, so we conclude that $w_0 \Vdash Qb \to Pa$. Since also $w_0 \Vdash Pb$, we have proved that $w_0 \Vdash Pb \wedge (Qb \to Pa)$, so that $w_0 \Vdash \exists y(Py \wedge (Qy \to Pa))$. \item[(b)] Let $w \in W$ be arbitrary. By Lemma~\ref{lem:fo-char}.1, pick $a$ such that $w_0 \Vdash Pa$ and $w \not\Vdash Qa$. Then, since $w_0 \leq w$, we have $w \Vdash Pa$. Thus, $w \not\Vdash \neg Pa \vee Qa$. Hence, $w \not\Vdash \forall x (\neg Px \vee Qx)$. Since $w$ was arbitrary, we get that $w_0 \Vdash \neg \forall x (\neg Px \vee Qx)$. \end{enumerate} 2. We establish the sufficient condition from the analogous version of Corollary~\ref{cor:completeness} for $\mathbf{CD}$. Let $M$ be a model over $\{P,Q,S\}$ with base point $w_0$ such that $w_0 \Vdash \Theta$. We need to show that $w_0 \Vdash \Delta$. By Lemma~\ref{lem:fo-char}.2, it suffices to prove that: \begin{align}\tag{*} \text{for every $w$, there exists $c \in A$ such that $w \Vdash Pc$ and $w\not\Vdash Qc$.} \end{align} Let $w$ be arbitrary. Since $w_0 \Vdash \neg \forall x(\neg Px \vee Qx)$, in particular we have that $w \not\Vdash \forall x (\neg Px \vee Qx)$. Pick $a \in A$ such that $w \not\Vdash \neg Pa \vee Qa$, that is, $w \not\Vdash \neg Pa$ and $w \not\Vdash Qa$. Since $w_0 \Vdash \Theta$, in particular $w \Vdash \Theta$, and instantiating the first conjunct with $x = a$, we see that $w \Vdash \neg Pa \vee \exists y (Py \wedge (Qy \to Pa))$. Since $w \not\Vdash \neg Pa$, we conclude that $w \Vdash \exists y (Py \wedge (Qy \to Pa))$. Pick $b \in A$ such that $w \Vdash Pb$ and $w \Vdash Qb \to Pa$. We distinguish two cases: (i) $w \Vdash Qb$. In this case, since $w \Vdash Qb \to Pa$, we have $w \Vdash Pa$. Since also $w \not\Vdash Qa$, we can take $c := a$ in (*). (ii) $w \not\Vdash Qb$. In this case, we can take $c := b$ in (*). This establishes (*), so that $w_0 \Vdash \Delta$. \end{proof} \subsection*{Acknowledgement} The authors gratefully acknowledge the support of the project DuaLL, funded by the European Research Council under the Horizon 2020 program.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Since the discovery of iron-based superconductivity in 2008 \cite{Kamihara}, several families of superconducting ferropnictides were synthesized \cite{Stewart,Johnston}. All iron pnictides possess a layered crystal structure comprising quasi-two-dimensional Fe-As blocks separated by spacers along the $c$-direction. Superconductivity develops namely in Fe-As layers whose structure remains nearly the same for all the iron pnictides, whereas the difference is in the spacer blocks structure \cite{Stewart,Johnston}. For the so called 1111-family, the whole structure consists of a stack of superconducting Fe-As blocks and nonsuperconducting $Re$-O spacers ($Re$ is a rare-earth metal). 1111-oxypnictides possess the simplest band structure as compared to other pnictides \cite{Singh}. Band-structure calculations showed that iron $3d$ orbitals make the main contribution to the normal-state density of states (DOS) at the Fermi level, forming electron and hole sheets of the Fermi surface. The hole sheets represent two concentric cylinders near the $\Gamma$ point of the first Brillouin zone, whereas the electron sheets are formed by two cylinders of elliptic cross section near the M points. Both electron and hole cylinders are slightly warped along the $c$-direction. As was demonstrated in angle-resolved photoemission spectroscopy (ARPES) studies \cite{Charnukha}, these Fermi surface sheets are considered to be formed by two effective (hole and electron) bands. The ARPES studies \cite{Charnukha} also revealed a feature typical of optimally doped Sm-1111: singular Fermi surface sheets near the $\Gamma$ and M points. Under electron doping, superconducting critical temperatures of SmOFeAs varies in the wide range up to $T_C \approx 57$\,K \cite{Fujioka}. Therefore, Sm-1111 is an ideal candidate for investigating the role of electron doping on the superconducting properties. To describe multiband superconductivity in iron pnictides, the two basic models were suggested: $s^{++}$-model of coupling through orbital fluctuations enhanced by phonons \cite{Onari1,Onari2}, and $s^{\pm}$-model of spin-fluctuation-mediated superconductivity \cite{MazinRev,Mazin}. To date, both models have not got yet unambiguous experimental evidence. Some theoretical studies predict a certain influence of impurity scattering on the gap values in iron-based superconductors \cite{MMK}. Therefore, direct $\Delta_{L,S}(T_c)$ data are of the most importance to answer the key question concerning the underlying pairing mechanism. The experimentally determined gap values in Sm-1111 as well as in other oxypnictides in whole are rather contradictory \cite{Naidyuk,Chen,DagheroRu,Daghero,Wang,Noat,Fasano,Millo,Malone}. For example, $2\Delta_L/k_BT_C$ in Sm-1111 determined by point-contact (PCAR) spectroscopy varies by a factor of six, from nearly weak-coupling BCS-limit 3.6 \textendash 3.7 in \cite{Naidyuk,Chen} up to 22 in \cite{DagheroRu}. This fact raises the problem of accurate superconducting order parameter determination by various experimental probes. Thorium substitution in Sm$_{1-x}$Th$_x$OFeAs oxypnictide supplies charge carriers to superconducting Fe-As layers giving rise to superconductivity. It opens an unique possibility to explore the evolution of the superconducting order parameter versus critical temperature in the same compound with no direct influence to the geometry of Fe-As tetrahedrons \cite{Zhigadlo2010}. To the best of our knowledge, here we present the first data on the evolution of the superconducting gap, $\Delta(T_C)$, and the characteristic ratio, $2\Delta(T_C)/k_BT_C$, for 1111-oxypnictides with heterovalent substitution in wide range of $T_C$. The paper contains a systematic study of current-voltage characteristics (CVC) and dynamic conductance spectra for SnS-Andreev contacts in optimal and underdoped Sm$_{1-x}$Th$_x$OFeAs samples with various thorium doping. Here we present the data in the range $T_C = 35 \textendash 54$\,K and nominal Th concentrations $x = 0.08 \textendash 0.3$, and the pioneer data with $T_C = 21 \textendash 37$\,K sample series ($x \lesssim 0.08$). Using intrinsic multiple Andreev reflections effect (IMARE) spectroscopy, we directly determined the bulk values of two superconducting gaps $\Delta_L$ and $\Delta_S$, their temperature dependences, and BCS-ratios. We found a scaling between both gaps and critical temperature, and nearly constant BCS-ratios within all studied $T_C$ range. We find that the gap temperature dependences $\Delta_{L,S}(T)$ are well described by the two-band Moskalenko and Suhl system of equations \cite{Mosk,Suhl} with a renormalized BCS-integral (RBCS). From this fitting, we have determined the intraband and interband coupling parameters and prove that the intraband coupling is stronger than the interband coupling in Sm-based oxypnictides. \section{Experimental details} \subsection{Synthesis} Polycrystalline Sm$_{1-x}$Th$_x$OFeAs samples with various thorium doping and critical temperatures ($T_C = 21 \textendash 54$\,K) were synthesized by high-pressure method. Overall details of the sample cell assembly and high-pressure synthesis process may be found in Refs. \cite{Zhigadlo2010,Zhigadlo2012}. Powders of SmAs, ThAs, Fe$_2$O$_3$, and F of high purity ($\geq 99.95 \%$) were weighed according to the stoichiometric ratio, thoroughly ground, and pressed into pellets. Then, the pellet containing precursor was enclosed in a boron nitride crucible and placed inside a pyrophyllite cube with a graphite heater. All the preparatory steps were done in a glove box under argon atmosphere. The six tungsten carbide anvils generated pressure on the whole assembly. In a typical run, the sample was compressed to 3\,GPa at room temperature. While keeping the pressure constant, the temperature was ramped up within 1\,h to the maximum value of 1430\,$^{\circ}$C, maintained for 4.5\,h, and finally quenched to the room temperature. Afterward, the pressure was released and the sample removed. Subsequently recorded X-ray powder diffraction patterns revealed high homogeneity of the samples and the presence of a single superconducting phase \cite{Zhigadlo2010}. The amount of additional nonsuperconducting phases SmAs and ThO$_2$ was vanishingly small. The bulk character of superconductivity in Sm$_{1-x}$Th$_x$OFeAs samples was confirmed by magnetization measurements. \subsection{Preparation of weak links by the break-junction technique} In our experiments, we used a break-junction technique \cite{Moreland} to produce symmetrical SnS contacts. The sample prepared as a thin rectangular plate with dimensions about $3 \times 1.5 \times 0.1$\,mm$^3$ was attached to a springy sample holder by four-contact pads made of pasty (at room temperature) In-Ga alloy. After cooling down to $T = 4.2$\,K, the sample holder was gently curved, which caused cracking of the bulk sample. The microcrack generates cryogenic clefts and separate the bulk sample into two parts with a weak link between them, thus forming ScS contact (where $c$ is a constriction). Cleavage of a layered sample causes its exfoliation along the $ab$-planes and an appearance of steps and terraces at cryogenic clefts (Fig. 1a). This is typically the case for both single crystals and polycrystalline samples \cite{EPL}. As an illustration, we considered in Ref.~\cite{EPL} a model polycrystalline sample with randomly oriented $ab$-planes of grains, where intergrain connection is just 10 \% stronger than the interlayer one (along the $c$ direction for any of grains). In this case we expect quite a considerable amount of split crystallites in the ab-plane $(2 \textendash 6\%)$. In a more realistic situation, when the strength of intergrain connection exceeds the interlayer ultimate strength by $20\%$, about $4 \textendash 11\%$ of grains would split, causing the appearance of large amount of steps and terraces. These estimates are supported by the electron microscope image of polycrystalline sample cleft shown in Fig.~1\,b. \subsection{SnS Andreev junction and arrays of junctions} Under fine tuning of the sample holder curvature, the two cryogenic clefts slide apart touching through various terraces. This enables to vary the cross-size of the resulting ScS contact in order to realize a ballistic regime. In the majority of Fe-based superconductors we studied, the constriction is electrically equivalent to a thin layer of normal metal, and the resulting current-voltage characteristic (CVC) and $dI(V)/dV$ are typical for clean classical SnS-Andreev junction with high transparency of about $95 \textendash 98 \%$ \cite{Andreev,OTBK,Arnold,Averin}. Such contacts exhibit a multiple Andreev reflections effect which manifests itself as a pronounced excess current at low bias voltages (so called foot) in CVC, and a subharmonic gap structure (SGS) in the $dI(V)/dV$ spectrum. At temperatures below $T_C$ SnS-contact demonstrates an excess conductance at any bias, whereas the SGS represents a series of dynamic conductance minima at certain positions: \begin{equation} V_n(T) = 2\Delta(T)/en, \end{equation} where $n$ is a natural subharmonic order. In principle, the first Andreev minimum could be slightly shifted towards lower biases, $V_{n_L=1} \lesssim (2\Delta/e)$ \cite{OTBK,Arnold,Kummel,Averin}. If so, the gap value may be determined from positions of the higher order SGS dips with $n \geqslant 2$. In case of a two-gap superconductor, two subharmonic gap structures should be expected. The number of observed SGS dips strongly depends on the ratio between the carrier mean free path $l$ and the contact size $a$: $n_{max} \approx l/2a$ \cite{Kummel}. The break-junction experiments with layered samples, beside the single SnS contacts, also show arrays of SnS contacts \cite{EPL}. In this case, the CVC and $dI(V)/dV$ demonstrate Andreev minima at positions which are integer multiplies $m$ of those for the single SnS junction: \begin{equation} V_n(T) = m \times 2\Delta(T)/en. \end{equation} This obviously corresponds to a stack of $m$ sequentially connected identical SnS junctions. The numbers $m$ can be easily determined by comparing $dI(V)/dV$ curves for various arrays: after scaling the bias voltage axis by $m$, the positions of SGS dips in dynamic conductance spectra should coincide. The Andreev dips in CVC and $dI(V)/dV$ for such arrays are more pronounced than those for single SnS junction; the larger $m$, the sharper peculiarities are usually observed \cite{EPL}. This firm experimental fact indicates that the origin of such arrays with high-quality characteristics could not be thought about as a chain of independent nonequivalent grain-grain contacts \cite{remark_1}. By contrast, probing the Andreev arrays ensures one to minimize surface defects influence and measure namely \textit{bulk properties} of the sample \cite{EPL}. The intrinsic multiple Andreev reflections effect (IMARE) occurring in such arrays is similar to the intrinsic Josephson effect in SIS contacts (where $I$ is an insulator) \cite{Nakamura}; both effects were observed first in cuprates \cite{PonomarevIMARE,PonomarevIJE}. \begin{figure} \includegraphics[width=20pc]{Fig1.eps} \caption{a){Schematic drawing} of steps and terraces touching each other in the microcrack a layered sample. The current flowing along the $c$-direction is depicted by arrow. b) The electron microscope image of cleft in Sm$_{1-x}$Th$_x$OFeAs demonstrating steps and terraces at the surface of cracked crystal grains.} \end{figure} The IMARE spectroscopy realized by the break-junction technique has a number of advantages: a) the microcrack generates terraces of about atomic size. They remain tightly connected during sliding that prevents impurity penetration into the microcrack and protects the purity of cryogenic clefts; b) the contact point is far from current and potential leads, which prevents junction overheating and provides true four-point connection; c) by fine bending of the sample holder, one can probe several tens of Andreev arrays with various diameter and number of junctions in the stack $m$ in one and the same sample and during the same cooldown; it enables to collect statistics and to check the data reproducibility; d) unlike asymmetric NS and NIS junctions \cite{BTK,Dynes}, in SnS-Andreev contacts the gap value may be determined directly (from the positions of SGS dips), and no fitting of $dI(V)/dV$ is needed; the latter remains true at any temperatures $0 \leq T < T_C$ \cite{OTBK,Kummel}, therefore, one can obtain precise temperature dependences of the gaps; e) by probing the Andreev arrays one unambiguously determines bulk values of superconducting gaps. The dynamic conductance spectra were measured directly by a standard modulation technique \cite{LOFA}. We used a current source with ac frequency less than 1\,kHz. The results obtained with this setup are insensitive to the presence of parallel ohmic conduction paths; if any path is present, dynamic conductance curves shift along the vertical axis only, while the bias stay unchanged. \section{Results and discussion} \subsection{IMARE in optimally doped samples} \begin{figure} \includegraphics[width=20pc]{Fig2.eps} \caption{Dynamic conductance spectrum (circles, left scale) and current-voltage characteristic (black line, right scale) measured at $T = 4.2$\,K for SnS Andreev contact in optimal Sm-1111 sample with critical temperature $T_C^{bulk} = 52 \pm 2$\,K and nominal $x \approx 0.3$. Blue line corresponds to a rough dI(V)/dV fit based on the MARE model \cite{Kummel}. Gray lines and $n_L$ label indicate the subharmonic gap structure dips for the large gap $\Delta_L \approx 11.9$\,meV. The bias voltage is normalized to that for a single contact.} \end{figure} Figure 2 shows normalized CVC (black line; right Y-axis) and dynamic conductance (red line; left Y-axis) for ScS array formed at $T = 4.2$\,K in nearly optimal Sm-1111 sample ($\sharp 2$) with critical temperature $T_C = 52 \pm 2$\,K and nominal thorium concentration $x \approx 0.3$. The array contains $m = 3$ ScS junctions; in order to normalize CVC and $dI(V)/dV$ to those for a single junction, the X-axis was divided by a factor of 3 in Fig.~2. The CVC has a pronounced foot area at low bias voltages. The excess current there is larger than that in NS-contact, where the low-bias conductance is about twice larger than at high-bias \cite{BTK}. The CVC and dynamic conductance spectrum are typical for a highly-transparent ($\approx 95 \%$) SnS-Andreev contact \cite{Kummel,Averin}. Obviously, the theoretical dependence (blue curve in Fig. 2) based on the MARE model \cite{Kummel} extended for the case of $\thicksim 10\,\%$ gap anisotropy fits the experimental data (circles) very well. The model \cite{Kummel}, beside $l/2a$ ratio, accounts finite temperatures and possible presence of an Andreev band within the gap. The latter causes the complex fine structure in the fit (satellite dips beyond the subharmonics) unobservable in the experiment; this issue requires a special study. A slight deviation from the expected position (formula (1)) of Andreev dips ($10 \,\%$ uncertainty) is rather conventional. For details, see the Appendix. Since the four subharmonics are observable (the $n=4$ feature is resolved in $d^2I/dV^2$, not shown here), the effective contact diameter is less though comparable to the mean free path, $l/2a \approx 3 \textendash 4$. This is the reason why the intensity of SGS dips in the experimental spectrum decreases more rapidly as compared to the fit, where $l/2a = 5$. Nevertheless, the clear SGS is the strong evidence for MARE realized in ballistic SnS-contact only. The other way to check whether the contact is ballistic, is to take a normal-state bulk resistivity for optimal Sm(Th)-1111 single crystal $\rho \approx 0.09$\,m$\Omega \cdot \rm{cm}$\, \cite{Zhigadlo2010}, the average product of bulk resistivity and carrier mean free path $\rho l^{el} \approx 5 \times 10^{-10}$\,$\Omega \cdot \rm{cm^2}$ for Sm-1111 \cite{Tropeano1,Tropeano2} which implied to be nearly constant. The resulting elastic mean free path value $l^{el} \approx 55$\,nm for our sample. Then, taking the resistance of single ScS junction in the array under study $R \approx 25$\,$\Omega$ (see Fig.2), and using Sharvin's formula for a ballistic ($a < l$) contact \cite{Sharvin}: $R = \frac{4}{3\pi}\frac{\rho l}{a^2}$, we get $a \approx 28~{\rm nm} < l^{el}$, thus proving the contact to be ballistic. Going into details, for the experimental observation of MARE namely $l^{in}/2a$ ratio is essential ($l^{in}$ \textemdash inelastic mean free path). Usually, $l^{in}$ is several times larger than $l^{el}$ well providing the ballistic regime. The estimated contact diameter is also many times smaller than the typical crystallite dimension $\thicksim 70 \times 70 \times 20$\,$\mu \rm{m^3}$ \cite{Zhigadlo2010}. The latter confirms the assumption that the SnS array was formed on steps and terraces of a split crystallite. \begin{figure} \includegraphics[width=20pc]{Fig3.eps} \caption{Normalized dynamic conductance spectra (left scale) measured at $T = 4.2$\,K for SnS Andreev contacts in optimal Sm-1111 samples with critical temperatures $T_C^{bulk} = 52 \pm 2$\,K. The number of SnS junctions in the arrays are $m = 7$ (upper spectrum), and $m = 2$ (bottom spectrum). Gray lines and $n_L$ label indicate the subharmonic gap structure dips for the large gap $\Delta_L = 12.3 \pm 1.2$\,meV. $I(V)$ characteristics (right scale) corresponding to the bottom $dI(V)/dV$ curve measured at $T = 4.2$\,K and at $T = T_C^{local} \approx 50$\,K are shown for comparison.} \end{figure} The numbers and their underlining horizontal strips in Fig.~2 mark the positions and error bars of sharp dips in the dynamic conductance located at $|V_{n_L=1}| \approx 23.5$\,mV, $|V_{n_L=2}| \approx 12.4$\,mV, and $|V_{n_L=3}| \approx 7.6$\,mV. These figures satisfy Eq.~(1) as the first, second and third SGS dip for the large gap $\Delta_L \approx 11.9$\,meV with the BCS-ratio $2\Delta_L/k_BT_C \approx 5.3$. The SGS minima have similar shape and become less intensive with subharmonic order $n$ increasing; this is in accord with theory \cite{Kummel}. The interpretation of the minima in Fig.~2 is straightforward. For example, the minima at $\approx 23.5$\,mV and $\approx 12.4$\,mV cannot be considered as $n=2$ and $n=3$ SGS harmonics, respectively. As follows from Eq.~(1), the ratio $V_n/V_{n+1} = 2$ is true only when $n=1$. Weaker peculiarities at $|V| \approx 2.9$\,mV are located neither at the expected positions of the fourth SGS dips $|V_{n_L=4}| \approx 6.2$\,mV, nor of the small gap $|V_{n_S=1}| \approx 6$\,mV (as was shown in our previous studies \cite{SmJETPL} of Sm-1111; this gap is not identified reliably and might be interpreted as the beginning of the foot area. \begin{figure} \includegraphics[width=20pc]{Fig4.eps} \caption{Normalized dynamic conductance spectra measured at $T = 4.2$\,K for Andreev arrays in optimal Sm-1111 samples with critical temperatures $T_C = 50 \pm 2$\,K. The number of SnS-junctions in the arrays (from the top) are $m = 7, 6, 6, 7, 5$, correspondingly. Gray vertical areas and $n_L$ label indicate subharmonic gap structure dips for the large gap $\Delta_L = 11.5 \pm 1.2$\,meV. Vertical dashed lines, arrows and $n_S$ label point to the Andreev peculiarities for the small gap $\Delta_S = 2.5 \pm 0.5$\,meV.} \end{figure} All the contact properties described above (the presence of the foot area and excess conductance, SGS and ballistic regime) prove these break-junctions to be namely SnS-junctions with high-transparent interface. The same is true for the experimental data presented below. \begin{figure} \includegraphics[width=20pc]{Fig5.eps} \caption{Normalized dynamic conductance spectra measured at $T = 4.2$\,K for Andreev arrays in underdoped Sm-1111 samples with critical temperatures $T_C^{bulk} = 26 \pm 1$\,K. The number of SnS-junctions in the arrays (from the top) are $m = 9, 8, 6, 4$, correspondingly. The $n_L$ label and gray vertical areas indicate subharmonic gap structure dips for the large gap $\Delta_L = 6.3 \pm 1.0$\,meV. Dashed vertical lines and $n_S$ label point to the Andreev peculiarities for the small gap $\Delta_S = 1.7 \pm 0.3$\,meV.} \end{figure} The normalized dynamic conductance spectra measured at $T = 4.2$\,K for SnS-arrays of $m = 2$ (lower curve) and $m = 7$ (upper curve) junctions in the stacks are compared in Fig.~3. The data were obtained in different nearly optimal Sm-1111 samples with the same critical temperature $T_C = 52 \pm 2$\,K. The $dI(V)/dV$ curves were offset vertically for clarity. For the two-junction array (obtained in sample $\sharp 18$), we also show CVC with excess current measured at $T = 4.2$\,K and linear CVC measured close to the local critical temperature $T_C \approx 50$\,K (corresponding to the transition of the contact area with dimension $\thicksim 10$ -- 30\,nm to the normal state). The contact resistance increases with temperature, from $R(4.2\,{\rm K}) \approx 16$\,$\Omega$ to $R(50\,{\rm K}) \approx 41$\,$\Omega$ which agrees with the theory predictions for ballistic SnS-contacts \cite{Klapwijk}. The dynamic conductance spectra demonstrate pronounced dips at $|V_{n_L=1}| \approx 24$\,mV, $|V_{n_L=2}| \approx 12.3$\,mV being the SGS minima of $n=1,2$ order. As for these contacts $a \approx 35 {\rm nm} \approx 0.6 l$, the third-order Andreev peculiarities at $|V_{n_L=3}| \approx 8.3$\,mV are strongly smeared. Remarkably, despite the $dI(V)/dV$ in Fig.~3 were obtained with different samples, the dynamic conductance spectra look very similar. The resulting gap value $\Delta_L \approx 12.3$\,meV with $2\Delta_L/k_BT_C \approx 5.5$ is reproducible for both samples. If we assume that the lower $dI(V)/dV$ is produced by $m = 3$ rather than by $m = 2$ junction array, we immediately obtain the large gap value $\Delta_L \approx 8$\,meV leading to $2\Delta_L/k_BT_C \approx 3.6$ which seems to be too low for 1111 pnictides \cite{EPL,UFN,LOFA,PonFPS,SmJETPL}. For another SnS array presented in Fig.~3 (upper $dI(V)/dV$, sample $\sharp 3$), the bias voltage of its raw dynamic conductance was divided by $m = 7$. After such normalization, the positions of the main gap peculiarities are in good agreement, thus demonstrating IMARE for Sm-1111. Herewith, the dynamic conductance of 7-junction array shows sharper Andreev dips than those of the 2-junction array. This could be due to diminishing of surface influence on superconducting properties of arrays with a large $m$ \cite{EPL}. \begin{figure} \includegraphics[width=20pc]{Fig6.eps} \caption{The positions of subharmonic gap structure dips $V_n$ versus the inverse number $1/n$. The $V_n$ of the large gap for Andreev contacts with maximal $T_C = 52 \pm 2$\,K (see Figs. 2,3) are shown by solid circles, the data related to Sm-1111 with $T_C = 50 \pm 2$\,K (see Fig. 4) are shown by open symbols. For underdoped Sm-1111 with $T_C = 26 \pm 1$\,K (see Fig. 5), the data for the large gap are shown by solid triangles, for the small gap \textemdash by solid squares of corresponding color. Gray lines are guidelines.} \end{figure} A number of dynamic conductance spectra measured at $T = 4.2$\,K for Andreev arrays with various number of junctions $m$ obtained in nearly optimal samples with $T_C = 50 \pm 2$\,K are presented in Fig.~4. The large gap minima are marked with gray vertical areas and $n_L = 1,2$ labels. The position of the first SGS minimum is slightly shifted from the expected $|V_{n_L=1}| = 2\Delta_L/e$ position \cite{Kummel}, therefore it is reasonable to determine the large gap value from the second SGS dip. Four upper curves with $m = 6, 7$ were obtained with one and the same sample $\sharp 3$ by a fine mechanical tuning. Under the gentle readjustment, the number of SnS-junctions in the stack varied by one, therefore, in the raw $dI(V)/dV$ characteristics the position of the second Andreev dip jumped by $\pm \Delta/e$. Taking the difference between $n_L = 2$ positions, we normalized the spectra by corresponding natural numbers $m$ and got the large order parameter $\Delta_L \approx 11.5$\,meV with $2\Delta_L/k_BT_C \approx 5.3$. We stress again good reproducibility of the spectra and their fine structure. The lower curve in Fig.~4 obtained with another sample ($\sharp 1$) corresponds to a 5-junction array. At lower biases, in each spectra one can see features at $|V_{n_S=1}| \approx 5$\,meV, which we interpret as the main Andreev peculiarities for the small gap $\Delta_S \approx 2.5$\,meV ($2\Delta_S/k_BT_C \approx 1.2$). Note that the latter bias voltages do not coincide with the expected $|V_{n_L=4}| \approx 5.8$\,mV for the fourth-order $\Delta_L$ peculiarities. Analyzing our data on nearly optimal Sm-1111, we note that the small gap peculiarities are observed not in each spectra. One may suggest several reasons for the strongly smeared SGS of the small gap, including small mean free path in the bands with $\Delta_S$. The specific band structure in Sm-1111 also may contribute: as revealed by recent ARPES studies \cite{Charnukha}, the respective FS sheets are not cylinders and have singularities in optimal Sm-1111. Nonetheless, the positions of peculiarities marked as $n_S=1$ are scaled by $m$, the resulting $\Delta_S$ value and temperature dependence $\Delta_S(T)$ are reproducible, thus showing the bulk nature of these peculiarities. \begin{figure} \includegraphics[width=20pc]{Fig7.eps} \caption{Normalized dynamic conductance spectrum in underdoped Sm-1111 (see Fig. 5, lower curve) measured at $T = 4.2$ -- 27\,K. The spectra were offset vertically for clarity, nevertheless, the contact conductance decreases with temperature. Linear background was suppressed. Local critical temperature is $T_C^{local} = 26 \pm 1$\,K. The gray vertical dashed lines indicate subharmonic gap structure dips ($n_L = 1,2$) for the large gap $\Delta_L \approx 6.3$\,meV. The $n_L=3$ subharmonic is poorly visible.} \end{figure} \subsection{Underdoped samples} We also observed IMARE with underdoped Sm-1111 samples with nominal thorium concentration $x \lesssim 0.08$. Figure 5 shows excess-conductance $dI(V)/dV$ curves for Andreev arrays formed at $T = 4.2$\,K in the samples with a factor of two lower critical temperature, $T_C = 26 \pm 1$\,K. The array (presented by the upper dynamic conductance spectrum in Fig.~5) was obtained in sample $\sharp 24$, whereas three other curves correspond to SnS arrays formed in another sample $\sharp 21$. Selecting natural numbers $m = 9, 8, 6, 4$, we achieve a coincidence between the positions of the large gap SGS (marked as $n_L = 1, 2, 3$ and highlighted by gray vertical areas in Fig.~5), and for the small gap peculiarities (dashed lines, $n_S = 1, 2$ label). Intensive minima of the first and second order located at $|V_{n_L=1}| \approx 12.6$\,mV, $|V_{n_L=2}| \approx 6.3$\,mV, and third-order peculiarities at $|V_{n_L=3}| \approx 4.2$\,mV unambiguously determine the large gap $\Delta_L \approx 6.3$\,meV. For the highest quality Andreev spectra in underdoped Sm-1111 (see dynamic conductance for the 6-junction stack in Fig.~5), we also observe SGS for the small gap comprising the first ($|V_{n_S=1}| \approx 3.3$\,mV) and the second ($|V_{n_S=2}| \approx 1.7$\,mV) peculiarities. This gives the small gap value $\Delta_S \approx 1.7$\,meV. The determined values of both gaps are reproducible. $dI(V)/dV$ curves are symmetrical and have no signatures of overheating. \begin{figure} \includegraphics[width=20pc]{Fig8.eps} \caption{Temperature dependence of the positions of the first (circles) and the second (squares) Andreev dips for the large gap in the $dI(V)/dV$ shown in Fig. 7. The upper inset shows the temperature dependence of excess current in $I(V)$ for this contact. The lower inset shows the change in current-voltage characteristic at $T = 4.2$\,K, and at $T = 27$\,K.} \end{figure} A summary of the data for SnS contacts obtained in nearly optimal and underdoped Sm-1111 samples is presented in Fig.~6. According to Eqs.~(1,2) positions $V_n$ of the SGS dips should depend linearly on their inverse order $1/n$, and the line should also pass the origin. The $V_n$ positions of the large gap peculiarities for optimal samples with $T_C =52 \pm 2$\,K (see Figs.2,3) are shown by solid circles, for the samples with $T_C = 50 \pm 2$\,K (see Fig.~4) \textemdash by open symbols. The experimental points are confined into the segment (dash-dot lines) passing through the (0;0)-point; the $V_n$ dispersion is obviously caused by the $T_C$ variation. The average gap values are: $\Delta_L = 12.4 \pm 1.2$\,meV for Sm-1111 with $T_C \approx 52$\,K, $\Delta_L = 11.5 \pm 1.2$\,meV, $\Delta_S = 2.5 \pm 0.5$\,meV for Sm-1111 with $T_C \approx 50$\,K. For underdoped samples with $T_C = 26 \pm 1$\,K, the large gap SGS positions are presented in Fig.~6 by triangles, the SGS positions for $\Delta_S$ \textemdash by squares. The data demonstrate two linear dependences starting from the origin. For $T_C \approx 26$\,K, we get average values $\Delta_L = 6.3 \pm 0.6$\,meV and $\Delta_S = 1.7 \pm 0.3$\,meV. The corresponding BCS-ratios $2\Delta_L/k_BT_C \approx 5.6$, $2\Delta_S/k_BT_C \approx 1.5$ are nearly the same as obtained for Sm-1111 with high $T_C$. \begin{figure} \includegraphics[width=20pc]{Fig9.eps} \caption{Normalized dynamic conductance spectra measured at $T = 4.2$ \textendash 27.5\,K for Andreev array (2$^{nd}$ curve from the top in Fig. 5) in underdoped Sm-1111. The spectra were offset vertically for clarity, nevertheless, the contact conductance decreases with temperature. Local critical temperature is $T_C^{local} = 27.5 \pm 1$\,K. The $n_L$ labels and vertical lines indicate subharmonic gap structure for the large gap $\Delta_L \approx 6.3$\,meV. Dashed lines and $2\Delta_S$ label point to the SGS for the small gap $\Delta_L \approx 2.0$\,meV. Lower dashed spectra ($T=4.2$\,K) was recorded after the thermal cycling, to demonstrate the mechanical stability of the break-junction.} \end{figure} \subsection{Temperature dependence of the superconducting gaps} Temperature evolution of the dynamic conductance spectrum of Andreev array in underdoped Sm-1111 sample (see Fig. 5, lower curve) is shown in Fig.~7. The $dI(V)/dV$ curves are offset with temperature increase, and the linear background is subtracted. The lower spectrum (measured at $T = 4.2$\,K) in Fig. 7 demonstrates clear SGS for the large gap (the positions of the first and the second SGS dips are labeled as $2\Delta_L$ and $\Delta_L$, respectively). As temperature increases, the dips move towards zero bias, whereas the upper spectrum (measured at $T = 27$\,K) in Fig.~7 becomes nearly linear which corresponds to the normal state. Similarly to the Andreev arrays in nearly optimal samples (see Fig.~3), the contact resistance increases with the temperature, as shown in the lower inset of Fig.~8. At $T = 4.2$\,K, the contact resistance is $R \approx 70$\,$\rm{\Omega}$ and is large enough to provide a ballistic mode. The excess current probed at high bias voltage $eV \approx 2\Delta_L(4.2\rm{K})$ being maximal at $T = 4.2$\,K turns to zero at $T_C^{local}$ as shown in the upper inset of Fig.~8. Positions of the first (circles) and the second (squares) dynamic conductance minima versus temperature, corresponding to $2\Delta_L(T)$ and $\Delta_L(T)$ dependences \cite{Kummel}, are presented in Fig.~8. Both $dI(V)/dV$ peculiarities have similar temperature dependence, thus proving these peculiarities to be related to the same SGS. The dependences deviate from the single-gap BCS-like curve (dashed line in Fig.~8) being slightly bent down in comparison with the BCS-type $T$-dependence. Since the data of Fig. 8 are obtained for SnS-array and demonstrate namely bulk properties, it does not represent the surface gap. Thus, the observed deviation of the temperature course points to the presence of the second superconducting condensate and the respective gap $\Delta_S$, which was not resolved by IMARE spectroscopy (see Fig.~7). The latter could be due to a low concentration of carriers in the bands with $\Delta_S$ \cite{Charnukha}. \begin{figure} \includegraphics[width=20pc]{Fig10.eps} \caption{Temperature dependence of the large gap (solid circles) and of the small gap (open black circles) for underdoped Sm-1111 (see Fig. 9). Local critical temperature is $T_C^{local} = 27.5 \pm 1$\,K. The normalized dependence $\Delta_S(T)/\Delta_S(0) \times \Delta_L(0)$ is presented by squares for comparison. Theoretical fit by two-gap Moskalenko and Suhl equations \cite{Mosk,Suhl} is shown by solid lines, single-gap BCS-like curves are shown by dashed lines. Bulk resistive transition (for a sample with nominal $x < 0.08$) is shown by open rhombs (right scale).} \end{figure} \begin{figure} \includegraphics[width=20pc]{Fig11.eps} \caption{Temperature dependences of the large gap (solid symbols) and of the small gap (open symbols of corresponding colour and shape) for Sm-1111 samples with various thorium doping. $T_C^{local} = 26 \textendash 49$\,K. The $\Delta(T)$ shown by blue circles are similar to those in Fig. 10. Theoretical fits by two-gap Moskalenko and Suhl equations \cite{Mosk,Suhl} are shown by solid lines, single-gap BCS-like curves are shown by dashed lines. Temperature dependence of bulk resistivity near superconducting transition ($x \approx 0.3$) is shown by gray open circles (right scale).} \end{figure} Figure 9 shows temperature evolution of the dynamic conductance for another Andreev array measured with the same sample as that shown in Fig.~7. Here, the features of the weaker condensate are more clearly pronounced. At $T = 4.2$\,K, the SGS peculiarities for the large gap $\Delta_L \approx 6.3$\,meV are labelled as $2\Delta_L$ and $\Delta_L$; the position of the first peculiarity for the small gap $\Delta_S \approx 2$\,meV is labelled by $2\Delta_S$. The spectra are offset vertically with temperature increase. The dashed-line spectrum corresponds to $dI(V)/dV$ measured at liquid helium temperature after thermocycling (to $T_C$ and back). The spectrum remains quantitatively similar to the initial $dI(V)/dV$ measured at $T = 4.2$\,K. The reproducibility of the spectra demonstrates high mechanical stability of the break junction. The positions of both SGS peculiarities decrease with temperature and turn to zero at local critical temperature $T_C^{local} \approx 27.5$\,K. The temperature dependences for the large gap (solid circles) and for the small gap (large open circles) were directly determined similarly to Fig.~8; they are presented in Fig. 10. The $\Delta_L(T)$ temperature dependence slightly bends down as compared to the BCS-type curve shown by the dashed line. As temperature increases, the small gap starts decreasing more rapidly, then almost linearly tends to the common critical temperature $T_C^{local}$. The character of the $\Delta_L(T)$ temperature dependence differs from $\Delta_S(T)$; this becomes obvious from the normalized temperature dependence $\Delta_S(T)/\Delta_S(0) \times \Delta_L(0)$ shown by squares in Fig.~10. The different behavior, confirms therefore, that the peculiarities observed in the dynamic conductance spectra are related to two distinct SGS's, and two different superconducting condensates, respectively. For comparison, we show the temperature dependence of bulk resistivity of the corresponding sample with nominal $x < 0.08$ (open rhombs in Fig.~10). The set of $\rho(T)$ data obtained with the polycrystalline samples showed that $\rho(T_C)$ nearly four times exceeds $\rho^{single}(T_C)$. Thus, the absolute values of $\rho^{single}$ were roughly estimated by normalizing of raw $\rho(T)$ by a factor of 4 in Figs. 10,11. \begin{figure} \includegraphics[width=20pc]{Fig12.eps} \caption{The dependence of the large gap (solid squares) and the small gap (open squares) on the critical temperature for Sm-1111 with various thorium doping. The data of the present work are shown by squares (red squares depict data with nominal $x \lesssim 0.08$ series, blue squares \textemdash $x \approx 0.08 \textendash 0.3$). The data statistics obtained earlier by us with $x \approx 0.08 \textendash 0.3$ samples \cite{UFN,SmJETPL,FPSfit} is shown by triangles. BCS-limit 3.52 is shown by dash-dot line for comparison. Black lines are guidelines.} \end{figure} \subsection{Inter- and intraband coupling} All temperature dependences of the large and small superconducting gaps we have measured agree well with predictions of two-band BCS-like Moskalenko and Suhl system of equations \cite{Mosk,Suhl} with a renormalized BCS-integral (RBCS) \cite{MgB2fit}. The equations describe the $\Delta_{L,S}(T)$ variation governed by diagonal (intraband) and off-diagonal (interband) coupling constants $\lambda_{ij} \equiv V_{ij}N_j$, where $N_j$ is the normal-state density of states at the Fermi level in the $j^{th}$ band, $V_{ij}$ is the matrix interaction elements ($V_{ij} \equiv V_{ji}$), $i,j = L,S$. To obtain theoretical $\Delta_{L,S}(T)$, we used the following fitting parameters: the relation between off-diagonal coupling constants $\lambda_{LS}/\lambda_{SL}$, the relation between intra- and interband coupling rate $\sqrt{V_LV_S}/V_{LS}$, and the eigen BCS-ratio for the small gap $2\Delta_S/k_BT_C^S$, here $T_C^S$ is the eigen critical temperature of the $\Delta_S$ condensate in a hypothetical case of the zero interband interactions $(V_{LS} = 0)$. Note, the sign of the interband $\lambda$ would not change their ratio, thus the sign can not be determined by such fitting procedure. The only restriction for these fitting parameters is obvious: $2\Delta_S/k_BT_C^S > 3.52$. The theoretical fits with 3 adjustable parameters are shown in Fig.~10 by solid lines; they capture correctly the experimental $\Delta_{L,S}(T)$ dependences. \begin{figure} \includegraphics[width=20pc]{Fig13.eps} \caption{The dependence of BCS-ratio for the large gap (solid squares) and for the small gap (open squares) on the critical temperature for Sm-1111 with various thorium doping. The data of the present work are shown by squares (red squares depict data with nominal $x \lesssim 0.08$ series, blue squares \textemdash $x \approx 0.08 \textendash 0.3$). The data statistics obtained earlier by us with $x \approx 0.08 \textendash 0.3$ samples \cite{UFN,SmJETPL,FPSfit} is shown by triangles. BCS-limit 3.52 is shown by dash-dot line for comparison.} \end{figure} In order to explore whether the generic $\Delta_{L,S}(T)$ temperature behavior is intrinsic to Sm-1111 compounds with various doping, we plotted in Fig.~11 several temperature dependences of both gaps obtained with Sm-1111 samples with various doping level. The $\Delta_L(T)$ dependences are presented by solid symbols, the $\Delta_S(T)$ \textemdash by open symbols. The temperature dependence of bulk resistivity near the superconducting transition for optimal single crystal is shown by gray open circles. Significantly, the $\Delta_L(T)$ dependences with $T_C^{local} = 26 \pm 1$\,K (solid circles, rhombs, and down triangles in Fig. 11) obtained with one and the same sample $\sharp 21$ look similarly. The value $\Delta_L(4.2\rm{K}) \approx 6.3$\,meV and the shape of its temperature dependence are reproducible and independent on both the contact resistance and the number of SnS-junctions in the array. Generally speaking, regardless of thorium doping, the typical features of $\Delta_{L,S}$ remain the same within all the $T_C$ range from 27\,K to 49\,K. The large gap temperature dependence passes slightly below the single-gap BCS-type (shown by dashed lines in Fig. 11), whereas the small gap dependence follows BCS-type only at $T \lesssim T_C^S$, then slow fades till $T_C^{local}$. The gap temperature dependences of the same type were observed in other oxypnictide groups, such as Gd-1111 and La-1111 \cite{UFN,FPSfit}. The observed $\Delta_{L,S}(T)$ behaviour is typical for a relatively weak interband coupling as compared to intraband one, and a higher normal density of states in the bands with the small gap. In the presence of Coulomb repulsion between the quasiparticles the effective coupling constant should be calculated as $\lambda = \lambda^0 - \mu^{\ast}$ (here $\lambda^0$ is a full electron-boson coupling constant, and $\mu^{\ast}$ is a Coulomb repulsion). It is known, the experimental $\Delta_{L,S}(T)$ dependences are determined by namely effective coupling constant $\lambda_{ij}$ (see for example \cite{Kogan}), whereas the ratio of normal density of states (DOS) for both bands is determined by full coupling constant: $N_S/N_L = \lambda_{LS}^0 / \lambda_{SL}^0$. Supposing zero Coulomb repulsion as suggested in \cite{MazinRev,Mazin} for $s^{\pm}$ model ($T_C^{local} \approx 49$\,K), the relative coupling constants are $\lambda_L^0 : \lambda_S^0 : |\lambda_{LS}^0| : |\lambda_{SL}^0| = 1:0.65:0.3:0.03$, which leads to extremely high ratio of normal densities of states in the two bands $\lambda_{LS}/\lambda_{SL} = N_S/N_L \approx 10$. The latter is far from theoretical predictions, therefore one should use nonzero Coulomb repulsion constants $\mu_{LS}^{\ast}$ to estimate the full coupling constants $\lambda_{ij}$. In case of positive interband $\lambda_{LS}$, the DOS ratio is $N_S / N_L \approx 2$, and the relation between intra- and interband coupling rate approximately 2.5. The estimated relative $\lambda_{ij}$ are close to those calculated by us earlier in 1111-oxypnictides based on Gd, Sm, and La \cite{UFN}. \subsection{Summary of the data} By summarizing the gap values determined by IMARE spectroscopy of the Sm-1111 samples with $T_C = 21 \textendash 54$\,K, one may unreveal the influence of thorium doping on the superconducting properties (Figs.~12, 13). The $\Delta_L$ values are shown by solid symbols, the $\Delta_S$ values \textemdash by open symbols. The data of the present work are shown by squares. Blue squares correspond to samples with the nominal Th concentrations $x \approx 0.08 \textendash 0.3$ and well-reproduce the data obtained by us earlier (triangles) \cite{UFN,SmJETPL,FPSfit}. The pioneer data with nominal $x \lesssim 0.08$ series are shown by red squares, obviously, they follow the general course. Both superconducting gaps are in direct ratio with critical temperature as demonstrated in Fig.~12. Evidently, although the gap values are determined in the Andreev arrays with various cross-section, number of sequential contacts and, correspondingly, resistance of various Sm-1111 samples, the $\Delta_{L,S}(T_C)$ data in Fig.~12 is scattered insignificantly. We observe a good scaling of both superconducting gaps with critical temperature within the wide range of thorium doping and the wide range of critical temperatures, $21\,\rm{K} \leq T_C \leq 54\,\rm{K}$. The family of 1111-superconductors with Gd, La, and Ce, as well as FeSe chalcogenide also follow this tendency \cite{UFN}. The linear $\Delta_{L,S}(T_C)$ dependences correspond to nearly constant BCS-ratios $2\Delta_{L,S}/k_BT_C$ for both gaps (Fig. 13). For the large gap, the BCS-ratio lies in the range $2\Delta_L/k_BT_C^{local} = 5.0 \textendash 5.7$. It is obvious that the interband interaction increases this ratio due to decreasing $T_C^{local}$. From fitting the $\Delta_{L,S}(T)$ dependences in the framework of Moskalenko and Suhl equations we have estimated the eigen BCS-ratio for the large gap: $2\Delta_L/k_BT_C^L = 4.1 \textendash 4.6$. The latter value exceeds the weak-coupling BCS limit 3.52 and points to a strong electron-boson coupling. The value obtained is close to those determined for 1111 oxypnictides by PCAR spectroscopy \cite{Miyakawa,Tanaka,Samuely}, nuclear magnetic resonance \cite{Mukuda}, and scanning tunneling microscopy \cite{Noat}. The BCS-ratio for the small gap $2\Delta_S/k_BT_C^{local} = 1.1 \textendash 1.6$ lies well below the BCS-limit, obviously, because $T_C^{local} \gg T_C^S$. By contrast, the eigen BCS-ratio for the small gap estimated from Moskalenko and Suhl fits is $2\Delta_S/k_BT_C^S = 3.5 \textendash 4$ (see also \cite{UFN,SmJETPL,FPSfit}). In Sm(Th)-1111, thorium atoms are located in Sm(Th)O-spacers, do not affect superconducting FeAs blocks directly and act as charge donors. Therefore, one may conclude that (Sm,Th) substitution do not change significantly the mechanism of superconductivity in Sm$_{1-x}$Th$_x$OFeAs. \section{Conclusions} By using intrinsic multiple Andreev reflections effect (IMARE) spectroscopy, we explored evolution of the superconducting properties of Sm$_{1-x}$Th$_x$OFeAs compound with thorium doping. We determined the two superconducting gap values $\Delta_{L,S}$ for Sm$_{1-x}$Th$_x$OFeAs samples in a wide range of critical temperatures $T_C = 21 \textendash 54$\,K. We observed a good scaling of both $\Delta_L$ and $\Delta_S$ with $T_C$ in the whole explored range of $T_C$. The BCS-ratio for the large gap $2\Delta_L/k_BT_C^{local} = 5.0 \textendash 5.7$ and its eigen BCS-ratio (in a hypothetical case of zero interband coupling) $2\Delta_L/k_BT_C^L = 4.1 \textendash 4.6$ exceed the BCS-limit 3.52, thus suggesting a strong electron-boson coupling. For the small gap, $2\Delta_S/k_BT_C^{local} = 1.1 \textendash 1.6 \ll 3.52$, whereas its eigen BCS-ratio $2\Delta_S/k_BT_C^S = 3.5 \textendash 4.0$ (when $V_{LS}=0$). The determined temperature dependences of the superconducting gaps $\Delta_{L,S}(T)$ are reproducible within the studied $T_C$ range and are well described with the two-band Moskalenko and Suhl system equations with a renormalized BCS-integral (RBCS). According to our estimates, the interband coupling is weaker than the intraband one by a factor of $\approx 2.5$, and the Coulomb repulsion constants $\mu^{\ast}$ are not negligible. The thorium substitution does not significantly change the mechanism of superconductivity in Sm$_{1-x}$Th$_x$OFeAs, making Sm(Th)O-spacers of crystal structure to act as charge reservoirs.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{\label{}} \section{Introduction: Hunting for Gamma-ray Binaries} At X-ray energies, the extra-solar sky is dominated by the emission from accreting binary systems containing black holes and neutron stars. However, at higher energies (GeV to TeV) very few interacting neutron star or black hole binaries are known sources~\citep{Hill2011}. The relative paucity of gamma-ray binaries can be attributed to the necessity for not only a power supply, but also non-thermal mechanisms~\citep{Dubus2006, Mirabel2006}. There are, however, evolutionary reasons to expect more gamma-ray binaries to exist~\citep{Meurs1989}, and there are many unidentified \textit{Fermi} LAT sources. Gamma-ray binaries are expected to show orbitally-modulated gamma-ray emission due to changes in viewing angle and, in eccentric orbits, the degree of the binary interaction. Periodic modulation has indeed been seen in LS 5039 (3.9 day period), LS I +61 303 (26.5 days), Cygnus X-3 (4.8 hours)~\citep{Hill2011}, and 1FGL J1018.6-5856 (16.65 days)~\citep{Corbet2011, Fermi2012} and emission is orbital phase dependent for PSR B1259-63 (3.4 years)~\citep{Abdo2011}. A search for periodic modulation of gamma-ray flux from \textit{Fermi} LAT sources may thus yield further gamma-ray binaries, potentially revealing the predicted HMXB precursor population. The second \textit{Fermi} LAT catalog of gamma-ray sources (``2FGL''~\citep{Nolan2012}) contains 1873 sources, many of which do not have confirmed counterparts at other wavelengths and thus are potentially gamma-ray binaries. In order to search for modulation we regularly update 0.1 - 200 GeV light curves for all 2FGL sources and calculate power spectra of these. We use aperture photometry with a 3$^{\circ}$\ radius, with photons weighted by the probability that they came from the source of interest to increase the signal-to-noise level. To avoid solar gamma-ray emission, we exclude times when the Sun was closer than 5$^{\circ}$\ to an aperture. We then calculate power spectra of all light curves to search for periodic modulation. To account for variations in exposure, each time bin's contribution to the power spectrum is weighted by its relative exposure~\citep{Fermi2012}. \section{Complex Modulation Patterns in Two \textit{Fermi} Sources} From an examination of the power spectra for all sources in the 2FGL catalog, orbital modulation is strongly detected in the known gamma-ray binaries LS 5039, LS I +61 303, and 1FGL J1018.6-5856 (= 2FGL J1019.0-5856). Artifacts near 1 day and the precession period of \textit{Fermi}'s orbit at \sqig53 days are also seen in the power spectra of a number of sources\footnote{\url{http://fermi.gsfc.nasa.gov/ssc/data/analysis/LAT_caveats_temporal.html}}. In addition to these, we noted complex sets of peaks in the power spectra of 2FGL J0753.2+1937 and 2FGL J2356.3+0432. 2FGL J0753.2+1937 does not have an identified counterpart at other wavelengths, while 2FGL J2356.3+0432 is identified with the blazar MG1 J235704+0447~\citep{Nolan2012}. Although these two sources are widely separated on the sky, it was determined that the peaks in both sources were all harmonics of a 27.3 day period (Figure~\ref{fig-power}). When the light curves are folded on this period, brief flares are seen in both sources (Figure~\ref{fig-fold}). \begin{figure} \includegraphics[angle=-90,width=83mm]{moon_power_lab.ps}% \caption{Both 2FGL J2356.3+0432 (top) and 2FGL J0753.2+1937 (bottom) show a complex pattern of peaks in their power spectra. These peaks are harmonics of a 27.3 day period - up to the 17$^{\rm th}$ harmonic is detectable.}\label{fig-power} \end{figure} \begin{figure} \includegraphics[angle=-90,width=83mm]{moon_fold.ps}% \caption{Modulation for both 2FGL J2356.3+0432 and 2FGL J0753.2+1937 is caused by sharp ``flares'' that recur with a 27.3 day period, but with different epochs of maximum flux.}\label{fig-fold} \end{figure} \section{Lunar Gamma-Rays} Interactions of cosmic rays with the Moon's surface result in the production of gamma rays. This makes the Moon a rather bright source for the \textit{Fermi} LAT with a flux above 100 MeV of \sqig10$^{-6}$ ph cm$^{-2}$ s$^{-1}$~\citep{Abdo2012}, and it was even detectable with EGRET~\citep{Thompson1997}. We note that the Sun is also a gamma-ray source. Although the 2FGL catalog notes sources potentially affected by solar emission, no such analysis was done for the Moon~\citep{Nolan2012}. The Moon's sidereal period is 27.321 days. The sharp recurrent flares from 2FGL J0753.2+1937 and 2FGL J2356.3+0432 can be understood as due to repeated passages of the Moon sufficiently close to these sources to affect the light curves. \section{Optimizing Lunar Detection: Summed Harmonics} Power spectra are not ideal for detecting brief flaring activity as this strongly non-sinusoidal modulation results in the power being spread over a very large number of harmonics. We investigated other period-detection techniques such as Stellingwerf's phase dispersion minimization~\citep{Stellingwerf1978} method and others. It was found that lunar modulation was best detected by creating a modified power spectrum, similar to the Z$^2$ test~\citep{Buccheri1983}, with each point replaced with a sum of itself and up to the 10th harmonic. This is illustrated in Figure~\ref{fig-harm} where we show the power spectrum, and summed-harmonic power spectrum of 2FGL J0816.9+2049, a source which is identified with the blazar 2FGL J0816.9+2049~\citep{Nolan2012}. From harmonic-summed power spectra of the entire 2FGL catalog we detected 38 sources that suffered from significant lunar contamination (Table~\ref{table-sources}). \begin{figure} \includegraphics[angle=-90,width=83mm]{harm_demo.ps}% \caption{Lunar contamination of 2FGL J0816.9+2049 is not directly detected in the power spectrum (bottom). However, the summed harmonic modification of this (top) clearly shows a 27.3 day period due to the Moon.}\label{fig-harm} \end{figure} \begin{table \caption{\textit{Fermi} LAT Sources with Apparent Lunar Contamination}\label{table-sources} \begin{tabular}{|l|r|r|} \hline \multicolumn{1}{|c}{\textbf{Source}} & \multicolumn{1}{|c}{\textbf{R.A. (J2000)}} & \multicolumn{1}{|c|}{\textbf{Decl. (J2000)}} \\ \multicolumn{1}{|c}{\textbf{}} & \multicolumn{1}{|c}{\textbf{(degrees)}} & \multicolumn{1}{|c|}{\textbf{(degrees)}} \\ \hline 2FGL J0009.0+0632 & 2.262 & 6.542\\ 2FGL J0022.5+0607 & 5.643 & 6.124\\ 2FGL J0023.5+0924 & 5.892 & 9.407\\ 2FGL J0114.7+1326 & 18.675 & 13.441 \\ 2FGL J0257.9+2025c & 44.480 & 20.423 \\ 2FGL J0322.0+2336 & 50.516 & 23.611 \\ 2FGL J0326.1+2226 & 51.536 & 22.439 \\ 2FGL J0440.5+2554c & 70.146 & 25.903 \\ 2FGL J0709.0+2236 & 107.274 & 22.600 \\ 2FGL J0725.6+2159 & 111.400 & 21.990 \\ 2FGL J0753.2+1937 & 118.320 & 19.623 \\ 2FGL J0816.9+2049 & 124.250 & 20.823 \\ 2FGL J0839.4+1802 & 129.863 & 18.036 \\ 2FGL J0913.0+1553 & 138.251 & 15.893 \\ 2FGL J0923.5+1508 & 140.895 & 15.138 \\ 2FGL J0946.5+1015 & 146.648 & 10.259 \\ 2FGL J1007.7+0621 & 151.932 & 6.353 \\ 2FGL J1016.0+0513 & 154.014 & 5.229 \\ 2FGL J1018.6+0531 & 154.659 & 5.524 \\ 2FGL J1040.7+0614 & 160.182 & 6.246 \\ 2FGL J1058.4+0133 & 164.615 & 1.566 \\ 2FGL J1059.0+0222 & 164.767 & 2.374 \\ 2FGL J1107.5+0223 & 166.878 & 2.386 \\ 2FGL J1221.4$-$0633 & 185.358 & $-$6.553 \\ 2FGL J1256.5$-$1145 & 194.139 & $-$11.753 \\ 2FGL J1318.9$-$1228 & 199.745 & $-$12.476 \\ 2FGL J1424.2$-$1752 & 216.054 & $-$17.880 \\ 2FGL J1544.1$-$2554 & 236.042 & $-$25.912\\ 2FGL J1553.2$-$2424 & 238.322 & $-$24.404 \\ 2FGL J2000.8$-$1751 & 300.217 & $-$17.857 \\ 2FGL J2006.9$-$1734 & 301.734 & $-$17.582 \\ 2FGL J2031.4$-$1842 & 307.868 & $-$18.703 \\ 2FGL J2108.6$-$1603 & 317.159 & $-$16.062 \\ 2FGL J2120.6$-$1301 & 320.152 & $-$13.030 \\ 2FGL J2124.0$-$1513 & 321.023 & $-$15.223 \\ 2FGL J2154.0$-$1138 & 328.503 & $-$11.634 \\ 2FGL J2225.6$-$0454 & 336.424 & $-$4.901 \\ 2FGL J2356.3+0432 & 359.091 & 4.541 \\ \hline \end{tabular} \end{table} \begin{figure*} \includegraphics[width=167mm,trim=0cm 0cm 0cm 0cm,clip=true]{multi_moon_fold.ps}% \caption{Light curves of selected \textit{Fermi} LAT sources from Table~\ref{table-sources} folded on the Moon's sidereal period. The vertical red lines are offset based only on the R.A. of each source and so roughly approximate the Moon's path.\label{fig-muti-fold}} \end{figure*} \section{Removing the Moon} \textit{Fermi} spacecraft files do not currently include lunar coordinates. One of us (PSR) has provided a utility (``\texttt{moonpos}'') that uses the JPL SPICE toolkit~\citep{Acton1996}\footnote{\url{http://naif.jpl.nasa.gov/naif/toolkit.html}} to add lunar coordinates. This is available from the \textit{Fermi} Science Support Center on the User Contributions web page\footnote{\url{http://fermi.gsfc.nasa.gov/ssc/data/analysis/user}}. The addition of lunar coordinates enables filtering to exclude data obtained when the Moon was close to a source via the standard analysis tool \texttt{gtmktime}. We find that excluding data within 8 degrees of a source removes almost all contamination. \section{Applications to Searches for Flaring Binaries} The technique of summing harmonics to reveal the presence of lunar contamination is also useful in the search for gamma-ray binaries. For example, the binary PSR B1259-63 is only active for a brief portion of its 3.4 yr orbit~\citep{Abdo2011}. Other systems exhibiting similar repeating brief flares will be more readily detected using harmonically-summed power spectra. So far no definite non-lunar periodic flaring has been detected for any source, but the hunt continues. \section{Summary} \begin{itemize} \setlength{\itemsep}{-12pt} \setlength{\parsep}{0pt} \item Lunar gamma-ray emission can significantly contaminate the light curves of LAT sources near the ecliptic plane.\\ \item Lunar modulation at 27.3 days is directly detected in the power spectra of a few sources.\\ \item Adding power-spectrum harmonics (\sqig10) reveals the 27.3 day signal in 38 2FGL sources.\\ \item Software has been developed to facilitate exclusion of lunar contaminated data. This is publicly available. \end{itemize} \noindent We advocate:\\ (i) lunar proximity filtering should be done for any source close to the ecliptic.\\ (ii) lunar coordinates should be included in the standard \textit{Fermi} spacecraft files. The summed-harmonic technique is being used to search for gamma-ray binaries that briefly flare for only a short fraction of their orbit. \begin{acknowledgments} This work was partially supported by the NASA \textit{Fermi} Guest Observer Program (NNX12AH82G). \end{acknowledgments} \bigskip
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{\label{sec:intro}Introduction} \noindent Diffusion and transport play a central role in internal dynamical processes of many complex systems and often represent their main drivings. As an example, efficiency in the transport of chemicals and particles affects reaction rates through the probability that two or more reacting molecules ``meet'' each other \cite{rice1985,dorsaz_prl10}. Another example is given by turbulent diffusion that is one of the main mechanisms, the other one is advection by large-scale motions, driving the dispersion of contaminants and pollutants (gas, aerosol particles, dust, seeds) in the atmosphere \cite{kaplan1993,paradisi_pa01,paradisi_pceb01,paradisi_npg12,cheng_csb2014,goulart_pa17}. \noindent The first observation concerning diffusive motion of particles in fluids dates back to a period between the 18th and 19th centuries \cite{ingenhousz1784,bywater1819,brown_pm1828,brown_pm29} (see, e.g., \cite{abbott_ieee-te1996} for a interesting historical perspective). The so-called {\it normal, Brownian} or {\it standard diffusion} was firstly observed. This is defined by two conditions: (i) Mean-Squared Displacement (MSD) grows linearly in time: $\langle x^2 \rangle = 2 D t$ and (ii) Probability Density Function (PDF) of particle displacements is Gaussian \footnote{ Both conditions follow from the well-known Central Limit Theorem, which states the emergence of a Gaussian random variable from the sum of many random contributions that are both finite-size (i.e., finite variance) and statistically independent (i.e., uncorrelated). }. \noindent Normal diffusion has been historically the first observed and theoretically investigated diffusive motion as it emerges in non-complex, i.e., not self-organized systems, thus without coherent structures or complex heterogeneous conditions that can affect diffusive motion in a non-trivial way (e.g., by introducing long-range correlations). This was the condition usually observed in the kind of experiments made by Brown, Perrin and others, usually a still liquid or a gas at equilibrium (\cite{ingenhousz1784,bywater1819,brown_pm1828,perrin_cr1908,perrin_acp1909}. \noindent On the contrary, when complex systems are considered, that is, characterized by the emergence of self-organized states, i.e., coherent large-scale, long-time lasting, structures, deviations from the linear time-dependence of the variance are typically observed \cite{metzler_etal-physrep-2000,paradisi_csf15_preface,paradisi_complex_csf15}: \begin{equation} \langle x^2 \rangle = 2 D_\phi t^\phi = 2 D_H t^{2 H}\ \ {\rm with}\ \ \phi \ne 1\ (H \ne 1/2 \ )\ . \label{anom_diff} \end{equation} $H$ is the Hurst exponent or {\it second moment scaling} and $H=1/2$ identifies the normal diffusion scaling. This condition is known as {\it anomalous diffusion} \cite{metzler_etal-physrep-2000,metzler_etal-jpa-2004,klages_anomalous_2008}. It is worth noting that Eq. (\ref{anom_diff}) shows that not only the global efficiency of the diffusion, but also the particular kind of transport is a crucial property, being the first one measured by the generalized position diffusivity $D_\phi$ and the second one encoded in the diffusion scaling $\phi = 2 H$. \noindent The first observed anomalous diffusion dates back to the Richardson's t-cubed law for the relative particle diffusion in turbulence, which was already reported in 1926 \cite{richardson_prsa1926}. Another historically important example comes from the motion of charge carriers in amorphous semiconductors, which was extensively studied by Montroll and co-workers (see, e.g., \cite{montroll1964,montroll_etal-jmp-1965,scher_etal-prb-1975}). In the last three decades, the number of complex self-organized systems displaying anomalous diffusion has increased very rapidly \cite{solomon_prl93,kraichnan_prl94,claused_pa97,venkataramani_pd98,periasamy_bj98,delcastillo-negrete_pp04,furstenberg_csf15}. In particular, in the field of biological transport many new experimental findings are being published every year \cite{tolicnorrelykke_etal-prl-2004,golding_etal-prl-2006,gal_experimental_2010,barkai_etal-phystod-2012,manzo_etal-rpp-2015,ariel2015swarming,he_nc16} and this is attracting a great interest in the scientific community of theoreticians with many models being proposed and compared with data \cite{jeon_etal-prl-2011,hofling_etal-rpphys-2013,manzo_etal-prx-2015,molina_etal-pre-2016}. In particular, many papers are being devoted to model the random diffusive motion of macromolecules in the cell cytoplasm and membrane \cite{manzo_etal-prx-2015,reverey_etal-scirep-2015,molina_etal-pre-2016,metzler_etal-bba-2016,jeon_prx16}, or in artificial {\it in vitro} environments \cite{dix_arb08,nawrocki_jpcb17}, such as a mixture of water, proteins and lipids, used to mimick and investigate mechanisms occurring in biology (e.g., trapping of proteins by simultaneously forming lipid vesicles) \cite{luisi_bbab09,luisi_cbc10,paradisi_bmcsb15}. \noindent The first proposed model for anomalous diffusion is the Continuous Time Random Walk (CTRW), which was introduced and extensively studied and applied by Montroll and co-workers \cite{montroll1964,montroll_etal-jmp-1965,scher_etal-prb-1975} (see \cite{weiss1983} for a review). Its very first version is a random walk with statistically independent random jumps and random times that are also decoupled with each other. Random times, also called Waiting Times (WTs), describe a trapping mechanism due to a sequence of potential wells \cite{bouchaud_etal-physrep-1990,bouchaud-jpf-1992}, thus this particular CTRW model can describe only subdiffusion ($\phi<1$). The WT is the intermediate long time between two crucial short-time events, each one given by the escape from a given well and the jump into another one, thus CTRW is essentially driven by the sequence of WTs, described by a renewal point process \cite{bianco_cpl07,paradisi_epjst09,paradisi_npg12,paradisi_romp12,paradisi_csf15_pandora,paradisi_springer2017}. Several CTRW models have been introduced and investigated, but the subdiffusive CTRW remains probably the most studied and applied one, with the exception of so-called L\'evy Walk (LW) model, which is a CTRW whose jumps and WTs are coupled \cite{shlesinger_random_1982,shlesinger_pa86,zaburdaev_rmp15}. Unlike the subdiffusive uncoupled CTRW, LWs can indeed reproduce superdiffusive behavior. CTRW represents an important modeling approach extensively applied to many complex systems, such as biological transport (see, e.g., \cite{burov_etal-pccp-2011,reverey_etal-scirep-2015,metzler_etal-bba-2016} for subdiffusive CTRWs and \cite{de_jager_levy_2011,ariel2015swarming} for L\'evy Walks and search strategies). Other models do not consider the existence of crucial jump events, while explicitly including the long-range correlations of the process in the dynamical equations. This is the case of Fractional Brownian Motion (FBM) \cite{mandelbrot_etal-siamrev-1968,biagini2008} and of a viscoelastic model such as the Generalized Langevin Equation (GLE) \cite{heppe-jfm-1998,goychuk_acp13,goychuk_po14,stella_prb14}, which are both essentially based on Gaussian stochastic processes. \vspace{.3cm} \noindent {\it Anomalous diffusion from heterogeneity} \vspace{.05cm} \noindent CTRW and FBM had some success in applications to biological transport. However, each of these models does not seem able to take into account all the observed statistical features of transport \cite{massignan_etal-prl-2014,manzo_etal-prx-2015,molina_etal-pre-2016}, so that a unified reasonable physical picture describing experimental data does not yet exist. A new direction in theoretical modeling recently emerging in the scientific community comes from a quite simple observation: diffusion in biological environments like the cell cytoplasm or membrane is mainly affected by the very complex heterogeneity, the crowding and the presence of different kinds of structures (e.g., cytoskeleton). For this reason, a great attention on the role of heterogeneous environments in anomalous diffusion is rapidly increasing and an intense debate is raising in the scientific community, especially in the context of anomalous biological transport \cite{hofling_etal-rpphys-2013,manzo_etal-prx-2015,molina_etal-pre-2016,lanoiselee_jpamt18,sposini_njp18}. \noindent Due to the above reasons, in very recent years the proposal of Heterogeneous Diffusivity Models (HDMs) is taking momentum in the scientific community \cite{cherstvy_etal-njp-2013,jeon_etal-pccp-2014,cherstvy_etal-pccp-2016}. Superstatistics is probably the first model of anomalous diffusion that is based on the idea of a heterogeneous environment \cite{beck_prl01,beck_pa03,vanderstraeten_pre09}, but a great attention is nowadays focused towards other approaches trying to go beyond superstatistics. In particular, Diffusing Diffusivity Models (DDMs) are being proposed and studied in very recent literature \cite{chubynsky_etal-prl-2014,chechkin_prx17,jain_jcs17,lanoiselee_jpamt18,sposini_njp18}. In DDMs an additional stochastic equation is introduced to describe the position diffusivity. A similar but different approach, included into the class of HDMs, follows from the very first idea of Schneider's grey Brownian Motion (gBM) \cite{schneider_1990,schneider_1992}. In the gBM a random amplitude multiplying a Gaussian process, usually the FBM, is introduced. This amplitude characterizes the motion of single trajectories, so that the diffusion properties of the ensemble are affected by the amplitude distribution. In particular, gBM is associated with a Mainardi distribution of the amplitude \cite{mainardi_etal-ijde-2010,pagnini-fcaa-2013} and, in this case, the displacement PDF satisfies a time-fractional diffusion equation \cite{gorenflo_etal-nd-2002,paradisi_caim15, sandev_fcaa18}. In the last decade the gBM model was extended to the generalized grey Brownian Motion (ggBM) \cite{mura-phd-2008,mura_etal-pa-2008,mura_etal-jpa-2008,mura_etal-itsf-2009,pagnini_etal-ijsa-2012,pagnini_etal-ptrsa-2013}. The ggBM was shown to satisfy the Erd\'elyi--Kober fractional diffusion equation \cite{pagnini-fcaa-2012}, which includes the time-fractional diffusion equation, describing the gBM distribution, as a particular case. A further generalization is given by the ggBM-like model discussed by Pagnini and Paradisi, 2016 \cite{pagnini_etal-fcaa-2016}, which was proven to satisfy the space-time fractional diffusion equation \cite{gorenflo_etal-cp-2002,gorenflo_etal-pa-2002,mainardi_etal-fcaa-2001} regardless of the particular Gaussian process describing single trajectory's dynamics \footnote{ It is worth noting that, similarly to ggBM, the model discussed in Ref. \cite{pagnini_etal-fcaa-2016} reduces to time-fractional diffusion for a proper choice of parameters. }. For this reason, this class of ggBM-like models is here denoted as Randomly Scaled Gaussian Processes (RSGPs), as it extends the ggBM not only to much more general space-time fractional diffusion, but it also includes whatever Gaussian process as the process driving single trajectory dynamics. The DDM approach has been recently compared against a ggBm-like approach with a random scale governed by the same stochastic differential equation \cite{sposini_njp18}. The potential application of ggBM-like models to biological transport was discussed by showing that the behavior of a set of different statistical indices are qualitatively accounted for by this kind of modeling approach \cite{molina_etal-pre-2016}. However, to our knowledge, DDMs, gBM and ggBM-like models do not directly describe the particle velocity's dynamics and, thus, the role of friction and velocity diffusivity are not explicitly taken into account. \vspace{.3cm} \noindent {\it Heterogeneous ensemble of Brownian particles and RSGPs} \vspace{.05cm} \noindent To overcome this limitation, the dynamics of a Heterogeneous Ensemble of Brownian Particles (HEBP) have been recently investigated by Vitali et al., 2018 \cite{vitali_jrsi18,dovidio_spta18}, where a stochastic model that takes explicitly into account the heterogeneity is derived, but this model does not belong neither to the class of DDMs nor to that of HDMs. It is instead based on a linear Langevin equation for a friction-diffusion (i.e., Ornstein--Uhlenbeck) process that describe the velocity dynamics. A population of relaxation time and velocity diffusivity parameters are then considered, that is, mathematically treated as random variables whose statistical distributions are derived by imposing the emergence of anomalous diffusion, long-range correlations and power-law decay in the position distribution of the particle ensemble (see following section for model details). This means to assume that particles in the ensemble follow different dynamics depending on the different physical parameters. Due to linearity, this model is easily recognized to be equivalent to a RSGP for both position and velocity: \begin{equation} x(t) = \sqrt{2D}\, x_G(t)\ ;\quad v(t) = \sqrt{2D}\, v_G(t)\ , \label{rsgp} \end{equation} being $x_G(t)$ and $v_G(t)$ proper Gaussian processes and $D$ a random velocity diffusivity (see \ref{rsgp-ggou}). In the RSGP model \eqref{rsgp} the single trajectory is still described by a Gaussian process, but this is no more a FBM. It instead follows from the joint effect of the different relaxation time scales $\tau$. For proper distribution of $\tau$, this causes the emergence of long-range correlations and anomalous, but still Gaussian, diffusion with scaling $\phi = 2 H \ne 1$. Conversely, the non-Gaussianity in the Probability Density Function (PDF) of the position is related to inhomogeneities in the velocity diffusivity $D$ \cite{vitali_jrsi18}. An interesting point deserving attention is that the HEBP/RSGP model proposed by Vitali et al., 2018 \cite{vitali_jrsi18} has a clear physical meaning as it describes the dynamics of an ensemble of Brownian particles with heterogeneous physical properties and moving in a viscous medium in thermal equilibrium, thus giving a well-posed physical basis to ggBM and ggBM-like processes (i.e., RSGPs) \cite{dovidio_spta18}. \vspace{.3cm} \noindent {\it The problem of infinite energy} \vspace{.05cm} \noindent As known, anomalous diffusion is often observed jointly with non-Gaussian PDFs displaying slow power-law decaying tail: $p(x,t) \sim 1/x^{1+\alpha}$ with $0 < \alpha < 2$. For this kind of non-Gaussian PDFs, the HEBP model developed by Vitali et al., 2018 \cite{vitali_jrsi18} share with other anomalous diffusion processes, such as L\'evy flights, the problem of an infinite variance, thus formally allowing for a physically meaningless infinite energy in the system. Furthermore, this does not allow to have a fluctuation-dissipation theorem for the equilbrium velocity PDF in a stationary thermal bath. \noindent To overcome this limitation, while remaining in the framework of heterogeneity-driven anomalous diffusion, we here discuss a simple and natural modification of the HEBP model proposed in Vitali et al., 2018 \cite{vitali_jrsi18} and show that this modification is sufficient to get a physically meaningful model, at the same time being able to reproduce behaviors similar to those of other anomalous diffusion processes, in particular L\'evy Walk (LW) models. In fact, similarly to LW, our proposed model is shown to display a power-law decay of the distribution for intermediate values of the position, at the same time keeping the finiteness of moments and, thus, of energy due to an exponential cut-off in the distribution tails. However, in spite of its finite energy, LW cannot describe a friction-diffusion process and, thus, fluctuation-dissipation theorem does not apply. \vspace{.2cm} \noindent The paper is organized as follows. In Section \ref{sec:descr} we discuss the stochastic model for the HEBP and we show numerical simulations of the model. In particular, an anomalous-to-normal transition is shown to occur in Section \ref{anom-to-normal}. In Section \ref{sec:lw} the comparison of our model with two different LWs is carried out. Finally, in Section \ref{discuss} some discussions and final remarks are sketched. \section{\label{sec:descr} Heterogeneous ensemble of Brownian particles} \subsection{Preliminary considerations} \label{model_1} \noindent Starting from the Langevin equations associated to each Brownian particle of the ensemble, the HEBP approach leads to anomalous diffusion with uncorrelated white noise. Thus, HEBP models are substantially different from approaches based on the generalized Langevin equation or on Langevin equations with colored noises and, in general, on noises with long-range spatiotemporal correlations with even "anomalous" thermodynamics \cite{gheorghiu_etal-pnas-2004}. In HEBP models anomalous diffusion emerges as a consequence of heterogeneity in the particle ensemble, while classical thermodynamics still hold. Heterogeneity is then responsible for long-range correlations, in agreement with approaches based on polydispersity \cite{gheorghiu_etal-pnas-2004}. In particular, in the present approach anomalous behavior is displayed during an intermediate asymptotic transient regime in the Barenblatt's sense \cite{barenblatt-1979}, thus requiring an underdamped (white noise) Langevin approach. These last two features of anomalous diffusion are consistent with the findings in the case of the underdamped scaled Brownian motion \cite{bodrova_etal-sr-2016}, and, implicitly, with the role of friction when a complex potential is applied \cite{sancho_etal-prl-2004}. \noindent HEBP models are compared in literature with similar approaches based on fluctuating friction \cite{rozenfeld_etal-pla-1998,luczka_etal-pa-2000,luczka_etal-appb-2004}, fluctuating mass \cite{ausloos_etal-pre-2006} and with the already cited DDM approach \cite{chechkin_prx17,sposini_njp18}. Further approaches using a population of the involved parameters were proposed on the basis of a Gaussian processes, see for example the Markovian continuous time random walk model with a population of time-scales \cite{pagnini-pa-2014} or the ggBM \cite{mura_etal-jpa-2008,molina_etal-pre-2016} that actually is the fBm with a population of length-scales. Interestingly, approaches based on fluctuating friction or mass, as such as HEBP models, are underdamped processes on the contrary of the DDMs and ggBm-like processes \cite{mura_etal-jpa-2008,pagnini_etal-fcaa-2016,sposini_njp18}, that are overdamped. In systems displaying anomalous diffusion, underdamped processes were shown to be a preferable approach \cite{bodrova_etal-sr-2016}. All these approaches take into account a distributed parameter and, then, they can be linked to superstatistics \cite{beck_pa03}. The discussion of the present approach within the idea of superstatistics is reported in Section \ref{superstat}. \vspace{.2cm} \noindent In the HEBP model introduced here the ensemble of particles differ in their density (mass divided by volume). The main difference with mentioned approaches is that in our formulation fluctuations refer to differences among particles and not to changes in time. Particles differ in their mass $m$, in their friction coefficient $\gamma$ and in their noise amplitude $b$, related to velocity diffusivity through: $D=b/m^2$. The fluctuation-dissipation theorem states $b=\kappa_{\rm{B}} T \, \gamma$, where $\kappa_{\rm{B}}$ and $T$ are the Boltzmann constant and the temperature, respectively. Then, the set of distributed independent parameters $\{m,\gamma,b\}$ reduces to the set $\{m,\gamma\}$. Moreover, by assuming that the present one-dimensional model is indeed a Cartesian direction of a three-dimensional isotropic and spatially independent process, the friction coefficient is given by the Stokes law $\gamma=6 \pi \nu r$ where $\nu$ is the viscosity of the medium and $r$ the radius of the Brownian particle. This means that by the combination of the fluctuation-dissipation theorem and the Stokes law the set of distributed independent parameters is $\{m,r\}$. Considering the definitions $\tau=m/\gamma$ and $D=b/m^2$, the particle density (mass divided by volume) is approximately $3 m/(4\pi r^3) = 162 \, \pi^4 \nu^3 \, D^2 \tau^5$ and the differences among particles in terms of $\{m,r\}$ translate into differences in terms of $\{\tau,D\}$, namely the ensemble of particles is characterized by a population of diffusivities $D$ and a population of relaxation times $\tau$. In this framework, we highlight that both the populations of masses and radii contribute to the emergence of the anomalous scaling, by means of the relaxation times $\tau=m/\gamma=m/(6\pi \nu r)$, and to the shape of the resulting probability density functions of particle dispersion, by means of the diffusivity $D=\kappa_{\rm{B}} T \, 6 \pi \nu \, r/m^2$. \subsection{Model description} \label{model_2} \noindent We consider an ensemble of particles with heterogeneous physical parameters moving in a viscous medium. Each particle moves according to a linear Langevin equation for a friction-diffusion, i.e., Ornstein--Uhlenbeck process: \begin{eqnarray} &&\frac{dx}{dt} = v\ , \label{eq:kinematic} \\ \ \nonumber \\ && \frac{dv}{dt} = -\frac{1}{\tau} v(t) + \sqrt{2 D}\, \xi(t) \ . \label{eq:langevin_new} \end{eqnarray} As anticipated in the previous Section \ref{model_1}, $\tau$ and $D$ are the viscous relaxation time and the velocity diffusivity, respectively. The fluctuation-dissipation theorem for the single particle in the HEBP is given by: \begin{equation} \tau D\ = \frac{\kappa_{\rm B} T}{m}= \langle v^2 | \tau, D \rangle_{\rm eq}\ . \label{fluct-diss_single} \end{equation} In our HEBP model each single particle has a different pair of parameters $(\tau, D)$, which meet fluctuation-dissipation relation \ref{fluct-diss_single}) and remain constant throughout the motion. The complexity in the dynamics of the ensemble is mathematically introduced by means of an effective randomness in the parameters of the Langevin equation (\ref{eq:langevin_new}) and, thus, by means of proper statistical distributions for $\tau$ and $D$. Interestingly, for each pair $(\tau,D)$, every trajectory itself remains an ordinary Brownian motion in a viscous medium, i.e., a Ornstein--Uhlenbeck process with Wiener (Gaussian) noise. Thus, the overall complexity emerges as an average behavior of the entire ensemble of particles, which individually move according to a standard Ornstein--Uhlenbeck (Gaussian) process. \noindent In order to get both anomalous diffusion, due to long-range correlations, and power-law behavior in the PDF $p(x,t)$, we choose the following distributions of $\tau$ and $D$ \cite{vitali_jrsi18}: \begin{eqnarray} &&g(\tau) = \frac{\eta}{\Gamma(\eta)} \frac{1}{\tau} L_{\eta}^{-\eta}\left(\frac{\tau}{\tau_*}\right)\ ,\quad 0<\eta<1\ ; \label{eq:tau_distr}\\ && f(D) = \frac{1}{D_*} L_{\alpha/2}^{-\alpha/2}\left(\frac{D}{D_*}\right)\ , \quad 1<\alpha<2\ ; \label{eq:D_distr} \end{eqnarray} where $\Gamma(\cdot)$ is the Gamma function, $L_{\alpha}^{-\alpha}(\cdot)$ the L\'evy extremal density with stability index $\alpha$ \cite{gnedenko-kolmogorov1954,feller1971}, $\tau_*$ is a reference time scale and $D_*$ a reference scale for the velocity diffusivity \footnote{ As well known, the L\'evy's Generalized Central Limit Theorem states that L\'evy stable densities $L_\alpha^\theta(x)$, with $\theta$ asymmetry parameter, have a basin of attraction for a class of PDFs with slowly decaying power-law tails: $p(x,t) \sim 1/|x|^{1+\alpha}$ with $0 < \alpha \le 2$. As a consequence, the choice of $g(\tau)$ and $f(D)$ is a robust one and is expected to apply in the context of complex systems, i.e., systems with self-organizing features and emergent structures where power-law tails and anomalous transport often emerge due to cooperative dynamics. }. In the following we set $\tau_*=D_*=1$. \noindent As well-known, with the exception of the Gaussian case ($\alpha=2$), the Mean Square Displacement (MSD) of a L\'evy stable density $L_\alpha^\theta$ diverges and, for $0 < \alpha \le 1$, also the mean $\langle D \rangle$ is infinite, which is exactly the case of Eq. (\ref{eq:D_distr}) for the considered range of parameters. Conversely, the average relaxation time is finite and is given by: $\langle\tau\rangle = \eta \tau_*/\Gamma(1/\eta)$. $\eta$ is the model parameter determining the \textsl{space-time scaling} of the diffusion process, while $\alpha$ affects the power-law decay emerging in the position PDF $p(x,t)$. \noindent It was proved in Ref. \cite{vitali_jrsi18} that the process conditioned to a particular value of $D$ is a Gaussian stochastic process with long-range velocity correlation. In particular, the stationary correlation function and the MSD are given by: \begin{eqnarray} &&R(t | D) = D\, \frac{\Gamma( 1 + \eta )}{\Gamma( 1 - \eta )} \left( \frac{\Gamma(1/\eta)}{\eta} \right)^\eta \langle \tau \rangle^{1+\eta} \, t^{-\eta}\ ,\quad 0<\eta<1\ ; \label{corr_free} \\ \ \nonumber \\ &&\sigma_X^2(t | D) = \langle x^2 | D \rangle = 2 C D t^\phi\ , \quad 1 < \phi = 2-\eta < 2\ ; \label{var_superdiff_1} \\ \ \nonumber \\ &&C = \frac{\Gamma(\eta+1)}{\Gamma(3-\eta )} \left( \frac{\Gamma(1/\eta)}{\eta}\right)^{\eta}\langle\tau\rangle^{1+\eta}\ , \label{var_superdiff_2} \end{eqnarray} thus resulting in a {\it superdiffusive scaling} regime.\\ The one-time marginal PDF is a Gaussian density with zero mean and variance (MSD) $\sigma_X^2$: ${\cal G}(x,\sigma_X(t | D))$. By averaging Eq. (\ref{fluct-diss_single}) over $\tau$, we get for any fixed, finite $D$ \cite{vitali_jrsi18}: \begin{equation} \langle v^2 | D \rangle_{\rm eq} = \langle \tau \rangle D\ . \label{cond_var} \end{equation} It is worth noting that, similarly to the Fractional Brownian Motion (FBM), this model belongs to the class of Gaussian stochastic processes with stationary increments and long-range correlations, as it can be seen from the power-law behavior in Eqs. (\ref{corr_free}) and (\ref{var_superdiff_1}). Thus, this is a valid alternative model, as it shares with FBM the emergence of anomalous diffusion scaling, but with different velocity correlation function derived within the well-defined physical framework of complex heterogeneity. \noindent When $D$ is distributed according to the PDF $f(D)$ given in Eq. (\ref{eq:D_distr}), the probability of finding a particle in $x$ at time $t$ is given by \cite{vitali_jrsi18}: \begin{eqnarray} p(x,t) &=& \int_0^\infty {\cal G}\left(x,\sigma_X(t|D)\right) \frac{1}{D_*} L_{\alpha/2}^{-\alpha/2} \left(\frac{D}{D_*} \right) \mathrm{d}D = \nonumber \\ \label{eq:space_fract} &=& \frac{1}{\sqrt{ C D_* t^{\phi}} } L_\alpha^0 \left( \frac{x}{\sqrt{C D_* t^{\phi}} }\right)\ , \end{eqnarray} with $\sigma_X(t | D)$ and $C$ given by Eqs. ~\eqref{var_superdiff_1} and ~\eqref{var_superdiff_2}, respectively, while $L_{\alpha}^{0}\left(x \right)$ is a L\'evy symmetric $\alpha$-stable density. This PDF is clearly self-similar with {\it space-time scaling} $z=x/t^{\phi/2}$, being $p(x,t) = 1/t^{\phi/2} F(x / t^{\phi/2})$. \noindent The {\it formal} average of the fluctuation-dissipation relationship, Eq. (\ref{cond_var}), is given by: \begin{equation} \langle v^2 \rangle_{\rm eq} = \langle \tau \rangle \langle D \rangle\ . \label{fluct-diss} \end{equation} Being $\langle D \rangle=\infty$, this implies a physically meaningless infinite energy in the equilibrium/stationary state: $\langle v^2 \rangle_{\rm eq} = \infty$ \footnote{ Actually, Eq. (\ref{fluct-diss}) is just a formal expression that, rigorously, could not even be written when the mean diffusivity is infinite. }. \noindent Considering that $D$ is connected with the mass $m$ and that, in real systems, particles masses are finite, it is reasonable to assume that the PDF $f(D)$ a maximum alllowed value for the diffusivity. We then limit the possible values of the diffusivity $D$ by assuming a cut-off in the PDF $f(D)$ at some maximum value $D_\mathrm{max}$\footnote{ A smoother (e.g., exponential) cut-off could be chosen, but we expect that the particular choice of the cut-off does not substantially change the results. }. Consequently, the integral in Eq. (\ref{eq:space_fract}) becomes: \begin{equation} \label{dmax_model} p(x,t) = \int_0^{D_{\mathrm{max}}} {\cal G}\left(x,\sigma_X(t|D)\right) \frac{1}{D_*} L_{\alpha/2}^{-\alpha/2} \left(\frac{D}{D_*} \right) \mathrm{d}D\ . \end{equation} The PDF $p(x,t)$ is no more given by the symmetric L\'evy stable density $L_\alpha^0$ as in Eq. (\ref{eq:space_fract}), but it still satisfies the self-similarity condition: $p(x,t) = 1/t^{\phi/2} F(x/t^{\phi/2})$. \noindent The more interesting aspect is that model (\ref{dmax_model}) satisfies the fluctuation-dissipation relationship averaged over $D$, Eq. (\ref{fluct-diss}), thus also giving a finite energy. This relationship can also be used to numerically estimate the value of $D_{\mathrm{max}}$ for given $\langle D \rangle$ or, equivalently, given $\langle v^2 \rangle_{\rm eq}$ and $\langle \tau \rangle$: \begin{equation} \langle D \rangle = \frac{{\langle v^2 \rangle}_{\rm eq}} {\langle \tau \rangle} = \int_0^{D_{\mathrm{max}}} {D \, f(D) \, \mathrm{d}D}\ . \label{dmax_evaluate} \end{equation} Both this integral and the above integral in Eq. (\ref{dmax_model}) can be only numerically evaluated, as a further analytical approach is not possible. \noindent For properly chosen values of $D_{\mathrm{max}}$, a power-law decay $p(x,t) \sim 1/|x|^{1+\alpha}$ is expected to emerge in a intermediate range, before a rapidly decaying cut-off appears at large $x$ values. Further, all moments are finite, but they could have very large values depending on the chosen value of $D_{\mathrm{max}}$. In any case, an anomalous superdiffusive scaling is expected in the MSD: $\langle x(t)^2 \rangle \sim t^\phi$. \subsection{\label{sec:sim} Numerical simulations of the HEBP model} \noindent In order to verify the scaling features for the HEBP with truncated diffusivity PDF, Eq. (\ref{dmax_model}), we carried out both Monte-Carlo (MC) simulations of stochastic trajectories, computed from the Langevin equation (\ref{eq:langevin_new}), and a direct numerical evaluation of the integral in Eq. (\ref{dmax_model}). In particular, the emergence of power-law behavior in the position PDF $p(x,t)$, and the relative range of validity, needs to be numerically estimated. \noindent The numerical simulations of Langevin equation (\ref{eq:langevin_new}) and Eq. ~\eqref{eq:kinematic} were carried out using the algorithms discussed in \ref{num_algo}. A sample set of couples $(\tau,D)$ was drawn from the distributions $g(\tau)$ and $f(D)$, Eqs. (\ref{eq:tau_distr}-\ref{eq:D_distr}), and stochastic trajectories were simulated. In Fig. ~\ref{fig:dstaus} the theoretical distributions $g(\tau)$ and $f(D)$ are compared with the respective numerical histograms of drawn values of $\tau$ and $D$. In the numerical simulations the following parameters were used: $\alpha=3/2$, $\eta=1/2$, $D_\mathrm{max}=10^4$, initial conditions $x_{i,0} = 0$ and $v_{i,0} = 0$. Being $\tau_*=1$, it results: $\langle \tau \rangle = \eta \tau_*/\Gamma(1/\eta)=1/2$. From the numerical computation of Eq. ~\eqref{dmax_evaluate} we get: $\langle D \rangle \simeq 8.31$ and, then: $\langle v^2 \rangle_{\rm eq} = \langle\tau \rangle \langle D \rangle \simeq 4.16$. Being $\phi=2-\eta=3/2$, we expect: $\langle x^2(t) \rangle \propto t^{3/2}$. From Fig. ~\ref{fig:msd_shannon}(a) it is evident that the theoretical scaling $\phi=3/2$ is numerically verified at sufficiently long times. \noindent In order to check the self-similarity of the position PDF: \begin{equation} \label{eq:pow_law_pdf_scale} p(x, t) = \frac{1}{t^\delta} F \left( \frac{x}{t^\delta} \right), \end{equation} we use the Diffusion Entropy Analysis (DEA) \cite{grigolini_fractals01,allegrini_pre02,paradisi_csf15_pandora}, which is based on the computation of the Shannon entropy of the diffusion process: \begin{equation} \label{eq:shannon} S(t) = -\int_{-\infty}^\infty \mathrm{d}x\, p(x, t) \ln p(x, t) = A + \delta \ln t\ . \end{equation} $\delta$ is the space-time scaling of the PDF. For monoscaling diffusion \footnote{ Monoscaling diffusion processes belong to the general class of monoscaling/monofractal processes or signals, defined by the condition: $X(at) = a^H X(t)$. }, Eq. ~\eqref{eq:pow_law_pdf_scale} holds exactly, so that the PDF $p(x,t)$ is self-similar with self-similarity index $\delta$, thus also equal to the Hurst exponent $H$. The theoretical expectation for the HEBP is: $\delta = H = \phi/2 = 1-\eta/2$. \begin{figure}[!h] \includegraphics[width=\linewidth]{rnds} \caption{ \label{fig:dstaus}(color online) Distributions $g(\tau)$ (red) and $f(D)$ (blue). Lines: theoretical expressions, ~\eqref{eq:tau_distr} and ~\eqref{eq:D_distr}. Circles: histograms of the sample set of $\tau$ and $D$. $\tau_* = 1$; $D_* = 1$; $\alpha=3/2$; $\eta=1/2$.} \end{figure} \begin{figure}[!h] \includegraphics[width=1\linewidth]{msd_shannon} \caption{ \label{fig:msd_shannon}(color online) \textbf{(a)} MSD computed from the MC simulations (green circles) compared with the analytical prediction: $\langle x^2(t) \rangle \propto t^\phi$; $\phi=3/2=1.5$ (red dashed line). \textbf{(b)} Comparison of the DEA behavior computed from MC simulations (blue circles) with DEA computed from the analytically obtained PDF, Eq.~\eqref{dmax_model} (dashed black line). $\delta = 3/4 = 0.75$. } \end{figure} The DEA was computed using the histograms estimated from numerical MC simulations and from the numerical computation of the analytical expression (\ref{dmax_model}). The comparison of the two different estimates is shown in Fig. ~\ref{fig:msd_shannon}(b). It is evident that the DEA computed from the analytical expression shows very good agreement with the theoretical scaling: $\delta=\phi/2 = 1-\eta/2=3/4$. On the contrary, in the DEA computed from MC simulations a net straight line in the graph $(\ln t,S(t))$ does not emerge clearly in the studied range, even if a rough agreement with the theoretical expectation is seen. This is probably due to statistical limitations of MC simulations, thus proving that estimation of scaling in such processes could be quite a delicate task when dealing with real experimental data. \noindent In Fig.~\ref{fig:pdf9600_cmp_an} we compare the coordinate PDFs computed from Eq.~\eqref{dmax_model} with those evaluated from the MC simulations (being diffusion symmetrical, the PDFs are plotted in the range $x>0$). \begin{figure}[!h] \includegraphics[width=1\linewidth]{pdfs_cmp_an} \caption{\label{fig:pdf9600_cmp_an}(color online) Comparison of the coordinate PDFs of the MC simulated motion (dashed lines) with that obtained from the analytical expression ~\eqref{dmax_model} (straight lines) for different times.} \end{figure} The analytical expression clearly shows a well-defined power-law tail in a intermediate range of $|x|$: $p(x, t) \propto 1/|x|^{1+\alpha}$, followed by a rapid cut-off for large $|x|$. Regarding the space-time scaling $z=x/t^{\delta}$, in agreement with DEA, the analytical PDFs have an exact self-similarity index $\delta=3/4$. Conversely, the decay of PDFs derived from MC simulations is slightly more complicated, but the general behavior is compatible with the analytical one. Even the self-similarity space-time scaling roughly approximates the theoretical expectation $\delta=3/4$, but small deviations are evident, is agreement with the DEA displayed in Fig. ~\ref{fig:msd_shannon}(b) (blue circles). \subsection{Heterogeneous ensemble of Brownian particles and superstatistics} \label{superstat} \noindent Superstatistics approach takes into account large-deviations of intensive quantities of systems in nonequilibrium stationary states \cite{beck_prl01,beck_pa03,abe_etal-pre-2007} and it was motivated by some preliminary success obtained when fluctuations of parameters were considered \cite{wilk_etal-prl-2000,beck-epl-2002}. In general, superstatistics is successful to model: turbulent dispersion considering energy dissipation fluctuations \cite{beck_prl01,reynolds-prl-2003}, renewal critical events in intermittent systems \cite{paradisi_cejp09,akin_jsmte09}, and for different distributions of the fluctuating intensive quantities different effective statistical mechanics can be derived \cite{beck_pa03}, e.g., Tsallis statistics with $\chi^2$-distribution. \noindent The main idea of superstatistics is that a Brownian test particle experiences fluctuations of some intensive parameters by moving from cell to cell \cite{beck_pa03}. Following this idea, the random value of the fluctuating parameter is generated at any change of cell. The main assumption behind this picture is that each cell is in equilibrium during the residence time of the particle: within the cell there are no fluctuations but a different value assigned to each cell. The local value of the fluctuating parameter changes in the various cells on a time scale that is much longer than the relaxation time that the single cells need to reach local equilibrium. This means that the fluctuating parameter follows a slow dynamics and then the integration over the fast variable is taken after the integration over the slow variable which is in opposition to what an adiabatic scheme requires \cite{abe_fp14}. This fact can be considered just an order of integration that does not affect the computation of the expected values but it is more deep when the entropy is considered \cite{abe_etal-pre-2007,abe_fp14}. This inconsistency is solved by considering a dynamical equation also for the slow fluctuating quantity \cite{abe_fp14}, an example of such dynamical equation was already considered in Ref. \cite{reynolds-prl-2003}. \noindent The HEBP approach is clearly based on a different picture, even if the superposition of Langevin equations may suggest some analogies. Here the superposition gives rise to anomalous diffusion because it reproduces the effects of the ensemble heterogeneity. In fact, in the present approach the fluctuations are not due to different values in different cells but to the population of density (mass divided by volume) of the ensemble. As a consequence of this, the present approach does not take into account slow and fast dynamics and then the issue concerning the order of integration does not arise. \subsection{Anomalous-to-normal transition} \label{anom-to-normal} \noindent Here we briefly show the effect of limited statistics of $\tau$ on the diffusion scaling. Statistical limitation in the number of $\tau$ randomly drawn from $g(\tau)$ also results in the existence of a maximum relaxation time $\tau_{\rm max}$. Thus, the statistical limitation in the sample set of $\tau$ mimicks the existence of a $\tau_{\rm max}$ in real experimental systems. This is a reasonable assumption, also considering the relation of $\tau$ with mass and size of the particle (see previous Section \ref{model_1}), whose distributions are necessarily limited. \begin{figure}[!h] \includegraphics[width=\linewidth]{tau-histo_eta05_tauav052_taumax297p2.pdf} \caption{ \label{fig:tau-histo}(color online) Histogram of a sample set of $\tau$ limited to $10000$ draws from $g(\tau)$. $\tau_* = 1$. } \end{figure} In Fig. \ref{fig:tau-histo} we report the histogram for a sample set with $10000$ random draws from $g(\tau)$. The parameters are: $\eta=1/2$, $\langle\tau\rangle=1/2$. It can be seen that, for this limited statistics, $g(\tau)$ is well-reproduced up to a value of $\tau$ less than $10$, while for larger values there are fluctuations and, for $\tau$ greater than about $30-50$, also some apparent outliers are seen till a maximum value $\tau_{\rm max} = 297.2$. The experimental/numerical mean relaxation time is $\langle\tau\rangle_{\rm exp}=0.52$. \begin{figure}[!h] \includegraphics[width=\linewidth]{al05_AnomNormal_tauav052_taumax297p2.pdf} \caption{ \label{fig:anom-norm}(color online) Numerical simulations of the heterogeneous ensemble of Brownian particles for the $\tau$ sample set of Fig. \ref{fig:tau-histo}. Top panel: position MSD; bottom panel: velocity MSD. } \end{figure} \noindent The maximum relaxation time $\tau_{\rm max}$ can be considered a kind of time scale after which all trajectories reach the condition of a variance increasing linearly in time, even if with different multiplicative factors. As a consequence, we expect normal diffusion to occur in a very long time regime. This is confirmed in Fig. \ref{fig:anom-norm}, where the anomalous diffusive scaling $\phi=2-\eta=3/2$ emerges in the approximate time interval $[15 \langle \tau \rangle,450 \langle \tau \rangle]$, after which there is a transition to a normal diffusion regime starting at $t \sim 600 \langle \tau \rangle$. It is worth noting that, in general, the transition time scale does not depend only on $\tau_{\rm max}$, but also on the detailed statistics of the numerical histogram in the neighborhood of $\tau_{\rm max}$ itself. In particular, a situation where $\tau_{\rm max}$ is an outlier is quite different from a condition where $\tau$-set is, in some sense, dense near $\tau_{\rm max}$, which cannot then be considered an outlier. \noindent In summary, we can argue that, depending on the experimental/numerical set of relaxation times $\tau$, our HEBP model reproduces a transition from anomalous to normal diffusion. \section{\label{sec:lw}Comparison with L\'evy walk models} LW is one of the best known models of anomalous diffusion with finite MSD and was firstly introduced by Shlesinger, Klafter and Wong in 1982 \cite{shlesinger_random_1982}. The number of papers devoted to LWs is very large (see, e.g., \cite{allegrini_pre02,klages_anomalous_2008,paradisi_csf15_pandora,magdziarz_explicit_2016,taylor-king_fractional_2016,dybiec_pre17,aghion_epjb18}) and a quite recent and complete review can be found in Zaburdaev et al., 2015 \cite{zaburdaev_rmp15}. LWs have been applied to many phenomena, but surely the most promising and widespread applications are in the modeling of search strategies, such as bacteria foraging through run-and-tumble motion \cite{viswanathan_2011,ariel2015swarming,zaburdaev_rmp15}. Unlike L\'evy flights, where the particle is allowed to make large jumps in a whatever short time step (theoretically zero in the time-continuous limit), thus giving instantaneous infinite velocities and discontinuous paths, in LW models the particle moves with a finite speed. Such speed remains constant throughout a random duration time, also called Waiting Time (WT). After this WT, velocity randomly changes according to an assigned walking rule and it remains constant for another random WT. Thus, even if there are events with discontinuous acceleration, LWs have continuous velocity. When WT have a constant value, equivalent to a fixed time step, and velocity MSD is finite, LW reduces to a standard Random Walk with ballistic diffusion: $\langle x^2\rangle \sim t^2$. For WTs with finite mean, ballistic diffusion also occurs, but in the long-time limit (e.g., exponentially distributed WTs). Interestingly, the LW also displays {\it strong anomalous diffusion}, also known as {\it multiscaling/multifractal} diffusion \cite{ott2002,kantelhardt_pa02}. Multiscaling detection algorithms are usually based on the analysis of fractional moments: \begin{equation} \label{eq:frac_moments} \langle |x|^q \rangle = \int_{-\infty}^{\infty} \mathrm{d}x |x|^q p(x, t) = M_q \cdot t^{\lambda(q)}\ , \end{equation} where $\lambda(q) = q H(q)$, being $H(2)$ the well-known Hurst exponent or second-moment scaling. A complex system is multiscaling when $H$ changes with the moment order $q$ and the particular multiscaling features are defined by the behavior of the function $H(q)$. Conversely, a constant $H$, thus independent of $q$, is associated with monoscaling systems: $\langle |x|^q \rangle \sim t^{q H}$. \noindent Here we consider two different LW models that differ for the velocity distribution. The first one is the most classical one with randomly alternating velocities, i.e., constant speed $|V_{_{\rm LW}}|$ and randomly changing direction according to a coin tossing prescription \cite{shlesinger_random_1982,allegrini_pre02,zaburdaev_rmp15}. We limit here to the case $V_{_{\rm LW}} = \pm 1$. In the second one, we consider a continuous and symmetric random variable for the velocity \cite{zaburdaev_rmp15}. In both LW models the velocity is constant throughout a WT of duration $\delta t_i = t_{i+1}-t_i$ and randomly changes in correspondence of the critical event $i+1$, whose occurrence time $t_{i+1}$ marks the passage from the WT $\delta t_i$ to the next WT $\delta t_{i+1}$. For the WT distribution, we consider the following PDF \cite{paradisi_romp12}: \begin{equation} \label{eq:lw_time_distr} \psi(\delta t) = \frac{\left(\mu-1\right)T^{\mu-1}} {\left( T + \delta t \right)^{\mu}}, \end{equation} where $\mu > 1$ and $T$ is a reference time scale. The power-law tail emerges in the range $\delta t \gg T$. In the following we set $T=1$. The superdiffusive sub-ballistic behavior (the one we are interested in) is revealed when $2<\mu<3$. In the LW with alternating velocities, this regime is characterized by a central part of the PDF $p(x,t)$ that is well approximated by a symmetric L\'evy stable density $L_\alpha^0$ with stability index $\alpha = 1/(\mu-1)$ \cite{allegrini_pre02,paradisi_romp12}. At sufficiently large $|x|$, the PDF is abruptly truncated by ballistic peaks located at $x = \pm V_{_{\rm LW}} t$, which corresponds to the ballistic motion of paths whose first WT is longer than $t$. \begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{lw150_lin} \caption{\label{fig:lw_pdfs_lin}L\'evy walk probability density functions: the central part looks like L\'evy $\alpha$-stable law cut by ballistic fronts at $x = \pm V_{_{\rm LW}} t$}. \end{figure} In Fig.~\ref{fig:lw_pdfs_lin} the PDFs at different times computed from a MC simulation of LW with alternating velocities are reported. The ballistic peaks truncating the PDFs are evident. \noindent Similarly to HEBP, Eq. ~\eqref{dmax_model}, this LW model displays a power-law decay in a intermediate range of $|x|$ followed by an abrupt cut-off, thus resulting in the finiteness of moments and, in particular, of the MSD: $\langle x^2 \rangle \propto t^{4-\mu}$ \cite{allegrini_pre02,zaburdaev_rmp15}. For the LW with randomly alternating velocities and WT-PDF given by ~\eqref{eq:lw_time_distr} \footnote{ This is valid for all WT-PDFs with fat tails: $\psi(\tau) \sim 1/\tau^\mu$. }, the fractional moments are given by \cite{zaburdaev_rmp15}: \begin{equation} \lambda(q) = \left\{ \begin{array}{l} q/(\mu-1)\ ; \quad \quad\ \ q \le \mu-1\ ;\\ \ \\ q - (\mu-2)\ ; \qquad q > \mu-1\ . \end{array} \right. \label{levy_multiscaling} \end{equation} It is then easily seen from this formula that the LW with randomly alternating velocities obeys a biscaling law, with a given scaling for low-order moments and another one for high-order moments. \noindent In the following we compare four different cases: two LW models (with alternating and continuous velocities, respectively) and our HEBP model for two different sets of parameters chosen to fit the LW models. In particular: \begin{itemize} \item[(i)] L\'evy walk with randomly alternating velocity rule. WT-PDF given by Eq. ~\eqref{eq:lw_time_distr} with $\mu=5/2$. Coin tossing prescription for the change of direction. We set $V_{_{\rm LW}} = \pm 1$, so that $\langle v^2 \rangle_{\rm eq} = \langle v^2_{_{\rm LW}} \rangle = 1$; \item[(ii)] HEBP analytical model, Eq. (\ref{dmax_model}), with parameters: $\alpha=3/2$, $\eta = 1/2$, $\langle v^2 \rangle_{\rm eq} = \langle \tau \rangle \langle D \rangle = 1$. It results: $\langle \tau \rangle = 1/2$, $\langle D \rangle = 2$, $\phi=2-\eta=3/2$ and $D_\mathrm{max} = 40.1$ (numerically calculated from Eq. ~\eqref{dmax_evaluate}); \item[(iii)] L\'evy walk with random continuous velocity. WT-PDF given by Eq. ~\eqref{eq:lw_time_distr} with $\mu=5/2$. The velocity PDF is symmetric and evaluated from the stationary state of MC simulations carried out for the HEBP, Eq. ~\eqref{dmax_model}. MC simulation parameters are the same as in the next case (iv). The random generation of velocities was performed with the inverse transform sampling method; \item[(iv)] HEBP analytical model, Eq. (\ref{dmax_model}), with parameters: $\alpha=3/2$, $\eta = 1/2$, $\langle v^2 \rangle_{\rm eq} = \langle \tau \rangle \langle D \rangle = 8.127$. % It results: $\langle \tau \rangle = 1/2$, $\phi=2-\eta=3/2$, $\langle D \rangle = 16.254$ and $D_\mathrm{max} = 1.43\cdot 10^5$ (numerically calculated from Eq. ~\eqref{dmax_evaluate}); \end{itemize} The PDFs $p(x,t)$ of the HEBP, cases (ii) and (iv), are obtained, for different times, by means of numerical evaluation of Eq. ~\eqref{dmax_model}. Then, DEA $S(t)$ and MSD $\langle x^2(t) \rangle$ are computed by the calculated PDFs. Conversely, the paths of LW models (i) and (iii) are computed by means of MC stochastic simulations and, then, PDFs, DEA and MSDs are evaluated by statistical analysis of the sample paths. \begin{figure}[tbp] \centering \includegraphics[width=\linewidth,height=0.6\linewidth]{LW_pm1_an_pdfs}\\ \includegraphics[width=\linewidth,height=0.6\linewidth]{LW_rnd_an_pdfs} \caption{\label{fig:lw_an_1} Comparison of PDFs $p(x,t)$ for the four model cases (i-iv): {\bf Top panel}: LW model (i) with randomly alternating velocities (solid lines) fitted by HEBP, case (ii) (dashed lines); {\bf Bottom panel}: LW model (iii) with continuous random velocities (solid lines) fitted by HEBP, case (iv) (dashed lines). } \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\linewidth,height=0.6\linewidth]{LW_an_msd}\\ \includegraphics[width=\linewidth,height=0.6\linewidth]{LW_an_entropy} \caption{\label{fig:lw_an_2} Comparison of MSD $\langle x^2 \rangle(t)$ and DEA $S(t)$, Eq.~\eqref{eq:shannon}), for the four model cases (i-iv): \textbf{Top panel}: LW model (i) with randomly alternating velocities (blue triangles); HEBP, case (ii) (dashed blue line); LW model (iii) with random continuous velocities (red circles); HEBP, case (iv) (dashed red line); \textbf{Bottom panel}: LW model (i) with randomly alternating velocity (blue dots and line); HEBP, case (ii) (purple line); LW model (iii) with random continuous velocity (green dots and line); HEBP, case (iv) (red line). } \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\linewidth,height=0.6\linewidth]{LW_frac_moments}\\ \includegraphics[width=\linewidth,height=0.6\linewidth]{an_frac_moments} \caption{\label{fig:lw_an_3} \textbf{Top panel}: fractional moments, (Eq.~\eqref{eq:frac_moments}), of LW models (i) and (iii) (purple and green points, respectively -- almost coincide), dashed lines are analytical asymptotes given by Eq. ~\eqref{levy_multiscaling}; \textbf{Bottom panel}: fractional moments of HEBP, cases (ii) and (iv) (purple and green points, respectively -- almost coincide), dashed lines are the same analytical asymptotes as in Top panel. } \end{figure} \noindent The results are gathered in Figs. ~\ref{fig:lw_an_1},~\ref{fig:lw_an_2} and ~\ref{fig:lw_an_3}. In Fig. ~\ref{fig:lw_an_1} the comparisons of position PDFs $p(x,t)$ for different times are shown: LW model (i) fitted by HEBP, case (ii) (top panel) and LW model (iii) fitted by HEBP, case (iv). It is evident that the two LW models (alternating/continuous velocities) have very similar behaviors and that, for both models, it is always possible to find a parameter set for the HEBP to be comparable with LW models. In particular, HEBP well reproduces the power-law decay in the intermediate range, while for $x$ greater than the ballistic peaks of LW models an exponential cut-off emerges in the PDFs of HEBP. However, the space-time scaling $\delta$ is clearly different for the two models, as it is clear from the slight shifts between solid and dashed lines in both panels. Then, for fixed set of parameters, the quality of the fit is not the same for all PDF times. In particular, in the top panel (models (i) and (ii)) the best fit is made at time $t=10^3$, so that the less accurate agreement is at the longest time $t=10^6$. On the contrary, in the bottom panel the best fit is made at $t=10^6$ and, consequently, the worst agreement is at the first displayed time $t=10^3$. \noindent In Fig. ~\ref{fig:lw_an_2} we report the MSD $\langle x^2 \rangle$ (Top panel) and the DEA $S(t)$ (Bottom panel) for the four model cases. Numerical simulations and calculations of all models (i-iv) (LW models and HEBP) reproduce the expected power-law dependence $\langle x^2 \rangle \sim t^{3/2}$ with a very good agreement. The DEA $S(t)$ shows the net differences of the self-similarity index $\delta$ among LW and HEBP, thus making it more evident the different space-time scaling seen in Fig. ~\ref{fig:lw_an_1}. The numerical scaling is also in agreement with theoretical values: $\delta=1/(\mu-1)=2/3 \simeq 0.67$ for LWs and $\delta=1-\eta/2 = 3/4$ for HEBP. \noindent Finally, in order to explore the multiscaling character of the models, Fig. ~\ref{fig:lw_an_3} show the results for the evaluation of fractional moments. In top panel the fractional moments of LW model (i) and (iii) are compared, while bottom panel compares HEBP, cases (i) and (iv). The LW models (i) and (iii) have exactly the same behavior for $\lambda(q)$, thus in agreement with the expected multiscaling and, in particular, the biscaling law of Eq. ~\eqref{levy_multiscaling}. The behavior of our numerical simulations of HEBP, cases (ii) and (iv), is also found to be very similar to each other and in agreement with theoretical predictions, that is, it shows a well-defined monoscaling. The space-time scaling is the same for both parameter sets, as it depends only on $\eta$, being $\langle X^q \rangle \sim t^{\lambda(q)}=t^{qH(q)}$ with: \begin{equation} H(q)=\phi/2=1-\eta/2, \label{our_scaling} \end{equation} and, for HEBP, this is also equal to the self-similarity index: $\delta=H(q)=\phi/2$ (see Eq. ~\eqref{eq:space_fract}). \section{\label{discuss}Discussion and concluding remarks} \noindent We here introduced and discussed a model based on the idea of a standard friction-diffusion process in a strongly heterogeneous condition with inverse power-law distributions of parameter's populations (Eqs. ~\eqref{eq:tau_distr} and ~\eqref{eq:D_distr}). We considered the Langevin equation for an Ornstein--Uhlenbeck process with randomly distributed relaxation/correlation times $\tau$ and diffusivities $D$. The model with random $\tau$ population and constant diffusivity gives a Gaussian process with long-range correlation and anomalous diffusion scaling. The moments $\langle |x|^q \rangle$ are finite, as well as the energy $\langle v^2 \rangle_{\rm eq}$. However, anomalous transport often displays power-law decays in the position PDF $p(x,t)$ that cannot be reproduced by Gaussian processes, even if long-range. In order to extend this model to non-Gaussian PDFs with power-law tails, a random $D$ population is needed. This is obtained by means of the distribution $f(D)$ given in Eq. ~\eqref{eq:D_distr}. However, this distribution has an infinite mean and determine infinite moments for the velocity PDF and, thus, infinite energy. This is an unphysical condition, which also prevents to get a fluctuation-dissipation relation. For this reason we adopted a more realistic assumption by imposing a cut-off maximum value $D_{\rm max}$ for the diffusivity population, from which Eq. ~\eqref{dmax_model} follows. \noindent We proved that, similarly to LWs, our proposed HEBP model can take into account intermediate power-law decays in the PDF. Unlike LWs, where power-law is truncated by the ballistic peaks due to the underlying WT statistics, in HEBP the power-law is truncated by an exponential cut-off in the regime of large $x$. In experimental data, this is often associated with lack of statistics or presence of noise \cite{allegrini_pre10,paradisi_csf15_pandora,paradisi_springer2017}, but it is also recognized to be reminiscent of heterogeneous media \cite{lanoiselee_jpamt18}. \noindent In summary, we derived a model in a physical framework involving {\bf heterogeneity} and, then, a {\bf population} of parameters characterized by given inverse power-law distributions. Our model follows from a superposition of standard Gaussian processes with stationary and independent increments. This model therefore has the following properties: \begin{itemize} % \vspace{-.2cm} \item[(1)] {\bf long-range} correlations $R(t)\sim1/t^\eta$ and anomalous {\bf superdiffusive} scaling in the variance: $\langle x^2 \rangle \sim t^\phi$ ($\phi=2-\eta$; $1\le\phi<2$); % \vspace{-.2cm} \item[(2)] {\bf finite moments} $\langle |x|^q\rangle$, {\bf finite energy} $\langle v^2\rangle_{\rm eq}$ and a fluctuation-dissipation relation:\\ $\langle v^2\rangle_{\rm eq} = \langle \tau \rangle \langle D \rangle$; \vspace{-.2cm} \item[(3)] both an intermediate range with {\bf power-law decay} $1/|x|^{1+\alpha}$ and an asymptotic range with exponential cut-off in the PDF $p(x,t)$; \vspace{-.2cm} \item[(4)] space-time {\bf monoscaling} behavior: $x \sim t^\delta$ \\ ($\delta=\phi/2$); \vspace{-.2cm} \item[(5)] $\alpha$ and $\delta$ are independent scaling parameters; % \vspace{-.2cm} \item[(6)] a {\bf transition} from anomalous (intermediate time regime) to normal diffusion (long time regime). \end{itemize} This last point implies that the (mono-)scaling index $\delta$ is a function of time, being $\delta(t) \ne 1/2$ for $t$ much less than the maximum relaxation time $\tau_{\rm max}$, and $\delta(t) = 1/2$ for $t\gg \tau_{\rm max}$. \noindent Properties (1-3) are similar to those displayed by LW models, apart from the exponential cut-off of HEBP, which could be difficult to distinguish from the cut-off in the LW-PDF when dealing with experimental data. \noindent On the contrary, property (4) is not satisfied by LWs, which obey the {\bf biscaling} law ~\eqref{levy_multiscaling}, distinctly different from the {\bf monoscaling} behavior of HEBP. Also the crucial property (5) is not seen in LWs. \noindent Further, L\'evy Walks do not reproduce the right space-time scaling $\delta$ of HEBP neither in the PDF's central part, which is that part of the PDF more similar to a pure L\'evy stable density. In fact, in our HEBP model the space-time scaling $\delta$ and the power-law decay of the probability distribution $\alpha$ are {\bf independent parameters}, while they are not in LWs. LW is only driven by the parameter $\mu$ associated with the underlying trapping mechanism described by Eq. ~\eqref{eq:lw_time_distr}. The additional assumption of jumps coupled with WTs triggers the emergence of anomalous superdiffusion, thus directly affecting the space-time self-similarity index $\delta_{_{\rm LW}}$\footnote{ It is interesting to note that this is also true when the velocity PDF, even if characterized by a power-law decay, has finite variance, and finite higher-order moments, due to the cut-off (always present in real experimental data). }. Thus, when the LW-PDF is characterized by the decay: $p(x,t)\sim 1/|x|^{\mu}$, the scaling $\delta_{_{\rm LW}}$ is constrained, by the jump-WT coupling, to obey the relationship \cite{allegrini_pre02, zaburdaev_rmp15}: \begin{equation} \delta_{_{\rm LW}}=\frac{1}{\mu-1}=\frac{1}{\alpha_{_{\rm LW}}}\ . \label{lw_delta} \end{equation} where $\alpha_{_{\rm LW}} = \mu-1$ represents the L\'evy stability index. As known, this is well-established in the intermediate range where the LW-PDF is more similar to a pure L\'evy stable density $L^0_{\alpha_{_{\rm LW}}}$. It is important to notice that the above relationship among $\delta$ and $\alpha$ can be also satisfied by our HEBP model for particular parameter choices, i.e., given the experimental $\alpha$, for $\eta=2-2/\alpha$. \noindent Another important aspect worthy of discussion is the physical basis of the considered models. HEBP models directly follow from an heterogeneity assumption applied to a standard Gaussian process, whether the origin of heterogeneity is (in the medium or in the particle parameters). Thus, HEBP models, which are based on the same idea of the ggBM \cite{mura-phd-2008,pagnini_etal-ijsa-2012}, are derived from a physical background directly involving the idea of a complex heterogeneity and indeed we expect HEBP models to be more suitable to heterogeneous transport phenomena. Conversely, LWs should better fit phenomena where trapping plays a fundamental role. \noindent As already said, many authors have recently been focusing on position transport models with heterogeneous diffusivities, e.g., DDMs \cite{chubynsky_etal-prl-2014,chechkin_prx17,jain_jcs17,lanoiselee_jpamt18,sposini_njp18}. In some sense, HEBP models belong to the class of transport models with random diffusivity, i.e., HDMs. However, unlike other ones, the HEBP model here discussed explicitly describes the velocity dynamics, thus including the often neglected but crucial role of viscous relaxation time $\tau$. More precisely, we here refer to the relaxation time of the velocity, a physical parameter whose relationship with medium/fluid properties is well-established. As seen above, the role of heterogeneity in the relaxation time $\tau$ is taken into account and modeled through a population with inverse power-law distribution $g(\tau)$. Another important aspect is that the here proposed HEBP model is derived from a standard friction-diffusion process having finite energy and satisfies a fluctuation-dissipation theorem. \noindent From the above discussion, we can finally suggest a possible statistical recipe to distinguish the best modeling approach among LWs and HEBP models starting from a set of experimental transport data. This is of great interest when the underlying mechanism, heterogeneity or trapping, driving anomalous diffusion is not yet clear. If the experimental PDF displays a power-law decay, it is possible to apply a best fit procedure to get $p(x,t)\sim 1/|x|^{1+\alpha_{_{\rm exp}}}$ for some $\alpha_{_{\rm exp}}<2$, where 'exp' stands for {\it experimental}. Then, fractional moments and the function $\lambda(q) = q H(q)$ can be computed. DEA can be applied to compute the self-similarity index and let us assume that a well-defined $\delta$ exists. Then, we have the indices $\alpha_{_{\rm exp}}$, $\delta$ and $\lambda(q)$. If the data are monoscaling, then HEBP could be a good candidate and we have two parameters to independently fit the PDF scaling ($\alpha$) and the moment scaling ($H(q)=\phi/2=$ constant). The exponential cut-off could be another clue towards HEBP \cite{lanoiselee_jpamt18}, but actually an exponential cut-off is usually seen in the tail of experimental PDFs due to lack of statistics and/or presence of instrumental/environmental noise. If data are multiscaling, then there are two possibilities: (i) biscaling law ~\eqref{levy_multiscaling} is satisfied for some $\mu=1+\alpha_{_{\rm exp}}$ and LW modeling approach is the most reasonable one; (ii) other multiscaling laws, biscaling or not, emerge and neither LW nor HEBP cannot be applied. \section*{Acknowledgments} \noindent \small This research is supported by the Basque Government through the BERC 2014--2017 and BERC 2018--2021 programs, and by the Spanish Ministry of Economy and Competitiveness MINECO through BCAM Severo Ochoa excellence accreditation SEV--2013--0323 and through project MTM2016--76016--R ''MIP''. VS acknowledges BCAM, Bilbao, for the financial support to her internship research period during which she developed her Master Thesis research useful for her Master degree in Physics at University of Bologna, and SV acknowledges the University of Bologna for the financial support through the ''Marco Polo Programme'' for her PhD research period abroad spent at BCAM, Bilbao, useful for her PhD degree in Physics at University of Bologna. PP acknowledges financial support from Bizkaia Talent and European Commission through COFUND scheme, 2015 Financial Aid Program for Researchers, project number AYD--000--252 hosted at BCAM, Bilbao. Authors would also like to acknowledge the usage of the cluster computational facilities of the BCAM--Basque Centre for Applied mathematics of Bilbao. \normalsize
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Precision tests at low energies, such as flavour physics in the quark and lepton sectors, as well as precision tests at the electroweak (EW) scale, such as $Z$ pole observables, are important probes of physics beyond the Standard Model (SM). The absence of a direct discovery of any particle beyond the SM spectrum at the LHC makes these indirect tests all the more important. Effective field theories (EFTs) are a standard tool to describe new physics (NP) effects in these precision observables. For low-energy quark flavour physics, their use is mandatory to separate the long-distance QCD dynamics from the short-distance NP of interest. But also for precision tests at electroweak-scale energies, EFTs have become increasingly popular, given the apparent scale separation between the EW scale and the scale of the NP. With mild assumptions, namely the absence of non-SM states below or around the EW scale as well as a linear realization of EW symmetry breaking, NP effects in precision observables can be described in the context of the Standard Model effective field theory (SMEFT), that extends the SM by the full set of dimension-6 operators allowed by the SM gauge symmetry \cite{Buchmuller:1985jz,Grzadkowski:2010es} (see \cite{David:2015waa,deFlorian:2016spz,Brivio:2017vri} for reviews). While this description facilitates model-independent investigations of NP effects in precision observables, a perhaps even more important virtue is that SMEFT can serve as an intermediate step between dynamical models in the UV and the low-energy precision phenomenology. Computing all the relevant precision observables in a given UV model and comparing the predictions to experiment is a formidable task. Employing SMEFT, this task can be separated in two: computing the SMEFT Wilson coefficients at the UV scale is model-dependent but straightforward, while computing all the precision observables in terms of these Wilson coefficients and comparing them to experiment is challenging but, importantly, model-independent. Eventually, to test a UV model given the plethora of existing precision measurements, we require a likelihood function that quantifies the agreement of all existing precision observable measurements to the model's predictions. This likelihood function $L$ is a function of the model's Lagrangian parameters $\vec{\lambda}$ and certain model-independent phenomenological parameters $\vec{\theta}$ (form factors, decay constants, etc.), $L=L(\vec{\lambda}, \vec{\theta})$. Using SMEFT to describe NP effects in precision observables model-independently in terms of the Wilson coefficients $\vec{C}$, the likelihood can be reexpressed as \begin{equation} L(\vec{\lambda}, \vec{\theta}) = L_\text{SMEFT}(\vec C(\vec{\lambda}), \vec{\theta})\,, \end{equation} where $L_\text{SMEFT}(\vec C, \vec{\theta})$ is the {\em global SMEFT likelihood} in the space of Wilson coefficients and phenomenological parameters. Having this function at hand, the problem of testing any UV model is reduced to computing the SMEFT Wilson coefficients $\vec{C}(\vec{\lambda})$ (and suitably accounting for the uncertainties in the parameters $\vec{\theta}$). A major challenge in obtaining this global likelihood function is that the SMEFT renormalization group evolution from the NP scale down to the EW scale does not preserve flavour, such that the likelihood in the space of SMEFT Wilson coefficients does not factorize into sectors with definite flavour quantum numbers. This is in contrast to the weak effective theory (WET) below the EW scale, that is frequently employed in low-energy flavour physics, where QCD and QED renormalization is flavour-blind. Thanks to the calculation of the complete one-loop SMEFT RGEs \cite{Jenkins:2013zja, Jenkins:2013wua, Alonso:2013hga, Celis:2017hod}, the complete matching from SMEFT onto WET \cite{Aebischer:2015fzz, Jenkins:2017jig} and the complete one-loop QCD and QED RGEs within WET \cite{Aebischer:2017gaw, Jenkins:2017dyc} that have been incorporated in the public code \texttt{wilson} \cite{Aebischer:2018bkb} leveraging the open Wilson coefficient exchange format (WCxf) \cite{Aebischer:2017ugx}, the relation between high-scale SMEFT Wilson coefficients and the coefficients in the appropriate low-energy EFT can now be automatized. Having obtained the Wilson coefficients at the appropriate scales, the precision observables must be calculated and compared to the experimental measurements to obtain the likelihood function. This programme has been carried out in the literature for various subsets of observables or Wilson coefficients, e.g. \begin{itemize} \item simultaneous fits to Higgs and EW precision data have been performed by many groups, see \cite{deFlorian:2016spz} and references therein, \item a fit to $Z$ pole observables not assuming lepton flavour universality (LFU) \cite{Efrati:2015eaa}, \item a likelihood incorporating low-energy precision measurements (but not flavour-changing neutral currents) \cite{Falkowski:2017pss}, \item fits of semi-leptonic operators to beta decays \cite{Alioli:2017ces,Gonzalez-Alonso:2018omy}, \item fits of triple gauge boson coupling operators \cite{Falkowski:2015jaa,Bobeth:2015zqa}, \item a fit of four-lepton operators \cite{Falkowski:2015krw}. \end{itemize} So far, no global likelihood has been constructed however that contains the observables relevant for the anomalies in $B$ physics or the numerous measurements of flavour-changing neutral current (FCNC) processes that are in principle sensitive to very high scales. The main aim of the present work is thus to provide a likelihood function that also takes into account a large number of observables in flavour physics, with a focus on the ones that are relevant in models motivated by the anomalies recently observed in $B$ decays based on the $b\to c\tau\nu$ and $b\to s\mu\mu$ transition. Our results build on the open source code \texttt{flavio} \cite{flavio}, that computes a large number of observables in flavour physics as a function of dimension-6 Wilson coefficients beyond the SM and contains a database of relevant experimental measurements. To incorporate constraints beyond quark flavour physics, we have also implemented EW precision tests, lepton flavour violation, and various other precision observables in \texttt{flavio}. By using open source software throughout, we hope our results can serve as the basis for a more and more {\em global} SMEFT likelihood emerging as a community effort. The rest of this paper is organized as follows. In section~\ref{sec:setup}, we describe the statistical formalism, in section~\ref{sec:obs}, we list the observables included in our likelihood function, in section~\ref{sec:pheno}, we discuss several example applications relevant for the $B$ physics anomalies, in section~\ref{sec:py}, we describe the usage of the Python package provided by us, and finally we summarize in section~\ref{sec:concl}. \section{Formalism}\label{sec:setup} Given a set of independent precision measurements $\vec{O}_\text{exp}$ and the corresponding theory predictions $\vec{O}_\text{th}$ in the presence of NP described model-independently by dimension-6 SMEFT Wilson coefficients, the general form of the SMEFT likelihood reads \begin{equation} L_\text{SMEFT}(\vec C, \vec{\theta}) = \prod_i L_\text{exp}^i\left(\vec{O}^\text{exp}, \vec{O}^\text{th}\left(\vec C, \vec{\theta}\right)\right) \times L_\theta(\vec{\theta})\,, \label{eq:lsmeft} \end{equation} where $L_\text{exp}^i$ are the distribution functions of the experimental measurements and $L_\theta(\vec{\theta})$ are experimental or theoretical constraints on the theory parameters $\theta$. Since we are interested in the likelihood as a function of the Wilson coefficients, all parameters $\theta$ are nuisance parameters that have to be removed by an appropriate procedure. In a Bayesian approach, $L_\theta(\vec{\theta})$ would be a prior probability distribution for the theory parameters and the appropriate procedure would be to obtain the posterior probability by means of Bayes' theorem, integrating over the $\theta$ directions. In a frequentist approach\footnote{% See \cite{Charles:2016qtt} for a comprehensive discussion of the treatment of theory uncertainties in a frequentist approach, also discussing methods that are not captured by \eqref{eq:lsmeft}. }, one would instead determine the profile likelihood, i.e.\ for a given Wilson coefficient point $\vec{C}$ maximize the likelihood with respect to all the $\vec{\theta}$. While both the Bayesian and the frequentist treatment are valid approaches, they both have the drawback that they are computationally very expensive for a large number of parameters. Even if one were to succeed in deriving the Bayesian posterior distribution or the profile likelihood in the entire space of interest, the procedure would have to be repeated anytime the experimental data changes, which in practice happens frequently given the large number of relevant constraints. Due to these challenges, here we opt for a more approximate, but much faster approach. We split all the observables of interest into two categories, \begin{enumerate} \item Observables where the theoretical uncertainty can be neglected at present compared to the experimental uncertainty. \item Observables where both the theoretical and experimental uncertainty can be approximated as (possibly multivariate) Gaussian and where the theoretical uncertainty is expected to be weakly dependent on $\vec C$ and $\vec \theta$. \end{enumerate} We then write the nuisance-free likelihood \begin{equation} L_\text{SMEFT}(\vec C) = \prod_{i\,\in\,1.} L_{\text{exp}}\left(\vec{O}^\text{exp}_i, \vec{O}^\text{th}_i\left(\vec C, \vec{\theta}_0\right)\right) \prod_{i\,\in\,2.} \widetilde{L}_{\text{exp}}\left(\vec{O}^\text{exp}_i, \vec{O}^\text{th}_i\left(\vec C, \vec{\theta}_0\right)\right) . \label{eq:nfSL} \end{equation} The first product contains the full experimental likelihood for a fixed value of the theory parameters $\theta_0$, effectively ignoring theoretical uncertainties. The second product contains a modified experimental likelihood. Assuming the measurements of $\vec{O}_i^\text{exp}$ to be normally distributed with the covariance matrix $C_\text{exp}$ and the theory predictions to be normally distributed as well with covariance $C_\text{th}$, $\widetilde{L}_\text{exp}$ has the form \begin{equation} -2\ln\widetilde{L}_{\text{exp}} = \vec{x}^T (C_\text{exp} + C_\text{th})^{-1} \vec x \,, \end{equation} where \begin{equation} \vec x = \vec{O}^\text{exp}_i - \vec{O}^\text{th}_i \,. \end{equation} Effectively, the theoretical uncertainties stemming from the uncertainties in the theory parameters $\theta$ are ``integrated out'' and treated as additional experimental uncertainties. These two different approaches of getting rid of nuisance parameters are frequently used in phenomenological analyses. Neglecting theory uncertainties is well-known to be a good approximation in EFT fits to electroweak precision tests (see e.g.\ \cite{Efrati:2015eaa,Falkowski:2017pss}). The procedure of ``integrating out'' nuisance parameters has been applied to EFT fits of rare $B$ decays first in \cite{Altmannshofer:2014rta} and subsequently also applied elsewhere (see e.g.~\cite{Descotes-Genon:2015uva}). While the nuisance-free likelihood is a powerful tool for fast exploration of the parameter space of SMEFT or any UV theory matched to it, we stress that there are observables where none of the two above assumptions are satisfied and which thus cannot be taken into account in our approach, for instance: \begin{itemize} \item We treat the four parameters of the CKM matrix as nuisance parameters, but these parameters are determined from tree-level processes that can be affected by dimension-6 SMEFT contributions themselves, e.g. $B$ decays based on the $b\to c\ell\nu$ \cite{Jung:2018lfu} or $b\to u\ell\nu$ transition, charged-current kaon decays \cite{Gonzalez-Alonso:2016etj}, or the CKM angle $\gamma$ \cite{Brod:2014bfa}. Thus to take these processes into account, one would have to treat the CKM parameters as floating nuisance parameters. We do however take into account tests of lepton flavour universality (LFU) in these processes where the CKM elements drop out. \item The electric dipole moments (EDMs) of the neutron or of diamagnetic atoms\footnote{The uncertainties of EDMs of paramagnetic atoms are instead under control \cite{Dekens:2018bci} and could be treated within our framework. We thank Jordy de Vries for bringing this point to our attention.} are afflicted by sizable hadronic uncertainties, but are negligibly small in the SM. Thus the uncertainty can neither be neglected nor assumed to be SM-like and the poorly known matrix elements would have to be treated as proper nuisance parameters. \end{itemize} We will comment on partial remedies for these limitations in section~\ref{sec:concl}. \section{Observables}\label{sec:obs} Having defined the general form of the global, nuisance-free SMEFT likelihood \eqref{eq:nfSL} and the two different options for treating theory uncertainties, we now discuss the precision observables that are currently included in our likelihood. Generally, the observables we consider can be separated into two classes: \begin{itemize} \item Electroweak precision observables (EWPOs) on the $Z$ or $W$ pole. In this case we evolve the SMEFT Wilson coefficients from the input scale to the $Z$ mass and then compute the NP contributions directly in terms of them. \item Low-energy precision observables. In this case we match the SMEFT Wilson coefficients onto the weak effective theory (WET) where the electroweak gauge bosons, the Higgs boson and the top quark have been integrated out. We then run the WET Wilson coefficients down to the scale appropriate for the process. For decays of particles without $b$ flavour, we match to the appropriate 4- or 3-flavour effective theories. \end{itemize} The Python package to be described in section~\ref{sec:py} also allows to access a pure WET likelihood. In this case the constraints in the first category are ignored. The complete tree-level matching from SMEFT onto WET \cite{Aebischer:2015fzz,Jenkins:2017jig} as well as the one-loop running in SMEFT \cite{Alonso:2013hga,Jenkins:2013zja,Jenkins:2013wua} and WET \cite{Aebischer:2017gaw,Jenkins:2017dyc} is done with the \texttt{wilson} package \cite{Aebischer:2018bkb}. In appendix~\ref{app:obstable}, we list all the observables along with their experimental measurements and SM predictions. \subsection{Electroweak precision observables}\label{sec:ewpo} To consistently include EWPOs, we follow \cite{Brivio:2017vri} by parameterizing the shifts in SM parameters and couplings as linear functions of SMEFT Wilson coefficients. Terms quadratic in the dimension-6 Wilson coefficients are of the same order in the EFT power counting as the interference of the SM amplitude with dimension-8 operators and thus should be dropped. We use the $\lbrace \hat \alpha_e, \hat G_F, \hat m_Z \rbrace$ input parameter scheme. We include the full set of $Z$ pole pseudo-observables measured at LEP-I without assuming lepton flavour universality. Following \cite{Efrati:2015eaa} we also include $W$ branching ratios, the $W$ mass (cf.~\cite{Bjorn:2016zlr}), and the $W$ width. As a non-trivial cross-check, we have confirmed that the electroweak part of our likelihood exhibits the reparametrization invariance pointed out in \cite{Brivio:2017bnu}. Finally, we include LEP and LHC constraints on LFV $Z$ decays. The total number of observables in this sector is 25. For all these observables, we neglected the theoretical uncertainties, which are in all cases much smaller than the experimental uncertainties. \subsection{Rare $B$ decays} Measurements of rare $B$ decays based on the $b\to s$ transition are of particular interest as several deviations from SM expectations have been observed there, most notably the anomalies in $\mu$/$e$ universality tests in $B\to K^{(*)}\ell^+\ell^-$ \cite{Aaij:2014ora,Aaij:2017vbb} and the anomalies in angular observables in $B\to K^*\mu^+\mu^-$ \cite{Aaij:2015oid}. We include the following observables. \begin{itemize} \item All relevant CP-averaged observables in inclusive and exclusive semi-leptonic $b\to s\mu\mu$ decays that have also been included in the global fit \cite{Altmannshofer:2017yso}. In this case the theoretical uncertainties are sizable and strongly correlated and we use the second approach described in section~\ref{sec:setup}. \item T-odd angular CP asymmetries in $B\to K^*\mu^+\mu^-$. These are tiny in the SM and we neglect the theory uncertainty. \item High-$q^2$ branching ratios and angular observables of $\Lambda_b\to \Lambda\mu^+\mu^-$ \cite{Boer:2014kda,Detmold:2016pkz}. \item The branching ratios of the leptonic decays $B^0\to\mu^+\mu^-$ and $B_s\to\mu^+\mu^-$ \cite{DeBruyn:2012wj,DeBruyn:2012wk}. \item The $\mu$/$e$ universality tests $R_K$ and $R_{K^*}$ following \cite{Altmannshofer:2017fio}. Here we neglect the tiny theory uncertainties \cite{Bordone:2016gaq}. \item The branching ratio of the inclusive decay $B\to X_se^+e^-$ \cite{Huber:2015sra}. \item All observables in inclusive and exclusive radiative $b\to s\gamma$ decays \cite{Misiak:2015xwa} (including $B\to K^*e^+e^-$ at very low $q^2$) that have also been included in the global fit in \cite{Paul:2016urs}. \item Bounds on the exclusive decays $B\to K^{(*)}\nu\bar\nu$ \cite{Buras:2014fpa}. Even though these have sizable uncertainties in the SM, they can be neglected compared to the experimental precision (which in turn allows us to take into account the non-Gaussian form of the likelihoods). A sum over the unobserved neutrino flavours is performed, properly accounting for models where wrong-flavour neutrino modes can contribute. \item Bounds on tauonic $B$ decays: $B\to K\tau^+\tau^-$, $B^0\to \tau^+\tau^-$, $B_s\to \tau^+\tau^-$. We neglect theoretical uncertainties. \item Bounds on LFV $B$ decays: $B\to (\pi, K, K^*)\ell\ell'$ \cite{Becirevic:2016zri} for all cases where bounds exist. We neglect theoretical uncertainties. \end{itemize} In contrast to EWPOs, in flavour physics there is no formal need to drop terms quadratic in the dimension-6 SMEFT Wilson coefficients. For processes that are forbidden in the SM, such as LFV decays, this is obvious since the leading contribution is the squared dimension-6 amplitude and the dimension-8 contribution is relatively suppressed by four powers of the NP scale. But also for processes that are not forbidden but suppressed by a mechanism that does not have to hold beyond the SM, the dimension-8 contributions are subleading. Schematically, the amplitude reads $\epsilon A_\text{SM} + v^2/\Lambda^2 A_6 + v^4/\Lambda^2 A_8+\ldots$, where $\epsilon$ is a SM suppression factor (e.g.\ GIM or CKM suppression) and $A_{6,8}$ the dimension-6 and 8 contributions without the dimensional suppression factors, respectively. Obviously, in the squared amplitude the $A_\text{SM} A_8^*$ interference term is suppressed by $\epsilon$ compared to the $|A_6|^2$ term, so it is consistent to only keep the latter. \subsection{Semi-leptonic $B$ and $K$ decays} As discussed at the end of section~\ref{sec:setup}, we cannot use the semi-leptonic charged-current $B$ and $K$ decays with light leptons in our approach since we do not allow the CKM parameters to float. Nevertheless, we can include tests of LFU in $b\to q\ell\nu$ decays where the CKM elements drop out. We include: \begin{itemize} \item The ratio of $K^+\to e^+\nu$ and $K^+\to \mu^+\nu$, \item The branching ratios\footnote{ While these observables are strictly speaking not independent of the CKM element $V_{ub}$, the much larger experimental uncertainty compared to $B\to\pi\ell\nu$ means that they are only relevant as constraints on large violations of LFU or large scalar operators, which allows us to take them into account nevertheless. Alternatively, these observables could be normalized explicitly to $B\to\pi\ell\nu$, but we refrain from doing so for simplicity. } of $B\to\pi\tau\nu$, $B^+\to\tau^+\nu$, $B^+\to\mu^+\nu$, and $B^+\to e^+\nu$, \item The ratios $R_{D^{(*)}} = \text{BR}{(B\to D^{(*)}\tau\nu)}/{\text{BR}(B\to D^{(*)}\ell\nu)}$, where the deviations from SM expectations are observed, \item The $q^2$ distributions of $B\to D^{(*)}\tau\nu$ from Belle \cite{Huschle:2015rga} and BaBar \cite{Lees:2013uzd}. \end{itemize} For the latter, we use the results of\cite{Celis:2016azn}, where these are given for an arbitrary normalization. For our purpose we normalize these values in each bin by the integrated rate, in order to leave $R_{D^{(*)}}$ as independent observables. For the form factors of the $B\to D$ and $B\to D^*$ transition, we use the results of \cite{Jung:2018lfu}, combining results from lattice QCD, light-cone sum rules, and heavy quark effective theory but not using any experimental data on $b\to c\ell\nu$ decays to determine the form factors. This leads to a larger SM uncertainty (and also lower central values) for $R_D$ and $R_{D^*}$. Even though we require $b\to c\ell\nu$ with $\ell=e,\mu$ to be mostly SM-like for consistency as discussed in section~\ref{sec:setup}, we prefer to use the form factors from pure theory predictions to facilitate a future treatment of the CKM elements as nuisance parameters (see section~\ref{sec:concl}). \subsection{Meson-antimeson mixing} We include the following observables related to meson-antimeson mixing in the $K^0$, $B^0$, $B_s$, and $D^0$ systems: \begin{itemize} \item The $B^0$ and $B_s$ mass differences $\Delta M_d$ and $\Delta M_s$, \item The mixing-induced CP asymmetries $S_{\psi K_S}$ and $S_{\psi\phi}$ (neglecting contributions to the penguin amplitude from four-quark operators), \item The CP-violating parameter $\epsilon_K$ in the $K^0$ system, \item The CP-violating parameter $x_{12}^\text{Im}$ in the $D^0$ system defined as in \cite{Aebischer:2018csl}. \end{itemize} We include the SM uncertainties as described in section~\ref{sec:setup}. \subsection{FCNC $K$ decays} We include the following observables in flavour-changing neutral current kaon decays. \begin{itemize} \item The branching ratios of $K^+\to\pi^+\nu\bar\nu$ and $K_L\to\pi^0\nu\bar\nu$. \item The branching ratios of $K_{L,S}\to\ell^+\ell^-$ \cite{Chobanova:2017rkj}. \item The bound on the LFV decay $K_L\to e^\pm\mu^\mp$. \item The parameter $\varepsilon'/\varepsilon$ measuring the ratio of direct to indirect CP violation in $K_L\to\pi\pi$ \cite{Aebischer:2018quc,Aebischer:2018csl,Blum:2015ywa,Bai:2015nea,Aebischer:2018rrz}. \end{itemize} For $\varepsilon'/\varepsilon$, using our approach described in section~\ref{sec:setup} to assume the uncertainties to be SM-like also beyond the SM is borderline since beyond the SM, other matrix elements become relevant, some of them not known from lattice QCD \cite{Aebischer:2018quc}. We stress however that we do not make use of the partial cancellations of matrix element uncertainties between the real and imaginary parts of the SM amplitudes \cite{Buras:2015yba}, so our SM uncertainty is conservative in this respect. Moreover, visible NP effects in $\varepsilon'/\varepsilon$ typically come from operators contributing to the $\Delta I=3/2$ amplitude, where the matrix elements are known to much higher precision from lattice QCD \cite{Blum:2015ywa}, such that also in these cases our approach can be considered conservative. \subsection{Tau and muon decays} We include the following LFV decays of taus and muons: \begin{itemize} \item $\mu\to 3e$ \cite{Brignole:2004ah}, $\tau\to 3\mu$ \cite{Kuno:1999jp,Brignole:2004ah}, $\tau^-\to \mu^-e^+e^-$ \cite{Brignole:2004ah}, \item $\tau^-\to e^-\mu^+e^-$, $\tau^-\to\mu^-e^+\mu^-$, \item $\mu\to e\gamma$, $\tau\to \ell\gamma$ \cite{Brignole:2004ah}, \item $\tau\to \rho \ell$, $\tau\to \phi \ell$, \end{itemize} where $\ell=e$ or $\mu$. Theoretical uncertainties can be neglected. For $\tau\to \rho \ell$ and $\tau\to \phi \ell$, we have calculated the full WET expressions of the decay widths including contributions from semi-leptonic vector and tensor operators as well as leptonic dipole operators. In all expressions, we have kept the full dependence on the mass of the light lepton $\ell$. The results, which to our knowledge have not been presented in this generality in the literature before, are given in appendix~\ref{app:tau_lV}. As expected, considering only the dipole contributions, $\tau\to \rho \ell$ and $\tau\to \phi \ell$ are not competitive with $\tau\to \ell\gamma$. Interestingly, the semi-leptonic tensor operators are generated in the tree-level SMEFT matching only for up-type quarks (semi-leptonic down-type tensor operators violate hypercharge). This means that in a SMEFT scenario and neglecting loop effects, tensor operators do contribute to $\tau\to \rho \ell$ but do not contribute to $\tau\to \phi \ell$. In addition we include the charged-current tau decays \begin{itemize} \item $\tau\to \ell\nu\nu$ \cite{Pich:2013lsa}, \end{itemize} which represent important tests of lepton flavour universality (LFU). Since these are present in the SM and measured precisely, theory uncertainties cannot be neglected and we include them as described in section~\ref{sec:setup}. A sum over unobserved neutrino flavours is performed, properly accounting for models where wrong-flavour neutrino modes can contribute. Note that the branching ratio of $\mu\to e\nu\nu$ is not a constraint in our likelihood as it is used to define the input parameter $G_F$ via the muon lifetime. Potential NP contributions to this decay enter the EWPOs of section~\ref{sec:ewpo} via effective shifts of the SM input parameters. \subsection{Low-energy precision observables} Finally, we include the following flavour-blind low-energy observables: \begin{itemize} \item the anomalous magnetic moments of the electron, muon, and tau, $a_\ell = (g_\ell-2)/2$, \item the neutrino trident production cross section \cite{Altmannshofer:2014pba}. \end{itemize} \section{Applications}\label{sec:pheno} In this section, we demonstrate the usefulness of the global likelihood with a few example applications motivated in particular by the $B$ anomalies. While we restrict ourselves to simplistic two-parameter scenarios for reasons of presentation, we stress that the power of the {\em global} likelihood is that it can be used to test models {\em beyond} such simplified scenarios. \subsection{Electroweak precision analyses} A non-trivial check of our implementation of EWPOs discussed in sec.~\ref{sec:ewpo} is to compare the pulls between the SM prediction and measurement for individual observables to sophisticated EW fits as performed e.g.\ by the Gfitter collaboration \cite{Haller:2018nnx}. We show theses pulls in fig.~\ref{fig:ew} left and observe good agreement with the literature. The largest pull is in the forward-backward asymmetry in $Z\to b\bar b$. \begin{figure} \centering \includegraphics[width=4cm]{graphics/ewpulls}% \includegraphics[width=0.5\textwidth]{graphics/Oblique} \caption{Left: pulls for individual $Z$- and $W$-pole observables for the SM point. Right: 1--3$\sigma$ likelihood contours in the plane of two Warsaw-basis Wilson coefficients that are proportional to the oblique parameters $S$ and $T$, assuming all other coefficients to vanish.} \label{fig:ew} \end{figure} Another well-known plot is the EWPO constraint on the oblique parameters $S$ and $T$, which are proportional to the SMEFT Warsaw basis Wilson coefficients $C_{\phi WB}$ and $C_{\phi D}$, respectively (see e.g.~\cite{Wells:2015uba}). Their corresponding operators read: \begin{equation} O_{\phi WB}=\phi^\dagger \tau^I \phi W_{\mu \nu}^I B^{\mu \nu}\,,\quad O_{\phi D}= (\phi^\dagger D^\mu \phi)^* (\phi^\dagger D_\mu \phi)\,. \end{equation} In fig.~\ref{fig:ew} right, we show likelihood contours in the plane of these coefficients at the scale $m_Z$, in good agreement with results in the literature \cite{Haller:2018nnx,Ellis:2018gqa}. \subsection{Model-independent analysis of $b\to s\ell\ell$ transitions} Model-independent fits of the WET Wilson coefficients $C_9^{bs\mu\mu}$ and $C_{10}^{bs\mu\mu}$ of the operators\footnote{Throughout, we use the WCxf convention \cite{Aebischer:2017ugx} of writing the effective Lagrangian as $\mathcal L_\text{eff} = -\mathcal H_\text{eff} =\sum_{O_i= O_i^\dagger} C_i \, O_i + \sum_{O_i\neq O_i^\dagger} \left( C_i \, O_i + C^*_i \, O^\dagger_i\right)$ and include normalization factors directly in the definition of the operators.} \begin{align} O_{9}^{bs\mu\mu}&= \frac{4G_F}{\sqrt{2}}V_{tb}V_{ts}^*\frac{e^2}{16\pi^2} (\bar s_L \gamma^\rho b_L) (\bar \mu \gamma_\rho \mu) \,,& O_{10}^{bs\mu\mu}&= \frac{4G_F}{\sqrt{2}}V_{tb}V_{ts}^*\frac{e^2}{16\pi^2} (\bar s_L \gamma^\rho b_L) (\bar \mu \gamma_\rho \gamma_5 \mu)\,, \end{align} play an important role in the NP interpretation of the $B\to K^*\mu^+\mu^-$, $R_K$, and $R_{K^*}$ anomalies and have been performed by several groups (for recent examples see \cite{Altmannshofer:2017fio, Altmannshofer:2017yso, Ciuchini:2017mik, Hurth:2017hxg, Capdevila:2017bsm}). Since all relevant $b\to s\ell\ell$ observables are part of our global likelihood, we can plot the well-known likelihood contour plots in the space of two WET Wilson coefficients as a two-dimensional slice of the global likelihood. In fig.~\ref{fig:bsmumu_C9-C10} left we plot contours in the $C_9^{bs\mu\mu}$-$C_{10}^{bs\mu\mu}$ plane, assuming them to be real and setting all other Wilson coefficients to zero. The result is equivalent to \cite{Altmannshofer:2017fio, Altmannshofer:2017yso} apart from the addition of the $\Lambda_b\to\Lambda\mu^+\mu^-$ decay. In fig.~\ref{fig:bsmumu_C9-C10} right, we show the analogous plot for the SMEFT Wilson coefficients $[C_{lq}^{(1)}]_{2223}$ and $[C_{qe}]_{2322}$ of the operators \begin{equation} [O_{lq}^{(1)}]_{2223}= (\bar \ell_2 \gamma^\mu \ell_2) (\bar q_2 \gamma_\mu q_3)\,,\quad [O_{qe}]_{2322}= (\bar q_2 \gamma^\mu q_3) (\bar e_2 \gamma_\mu e_2)\,, \end{equation} that match at tree level onto $C_9^{bs\mu\mu}$ and $C_{10}^{bs\mu\mu}$ (cf.~\cite{Celis:2017doq}). While the plot of the real parts of $C_9^{bs\mu\mu}$ and $C_{10}^{bs\mu\mu}$ is well known, the global likelihood allows to explore arbitrary scenarios with real or complex contributions to several Wilson coefficients. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{graphics/bsmumu_C9-C10}% \includegraphics[width=0.5\textwidth]{graphics/bsmumu_Clq1_Cqe} \caption{Likelihood contours from $b\to s\mu\mu$ transitions and from $R_K$ and $R_{K^*}$ in the space of the two WET Wilson coefficients $C_9^{bs\mu\mu}$ and $C_{10}^{bs\mu\mu}$ at the $b$ quark scale (left) and the two SMEFT Wilson coefficients $[C_{lq}^{(1)}]_{2223}$ and $[C_{qe}]_{2322}$ at the scale 10~TeV. All other Wilson coefficients are assumed to vanish.} \label{fig:bsmumu_C9-C10} \end{figure} \subsection{Model-independent analysis of $b\to c\tau\nu$ transitions}\label{sec:bctaunu} Model-independent EFT analyses of $b\to c\tau\nu$ transitions relevant for the $R_D$ and $R_{D^*}$ anomalies have been performed within the WET \cite{Freytsis:2015qca,Celis:2016azn,Bhattacharya:2018kig,Celis:2012dk} and SMEFT \cite{Feruglio:2018fxo,Hu:2018veh}. Within simple two-coefficient scenarios, an interesting case is the one with new physics in the two WET Wilson coefficients $C_{S_L}^{bc\tau\nu_\tau}$ and $C_{S_R}^{bc\tau\nu_\tau}$. The corresponding operators are defined by \begin{align} O_{S_L}^{bc\tau\nu_\tau}&= -\frac{4G_F}{\sqrt{2}}V_{cb} (\bar c_R b_L) (\bar \tau_R \nu_{\tau L})\,, & O_{S_R}^{bc\tau\nu_\tau} &= -\frac{4G_F}{\sqrt{2}}V_{cb} (\bar c_L b_R) (\bar \tau_R \nu_{\tau L})\,. \end{align} The constraint from $B_c\to\tau\nu$ \cite{Li:2016vvp, Akeroyd:2017mhr} allows a solution to the $R_D$ anomaly only for $C_{S_L}^{bc\tau\nu_\tau}\approx C_{S_R}^{bc\tau\nu_\tau}$ and precludes a solution of the $R_{D^*}$ anomaly \cite{Alonso:2016oyd}. Additional disjoint solutions in the 2D Wilson coefficient space are excluded by the $B\to D\tau\nu$ differential distributions \cite{Celis:2016azn}. Both effects are visible in figure~\ref{fig:bctaunu_CSL-CSR} left. The preferred region is only improved slightly more than $2\sigma$ compared to the SM, signaling that the $R_D$ and $R_{D^*}$ anomalies, that have a combined significance of around $4\sigma$, cannot be solved simultaneously. Even this less-than-perfect solution turns out to be very difficult to realize in SMEFT. In fact, the immediate choice for SMEFT Wilson coefficients matching onto $C_{S_L}^{bc\tau\nu_\tau}$ and $C_{S_R}^{bc\tau\nu_\tau}$ would be $[C_{ledq}]_{3332}$ and $[C_{lequ}^{(1)}]_{3332}$, respectively, defined by the operators \begin{equation} [O_{ledq}]_{3332}= (\bar \ell_3 e_3) (\bar d_3 q_2)\,,\quad [O_{lequ}^{(1)}]_{3332}= (\bar \ell_3^j e_3)\epsilon_{jk} (\bar q_3^k u_2)\,. \end{equation} However, $[C_{ledq}]_{3332}$ also generates the FCNC decay $B_s\to\tau^+\tau^-$, and even though this has not been observed yet, the existing bound puts strong constraints. Choosing instead $[C_{ledq}]_{3333}$, the Wilson coefficient has to be larger by a factor $1/V_{cb}$ and leads to a sizable NP effect in the decay $B^+\to\tau\nu_\tau$ based on the $b\to u\tau\nu$ transition. These effects are demonstrated in fig.~\ref{fig:bctaunu_CSL-CSR} right, where the relation between the left- and right-handed coefficients that evades the $B_c\to\tau\nu$ constraint, \begin{equation} [C_{lequ}^{(1)}]_{3332}=[C_{ledq}]_{3332} + V_{cb} \, [C_{ledq}]_{3332} \,, \label{eq:evade} \end{equation} has been imposed. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{graphics/bctaunu_CSL_CSR}% \includegraphics[width=0.5\textwidth]{graphics/bctaunu_CSL_CSR_SMEFT}% \caption{Left: Likelihood contours in the space of the $b\to c\tau\nu_\tau$ WET scalar operators from $R_D$ and $R_{D^*}$ (blue), the combination of $B_c\to\tau\nu$, $B\to D^{(*)}\tau\nu$ differential rates and $F_L(B\to D^*\tau\nu)$ (green) and the global likelihood (red). Right: Likelihood contours for the SMEFT Wilson coefficients matching onto the WET scalar operators for two choices of flavour indices, imposing the relation between coefficients \eqref{eq:evade} that evades the $B_c\to\tau\nu$ constraint. The purple region is allowed by $B_s\to\tau^+\tau^-$ and $B^+\to\tau\nu$.} \label{fig:bctaunu_CSL-CSR} \end{figure} Another interesting two-coefficient scenario is the one with new physics in $C_{S_L}^{bc\tau\nu_\tau}$ and the tensor Wilson coefficient $C_{T}^{bc\tau\nu_\tau}$, that are generated with the relation $C_{S_L}^{bc\tau\nu_\tau}=-4 C_T^{bc\tau\nu_\tau}$ at the matching scale in the scalar singlet leptoquark $S_1$ scenario\footnote{See also \cite{Becirevic:2018afm,Dekens:2018bci} for the $R_2$ leptoquark scenario with complex couplings, which generates the Wilson coefficients with the relation $C_{S_L}^{bc\tau\nu_\tau}= 4 C_T^{bc\tau\nu_\tau}$.} \cite{Freytsis:2015qca}. In fig.~\ref{fig:bctaunu_CSL-T} left, we show the constraints on this scenario. A new finding, that to our knowledge has not been discussed in the literature before, is that a second, disjoint solution with large tensor Wilson coefficient is excluded by the new, preliminary Belle measurement of the longitudinal polarization fraction $F_L$ in $B\to D^*\tau\nu$ \cite{Adamczyk:2019wyt}, which is included in our likelihood and enters the green contour in the plot. The analogous scenario in SMEFT with the Wilson coefficients $[C_{lequ}^{(1)}]_{3332}$ and $[C_{lequ}^{(3)}]_{3332}$ does not suffer from the constraints of the scenario with $C_{S_R}$ as the operator involves a right-handed up-type quark, so is not related by $SU(2)_L$ rotations to any FCNC operator in the down-type sector. Here the Wilson coefficient $[C_{lequ}^{(3)}]_{3332}$ is defined by the operator \begin{equation} [O_{lequ}^{(3)}]_{3332}= (\bar \ell_3^j \sigma_{\mu\nu}e_3)\epsilon_{jk} (\bar q_3^k \sigma^{\mu\nu} u_2)\,. \end{equation} Consequently, the constraints are qualitatively similar as for WET, as shown in fig.~\ref{fig:bctaunu_CSL-T} right. Note that we have included the anomalous magnetic muon and tau in our likelihood, but do not find a relevant constraint for this simple scenario (cf.~\cite{Feruglio:2018fxo}). \begin{figure} \centering \includegraphics[width=0.5\textwidth]{graphics/bctaunu_CSL_CT}% \includegraphics[width=0.5\textwidth]{graphics/bctaunu_CSL_CT_SMEFT}% \caption{Left: Likelihood contours in the space of $b\to c\tau\nu_\tau$ WET scalar and tensor operator from $R_D$ and $R_{D^*}$ (blue), the combination of $B_c\to\tau\nu$, $B\to D^{(*)}\tau\nu$ differential rates and $F_L(B\to D^*\tau\nu)$ (green) and the global likelihood (red). Right: Likelihood contours for the SMEFT Wilson coefficients matching onto the WET scalar and tensor operators.} \label{fig:bctaunu_CSL-T} \end{figure} \subsection{$B$ anomalies from new physics in top} A new physics effect in the semi-leptonic SMEFT operator $[C_{lu}]_{2233}$ involving two left-handed muons and two right-handed top quarks was suggested in \cite{Celis:2017doq} as a solution to the neutral-current $B$ anomalies, as it induces a $b\to s\mu\mu$ transition at low-energies via electroweak renormalization effects. This effect can be realized in $Z'$ models \cite{Kamenik:2017tnu}. It was subsequently shown however that the effect is strongly constrained by the effects it induces in $Z\to\mu^+\mu^-$ \cite{Camargo-Molina:2018cwu}, which can be cancelled by a simultaneous contribution to $[C_{eu}]_{2233}$. The result obtained there can be reproduced with our likelihood by plotting likelihood contours in the plane of these two Wilson coefficients at 1~TeV, see fig.~\ref{fig:Ceu-Clu} left. Here the operators for the Wilson coefficients $[C_{eu}]_{2233}$ and $[C_{lu}]_{2233}$ are given by \begin{equation} [O_{eu}]_{2233}= (\bar e_2 \gamma_\mu e_2) (\bar u_3 \gamma^\mu u_3)\,,\quad [O_{lu}]_{2233} = (\bar \ell_2 \gamma_\mu \ell_2) (\bar u_3 \gamma^\mu u_3)\,. \end{equation} At $2\sigma$, the two constraints cannot be brought into agreement and the global likelihood is optimized at an intermediate point. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{graphics/Ceu-Clu}% \includegraphics[width=0.5\textwidth]{graphics/lq1_lq3_3323} \caption{Left: Likelihood contours in the plane of the SMEFT Wilson coefficients $[C_{lu}]_{2233}$ and $[C_{eu}]_{2233}$ at 1~TeV. Right: Likelihood contours in the plane of the SMEFT Wilson coefficients $[C_{lq}^{(1)}]_{3323}$ and $[C_{lq}^{(3)}]_{3323}$ at 1~TeV.} \label{fig:lq1_lq3} \label{fig:Ceu-Clu} \end{figure} \subsection{Tauonic vector operators for charged-current anomalies} The SMEFT operator $[C_{lq}^{(3)}]_{3323}$ can interfere coherently with the SM contribution to the $b\to c\tau\nu_\tau$ process, does not suffer from any CKM suppression and is thus a good candidate to explain the $R_D$ and $R_{D^*}$ anomalies. However, a strong constraint is given by the limits on the $B\to K^{(*)}\nu\bar\nu$ decays, which can receive contributions from tau neutrinos \cite{Buras:2014fpa}. At tree level and in the absence of RG effects, this constraint can be avoided in models that predict $[C_{lq}^{(3)}]_{3323} = [C_{lq}^{(1)}]_{3323}$. The modification of this constrain in the presence of SMEFT RG effects above the EW scale can be seen in fig.~\ref{fig:lq1_lq3} right. The Wilson coefficients $[C_{lq}^{(1)}]_{3323}$ and $[C_{lq}^{(3)}]_{3323}$ are defined by the operators \begin{equation} [O_{lq}^{(1)}]_{3323}= (\bar \ell_3 \gamma_\mu \ell_3) (\bar q_2 \gamma^\mu q_3)\,,\quad [O_{lq}^{(3)}]_{3323} = (\bar \ell_3 \gamma_\mu \tau^I \ell_3) (\bar q_2 \gamma^\mu \tau^I q_3)\,. \end{equation} Recently, it has been pointed out that the large value of the tauonic Wilson coefficient required to accommodate $R_D$ and $R_{D^*}$ induces a LFU contribution to the $b\to s\ell\ell$ Wilson coefficient $C_9$ at the one loop level \cite{Crivellin:2018yvo}, an effect discussed for the first time in \cite{Bobeth:2011st}. This effect can be reproduced by taking into account the SMEFT and QED running. In agreement with \cite{Crivellin:2018yvo}, fig.~\ref{fig:lq1_lq3} right shows that the $b\to s\mu\mu$ anomalies as well as $R_D$ and $R_{D^*}$ can be explained simultaneously without violating the $B\to K^{(*)}\nu\bar\nu$ constraint. Note that $R_K$ and $R_{K^*}$ are SM-like in this simple scenario. \subsection{Flavour vs. electroweak constraints on modified top couplings} Another nice example of the interplay between flavour and EW precision constraints was presented in \cite{Brod:2014hsa}. The Wilson coefficients corresponding to modified couplings of the $Z$ boson to left- and right-handed top quarks, $[\widehat{C}_{\phi q}^{(1)}]_{33}$ (in the Warsaw-up basis where the up-type quark mass matrix is diagonal, see appendix~\ref{app:conv}) and $[C_{\phi u}]_{33}$, defined by \begin{equation} [O_{\phi q}^{(1)}]_{33}= (\phi^\dagger i \overset{\text{$\leftrightarrow$}}{D_\mu} \phi) (\bar q_3 \gamma^\mu q_3)\,,\quad [O_{\phi u}]_{33} = (\phi^\dagger i \overset{\text{$\leftrightarrow$}}{D_\mu} \phi) (\bar u_3 \gamma^\mu u_3)\,, \end{equation} induce on the one hand effects in flavour-changing neutral currents in $K$ and $B$ physics such as $B_s\to\mu^+\mu^-$ and $K^+\to\pi^+\nu\bar\nu$, on the other hand radiatively induce a correction to the Wilson coefficient of the bosonic operator $O_{\phi D}$ that corresponds to the oblique $T$ parameter. This interplay is reproduced in fig.~\ref{fig:Ztt} left. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{graphics/Ztt}% \includegraphics[width=0.5\textwidth]{graphics/U1} \caption{Left: Likelihood contours in the space of the two SMEFT Wilson coefficients that correspond to modified $Z$ couplings to left- or right-handed top quarks. The constraints from flavour physics (dominated by $B_s\to\mu^+\mu^-$) and EWPOs are complementary. Right: Likelihood contours in the plane of the couplings $g_{lq}^{23}$ and $g_{lq}^{32}$ of the $U_1$ vector leptoquark model at $1\sigma$ level. } \label{fig:U1} \label{fig:Ztt} \end{figure} \subsection{Vector leptoquark solution to the $B$ anomalies} The $U_1$ vector leptoquark transforming as $(3, 1)_{\frac{2}{3}}$ under the SM gauge group is the phenomenologically most successful single-multiplet scenario that simultaneously solves the charged- and neutral-current $B$ anomalies \cite{Barbieri:2015yvd} as it does not give rise to $b\to s\nu\bar\nu$ at tree level \cite{Buras:2014fpa} and is still allowed by direct searches \cite{Buttazzo:2017ixm}. Writing the leptoquark's couplings to left-handed fermions as \begin{equation} \mathcal L_{U_1} \supset g_{lq}^{ji} \,\left(\bar q_L^i \gamma^\mu l_L^j\right) \, U_{\mu} +\text{h.c.}\,, \end{equation} the solution of the neutral-current $B$ anomalies depends on the coupling combination $g_{lq}^{22}g_{lq}^{23*}$, while the charged-current anomalies require a sizable $g_{lq}^{32}g_{lq}^{33*}$.\footnote{While the coupling $g_{lq}^{33}$ would be sufficient to enhance $R_D$ and $R_{D^*}$, this solution is disfavoured by direct searches \cite{Faroughy:2016osc}.} Fig.~\ref{fig:U1} right shows the likelihood contours for the $U_1$ scenario in the plane $g_{lq}^{32}$ vs. $g_{lq}^{23}$ where we have fixed \begin{align} m_{U_1} &= 2\,\text{TeV} \,,& g_{lq}^{33} &= 1 \,,& g_{lq}^{22} &= 0.04^2 \approx V_{cb}^2 \,. \end{align} The LFV decays are important constraints to determine the allowed pattern of the couplings $g_{lq}^{ij}$ \cite{Kumar:2018kmr}. This can be seen from the orange contour in Fig.~\ref{fig:U1} right, which shows constraints from BR$(B \to K \tau^+ \mu^-)$, BR$(B\to K\mu^+ \tau^-)$, and BR$(\tau \to \phi \mu)$. The former two depend on the coupling combinations $g_{lq}^{33} g_{lq}^{22}$ and $g_{lq}^{23} g_{lq}^{32}$ respectively, whereas the latter is controlled by $g_{lq}^{32} g_{lq}^{22}$. \subsection{$B$ anomalies from third generation couplings} An interesting EFT scenario for the combined explanation of the $B$ anomalies in the neutral and charged currents is to assume TeV-scale NP in the purely third generation operators $[O_{lq}^{(1)}]_{3333}$ and $[O_{lq}^{(3)}]_{3333}$ in the interaction basis \cite{Bhattacharya:2014wla}. The effective Lagrangian in the Warsaw basis (as defined in WCxf \cite{Aebischer:2017ugx}) can be written as \begin{equation} \mathcal{L}_\text{eff} \supset \frac{\lambda^{ij}_\ell \lambda^{kl}_q }{\Lambda^2}\left (C_1 \bar \ell_{iL} \gamma_\mu \ell_{jL} \bar q_{kL} \gamma^\mu q_{lL} + C_3 \bar \ell_{iL} \gamma_\mu \tau^I \ell_{jL} \bar q_{kL} \gamma^\mu\tau^I q_{lL} \right ), \label{eq:3333} \end{equation} where $\lambda_\ell$ and $\lambda_q$ parameterize the mismatch between the interaction basis and the basis where the down-type quark mass matrix is diagonal. As required by the data, purely third generation operators induce a large NP contribution in $b\to c\tau \bar\nu$, whereas in $b\to s\mu^+\mu^-$ comparatively smaller effects arise due to mixing on rotating to the mass basis. In this context, ref.~\cite{Feruglio:2017rjo} found that electroweak corrections can lead to important effects in $Z$ pole observables and $\tau$ decays challenging this simultaneous solution for the $B$ anomalies. Since all the relevant observables as well as the SMEFT RG evolution are included in our global likelihood, we can reproduce these conclusions. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{graphics/3333.pdf} \caption{Likelihood contours at $2\sigma$ for various sets of observables for the scenario with mostly third generation couplings defined in eq.~\eqref{eq:3333}.} \label{fig:3333} \end{figure} In figure~\ref{fig:3333} we show likelihood contours of the various observables in the plane of $C_1=C_3$ and $\lambda_\ell^{23}$. We have set $\Lambda=1$ TeV, $\lambda_q^{23}=-0.008$ and the relations $\lambda_{\ell,q}^{22}=(\lambda_{\ell, q}^{23})^2$, $\lambda_{\ell}^{33} = \lambda_{q}^{33} =1$ are imposed\footnote{The overall conclusion are unchanged even if we vary the parameter $\lambda_q^{23}$.}. Like \cite{Feruglio:2017rjo}, we find that the $2\sigma$ region for the precision $\tau$ decays does not overlap with the $2\sigma$ regions preferred by $R_{D^{(*)}}$ and $R_{K^{(*)}}$. Furthermore, the $2\sigma$ region from EWPOs has only a very small overlap with the $2\sigma$ region preferred by $R_{D^{(*)}}$. Compared to \cite{Feruglio:2017rjo}, we find a stronger constraint on the shift in the tau neutrino's electroweak coupling. We have traced this difference back to the treatment of the LEP constraint in the invisible $Z$ width. \cite{Feruglio:2017rjo} uses the invisible $Z$ width extracted by LEP \cite{ALEPH:2005ab}, corresponding to the effective number of neutrino species $N_\nu=2.984\pm0.008$, which favours a destructive interference with the SM at $2\sigma$. This number is obtained exclusively from $\sigma_\text{had}$, using the measured value of $R_l$ (assuming lepton flavour universality). Our treatment differs in two respects. First, since both $\sigma_\text{had}$ and $R_{e,\mu,\tau}$ are among the observables in the likelihood, we effectively use the SM values of $R_{e,\mu,\tau}$ rather than the measured ones when shifting only the neutrino coupling. This leads to a value $N_\nu=2.990\pm0.007$, in better agreement with the SM value. Second, we include additional observables sensitive to the electroweak coupling of the tau neutrino, notably the total $Z$ width $\Gamma_Z$ and the $W\to\tau\nu$ branching ratio\footnote{We find the total $W$ width to not give a relevant constraint.}. Figure~\ref{fig:gLnu33} shows the contributions of these three observables to the likelihood as well as their combination. While $\sigma_\text{had}$ alone favours a slightly shifted coupling (less significant than $2\sigma$ due to the different treatment of $R_l$), the combined constraints are in agreement with the SM at $1\sigma$ and more strongly disfavour a positive shift in $[C_{\phi l}^{(1)}]_{33}=-[C_{\phi l}^{(3)}]_{33}$. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{graphics/gLnu33.pdf} \caption{Contributions to the log-likelihood $\ln L$ from the observables sensitive to a shift in the tau neutrino's electroweak coupling and their combination, relative to their respective extrema. The axis on top shows the effective number of neutrino species that would correspond to the relative modification of the $Z$ boson's invisible width.} \label{fig:gLnu33} \end{figure} \section{Usage}\label{sec:py} The global likelihood is accessed via the Python package \texttt{smelli}{} (\uline{SME}FT \uline{l}ike\uline{li}hood). Given a working installation of Python version 3.5 or above, the package can be installed with the simple command \\\noindent \begin{minipage}{\linewidth} \begin{lstlisting}[language=iPython, basicstyle=\normalsize\ttfamily] python3 -m pip install smelli --user \end{lstlisting} \end{minipage} that downloads it from the Python package archive (PyPI) along with all required dependencies and installs it in the user's home directory (no administrator privileges required). The source code of the package can be browsed via a public Github repository\footnote{\url{https://github.com/smelli/smelli}}. As with any Python package, \texttt{smelli}{} can be used as library imported from other scripts, directly in the command line interpreter, or in an interactive session. For interactive use, we recommend the Jupyter notebook\footnote{See \url{https://jupyter.org}.} that runs in a web browser. In all cases, the first step is to import the package and to initialize the class \texttt{GlobalLikelihood}, \\\noindent \begin{minipage}{\linewidth} \begin{lstlisting}[language=iPython, basicstyle=\normalsize\ttfamily] import smelli gl = smelli.GlobalLikelihood() \end{lstlisting} \end{minipage} The initialization function takes two optional arguments: \begin{itemize} \item The argument \texttt{eft} (default value: \texttt{\textquotesingle SMEFT\textquotesingle}) can be set to \texttt{\textquotesingle WET\textquotesingle} to obtain a likelihood in the parameter space of WET rather than SMEFT Wilson coefficients. In this case EWPOs are ignored. \item The argument \texttt{basis} allows to select a different WCxf basis (default: \texttt{\textquotesingle Warsaw\textquotesingle} in the case of SMEFT, \texttt{\textquotesingle flavio\textquotesingle} in the case of WET). \end{itemize} By default, \texttt{smelli}{} uses the leading logarithmic approximation for the SMEFT RG evolution, since it is faster than the full numerical solution of the coupled RGEs. This behaviour can be changed by setting the corresponding option of the \texttt{wilson}{} package \textit{after} importing \texttt{smelli}{}, e.g. \begin{lstlisting}[language=iPython, basicstyle=\normalsize\ttfamily] import smelli, wilson wilson.Wilson.set_default_option('smeft_accuracy', 'integrate') \end{lstlisting} The next step is to select a point in Wilson coefficient space by using the \texttt{parameter\char`_point} method. The Wilson coefficients must be provided in the EFT and basis fixed in the first step. There are three possible input formats: \begin{itemize} \item a Python dictionary (containing Wilson coefficient name/value pairs) and an input scale, \item as a WCxf{} data file in YAML or JSON format (specified by its file path as a string), \item as an instance of \texttt{wilson.Wilson} defined by the \texttt{wilson} package. \end{itemize} Using the first option, fixing the Wilson coefficient $[C_{lq}^{(1)}]_{2223}$ to $10^{-8}\,\text{GeV}^{-2}$ at the scale 1\,TeV is achieved with \begin{lstlisting}[language=iPython, basicstyle=\normalsize\ttfamily] glp = gl.parameter_point({'lq1_2223': 1e-8}, scale=1000) \end{lstlisting} Note that, consistently with the WCxf format, all dimensionful values are expected to be in appropriate powers of GeV. The same result could be achieved with a WCxf{} file in YAML format, \begin{lstlisting}[language=yaml, basicstyle=\normalsize\ttfamily\color{red!70!black}] eft: SMEFT basis: Warsaw scale: 1000 values: lq1_2223: 1e-8 \end{lstlisting} that is imported as \begin{lstlisting}[language=iPython, basicstyle=\normalsize\ttfamily] glp = gl.parameter_point('my_wc.yaml') \end{lstlisting} The variable \texttt{glp} defined above holds an instance of the \texttt{GlobalLikelihoodPoint} class that gives access to the results for the chosen parameter point. Its most important methods are \begin{itemize} \item \texttt{glp.log\char`_likelihood\char`_global()} returns the numerical value of the logarithm of the likelihood minus its SM value $\ln \Delta L$, i.e. the logarithm of the likelihood ratio or $-\Delta\chi^2/2$ when writing the likelihood as $L=e^{-\chi^2/2}$. \item \texttt{glp.log\char`_likelihood\char`_dict()} returns a dictionary with the contributions to $\ln\Delta L$ from the individual products in \eqref{eq:nfSL}. \item \texttt{glp.obstable()} returns a \texttt{pandas.DataFrame} table-like object that lists all the individual observables with their experimental and theoretical central values and uncertainties ordered by their ``pull'' that is defined by $\sqrt{|\Delta \chi^2_i|}$ where $-\chi^2_i/2$ is their individual contribution to the log-likelihood neglecting all correlations. This table can be useful to get a better understanding of the likelihood value at a given point. However it should be used with caution. In particular, the log-likelihood is {\em not} equal to the sum of the individual contributions obtained from the pulls, as there can be significant correlations between them. Also, the uncertainties listed in this table can be inaccurate in the case of strongly non-Gaussian probability distributions. \end{itemize} The observables with the highest pulls in the SM as obtained by this method are shown for illustration in table~\ref{tab:pulls_SM}. A few comments are in order. \begin{itemize} \item The largest deviation is in the branching ratio of $B_s\to\phi\mu^+\mu^-$ at low $q^2$, where the prediction relies strongly on the form factors from \cite{Straub:2015ica}. \item The observable $R_{\tau \ell}(B\to D^{\ast}\ell^+\nu)$ is nothing but $R_{D^\ast}$\footnote{The observable $R_D$ is found to have a pull of $2.1\sigma$ and thus does not appear in table~\ref{tab:pulls_SM}.}, while $\langle R_{\mu e} \rangle(B^\pm\to K^\pm \ell^+\ell^-)^{[1.0,6.0]}$ and $\langle R_{\mu e} \rangle(B^0\to K^{\ast 0}\ell^+\ell^-)^{[a, b]}$ are $R_K$ and $R_{K^\ast}$, respectively. $\langle A_\text{FB}^{\ell h}\rangle(\Lambda_b\to\Lambda \mu^+\mu^-)$ is denoted $K_6$ in \cite{Aaij:2018gwm}. We use the full observable names as defined in \texttt{flavio}{} here. \item The SM uncertainties in $\epsilon'/\epsilon$ are entirely due to matrix elements from lattice QCD \cite{Blum:2015ywa,Bai:2015nea}. \end{itemize} \begin{table}[t] \resizebox{\textwidth}{!}{ \renewcommand{\arraystretch}{1.3} \input{obstable_sm_anomalies} } \caption{Observables with highest pulls in the SM.} \label{tab:pulls_SM} \end{table} \section{Conclusions}\label{sec:concl} In this paper we have presented a likelihood function in the space of dimension-6 Wilson coefficients of the SMEFT. This function is made publicly available in the form of the Python package \texttt{smelli}{}, building on the existing public codes \texttt{flavio}{} and \texttt{wilson}{}. At present, the likelihood includes numerous observables from $B$ and $K$ decays, EWPOs, neutral meson mixing, LFV and CP violating processes and many more, counting a total of 265 observables. We have demonstrated its validity and usefulness by reproducing various results given in the literature. In passing, we have also pointed out new results, in particular the fact that one of the two possible solutions to the $R_D$ and $R_{D^*}$ anomalies involving the tensor operator is excluded by the recent Belle measurement of the longitudinal polarization fraction in $B\to D^*\tau\nu$, which is included in our likelihood (see section~\ref{sec:bctaunu}). Clearly, the 265 observables do not constrain the entire 2499-dimensional parameter space of SMEFT Wilson coefficients yet. Observables that are still missing include \begin{itemize} \item Higgs production and decay \cite{Khachatryan:2016vau,Butter:2016cvz,Ellis:2018gqa} including $h\to\gamma\gamma$ \cite{Hartmann:2015aia,Hartmann:2015oia,Dedes:2018seb}, \item top physics \cite{AguilarSaavedra:2010zi,Buckley:2015nca,deBlas:2015aea,AguilarSaavedra:2018nen}, \item further low-energy observables \cite{Falkowski:2017pss}, such as neutrino scattering, parity violation in atoms, and quark pair production in $e^+e^-$ collisions, \item non-leptonic $B$ decays \cite{Bobeth:2014rra}, \item rare $D$ decays \cite{Fajfer:2015mia,deBoer:2015boa,Petrov:2016kmb,deBoer:2017que}, \item further hadronic tau decays \cite{Celis:2013xja,Cirigliano:2018dyk}, \item beta decay \cite{Cirigliano:2012ab,Alioli:2017ces,Gonzalez-Alonso:2018omy}, \item paramagnetic EDMs \cite{Cirigliano:2016nyn,Dekens:2018bci}, \end{itemize} among others. Furthermore, as discussed at the end of section~\ref{sec:setup}, a major limitation of the nuisance-free likelihood we have constructed is that several classes of observables cannot be incorporated consistently without scanning over nuisance parameters. The next step in generalizing our results would be to allow the 4 parameters of the CKM matrix to vary in addition to the Wilson coefficients. This would make it possible to consistently include semi-leptonic charged-current $B$ and $K$ decays with general NP effects. We hope that the groundwork laid by us will allow the community to build a more and more global likelihood as a powerful tool to constrain UV models from precision measurements. \section*{Note added} After our preprint was published, ref.~\cite{Descotes-Genon:2018foz} appeared that proposes a procedure for a consistent treatment of the CKM matrix in the presence of dimension-6 contributions. Implemented in our framework, this would allow to include semi-leptonic charged-current decays without the need to scan over nuisance parameters. \section*{Acknowledgements} \noindent We thank Wolfgang Altmannshofer, Christoph Bobeth, Ilaria Brivio, Andreas Crivellin, Martin Jung, Aneesh Manohar, and Jordy de Vries for discussions. We thank Alejandro Celis, M{\'e}ril Reboud, and Olcyr Sumensari for pointing out typos. We thank Martín González-Alonso, Admir Greljo, and Marco Nardecchia for useful comments. The work of D.\,S. and J.\,A. is supported by the DFG cluster of excellence ``Origin and Structure of the Universe''. The work of J.\,K. is financially supported by NSERC of Canada.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The exact solutions \cite{seiberg1, seiberg2, seiberg3} for the IR Wilsonian effective theory of N=1 supersymmetric QCD (SQCD) reveal some surprising dynamical effects. Most striking are the occurence of massless composite bound states (or solitons) in the strong coupling regime. It is intriguing whether these massless states could smoothly map to states important to the dynamics of non-supersymmetric gauge theories. It is highly implausible that the massless composite fermions of SQCD can survive in the QCD limit. The lattice arguments of Weingarten \cite{Wein} imply that any composite states in QCD must be heavier than the pions. Nevertheless, it is possible, for example, that the scalar ``dual quark'' solitons might survive in some form and be involved in some ``dual magnetic'' description of confinement in QCD. Soft breaking terms, such as squark and gaugino masses, may be introduced to the SQCD theories as spurion fields with non-zero F-component vevs that explicitly break supersymmetry \cite{soft1}. The symmetries of the enlarged spurion model constrain how they may appear in the low energy Wilsonian theory. In general these constraints are not however sufficient to determine the low energy theory since ``Kahler Potential'' terms may be constructed that are invariant to all symmetries and are hence unknown \cite{soft2}. For the cases where a squark and/or gaugino mass are the sole supersymmetry and chiral symmetry breaking parameters these Kahler terms dominate the behaviour of the potential. Some speculations as to the behaviour of these theories were made in Refs \cite{peskin,hoker}. In this paper we discuss these difficulties and investigate some cases in which the effects of the soft breakings {\it can} be controlled. We start with a model with supersymmetry preserving quark/squark masses, and then break supersymmetry with squark and gaugino masses resulting from spurions that occur linearly in the superpotential. It can be shown that any possible Kahler corrections are higher order in the soft breakings, and thus control may be retained over the low-energy theory. The analysis is similar to that performed on the N=2 SQCD solutions in Ref \cite{soft2}. The derivative (low-energy) expansion performed to obtain the solutions of SQCD restricts the solutions of the softly broken models to the regime where the soft breakings are small relative to the strong interaction scale. At first sight the resulting models appear to behave almost identically to their supersymmetric counterparts but, as for the N=2 solutions \cite{soft3}, the models have the additional new feature of displaying $\theta$ angle dependence. The softly broken models distinguish the $N_c$ vacua of the SQCD models and as $\theta$ is changed these vacua interchange at first order phase transitions. We contrast this behaviour with that of the QCD chiral Lagrangian. \section{N=1 SQCD} We begin from the N=1 $SU(N_c)$ SQCD theories with $N_f$ flavors described by the UV Lagrangian \begin{equation} {\cal L} = K^\dagger K (Q^\dagger_i Q_i + \tilde{Q}^\dagger_i \tilde{Q}_i)|_D + {1 \over 8 \pi} Im \tau W^\alpha W^\alpha|_F + 2 Re\, m_{ij} Q_i \tilde{Q}_j|_F \end{equation} where $Q$ and $\tilde{Q}$ are the standard chiral matter superfields and $W^\alpha$ the gauge superfield. The coupling $K$ determines the kinetic normalization of the matter fields. The gauge coupling $\tau = \theta/2 \pi + i 4 \pi/g^2$ defines a dynamical scale of SQCD: $\Lambda^{b_0} = \mu^{b_0} exp( 2\pi i \tau)$, with $b_0 = 3 N_c - N_f$ the one loop coefficient of the SQCD $\beta$-function. And, finally, $m$ is a supersymmetric mass term for the matter fields. We may raise these couplings to the status of spurion chiral superfields which are then frozen with scalar component vevs. The SQCD theory without a mass term has the symmetries \begin{equation} \begin{tabular}{ccccc} &$SU(N_f)$ & $SU(N_f)$ & $U(1)_B$ & $U(1)_R$\\ $Q$ & $N_f$ & 1 & 1 & ${N_f - N_c \over N_f}$\\ $\tilde{Q}$ &1& $\bar{N}_f$ & -1 & ${N_f - N_c \over N_f}$\\ $W^\alpha$ & 1 & 1 & 0 & 1\end{tabular} \end{equation} The mass term breaks the chiral symmetries to the vector symmetry. The classical $U(1)_A$ symmetry on the matter fields is anomalous and, if there is a massless quark, may be used to rotate away the theta angle. In the massive theory the flavor symmetries may be used to rotate $m_{ij}$ to diagonal form and the anomalous $U(1)_R$ symmetry under which the $Q$s have charge $+1$ may be used to rotate $\theta$ on to the massless gaugino. Including the spurion fields the non-anomalous $U(1)_R$ symmetry charges are \begin{equation}\label{sym} \begin{tabular}{cccccc} $W$ & $Q$ & $\tilde{Q}$ & $\tau$ & $m$ & $K$ \\ 1 & ${N_f - N_c \over N_f}$ & ${N_f - N_c \over N_f}$ & 0 & ${2N_c \over N_f}$ & {arbitrary} \end{tabular} \end{equation} The anomalous symmetries may be restored to the status of symmetries of the model if we also allow the spurions to transform. The appropriate charges are \begin{equation} \begin{tabular}{ccccccc} &$W$ & $Q$ & $\tilde{Q}$ & $\Lambda^{b_0}$ & $m$ & $K$ \\ $U(1)_R$ & 1 & 0 & $ 0 $ & $2(N_c-N_f)$ & 2 & arbitrary\\ $U(1)_A$ & 0 & 1 & 1 & $2N_f$ & -2 & arbitrary \end{tabular} \end{equation} The $m_{ij}$ spurions also transform under the chiral flavor group. The solutions of the models are $N_f$ dependent. For $N_f < N_c$ the low energy superpotential is exactly determined by the symmetries and the theory has a run away vacuum \cite{seiberg1}. For $N_f = N_c$ the low energy theory is in terms of meson and baryon fields \begin{eqnarray} M_{ij} & = & Q_i \tilde{Q}_j \nonumber\\ b^{[i_1,...,i_N]} & = & Q^{i_1} ... Q^{i_{N_c}}\\ \tilde{b}^{[i_1,...,i_N]} & = & \tilde{Q}^{i_1} ... \tilde{Q}^{i_{N_c}} \nonumber \end{eqnarray} subject to the constraint $det M + b \tilde{b} = \Lambda^{2N_f}$ \cite{seiberg2}. For $N_F = N_c+1$ the theory is again described by baryon and meson fields with the classical moduli space unchanged \cite{seiberg2}. When $N_c+1 < N_f < 3N_c$ the theory has an alternative description of the low energy physics in terms of a dual magnetic theory with an $SU(N_f-N_c)$ gauge group, $N_f$ flavors of dual quarks, $q$ and $\tilde{q}$, and $N_f^2$ meson fields, $M_{ij}$ \cite{seiberg3}. The dual theory has the additional superpotential term $M_{ij}q_i \tilde{q}_j$. Generally one of the two duals is strongly coupled whilst the other is weakly coupled (the electric theory is weakly coupled for $N_f \sim 3N_c$, the magnetic theory when $N_f \sim N_f+2$). In the strongly coupled variables the low energy Wilsonian effective theory is a complicated theory with all higher dimensional terms in the superfields equally important (since the IR theory is in a conformal regime the scale $\Lambda$ at which the theory entered the conformal regime is not available to suppress higher dimension terms and similarly the gauge coupling is order one and may not suppress these operators). The weakly interacting theory however, has a very simple Wilsonian effective theory of the canonical bare form. According to the duality conjecture these two effective theories must describe the same physics and therefore there is presumably a (complicated) mapping between the electric and magnetic variables in the IR. \section{Soft Supersymmetry Breaking} Soft breaking interactions terms which explicitly break supersymmetry may be included in the UV theory by allowing the spurions to acquire non-zero $F$-components.(These are the terms that can be induced by spontaneous supersymmetry breaking and hence may be included perturbatively while inducing only logarithmic divergences in the theory as a remnant of the supersymmetric non-renormalization theorems \cite{soft1}). We will consider three such breaking terms, a squark mass ($F_K \neq 0$), a gaugino mass ($F_\tau \neq 0$) and a squark mass mixing ($F_m \neq 0$). The dependence of the IR effective theory on the spurion fields is determined in the N=1 limit by the dependence on their scalar components, the couplings and masses. The exact solutions of Seiberg, however, do not provide sufficient information to take the soft breakings to infinity limit and obtain results for models with completely decoupled superpartners since the solutions are only low energy derivative expansions. Higher dimension terms are suppressed by the strong coupling scale $\Lambda$ and hence in the non-supersymmetric theories there are unknown soft breaking terms of higher order in $F_S / \Lambda^2$. A second problem is that squark masses are only generated through the Kahler potential (the spurion $F_m$ generates a squark mass mixing but it is unbounded without additional contributions to the masses from the Kahler sector) via such terms as $|F_S|^2 |Q|^2$ with $S$ a general spurion. There are no symmetry constraints on these terms so we do not know whether they occur in the low energy theory or if they do, their sign. We note that the sign of these terms relative to the sign of the equivalent terms in the UV theory is crucial. As a particular example consider theories close to $N_f = 3N_c$ where the electric theory has a very weak IR fixed point and the magnetic theory a strongly coupled IR fixed point. We are interested in what happens when we introduce squark and gaugino masses in the UV magnetic theory. We can consider the case where these soft breakings are small relative to the scale $\Lambda$ at which the theory enters it's strongly interacting conformal phase. We expect a conformal phase down to the soft breaking scale but can we say anything about the theory below that scale? The dual squarks in the weakly coupled IR description only acquire masses from $F_\tau$ and $F_K$ from the Kahler terms. For infinitesimal soft breakings we do not expect the weakly coupled nature of the dual theory at the breaking scale to be disturbed. If these masses are positive (as investigated in Ref\cite{peskin}) then below the soft breaking scale the theory behaves like QCD and presumably confines and breaks chiral symmetries at an exponentially small scale relative to the soft breaking masses. Alternatively if the masses are negative (as investigated in Ref\cite{hoker}) then the magnetic gauge group is higgsed with the possible interpretation in the electric variables of a dual Meissner type effect. The spurion symmetry arguments are not sufficient to distinguish between these possibilities. It should be remarked that there {\it is} a strongly coupled magnetic theory that corresponds to the introduction of any soft breaking terms in the electric theory. This is true since we can use the mapping of electric to magnetic field variables from the SQCD theory (which is not known explicitly, but exists in principle) to write the soft breaking terms of the simple weakly interacting theory in terms of the strongly interacting variables in the IR. The result will be a complicated mess of relevant higher dimension operators in the strongly interacting theory. The subtlety is that if we now run the renormalization group back to the UV in the magnetic variables we will, very likely, never recover a weakly interacting theory. At each step to recover the effective theory at the lower scale we must add important higher dimension terms. The problem is therefore to identify which soft breaking terms in the IR electric description correspond to canonical soft breaking terms in the UV magnetic theory. In the next section we shall resolve this problem for the $F_\tau$ and $F_m$ cases after including a supersymmetric mass that determines the squark masses at order $F^0$. Then for small soft breakings relative to $m$ (and $\Lambda$) exact solutions may be obtained. \section{Controlled N=0 Theories} To obtain solutions to softly broken N=1 SQCD theories, we begin by including a supersymmetric mass for the matter fields. The resulting theories have a mass gap on the scale $m$ and the induced meson $M_{ij}= Q^i \tilde{Q}_j$ vev is determined independently of $N_f$ by holomorphy \begin{equation}\label{Slimit} M_{ij} = \Lambda^{{3N_c - N_f \over N_c}} (detm)^{1/N_c}\left( {1 \over m} \right) _{ij} = |M_{ij}| e^{i\alpha}~~~. \end{equation} The resulting supersymmetric theories have $N_c$ distinct vacua corresponding to the $N_c$th roots of unity, $\alpha = 2n\pi/N_c$ (as predicted by the Witten index). Note that for the theories with magnetic duals putting masses in for all flavors breaks the dual gauge group completely. For simplicity henceforth we shall take $m_{ij}$ to be proportional to the identity matrix; in this basis $\langle M_{ij} \rangle$ is also proportional to the identity matrix. These massive theories may be softly broken in a controlled fashion. If the spurion generating the soft breaking enters the superpotential linearly then we may obtain desirable results when that spurion's F-component $F \ll m \ll \Lambda$. Any D-term contributions to the scalar potential take the form $F_X^\dagger F_Y$ with $X$ and $Y$ standing for generic fields or spurions. In the supersymmetric limit all F-components are zero and will grow as the vacuum expectation value of the soft breaking spurion. These Kahler terms are therefore higher order in the soft breaking parameter than the linear term from the superpotential. The unknown corrections to the squark masses in the theory are subleading to the masses generated by the supersymmetric mass term and hence we may determine the potential minima at lowest order. \subsection{Squark Mass Mixing} The first model we consider includes the bare squark mixing term \begin{equation} Re(F_{m \, ij} ~Q_i \tilde{Q}_j) \end{equation} which is generated from the superpotential. Again for simplicity we will take $F_{mij}$ to be diagonal with degenerate eigenvalues in the basis in which $m_{ij}$ is diagonal. The form of the effective theory is governed by the symmetries in (\ref{sym}) which determine that the superpotential of the theory is not renormalized. The soft breaking term is therefore also not renormalized and is the leading term in an expansion in $m/\Lambda$. For $F_m \ll m \ll \Lambda$ we find that there are the $N_c$ minima of the SQCD theory given by the values of $M_{ij}$ in (\ref{Slimit}) and distinguished by their contribution to the potential \begin{eqnarray}\label{Fmpotential} -Re Tr[ F_{m} M_{ij}]& = & - N_f |F_m| |M| \cos([\theta_0 + (N_f - N_c) \theta_m + N_c \theta_f + 2n \pi]/N_c])\\ & = & - N_f |F_m| |M| \cos([\theta_{phys}+ 2n \pi]/N_c)~~~. \nonumber \end{eqnarray} Freezing the spurion $F_m$ explicitly breaks $U(1)_R$ and introduces dependence on the $\theta$ angle. $\theta_{phys}$ is the correct combination of phases on $m$, $F_m$ and the bare $\theta$ angle. To see this in the bare Lagrangian we may use the anomalous $U(1)_A$ symmetry to rotate any phases on $F_m$ onto $m$ and into the $\theta F \tilde{F}$ term. Then using the anomalous $U(1)_R$ symmetry under which $Q_i$ transforms with charge 0 we may rotate the resulting phase on $m$ into the $\theta$ angle as well. The resulting $\theta$ angle is the physical $\theta$ angle in which the physics is $2 \pi$ periodic: \begin{equation} \label{phth} \theta_{phys} = \theta_0 + (N_f - N_c) \theta_m + N_c \theta_f \end{equation} We can also understand the form of (\ref{phth}) as follows. Once the $U(1)_R$ symmetry is explicitly broken by $f_m$ a gaugino mass is generated by radiative effects. We can think of $\theta_{phys}$ as generated by the effective phases on the quark and gaugino masses. The gaugino mass is generated by a perturbative graph with a quark-squark loop. The result is of the form $F_m / m$, leading to an effective phase which is $\theta_f - \theta_m$.The effective gaugino phase then appears in (\ref{phth}) with an anomaly factor from $C_2 (R)$ of $N_c$ rather than $N_f$. The equivalent effective superpotential term is of the form \begin{equation} ln [ m ]~ WW~ \vert_F, \end{equation} which yields another contribution to the potential when the gauginos condense. Using the Konishi anomaly \cite{KA}, one can see that this term has the same form as (\ref{Fmpotential}). The resulting potential (\ref{Fmpotential}) distinguishes the $N_c$ vacua. For $\theta_{phys} = 0$ the $n=0$ vacua is the true minima. \vspace{.4cm} $\left. \right.$ \hspace{-0.4in}\ifig\prtbdiag{} {\epsfxsize 12truecm\epsfbox{angles.eps}} \vspace{-1.7cm} \begin{center} Fig.1\,: First order phase transition as $\theta_{phys}$ is varied from 0 to $\pi$. \end{center} As $\theta_{phys}$ passes through $\pi$ the $n=0,N_c -1$ vacua become degenerate and there is a first order phase transition. Then as $\theta_{phys}$ moves through (odd)$\pi$ there are subsequent first order phase transitions at which the SQCD minima interchange. \subsection{Gaugino Mass} In the UV theory we may induce a gaugino mass through a non zero F-component of the gauge coupling $\tau$ \begin{equation} {1 \over 8 \pi} Im [ F_{\tau} \lambda \lambda] \end{equation} In the IR theory $\tau$ enters through the strong interaction scale $\Lambda$ which again occurs linearly in the superpotential of the theory. Taking $F_{\tau} \ll m \ll \Lambda$ we again may determine the vacuum structure. The IR superpotential terms compatible with the symmetries of the theory involving $\Lambda$ are \begin{equation} Re[ m M_{ij} + ({\rm det} M_{ij})^ { 1/(N_f - N_c) } \Lambda^{(3N_c-N_f) / (N_c-N_f)}] \end{equation} where the final term results from non-perturbative effects in the broken gauge group. At lowest order in perturbation theory the vev of $M_{ij}$ is given by (\ref{Slimit}) which also contains $\Lambda$ and hence has a non-zero F-component. Including $F_\tau$ and performing the superspace integral we obtain up to a coefficient the following corrections to the potential that break the degeneracy between the $N_c$ SQCD vacua \begin{eqnarray} \label{gpot} \Delta V & = & - Re\left[ m^{N_f/N_c} i F_\tau \Lambda^{(3N_c-N_f)/ N_c}\right]\\ & = \nonumber & - \left|m^{N_f/N_c} F_\tau \Lambda^{(3N_c-N_f)/ N_c}\right| \cos[ ~ \theta_{phys}/N_c ~+~ \alpha ~] \end{eqnarray} where again $\alpha$ are the $N_c$th roots of unity and $\theta_{phys}$ is the physical theta angle in which the physics must be $2 \pi$ periodic. It may be obtained by again making rotations with the anomalous $U(1)_A$ and $U(1)_R$ symmetries \begin{equation} \theta_{phys} ~=~ \theta_0 ~+~ N_c ( \theta_{F_\tau} + \pi /2) ~+~ N_f \theta_m \end{equation} The factor of $\pi/2$ occurs as a result of the discrepancy between the phase of $F_\tau$ and that of the canonical definition of the gaugino mass. There is also an additional contribution to the vacuum energy arising from the gaugino condensate. Using the Konishi anomaly \cite{KA}, we see that it has the same form as (\ref{gpot}). The supersymmetry breaking contributions again break the degeneracy between the $N_c$ supersymmetric vacua. There are again phase transitions as $\theta_{phys}$ is varied, occurring at $\theta_{phys} ~=~ $(odd)$\pi$. \section{Discussion} We have investigated some examples where controlled, low-energy descriptions of softly broken massive SQCD may be obtained, despite the lack of supersymmetry. The models we studied are obtained by the inclusion of soft breaking masses from spurions occuring linearly in the superpotential. Examples of such soft breaking terms are gaugino masses and squark mass mixings.The soft breaking corrections to the potential distinguish between the $N_c$ vacua of SQCD at a generic value of theta angle. At the special values of $\theta_{phys} = $(odd)$\pi$ there are first order phase transitions at which two of the $N_c$ vacua interchange. This behavior can be compared with the theta angle dependence of the QCD chiral Lagrangian \cite{chiral} for which there are $N_f$ distinct vacua which interchange through first order phase transitions at $\theta =$(odd)$\pi$. This difference in the number of vacua between the softly broken theories and QCD would prohibit us from seeing any sign of a smooth transition between the two theories (one might hope that the $M_{ij}$ vev might smoothly map to the quark condensates of QCD for example) even if we were able to begin to take the squark and gaugino masses towards infinity. There is however one conclusion for QCD that we can tentatively draw from this analysis. In these models the form of the confined effective theory changes smoothly with the theta angle and there is no sign of a break down of confinement as suggested in \cite{schierholz}. This lends some support to the assumption \cite{chiral} that the chiral Lagrangian remains the correct discription of QCD in the IR even at non-zero theta. \vspace{3cm} \noindent {\Large \bf Acknowledgements} NE would like to thank R. Sundrum for useful discussions. This work was supported by DOE contracts DE-AC02-ERU3075 and DE-FG02-91ER40676. \newpage \baselineskip=1.6pt
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{INTRODUCTION} Nowadays, waste management is an issue of increasing concern due to the accumulation of litter in natural environments such as seas and oceans. Every year, between 1.15 and 2.41 million tonnes of plastic waste enter the ocean from rivers \cite{MarineLitterStatistics}. Plastic waste disposed in natural environments releases tiny fractions of material called microplastics that potentially could harm the health of all species, including humans, because they are ingested or breathed at any levels of the food chain \cite{HarmfulForHumanHealth,USGSmicroplastics,EUmicroplastics}. For marine litter removal, an emerging approach consists of deploying fleets of underwater vehicles either fully autonomous or partially operated by cleaning divers. In order to perform debris detection and removal, the automation pipeline of an autonomous underwater vehicle (AUV) could be structured as in Fig. \ref{fig:AutomationOfAUV}: an underwater image enhancement (UIE) block, a recognition block to recognize the class of the objects, their positions, their shapes and constituent materials, and finally the block of guidance, navigation and control to guide the vehicle towards the objects and to grab them leveraging the recognition system outputs \cite{AUVandGrasping}. In this paper, we focus on the UIE and the recognition blocks. \begin{figure*}[t] \begin{minipage}{\textwidth} \includegraphics[width=\textwidth]{Figures/AutomationOfAUV} \centering \caption[...]{General automation pipeline of an AUV equipped with a gripper for marine debris detection and removal. At the top, from left to right: an underwater image enhancement technique improves the RGB camera images, a recognition algorithm makes the predictions, then the recognition system outputs are used by the guidance, navigation and control systems of both vehicle and gripper. Different recognition systems provide different outputs as detailed in the table. This paper focuses on the orange blocks by seeking the improvement of the efficiency of a set of state-of-the-art object detectors called EfficientDets \cite{EfficientDets} and investigating the efficiency of the whole pipeline with and without low-light underwater image enhancement. Bottom left: we collected images of waste items inside a testing pool using the Chasing Dory drone. Bottom right: the BlueROV2, a commercial robot equipped with a gripper\textsuperscript{a}.}\tiny\textsuperscript{a}BlueROV2 image sourced from: \url{https://free3d.com/3d-model/underwater-robot-bluerov2-rigged-4532.html} \label{fig:AutomationOfAUV} \end{minipage} \end{figure*} \begin{figure}[H] \includegraphics[width=0.48\textwidth]{Figures/ContributionSummary} \centering \caption{Summary of the paper contributions: we found an architecture for the parent detector D0 that led to more efficient EfficientDets detectors \cite{EfficientDets} on PASCAL VOC 2012. Then, we trained our EfficientDets for marine debris detection and investigated the performance of the low-light image enhancement method proposed in \cite{L2UWE} when combined with our detectors.} \label{fig:mAPvsLatency} \end{figure} Most of the state-of-the-art computer vision algorithms are based on convolutional neural networks (CNN) and an end-to-end learning paradigm inspired by the biological learning process \cite{LeCunCNN}. The approach of learning the image features is based on the deep learning of representations (typically shortened with ``deep learning'') and is increasingly replacing the classical techniques based on hand-crafted features \cite{SzeliskiBook} such as bag-of-words \cite{ObsoleteCV} and part-based \cite{ObsoleteCV2} models. As the most accurate object detectors have computational and memory costs prohibitive for real-time applications such as marine debris detection and removal, finding the best accuracy-latency compromise is an active area of research \cite{EfficientDets}. \textbf{Our contributions.} \begin{enumerate} \item{As the object detectors recently proposed by Tan et al. \cite{EfficientDets} showed state-of-the-art accuracy-latency performance, we investigated two architectural alternatives of their baseline detector D0. On PASCAL VOC 2012, we found that one of our architectures, namely D0(1-5), is 1.5\% more accurate than the one proposed in \cite{EfficientDets} while having comparable latency. Then, by scaling-up D0(1-5), we achieve a more efficient class of EfficientDets as shown in Fig. \ref{fig:mAPvsLatency}.} \item{We trained D0(1-5) and its derivations D1-D3 on 1200 samples of the Trash-ICRA19 dataset \cite{TrashDataset}. The results are in Table \ref{tab:ResultsWithTrashDataset}.} \item{Since the Trash-ICRA19 dataset \cite{TrashDataset} does not specify the type of plastic object, we created the in-water plastic bags and bottles (WPBB) dataset and made it publicly available; then we trained D0(1-5) and its derivations on the WPBB dataset. The results are in Table \ref{tab:ResultsWithWPBBDataset} and Fig. \ref{fig:DebrisDetectionsPBBW}.} \item{Assuming low-light conditions, we address two questions: ``Is it more efficient scaling-up the object detector size (e.g. from D0(1-5) to D3(1-5)) or adding an underwater image enhancement method before the smallest detector D0(1-5)?'' and ``How does the method proposed in \cite{L2UWE} perform as pre-processing step for object detection?''. These investigations are covered in Subsection \ref{sub:UIE}.} \end{enumerate} The rest of the paper is organized as follows: Section \ref{sec:RelWorks} covers the related work, Section \ref{sec:OurChangesToEfficientDets} details our modifications to the class of detectors proposed by Tan et al. \cite{EfficientDets}, Section \ref{sec:Experiments} presents and discusses the experimental results and finally Section \ref{sec:Conclusions} concludes. \section{RELATED WORK} \label{sec:RelWorks} \subsection{Underwater Image Enhancement} As the images recorded in-water often suffer from low-light conditions, light refraction, light absorption, scattering or water turbidity, several works focused on improving the quality of underwater images \cite{imageEnhancement1,imageEnhancement2,imageEnhancement3,FUIE,L2UWE}, with \cite{imageEnhancement1,imageEnhancement2,imageEnhancement3,FUIE} based on generative adversarial networks \cite{GANpaper}. The authors of \cite{imageEnhancement1} developed both a GAN and a CNN. The former is to generate underwater synthetic images from images taken in air with the aim of generating a large amount of in-water data to train deep image enhancement models. The latter is an underwater image enhancement model trained using the GAN synthetic outputs. One of the very first works proposing a GAN to improve in-water images is \cite{imageEnhancement2}, where a lack of distorted in-water images was addressed by using CycleGAN \cite{CycleGAN} to generate distorted images from undistorted ones; then, the paired dataset is used to train a generator that performs the opposite mapping of CycleGAN, i.e. it outputs the undistorted version of a distorted input image. Guo et al. \cite{imageEnhancement3} developed a densely connected generator by introducing a multiscale dense block inside a fully convolutional architecture. This choice was inspired by DenseNet \cite{DenseNet}, where a large number of skip-connections strengthen feature reuse and feature propagation leading to an improvement in CNN efficiency. A different GAN was designed in \cite{FUIE} for underwater image enhancement: the generator architecture has five encoder-decoder pairs with mirrored skip-connections, whereas the discriminator is a Markovian PatchGAN \cite{PatchGAN}; the loss function was specifically defined to take into account the aspects of global similarity, image content and local textures. A non-adversarial approach is \cite{L2UWE}, where the authors generated two lighting distribution models for local and non-local regions, respectively. This resulted in two different dehazing maps which subsequently are combined into a single output that preserves the image details while removing the darkness from the original image. \subsection{Efficiency of Object Detectors} The last years have seen several works addressing both the speed and the accuracy of object detection to permit the use of detectors in edge devices and real-time applications. In 2016, two of the first works in this direction were \cite{Ref24OfTan}, which removed the proposal generation and re-sampling stages, and \cite{Ref30OfTan}, which added batch normalization to all convolutional layers and removed dropout from its model. In 2017, Lin et al. \cite{Ref21OfTan} boosted the detection efficiency by modifying the cross entropy loss typically used for the classification subnet. In 2018, Law et al. \cite{Ref18OfTan} replaced the prediction of anchor boxes with pair of keypoints, while Huang et al. \cite{Ref26OfTan} opted for removing some layers and the batch normalization operation to speed-up a larger detector; in contrast, \cite{Ref31OfTan} designed a larger backbone network to improve the accuracy of a previous smaller model. In 2019, Zhou et al. \cite{Ref41OfTan} proposed to predict center points of each object as keypoints and regressing width and high of the corresponding bounding box, while Tian et al. \cite{Ref37OfTan} simplified the detection framework by formulating the problem as a per-pixel prediction analogously to instance segmentation. More recently, Tan et al. \cite{EfficientDets} introduced a bidirectional feature pyramid network for fast multi-scale feature fusion, while Cai et al. \cite{AnotherDetectorSeekingEfficiency} simplified a larger detector through an inter-block weight pruning of convolutional layers, where a block contains the weights of $n$ consecutive channels of $m$ filters. \section{VARIANTS OF EfficientDets} \label{sec:OurChangesToEfficientDets} \subsection{Original Architecture} Tan et al. \cite{EfficientDets} developed a set of eight detectors named EfficientDets obtained by scaling-up the parent architecture EfficientDet D0. The detectors share the same general structure: they have an EfficientNet \cite{ScalingUpCNNs} as the backbone network, which is then followed by one or more bi-directional feature pyramid network (BiFPN) layers and end with two separate identical sub-nets, one for class prediction and one for box prediction. \noindent \textbf{EfficientNets.} It is becoming more common designing a CNN of small size and derive a set of larger child networks from it by scaling-up its dimensions. One of the first systematic studies of model scaling for CNN was \cite{ScalingUpCNNs}, where a set of six classifiers, namely EfficientNets, were developed by scaling-up a small baseline CNN, namely EfficientNet-B0. The architecture of B0 resulted from a multi-objective neural architecture search inspired by \cite{NeuralArchitectureSearch2}. While previous works scaled-up single dimensions, e.g. the depth \cite{ScalingUpByDepth} or the width \cite{ScalingUpByWidth}, Tan et al. \cite{ScalingUpCNNs} jointly increased depth, width and resolution by a constant ratio and empirically showed the benefit in terms of efficiency given by jointly scaling all the network dimensions. \noindent \textbf{BiFPN.} The BiFPN layer was proposed in \cite{EfficientDets} to merge features from different scales generated through a pyramidal feature hierarchy. It can be seen as the successor of three previous approaches for cross-scale feature fusion: FPN \cite{ForBiFPN1} proposed a top-down pathway on top of a bottom-up pyramid feature hierarchy; then PANet \cite{ForBiFPN2} added a further bottom-up layer that merges features from both FPN pathways; finally \cite{ForBiFPN3} used the neural architecture search algorithm \cite{NeuralArchitectureSearch} to define an architectural topology of a block for cross-scale feature representation fusion whose copies can be stacked multiple times. BiFPN takes the top-down pathway from FPN, the extra bottom-up layer from PANet and the use of a repeatable block from \cite{ForBiFPN3}. While previous methods treated equally the features from different scales, another peculiarity of BiFPN is that cross-scale features are weighed depending on their resolution \cite{EfficientDets}. \noindent \textbf{Detector Scaling.} Scaling-up object detectors is typically based on using a bigger backbone network or stacking more FPN layers \cite{ForBiFPN3}. An object detector has generally more dimensions than a classifier since it extends the prediction capability of the latter, hence the grid search done in \cite{ScalingUpCNNs} to define the scaling factors was replaced in \cite{EfficientDets} by an heuristic-based approach: a coefficient $\phi$ that determines at the same time the size of the backbone network, the resolution, the number of channels and the number of layers of the network building blocks. The rationale behind a uniform scaling across dimensions is that an higher resolution image requires deeper and wider architectures to capture the information contained in more pixels. This intuition was empirically supported in \cite{ScalingUpCNNs} as the increase of width without changing depth and resolution led to faster accuracy saturation compared to an all-dimension scaling. \vspace{0.5cm} The parent EfficientDet is D0, it is the smallest detector and made of 3 BiFPN layers, 3 convolutions in class/box sub-nets and B0 as the backbone. The other EfficientDets, namely D1-D7, result from scaling-up D0 according to the scaling factor $\phi = \{0, 1, \dots, 7\}$. The architecture size is augmented by jointly increasing all the dimensions: the input image resolution ($R^{in}$), the backbone network, the number of channels of the BiFPN layer and class/box nets ($W^{Bi,cb}$), the number of BiFPN layers ($D^{Bi}$) and the depth of the class/box prediction sub-nets ($D^{cb}$). Specifically, the re-scaling equations in Tan et al. \cite{EfficientDets} are \begin{equation} W^{Bi,cb} = 64 \cdot (1.35^\phi), \label{eq:TanWidth} \end{equation} \begin{equation} D^{Bi} = 3 + \phi, \label{eq:TanDepth1} \end{equation} \begin{equation} D^{cb} = 3 + \left \lfloor{\phi / 3}\right \rfloor \label{eq:TanDepth2} \end{equation} and \begin{equation} R^{in} = 512 + \phi \cdot 128. \label{eq:TanResolution} \end{equation} Since D0 corresponds to $\phi = 0$, from (\ref{eq:TanDepth1}) and (\ref{eq:TanDepth2}) follows that the parent network has 3 BiFPN layers and 3 convolutions in both class and box prediction nets, i.e. $D^{Bi} = D^{cb} = 3$. \subsection{Our Architectural Modifications} We observe that the design of D0 is particularly important as its efficiency affects the one of all the child detectors D1-D7 \cite{ScalingUpCNNs}. Hence, we investigate the performance of two variants of D0 by modifying (\ref{eq:TanDepth1}) and (\ref{eq:TanDepth2}) as \begin{equation} D^{Bi} = N^{Bi}_0 + \phi \end{equation} and \begin{equation} D^{cb} = N^{cb}_0 + \left \lfloor{\phi / 3}\right \rfloor, \end{equation} respectively. Here, $N^{Bi}_0$ is the number of BiFPN layers in D0 and $N^{cb}_0$ is the number of convolutional layers for box/class prediction nets in D0. While Tan et al. \cite{EfficientDets} set $(N^{Bi}_0$; $N^{cb}_0) = (3; 3)$, to keep approximately constant the complexity of any D0 variants we fix the sum $N^{Bi}_0 + N^{cb}_0 = 6$, then we investigate an architecture with ($N^{Bi}_0$; $N^{cb}_0$) = (1; 5) and one with ($N^{Bi}_0$; $N^{cb}_0$) = (5; 1) to see \emph{whether is more efficient having more BiFPN layers or more convolutions in the class/box subnets}. Figure \ref{fig:OurArchitectures} clarifies our modifications of D0, while Table \ref{tab:netsScaledFrom1-5} and Table \ref{tab:netsScaledFrom5-1} detail the scaling configurations of the detectors D0-D3 obtained by scaling-up D0(1-5) and D0(5-1), respectively. \begin{figure*}[t] \includegraphics[width=\textwidth]{Figures/Architectures} \centering \caption{Our architectural modifications of EfficientDet D0. The number of BiFPN layers (i.e. $N^{Bi}_0$) and the number of convolutional layers in class/box nets (i.e. $N^{cb}_0$) proposed in \cite{EfficientDets} for D0 are 3 and 3, respectively. We keep $N^{Bi}_0 + N^{cb}_0 = 6$ and change the layer distribution: ($N^{Bi}_0$; $N^{cb}_0$) = (1; 5) for the architecture D0(1-5) and ($N^{Bi}_0$; $N^{cb}_0$) = (5; 1) for the architecture D0(5-1).} \label{fig:OurArchitectures} \end{figure*} \begin{table} \centering \caption{Scaling configurations for EfficientDets D0-D3 considering D0(1-5) as the parent architecture. The architecture D$n$(1-5) is scaled considering $\phi = n$ in (\ref{eq:TanWidth}), (\ref{eq:TanDepth1}), (\ref{eq:TanDepth2}) and (\ref{eq:TanResolution}). The values in bold differ from \cite{EfficientDets} because influenced by our modifications.} \begin{tabular}{cccccc} \hline Architecture & \makecell{Input size \\ ${R^{in}}$} & \makecell{Backbone \\ network} & $W^{Bi,cb}$ & $D^{Bi}$ & $D^{cb}$\\ \hline D0(1-5) & 512 & B0 & 64 & \textbf{1} & \textbf{5}\\ D1(1-5) & 640 & B1 & 88 & \textbf{2} & \textbf{5}\\ D2(1-5) & 768 & B2 & 112 & \textbf{3} & \textbf{5}\\ D3(1-5) & 896 & B3 & 160 & \textbf{4} & \textbf{6}\\ \hline \end{tabular} \label{tab:netsScaledFrom1-5} \end{table} \begin{table} \centering \caption{Scaling configurations for EfficientDets D0-D3 considering D0(5-1) as the parent architecture. The architecture D$n$(5-1) is scaled considering $\phi = n$ in (\ref{eq:TanWidth}), (\ref{eq:TanDepth1}), (\ref{eq:TanDepth2}) and (\ref{eq:TanResolution}). The values in bold differ from \cite{EfficientDets} because influenced by our modifications.} \begin{tabular}{cccccc} \hline Architecture & \makecell{Input size \\ ${R^{in}}$} & \makecell{Backbone \\ network} & $W^{Bi,cb}$ & $D^{Bi}$ & $D^{cb}$\\ \hline D0(5-1) & 512 & B0 & 64 & \textbf{5} & \textbf{1}\\ D1(5-1) & 640 & B1 & 88 & \textbf{6} & \textbf{1}\\ D2(5-1) & 768 & B2 & 112 & \textbf{7} & \textbf{1}\\ D3(5-1) & 896 & B3 & 160 & \textbf{8} & \textbf{2}\\ \hline \end{tabular} \label{tab:netsScaledFrom5-1} \end{table} \section{EXPERIMENTS} \label{sec:Experiments} Any training covered in this letter were performed on Google Colaboratory, which randomly assigned us either an Nvidia Tesla P100 or an Nvidia Tesla T4 GPU. Shorter and longer trainings were performed with approximately 12 GB and 25 GB of RAM, respectively. We used SGD as optimizer and the EfficientNet backbones were initialized with ImageNet checkpoints; then the whole networks were trained on each target dataset: PASCAL VOC 2012 in Subsection \ref{sub:EffDetsBenchmarking} for benchmarking, Trash-ICRA19 and WPBB in Subsection \ref{sub:TrashAndWPBB} for marine debris detection. Most of the source code for this paper is a modification of the original TensorFlow implementation of EfficientDets\footnote{https://github.com/google/automl/tree/master/efficientdet}. We also used the MATLAB implementation of $\text{L}^2$UWE\footnote{https://github.com/tunai/l2uwe} for Subsection \ref{sub:UIE}. \subsection{Comparison of Original and Our EfficientDets}\label{sub:EffDetsBenchmarking} Given that \begin{enumerate} \item{the training of EfficientDets on PASCAL VOC 2012 is time consuming,} \item{the neural network training is not deterministic due to the random weight initialization,} \end{enumerate} initially we trained for 5 times the three variants of D0 (i.e. D0(3-3) from \cite{EfficientDets}, D0(1-5) and D0(5-1)) considering just 15 epochs. The results are shown in Table \ref{tab:Selection}. As D0(5-1) was the least accurate with a latency similar to the other candidates, we discarded D0(5-1) and re-trained the remaining detectors with a larger number of epochs, 25 precisely. The results are shown in Table \ref{tab:D0s}. \begin{table} \centering \caption{Accuracy and latency of the candidate variants of D0 trained for 15 epochs on PASCAL VOC 2012. The values reported are the best in 15 epochs (not necessarily at the last epoch).} \begin{tabular}{c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c|cc} \hline Training & Architecture & \multicolumn{3}{c}{Accuracy (\%)} & \multicolumn{2}{c}{Latency (ms)} \\ & & AP & AP50 & AP75 & GPU & CPU \\ \hline \multirow{3}{*}{1} & D0(3-3) & 11.2 & 21.9 & 10.4 & \multirow{12}{*}{D0(3-3)} & \multirow{12}{*}{D0(3-3)} \\ & D0(1-5) & 11.5 & 21.9 & 11.0 & \multirow{12}{*}{15} & \multirow{12}{*}{186} \\ & D0(5-1) & 9.2 & 17.9 & 8.6 & \multirow{14}{*}{D0(1-5)} & \multirow{14}{*}{D0(1-5)} \\ \cline{1-5} \multirow{3}{*}{2} & D0(3-3) & 10.0 & 19.6 & 9.2 & \multirow{14}{*}{15} & \multirow{14}{*}{179}\\ & D0(1-5) & 11.1 & 21.8 & 10.1 & \multirow{16}{*}{D0(5-1)} & \multirow{16}{*}{D0(5-1)}\\ & D0(5-1) & 7.7 & 15.3 & 6.8 & \multirow{16}{*}{16}& \multirow{16}{*}{209}\\ \cline{1-5} \multirow{3}{*}{3} & D0(3-3) & 10.8 & 21.1 & 9.9 &&\\ & D0(1-5) & 9.6 & 18.8 & 8.8 &&\\ & D0(5-1) & 9.4 & 18.5 & 8.7 &&\\ \cline{1-5} \multirow{3}{*}{4} & D0(3-3) & 11.7 & 22.7 & 10.8 &&\\ & D0(1-5) & 11.9 & 23.1 & 11.0 &&\\ & D0(5-1) & 8.4 & 16.9 & 7.6 &&\\ \cline{1-5} \multirow{3}{*}{5} & D0(3-3) & 11.7 & 23.4 & 10.6 &&\\ & D0(1-5) & 12.2 & 24.0 & 11.3 &&\\ & D0(5-1) & 9.8 & 19.6 & 8.7 &&\\ \cline{1-5} \multirow{3}{*}{Mean} & D0(3-3) & 11.1 & 21.7 & 10.2 &&\\ & \textbf{D0(1-5)} & $\bm{11.3}$ & $\bm{21.9}$ & $\bm{10.4}$ &&\\ & D0(5-1) & 8.9 & 17.6 & 8.1 &&\\ \hline \end{tabular} \label{tab:Selection} \end{table} \begin{table} \centering \caption{Accuracy and latency of the candidate variants of D0 trained for 25 epochs on PASCAL VOC 2012. The values are the best in 25 epochs (not necessarily at the last epoch).} \begin{tabular}{c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c|cc} \hline Training & Architecture & \multicolumn{3}{c}{Accuracy (\%)} & \multicolumn{2}{c}{Latency (ms)} \\ & & AP & AP50 & AP75 & GPU & CPU \\ \hline \multirow{2}{*}{1} & D0(3-3) & 19.9 & 35.8 & 20.5 & \multirow{8}{*}{D0(3-3)} & \multirow{8}{*}{D0(3-3)}\\ & D0(1-5) & 23.5 & 41.4 & 24.2 & \multirow{8}{*}{15} & \multirow{8}{*}{186}\\ \cline{1-5} \multirow{2}{*}{2} & D0(3-3) & 23.9 & 44.2 & 23.5 & \multirow{10}{*}{D0(1-5)} & \multirow{10}{*}{D0(1-5)}\\ & D0(1-5) & 23.7 & 42.6 & 24.3 & \multirow{10}{*}{15} & \multirow{10}{*}{179}\\ \cline{1-5} \multirow{2}{*}{3} & D0(3-3) & 24.4 & 43.3 & 25.6 &&\\ & D0(1-5) & 23.9 & 42.8 & 24.5 &&\\ \cline{1-5} \multirow{2}{*}{4} & D0(3-3) & 22.0 & 40.0 & 22.4 &&\\ & D0(1-5) & 25.2 & 44.8 & 25.7 &&\\ \cline{1-5} \multirow{2}{*}{5} & D0(3-3) & 22.5 & 40.7 & 23.0 &&\\ & D0(1-5) & 23.6 & 41.3 & 24.9 &&\\ \cline{1-5} \multirow{2}{*}{Mean} & D0(3-3) & 22.5 & 40.8 & 23.0 &&\\ & \textbf{D0(1-5)} & \textbf{24.0} & \textbf{42.6} & \textbf{24.7} &&\\ \hline \end{tabular} \label{tab:D0s} \end{table} Both Table \ref{tab:Selection} and Table \ref{tab:D0s} show that \emph{the detector with deeper class/box subnets and lowest number of BiFPN layers}, i.e. D0(1-5), \emph{is the most accurate}. Therefore, we have deduced that it was more efficient to have less BiFPN layers and that the detector D0(3-3) proposed in \cite{EfficientDets} has an intermediate performance between D0(1-5) and D0(5-1). Subsequently, we scaled-up the parent detectors D0(1-5) and D0(3-3) considering $\phi = 3$ to get D3(1-5) and D3(3-3), respectively, and performed 3 trainings with 29 epochs. As visible in Table \ref{tab:D3s}, D3(1-5) is 1.3\% more accurate than D0(3-3) on average while having a comparable latency. These results further supported that \emph{more convolutions in the class/box subnets are more efficient than more BiFPN layers}. \begin{table} \centering \caption{Accuracy and latency of the candidate variants of D3 trained for 29 epochs on PASCAL VOC 2012. The values are the best in 29 epochs (not necessarily at the last epoch).} \begin{tabular}{c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c|cc} \hline Training & Architecture & \multicolumn{3}{c}{Accuracy (\%)} & \multicolumn{2}{c}{Latency (ms)} \\ & & AP & AP50 & AP75 & GPU & CPU \\ \hline \multirow{2}{*}{1} & D3(3-3) & 46.0 & 74.0 & 50.4 & \multirow{4}{*}{D3(3-3)} & \multirow{4}{*}{D3(3-3)}\\ & D3(1-5) & 47.2 & 74.7 & 52.3 & \multirow{4}{*}{48} & \multirow{4}{*}{1337}\\ \cline{1-5} \multirow{2}{*}{2} & D3(3-3) & 45.0 & 72.5 & 49.5 & \multirow{6}{*}{D3(1-5)} & \multirow{6}{*}{D3(1-5)}\\ & D3(1-5) & 46.5 & 74.1 & 51.5 & \multirow{6}{*}{48} & \multirow{6}{*}{1423}\\ \cline{1-5} \multirow{2}{*}{3} & D3(3-3) & 44.3 & 71.7 & 49.0 &&\\ & D3(1-5) & 45.6 & 72.9 & 51.3 &&\\ \cline{1-5} \multirow{2}{*}{Mean} & D3(3-3) & 45.1 & 72.7 & 49.6 &&\\ & D3(1-5) & \textbf{46.4} & \textbf{73.9} & \textbf{51.7} &&\\ \hline \end{tabular} \label{tab:D3s} \end{table} Then, we considered $\phi = 1$ and $\phi = 2$ to compare the detectors D1 and D2 and the results are reported in Table \ref{tab:D1andD2}. A summary of the comparison between the standard EfficientDets achieved by scaling-up D0(3-3) and our variants achieved by scaling-up D0(1-5) is depicted in Fig. \ref{fig:mAPvsLatency} where it is visible the efficiency improvement with less BiFPN layers. \begin{table} \centering \caption{Accuracy and latency of the candidate variants of D1 and D2 on PASCAL VOC 2012. The values are the best in the total number of epochs (not necessarily at the last epoch).} \begin{tabular}{c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c@{\hskip 0.1in}c|cc} \hline Architecture & \# epochs & \multicolumn{3}{c}{Accuracy (\%)} & \multicolumn{2}{c}{Latency (ms)} \\ & & AP & AP50 & AP75 & GPU & CPU \\ \hline D1(3-3) & 35 & 37.0 & 62.9 & 39.7 & 21 & 394\\ D1(1-5) & 35 & \textbf{39.6} & \textbf{65.6} & \textbf{42.9} & 21 & 395\\ \hline D2(3-3) & 29 & 37.1 & 62.9 & 40.1 & 29 & 656\\ D2(1-5) & 29 & \textbf{38.3} & \textbf{64.6} & \textbf{41.9} & 29 & 652\\ \hline \end{tabular} \label{tab:D1andD2} \end{table} \subsection{On Marine Debris Datasets}\label{sub:TrashAndWPBB} Subsequently, we trained our improved EfficientDets on the first 1200 samples of the Trash-ICRA19 \cite{TrashDataset}, which is a publicly available annotated dataset for object detection targeting three classes: ``bio'' for biological items, ``plastic'' for plastic items and ``rov'' for remotely operated vehicles. The results are in Table \ref{tab:ResultsWithTrashDataset}, where 35 epochs have been chosen for demonstration purposes; when training a model for deployment on an AUV, the number of epochs should be increased in order to reach the learning saturation. With 35 epochs, we reached 47.4\% AP with D0 and 49.6\% AP with D3. In terms of latency, there is a significant difference between the use of a CPU and a GPU. On a CPU, all detectors take more than 0.1 seconds and D3 requires more than 1 second to process a single image. A GPU on-board would certainly speed-up the detection, however the speed-up will likely be smaller than the one reported in \ref{tab:ResultsWithTrashDataset} because the GPU on the AUV will likely be less powerful than Google Colab GPUs (for this work, we were randomly assigned either a Tesla P100 or a Tesla T4). Note that, along with speeding-up the detectors, a GPU reduces also the increase of latency with the scaling-up of the detector size: while on a CPU D1, D2 and D3 are 2.2, 3.6 and 7.9 times slower than D0, respectively, on a GPU these ratios become 1.4, 1.9 and 3.2, respectively. \begin{table} \centering \caption{Accuracy and latency of the proposed detectors on Trash-ICRA19 dataset along with the number of epochs used for training. Architectural details are in Table \ref{tab:netsScaledFrom1-5}.} \begin{tabular}{ccccc|cc} \hline Architecture & \# epochs & \multicolumn{3}{c}{Accuracy (\%)} & \multicolumn{2}{c}{Latency (ms)} \\ & & AP & AP50 & AP75 & GPU & CPU \\ \hline D0(1-5) & 35 & 47.4 & 60.0 & 56.0 & 15 & 179\\ \hline D1(1-5) & 35 & 49.0 & 60.0 & 56.5 & 21 & 395\\ \hline D2(1-5) & 35 & 43.7 & 52.5 & 49.7 & 29 & 652\\ \hline D3(1-5) & 35 & 49.6 & 56.9 & 55.5 & 48 & 1423\\ \hline \end{tabular} \label{tab:ResultsWithTrashDataset} \end{table} Since Trash-ICRA19 does not specify the type of plastic object detected, we created the in-water plastic bags and bottles (WPBB) dataset, which has 900 fully annotated images: 500 depict bags and 400 depict bottles. The images where generated by selecting the frames of videos recorded in a testing pool located in the David Keir Building at Queen's University Belfast. The videos of underwater waste items were recorded with a Chasing Dory drone as shown in the bottom left of Fig. \ref{fig:AutomationOfAUV}. We trained our EfficientDets on WPBB achieving approximately 15\% AP as detailed in Table \ref{tab:ResultsWithWPBBDataset}. Examples of detections are shown in Fig. \ref{fig:DebrisDetectionsPBBW} with different values of confidence threshold $\tau$, which is an hyperparameter meaning that detections with a confidence lower than $\tau$ are ignored. \begin{table} \centering \caption{Accuracy and latency of the proposed detectors on WPBB dataset along with the number of epochs used for training. Architectural details are in Table \ref{tab:netsScaledFrom1-5}.} \begin{tabular}{ccccc|cc} \hline Architecture & \# epochs & \multicolumn{3}{c}{Accuracy (\%)} & \multicolumn{2}{c}{Latency (ms)} \\ & & AP & AP50 & AP75 & GPU & CPU \\ \hline D0(1-5) & 50 & 15.1 & 17.3 & 16.8 & 15 & 179\\ \hline D1(1-5) & 40 & 14.8 & 17.3 & 16.8 & 21 & 395\\ \hline D2(1-5) & 40 & 14.9 & 17.3 & 17.3 & 29 & 652\\ \hline D3(1-5) & 36 & 15.2 & 17.3 & 16.8 & 48 & 1423\\ \hline \end{tabular} \label{tab:ResultsWithWPBBDataset} \end{table} \begin{figure*} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Figures/d0_PBBW_a_conf40} \caption{D0(1-5), $\tau = 40\%$} \label{fig:DebrisDetection3-D0} \vspace{5mm} \end{subfigure \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Figures/d1_PBBW_a_conf95} \caption{D1(1-5), $\tau = 95\%$} \label{fig:DebrisDetection3-D1} \vspace{5mm} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Figures/d2_PBBW_a_conf95} \caption{D2(1-5), $\tau = 95\%$} \label{fig:DebrisDetection3-D2} \vspace{5mm} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Figures/d3_PBBW_a_conf95} \caption{D3(1-5), $\tau = 95\%$} \label{fig:DebrisDetection3-D3} \vspace{5mm} \end{subfigure} \\ \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Figures/d0_PBBW_b_conf95} \caption{D0(1-5), $\tau = 95\%$} \label{fig:DebrisDetection4-D0} \end{subfigure \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Figures/d1_PBBW_b_conf95} \caption{D1(1-5), $\tau = 95\%$} \label{fig:DebrisDetection4-D1} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Figures/d2_PBBW_b_conf95} \caption{D2(1-5), $\tau = 95\%$} \label{fig:DebrisDetection4-D2} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[width=.99\linewidth]{Figures/d3_PBBW_b_conf95} \caption{D3(1-5), $\tau = 95\%$} \label{fig:DebrisDetection4-D3} \end{subfigure} \caption{Examples of detections on the WPBB dataset performed by D0(1-5) and its derivations; $\tau$ is the minimum confidence. The models producing these inferences are in Table \ref{tab:ResultsWithWPBBDataset}.} \label{fig:DebrisDetectionsPBBW} \end{figure*} \subsection{Low-Light Underwater Conditions}\label{sub:UIE} Water depth or the use of AUVs at night time could lead to low-light underwater images recorded by the on-board RGB camera. In this subsection, we investigate how low-light conditions could affect object detection and how the low-light image enhancement method proposed in \cite{L2UWE} works when used as pre-processing step before an object detector as described in Fig. \ref{fig:AutomationOfAUV}. To simulate low-light conditions, we modified the first 300 samples of Trash-ICRA19 by simply subtracting a constant value from all the pixels of each image/sample, that is, \begin{equation} I^{s}_{d} = I^{s} - 120, \end{equation} where $I^{s}_{d}$ is the darkened version of the $s$-th image $I^{s}$ of Trash-ICRA19 and $s = {1, 2, \dots, 299, 300}$. Two examples of the outcome are the first two rows of Fig. \ref{fig:EnhancedImages}. Then, we considered four scenarios to compare the inference accuracy of the detectors trained previously on Trash-ICRA19 (see Table \ref{tab:ResultsWithTrashDataset}): \begin{enumerate} \item{No any low-light image enhancement method is used, i.e. the darkened images are the input of the detectors.} \item{The low-light image enhancement method proposed in \cite{L2UWE} (named ``$\text{L}^2$UWE'') improves the darkened images, which are then given as input to a detector.} \item{The darkened images are enhanced by adding a constant value $c = 40$ to all the pixels, i.e. $I^{s}_{e} = I^{s}_{d} + c$, where $I^{s}_{e}$ is the enhanced version of $I^{s}_{d}$; then, $I^{s}_{e}$ is given as input to a detector.} \item{The same as the previous scenario, but using $c = 80$.} \end{enumerate} A qualitative assessment of the outcome of $\text{L}^2$UWE and $c = 40$ is shown in Fig. \ref{fig:EnhancedImages} where it is visible the difference between the non-uniform and the uniform increase of light given by the former and the latter, respectively. \begin{figure} \includegraphics[width=0.43\textwidth]{Figures/EnhancedImages} \centering \caption{Examples of images used to evaluate the performance of low-light underwater image enhancement strategies as pre-processing step before an object detector as in Fig. \ref{fig:AutomationOfAUV}.} \label{fig:EnhancedImages} \end{figure} The result considering the four scenarios are in Table \ref{tab:withEnhancements} along with the computational time required by each enhancement method to improve a single image. \begin{table} \centering \caption{Detection accuracy without and with enhancement methods along with the latency of each enhancement method, i.e. the computational time to enhance a single image.} \begin{tabular}{ccccc|c} \hline Detector & Enh. method & AP & AP50 & AP75 & CPU latency (s)\\ \hline \multirow{4}{*}{D0(1-5)} & None & 31.4 & 44.9 & 36.8 & \multirow{10}{*}{$\text{L}^2$UWE}\\ & $\text{L}^2$UWE & 39.4 & 49.2 & 44.4 & \multirow{10}{*}{11.1}\\ & $c$ = 40 & 50.5 & 62.8 & 58.7 & \multirow{18}{*}{$c$ = constant}\\ & $c$ = 80 & \textbf{56.1} & \textbf{70.5} & \textbf{66.4} & \multirow{18}{*}{\textbf{0.02}}\\ \cline{1-5} \multirow{4}{*}{D1(1-5)} & None & 5.8 & 19.0 & 3.7 &\\ & $\text{L}^2$UWE & 38.9 & 54.6 & 43.9 &\\ & $c$ = 40 & 41.3 & 56.0 & 47.9 &\\ & $c$ = 80 & \textbf{51.3} & \textbf{73.9} & \textbf{61.1} &\\ \cline{1-5} \multirow{4}{*}{D2(1-5)} & None & 7.2 & 12.6 & 6.6 &\\ & $\text{L}^2$UWE & 51.3 & 65.3 & 57.5 &\\ & $c$ = 40 & 54.8 & 70.5 & 61.3 &\\ & $c$ = 80 & \textbf{59.0} & \textbf{71.9} & \textbf{68.2} &\\ \cline{1-5} \multirow{4}{*}{D3(1-5)} & None & 20.5 & 28.9 & 22.3 &\\ & $\text{L}^2$UWE & 57.8 & \textbf{70.9} & 66.6 &\\ & $c$ = 40 & 47.7 & 62.2 & 50.4 &\\ & $c$ = 80 & \textbf{61.4} & 69.4 & \textbf{68.3} &\\ \hline \end{tabular} \label{tab:withEnhancements} \end{table} As mentioned in the Introduction section, the first of the two questions we aim to answer is: ``Is it more efficient scaling-up the object detector size (e.g. from D0(1-5) to D3(1-5)) or adding an underwater image enhancement method before the smallest detector D0(1-5)?''. Table \ref{tab:withEnhancements} suggests that D0(1-5) is the most accurate detector if no enhancement methods are used (31.4\% AP with D0, 5.8\% AP with D1, 7.8\% AP with D2 and 20.5\% AP with D3), which is also the smallest. To improve its accuracy, an enhancement method could be used: $\text{L}^2$UWE provides an improvement of 8\% AP at the cost of 11.1 seconds to enhance a single image, $c = 40$ provides an improvement of 19.1\% AP at the cost of 20 milliseconds per image and $c = 80$ an improvement of 24.7\% AP at the cost of 20 milliseconds per image; hence, the total inference times on a CPU are 0.179 s + 11.1 s = 11.279 seconds with $\text{L}^2$UWE and 0.179 s + 0.020 s = 0.199 seconds with $c$-based methods. To further increase the accuracy, D0(1-5) could be scaled-up to D3(1-5): this results in a detector 7.9 times slower on a CPU (179 milliseconds with D0 vs. 1423 milliseconds with D3) and 3.2 times slower on a GPU, hence the total inference time per image with D3 becomes 1.423 s + 11.1 s = 12.523 seconds with $\text{L}^2$UWE and 1.423 s + 0.020 s = 1.443 seconds with $c$-based methods on a CPU, whereas the evaluation on a GPU requires the implementation of the enhancement methods on a GPU, which is currently not available. However, D3 can reach 61.4\% AP accuracy, which is the highest, but only 5.3\% AP higher than D0 (with 56.1\% AP). Note that the number of epochs chosen in this study to train the models can be increased and this should result in a larger difference in accuracy between D0 and D3. In conclusion, for real-time underwater marine detection using a CPU, the suggested detectors are D0(1-5) and D1(1-5) if trained for more epochs in order to leverage its bigger size compared to D0. If a GPU is available on-board, D2(1-5) and D3(1-5) could also be considered for real-time detection provided that they are trained with more epochs in order to fully exploit their potential. In case a low-light image enhancement is needed, $\text{L}^2$UWE is too computationally demanding for real-time applications (this is the answer to the second question we asked in the Introduction section). A more efficient approach is to tune the value of $c$ off-line using genetic algorithms or particle swarm optimization and subsequently use it for low-light real-time marine debris detection within an automation pipeline as the one in Fig. \ref{fig:AutomationOfAUV}. \section{CONCLUSION} \label{sec:Conclusions} As marine debris is an increasing concern for both human and wildlife health, AUVs could be used for debris removal. In this letter, the efficiency of debris detection is addressed by improving the efficiency of EfficientDets, creating and publishing a fully annotated dataset for in-water plastic bags and bottles, and finally considering low-light underwater conditions. Future work will consist of implementing the detectors on an AUV and integrate the detection outputs with the control system of both vehicle and gripper for debris removal. \addtolength{\textheight}{-4cm} \input{references.tex} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} Image data in robotics is subject to uncertainty, e.g., due to robot motion, or variations in lighting. To account for the uncertainty, it is not sufficient to apply deterministic algorithms that produce a single answer to a computer vision task. Rather, we are interested in the full Bayesian posterior probability distribution related to the task; e.g., given the input image data, how likely is it that a particular image segment corresponds to a real object? The posterior distribution enables quantitatively answering queries on relevant tasks which helps in decision making. For example, the robot more likely succeeds in a grasping action targeting an object proposal with a high probability of corresponding to an actual object~\cite{vanHoof2014,Pajarinen2015b}. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{fig/process.pdf} \caption{Overview of the object discovery approach. Superpixels in an initial oversegmentation (1) are grouped applying the distance dependent Chinese restaurant process (ddCRP) (2). Multiple segmentation samples are drawn from the ddCRP posterior distribution. Object proposals are extracted from the set of segmentation samples (3), and ranked according to how likely they correspond to an object (4).} \label{fig:overview} \end{figure} In this paper, we propose a method for object discovery based on the distance dependent Chinese restaurant process (ddCRP). In contrast to other approaches, we do not combine superpixels deterministically to generate object proposals, but instead place a ddCRP prior on clusters of superpixels, and then draw samples from the posterior given image data to generate proposals. This firstly increases the diversity of object proposals, and secondly enables calculation of a likelihood term for each proposal. We show that the likelihood term may be used to rank proposals according to their quality. Additionally, the likelihood term might be exploited by a mobile robot to plan its actions. An overview of our approach is shown in Fig.~\ref{fig:overview}. We begin with a superpixel oversegmentation of the input image, and then place a ddCRP prior on clusters of superpixels. The ddCRP hyperparameters are selected to encourage object proposal generation: clusters of superpixels with high internal similarity and external dissimilarity are preferred. We apply Markov chain Monte Carlo (MCMC) to draw samples of the posterior distribution on clusterings of superpixels. We extract all unique clusters which form our set of object proposals. We rank the object proposals according to the Gestalt principles of human object perception~\cite{Wagemans2012}. We propose to include the likelihood term, i.e., how often each proposal appears in the set of samples, as part of the ranking, and show that this effectively improves the quality of the proposals. The paper is organized as follows. Section~\ref{sec:related_work} reviews related work and states our contribution w.r.t. the state-of-the-art. In Sections~\ref{sec:the_distance_dependent_chinese_restaurant_process}-\ref{sec:gestalt_principles_for_object_discovery}, we present in detail the steps involved in the overall process shown in Fig.~\ref{fig:overview}. Section~\ref{sec:evaluation} describes an experimental evaluation of our approach. Section~\ref{sec:conclusion} concludes the paper. \section{Related work} \label{sec:related_work} Object discovery methods include window-scoring methods (e.g.~\cite{Alexe2012}) that slide a window over the image which is evaluated for its objectness, and segment-grouping methods (e.g.~\cite{Manen2013}), that start with an oversegmentation of the image and group these segments to obtain object proposals. Segment-grouping methods have the advantage of delivering object contours instead of only bounding boxes, which is especially important in applications such as robotics where the object of interest might have to be manipulated. We concentrate here on the segment-grouping approach. The segment-grouping approaches often start from an oversegmentation of the image into superpixels that are both spatially coherent and homogeneous with respect to desired criteria, e.g., texture or color. Object proposals are then generated by combining several superpixels together. For an overview of the various combination strategies we refer the reader to~\cite{Hosang2016}. Although some segment-grouping approaches such as e.g.~\cite{Manen2013} apply random sampling to generate object proposals, it is often not possible to estimate a likelihood value for a particular combination of superpixels, nor is it intuitively clear what the overall probability distribution over image segments is that is applied in the sampling. However, both these properties are useful in application domains such as robotics, where decisions are made based on the observed image data, see, e.g.,~\cite{vanHoof2014,Pajarinen2015b}. To address these limitations, we consider non-parametric Bayesian methods for superpixel clustering. Such methods have been previously applied to image segmentation with the aim of replicating human segmentation of images. For example, \cite{Ghosh2011} applies the distance dependent Chinese restaurant process (ddCRP) and \cite{Nakamura2012} proposes a hierarchical Dirichlet process Markov random field for the segmentation task. In~\cite{Ghosh2012}, multiple segmentation hypotheses are produced applying the spatially dependent Pitman-Yor process. Recent work applies a Poisson process with segment shape priors for segmentation~\cite{Ghanta2016}. In our work, similarly to~\cite{Ghosh2011}, we apply Markov chain Monte Carlo (MCMC) sampling from a ddCRP posterior to generate clusters of superpixels. However, in contrast to earlier work our main aim is object discovery. We tune our method especially towards this aim by setting the model hyperparameters to produce clusters of superpixels that have a strong link to human object perception as described by the Gestalt principles of human object perception~\cite{Wagemans2012}. \section{The distance dependent Chinese restaurant process} \label{sec:the_distance_dependent_chinese_restaurant_process} We first oversegment the input image into superpixels (step 1 in Fig.~\ref{fig:overview}). For each superpixel, we compute a feature vector $x_i$ that we define later. We generate object proposals by grouping superpixels together applying the distance dependent Chinese restaurant process (ddCRP)~\cite{Blei2011}, a distribution over partitions. \begin{figure}[t] \centering \includegraphics{fig/ddcrp_a.pdf} \caption{The distance dependent Chinese restaurant process. Customers corresponding to superpixels in the input image are denoted by the nodes $x_i$. The links between customers induce a table assignment which corresponds to a segmentation of the image.} \label{fig:ddcrp} \end{figure} The ddCRP is illustrated by an analogy where data points correspond to customers in a restaurant. Every customer links to another customer with whom they will sit at the same table. A partitioning is induced by this set of customer links: any two customers $i$ and $j$ are seated at the same table if $i$ can be reached from $j$ traversing the links between customers (regardless of link direction). Applied to object proposal generation, the image is the restaurant, the customers are superpixels, and the assignment of customers to tables corresponds to a segmentation of the image, with each table forming one object proposal -- see Fig.~\ref{fig:ddcrp} for an illustration. In the ddCRP, the probability that a customer links to another is proportional to the distance between customers. Let $c_i$ denote the index of the customer linked to by customer $i$, $d_{ij}$ the distance between customers $i$ and $j$, and $D$ the set of all such distances. The customer links are drawn conditioned on the distances, \begin{equation} p(c_i = j \mid D, f, \alpha) \propto \begin{cases} \alpha & \text{if }j = i \\ f(d_{ij}) & \text{if }j\neq i\end{cases}, \label{eq:ddcrp_prior} \end{equation} where $\alpha$ is a parameter defining the likelihood of self-links, and $f:[0,\infty)\to\mathbb{R}^+$ is a decay function that relates the distances between customers to the likelihood of them connecting to each other. We require $f$ to be non-increasing and $f(\infty) = 0$. We next define the posterior over customer links. Let $\boldsymbol{x} = x_{1:N}$ denote the collection of all data points. Denote by $\boldsymbol{c} = c_{1:N}$ the vector of customer links, and by $\boldsymbol{z}(\boldsymbol{c})$ the corresponding vector of assignments of customers to tables. Denote by $K \equiv K(\boldsymbol{c})$ the number of tables corresponding to link assignment $\boldsymbol{c}$. Furthermore, write $\boldsymbol{z}^k(\boldsymbol{c})$ for the set of all customers $i$ that are assigned to table $k\in \{1, \ldots K\}$. For each table $k$, we assume that the data $x_i$, $i \in \boldsymbol{z}^k(\boldsymbol{c})$, is generated from $p(\cdot \mid \theta_k)$. The parameter $\theta_k$ is assumed to be drawn from a base measure $G_0$, which may be considered a prior on $\theta$. Thus, the posterior is \begin{equation} p(\boldsymbol{c} \mid \boldsymbol{x}, D, f, \alpha, G_0) \propto \left(\prod\limits_{i=1}^N p(c_i\mid D, f, \alpha) \right) p(\boldsymbol{x} \mid \boldsymbol{z}(\boldsymbol{c}), G_0). \label{eq:ddcrp_posterior} \end{equation} The first term on the right hand side above is the ddCRP prior, and the second likelihood term is conditionally independent between the tables $k$: \begin{equation} p(\boldsymbol{x} \mid \boldsymbol{z}(\boldsymbol{c}), G_0) = \prod\limits_{k=1}^K p(\boldsymbol{x}_{\boldsymbol{z}^k(\boldsymbol{c})} \mid G_0), \label{eq:ddcrp_likelihood} \end{equation} where $\boldsymbol{x}_{\boldsymbol{z}^k(\boldsymbol{c})}$ denotes the collection of data points in table $k$ under link configuration $\boldsymbol{c}$. As the ddCRP places a prior on a combinatorial number of possible image segmentations, computing the posterior is not tractable. Instead, we apply Markov chain Monte Carlo (MCMC)~\cite[Sect.~24.2]{Murphy2012} to sample from the posterior given the model hyperparameters $\boldsymbol{\eta} = \{D, f, \alpha, G_0\}$. \paragraph{Sampling from the ddCRP posterior:} \label{ssub:gibbs_sampler_for_the_ddcrp} Sampling from the ddCRP corresponds to step 2 of Fig.~\ref{fig:overview}, and each individual sample corresponds to a segmentation of the input image - see Fig.~\ref{fig:seg_example}, left, for an example. We apply Gibbs sampling, a MCMC algorithm for drawing samples from high-dimensional probability density functions, introduced for the ddCRP in~\cite{Blei2011}. The idea is to sample each variable sequentially, conditioned on the values of all other variables in the distribution. Denote by $\boldsymbol{c}_{-i}$ the vector of link assignments excluding $c_i$. We sequentially sample a new link assignment $c_i^*$ for each customer $i$ conditioned on $\boldsymbol{c}_{-i}$ via \begin{equation} p( c_i^{*} \mid \boldsymbol{c}_{-i}, \boldsymbol{x}, \boldsymbol{\eta}) \propto p(c_i^* \mid D, f, \alpha) p(\boldsymbol{x} \mid \boldsymbol{z}(\boldsymbol{c}_{-i} \cup c_i^*), G_0). \label{eq:ddcrp_gibbs} \end{equation} The first right hand side term is the ddCRP prior of Eq.~\eqref{eq:ddcrp_prior}, and the second term is the marginal likelihood of the data under the partition $\boldsymbol{z}(\boldsymbol{c}_{-i} \cup c_i^*)$. The current link $c_i$ is first removed from the customer graph which may either cause no change in the table configuration, or split a table (c.f. Fig.~\ref{fig:ddcrp}). Then, reasoning about the effect that a potential new link $c_i^*$ would have on the table configuration, it can be shown that~\cite{Blei2011} \begin{equation} p( c_i^{*} \mid \boldsymbol{c}_{-i}, \boldsymbol{x}, \boldsymbol{\eta}) \propto \begin{cases} \alpha & \text{if } c_i^* = i\\ f(d_{ij}) & \text{if } c_i^*=j \text{ does not join two tables}\\ f(d_{ij}) L(\boldsymbol{x}, \boldsymbol{z}, G_0) & \text{if } c_i^* = j \text{ joins tables } k \text{ and } l, \end{cases} \label{eq:ddcrp_gibbs} \end{equation} where \begin{equation} L(\boldsymbol{x}, \boldsymbol{z}, G_0) = \frac{p(\boldsymbol{x}_{ \boldsymbol{z}^k(\boldsymbol{c}_{-i}) \cup \boldsymbol{z}^l(\boldsymbol{c}_{-i}) } \mid G_0) }{ p(\boldsymbol{x}_{ \boldsymbol{z}^k(\boldsymbol{c}_{-i})}\mid G_0) p(\boldsymbol{x}_{ \boldsymbol{z}^l(\boldsymbol{c}_{-i})}\mid G_0) }. \label{eq:likelihood_ratio} \end{equation} The terms in the nominator and denominator can be computed via \begin{equation} p(\boldsymbol{x}_{\boldsymbol{z}^k(\boldsymbol{c})} \mid G_0) = \int \left( \prod\limits_{i \in \boldsymbol{z}^k(\boldsymbol{c})} p(x_i \mid \theta)\right) p(\theta \mid G_0) d\theta. \label{eq:cluster_likelihood} \end{equation} Recall that we interpret the base measure $G_0$ as a prior over the parameters: $G_0 \equiv p(\theta)$. If $p(\theta)$ and $p(x\mid \theta)$ form a conjugate pair, the integral is usually straightforward to compute. \section{Object proposal generation and likelihood estimation} \label{sec:object_proposal_generation} We extract a set of object proposals (step 3 in Fig.~\ref{fig:overview}) from samples drawn from the ddCRP posterior. Furthermore, we associate with each proposal an estimate of its likelihood of occurrence. As proposals are clusters of superpixels, we use here notation $s_i$ to refer to superpixels instead of their feature vectors $x_i$. To sample a customer assignment $\boldsymbol{c}$ from the ddCRP posterior, we draw a sample from Eq.~\eqref{eq:ddcrp_gibbs} for each $i=1,\ldots,N$. Denote by $\boldsymbol{c}_j$ the $j$th sample, and by $K_j \equiv K(\boldsymbol{c}_j)$ the number of tables in the corresponding table assignment. We can view $\boldsymbol{c}_j$ as a segmentation of the input image, $\bigcup\limits_{k=1}^{K_j} S_{j,k}$, where $S_{j,k} = \{ s_i \mid i \in \boldsymbol{z}^k(\boldsymbol{c}_j) \}$ is the set of superpixels assigned to table $k$ by $\boldsymbol{c}_j$. E.g., in Fig.~\ref{fig:ddcrp}, we would have $S_{j,1} = \{s_1, s_2, s_3\}$, $S_{j,2} = \{s_4, s_5\}$, and $S_{j,3} = \{s_6, s_7 \}$. We sample $M$ customer assignments $\boldsymbol{c}_j$, $j=1, \ldots, M$, and write $S_j = \{ S_{j,1}, S_{j,2}, \ldots, S_{j, K_j} \}$ as the set of segments in the $j$th customer assignment. E.g., for the case of Fig.~\ref{fig:ddcrp}, we have $S_j = \{ S_{j,1}, S_{j,2}, S_{j,3}\}$ $=\{ \{s_1,s_2,s_3\}$, $\{s_4,s_5\}$, $\{s_6,s_7\} \}$. The set $O$ of object proposals is obtained by keeping all unique segments observed among the sampled customer assignments: $O = \bigcup\limits_{j=1}^{M} S_j$. Each proposal $o\in O$ appears in at least one and in at most $M$ of the assignments $S_j$, $j=1,\ldots,M$. We estimate the likelihood of each proposal by \begin{equation} P(o) = \left. \left[ \sum\limits_{j=1}^{M} \mathbbm{1}\left( o \in S_j \right) \right] \middle/ \left[ \sum\limits_{j=1}^{M}|S_j| \right] \right., \label{eq:proposal_likelihood} \end{equation} where $\mathbbm{1}(A)$ is an indicator function for event $A$, and $|\cdot|$ denotes set cardinality. Fig.~\ref{fig:seg_example} illustrates the likelihood values for the proposals. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig/segmentation_example.png} \caption{Left: an example of a segmentation result from the ddCRP. Each segment is a proposal $o$. Right: The corresponding proposal likelihood estimates $P(o)$.} \label{fig:seg_example} \end{figure} \section{Gestalt principles for object discovery} \label{sec:gestalt_principles_for_object_discovery} We select the hyperparameters $\boldsymbol{\eta} = \{D, f, \alpha, G_0\}$ to promote two important principles: objects tend to have \emph{internal consistency} while also exhibiting \emph{contrast} against their background. This ensures that the proposal set $O$ contains segments that are likely to correspond to objects. As $O$ contains segments from all parts of the image, there are certainly also segments that belong to the background and contain no objects. To mitigate this drawback, we rank the proposals in $O$ and output them in a best-first order. For ranking, we calculate a set of scores from the proposals based on properties such as convexity and symmetry, that have also been shown to have a strong connection to object perception~\cite{Wagemans2012}. Next, we describe the superpixel feature extraction, the selection of the ddCRP hyperparameters, and the ranking of object proposals (step 4 of Fig.~\ref{fig:overview}). \paragraph{Feature extraction:} \label{ssub:feature_extraction} We compute three feature maps from the input image as in: the grayscale intensity $I$, and the red-green and blue-yellow color contrast maps $RG$ and $BY$, respectively. The feature vector $x_i$ for superpixel $i$ is \begin{equation} x_i = \begin{bmatrix} x_{i,I} & x_{i,RG} & x_{i,BY} & x_{i,avg} \end{bmatrix}^\mathrm{T}, \end{equation} where $x_{i,I}$, $x_{i,RG}$, and $x_{i,BY}$ are the 16-bin normalized histograms of the intensity, red-green, and blue-yellow contrast maps, respectively, and $x_{i,avg}$ is the average RGB color value in the superpixel. \paragraph{Hyperparameter selection:} \label{ssub:hyperparameter_selection} We incorporate contrast and consistency via the distance function $d$ and the base measure $G_0$, respectively. The distance function $d$ and the decay function $f$ determine how likely it is to link two data points. We impose a condition that only superpixels that share a border may be directly linked together. Also, superpixels with similar contrast features should be more likely to be linked. We define our distance function as \begin{equation} d(i,j) = \begin{cases} \infty & \text{if } s_i \text{ and } s_j \text{ are not adjacent}\\ \sum\limits_{n\in \{I, RG, BY\}} w_n \cdot v(x_{i,n}, x_{j,n}) & \text{otherwise} \end{cases}, \label{eq:distance} \end{equation} where $v(x, y) = \frac{1}{2} ||x-y||_1$ is the total variation distance, and $w_n$ is a weight for feature $n \in \{I, RG, BY\}$, s.t. $\sum_n w_n = 1$. The distance function $d$ has values in the range $[0,1]$, or the value $\infty$. The weights $w_n$ may be tuned to emphasize certain types of contrasts, but in our experiments we set all to $1/3$. We set an exponential decay function $f(d) = \exp(-d/a)$, where $a > 0$ is a design hyperparameter, to make it more likely to link to similar superpixels. We encourage internal consistency in the segments by setting the base measure $G_0$. For the likelihood terms in Eq.~\eqref{eq:cluster_likelihood}, we only consider the average RGB color feature $x_{i,avg}$ of the superpixels\footnote{The other elements of the feature vector are considered via the distance function $d$.}, which is a 3-dimensional vector. We set a multivariate Gaussian cluster likelihood model $p(x_{i,avg} \mid \theta) = N(x_{i,avg}; \mu, \Sigma)$. The model parameters are $\theta = \{\mu, \Sigma\}$, where $\mu$ and $\Sigma$ are the mean vector and covariance matrix, respectively. We apply the Normal-inverse-Wishart distribution as a conjugate prior~\cite[Sect.~4.6.3]{Murphy2012}, i.e. $p(\theta \mid G_0) = NIW(\theta \mid m_0, \kappa_0, v_0, S_0) = N(\mu\mid m_0, \frac{1}{\kappa_0}\Sigma)\cdot IW(\Sigma\mid S_0, v_0)$. Here, $m_0$, $\kappa_0$, indicate our prior mean for $\mu$ and how strongly we believe in this prior, respectively, and $S_0$ is proportional to the prior mean for $\Sigma$ and $v_0$ indicates the strength of this prior. With this choice, adjacent superpixels with similar average RGB colors have a high likelihood of belonging to the same table in the ddCRP. \paragraph{Object proposal ranking:} \label{sub:object_proposal_ranking} Similarly as in~\cite{Werner2015}, for each object proposal $o\in O$, we compute the following Gestalt measures that have been shown to have a relation to human object perception~\cite{Wagemans2012}: \begin{itemize} \item symmetry, calculated by measuring the overlaps $l_1$ and $l_2$ between the object proposal $o$ and its mirror images along both of its principal axes, i.e., eigenvectors of its scatter matrix. We use the symmetry measures $\frac{\lambda_1 l_1 + \lambda_2 l_2}{\lambda_1 + \lambda_2}$ and $\max \{ l_1, l_2\}$, where $\lambda_i$ are the eigenvalues of the scatter matrix, \item solidity, the ratio of the area of the convex hull of $o$ to the area of $o$ itself, \item convexity, the ratio of the proposal's boundary length and the boundary length of its convex hull, \item compactness, the ratio of the area of $o$ to the squared distance around the boundary of $o$, i.e., its perimeter, \item eccentricity, the ratio of the distance between the foci of the ellipse encompassing $o$ and its major axis length, and \item centroid distance, the average distance from the centroid of the proposal to its boundary. \end{itemize} As in~\cite{Werner2015}, we apply the first sequence of the KOD dataset~\cite{Horbert2015} to train a support vector machine (SVM) regression model~\cite[Sect.~14.5]{Murphy2012} from the Gestalt measures of a proposal $o$ to the intersection-over-union (IoU) of $o$ with the ground truth objects. Applying the SVM, we can predict a score $s(o)$ for any object detection proposal in $O$. The proposals with the highest score are deemed most likely to correspond to an actual object. We propose a weighted variant of this score taking into account the likelihood (Eq.~\eqref{eq:proposal_likelihood}): \begin{equation} s_w(o) = P(o)s(o). \end{equation} The rationale for this definition is that we would like to give higher priority to object proposals that 1) have a high score $s(o)$ and 2) appear often in the segmentations, indicating robustness with respect to internal consistency and external contrasts as defined via our model hyperparameters. For example in Fig.~\ref{fig:seg_example}, the scores of proposals with high $P(o)$, i.e., proposals that appear in many samples from the ddCRP, are given higher priority. As an optional step, we add non-maxima suppression (NMS) for duplicate removal: iterating over all object proposals $o$ in descending order of score, all lower ranked proposals with an IoU value greater than 0.5 with $o$ are pruned. \section{Evaluation} \label{sec:evaluation} We evaluate our object proposal generation method on the Kitchen Object Discovery (KOD) dataset~\cite{Horbert2015}. We select this dataset as it contains sequences from challenging cluttered scenes with many objects (approximately 600 frames and 80 objects per sequence). This makes it more suitable for our envisioned application area of robotics than other datasets consisting mostly of single images. Ground truth labels indicate the true objects for every 30\textsuperscript{th} frame. We tuned our method and trained the proposal scoring SVM on the first sequence of the data set, and apply it to the remaining four sequences, labeled Kitchen A, B, C, and D, for testing. For superpixel generation, we apply the SLIC algorithm~\cite{Achanta2012} with a target of 1000 superpixels with a compactness of 45. Features for superpixels are computed as described in Section~\ref{sec:gestalt_principles_for_object_discovery}. We set a self-link likelihood as $\log \alpha = 0$. For the exponential decay function $f(d)=\exp(-d/a)$, we set $a = 0.05$. For the base measure, we set $m_0 = \begin{bmatrix}1 & 1 & 1 \end{bmatrix}^T$ with a low confidence $\kappa_0 = 0.1$, and $S_0 = 10 \cdot \text{I}_{3\times 3}$ with $v_0 = 5$. For each image, we draw $M=50$ samples of segmentations applying the ddCRP. Samples from a burn-in period of 50 samples were first discarded to ensure the underlying Markov chain enters its stationary distribution. We rank the proposals applying the score $s(o)$ or the likelihood-weighted score $s_w(o)$, and return up to 200 proposals with the highest score. Before ranking we removed proposals larger than 10\% or smaller than 0.1\% of the image size. We compare our method to the saliency-guided object candidates (SGO) of~\cite{Werner2015}, the objectness measure (OM) of~\cite{Alexe2012}, and the randomized Prim's algorithm (RP) of~\cite{Manen2013}. SGO is a recent method that performs well on the KOD dataset. The other two methods are representatives of the window-scoring (OM) and segment-grouping (RP) streams of object discovery methods. We measure precision and recall in terms of the number of valid object proposals that have IoU $\geq$ 0.5 with the ground truth. As OM outputs proposals as bounding boxes, we evaluate all methods with bounding boxes for a fair comparison. We define the bounding box of a proposal as the smallest rectangle enclosing the whole proposal. \begin{table}[t] \centering \caption{Area under curve (AUC) values for precision and recall averaged over all frames on the test data sequences labeled A through D, and averaged over all test sequences. ``Weighted'' refers to using the score $s_w(o)$, ``plain'' to using the score $s(o)$. Non-maxima suppression (NMS) was applied in all cases. The greatest values for each sequence are shown in a bold font.} \label{tab:auc} \begin{tabular}{@{}ccccccccccccccc@{}} \toprule & \multicolumn{2}{c}{Kitchen A} & & \multicolumn{2}{c}{Kitchen B} & & \multicolumn{2}{c}{Kitchen C} & & \multicolumn{2}{c}{Kitchen D} & & \multicolumn{2}{c}{Average}\\ \cmidrule(lr){2-3} \cmidrule(lr){5-6} \cmidrule(lr){8-9} \cmidrule(l){11-12} \cmidrule(l){14-15} & Prec. & Rec. & & Prec. & Rec. & & Prec. & Rec. & & Prec. & Rec. & & Prec. & Rec. \\ \midrule Ours (weighted) & \textbf{19.4} & \textbf{93.3}& & 25.2 & \textbf{86.0}& & 12.1 & \textbf{86.7}& & 26.7 & 47.4 & & \textbf{20.8} & \textbf{78.3} \\ Ours (plain) & 16.8 & 83.1 & & 22.2 & 79.1 & & 11.8 & 85.3 & & \textbf{27.9} & 49.2 & & 19.7 & 74.2 \\ SGO~\cite{Werner2015} & 9.8 & 60.9 & & \textbf{25.3} & 85.5 & & 9.6 & 81.6 & & \textbf{27.9} & \textbf{51.9} & & 18.2 & 70.0 \\ OM~\cite{Alexe2012} & 11.5 & 45.7 & & 14.7 & 44.4 & & \textbf{18.1} & 83.8 & & 8.6 & 17.2 & & 13.2 & 47.8 \\ RP~\cite{Manen2013} & 11.1 & 61.2 & & 12.3 & 46.0 & & 12.0 & 70.0 & & 11.8 & 25.2 & & 11.8 & 50.6 \\ \bottomrule \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig/kitchen_C_summary.pdf} \caption{From left to right: recall, precision, and global recall (fraction of all objects in the sequence detected) averaged over all frames in the Kitchen C sequence. The results are shown as a function of the number of best-ranked proposals considered.} \label{fig:kitchen_c} \end{figure} The results are summarized in Table~\ref{tab:auc}. As shown by the average column, the proposed method with likelihood weighting performs best both in terms of precision and recall. With the plain scoring we still slightly outperform SGO, OM, and RP. On individual sequences, we reach the performance of SGO on sequences B and D, while outperforming it on A and C. OM has better precision and similar recall as our method and SGO on sequence C, but does not perform as well on other sequences. On sequences A, B, and C, applying our likelihood-weighted proposal scoring improves performance compared to the plain scoring method. Thus, the likelihood is useful for ranking proposals, providing complementary information not available with the plain score. For sequence C, the recall, precision, and global recall (fraction of all objects in the sequence detected over all frames) as a function of the number of best-ranked proposals considered are shown in Fig.~\ref{fig:kitchen_c}. We achieve higher precision and global recall than SGO for a low number of proposals ($<$ 50) per frame. We achieve greater global recall than all the other methods, detecting a greater fraction of all objects over the whole sequence. Fig.~\ref{fig:ranking} shows the effect of ranking method on the performance of our method when averaging over all of the four sequences. Applying likelihood-weighting together with non-maxima suppression (NMS) improves the results over applying the plain score. Applying NMS decreases the reported precision, since it removes also good duplicates from the set of proposals. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig/kitchen_ranking_methods.pdf} \caption{Evaluation of the ranking methods. Plain refers to the score $s(o)$, weighted is the likelihood weighted score $s_w(o)$, while NMS indicates applying non-maxima suppression (duplicate removal). The numbers in parenthesis show the AUC values for each curve.} \label{fig:ranking} \end{figure} \begin{figure}[h!t!] \centering \includegraphics[width=\columnwidth]{fig/kitchen_examples.png} \caption{Bounding boxes for the top 5 object proposals. From top to bottom: input image, ground truth labels, ours (likelihood weighted), ours (plain score), SGO~\cite{Werner2015}, OM~\cite{Alexe2012}, and RP~\cite{Manen2013}. From left to right: one frame from sequence A, B, C, or D.} \label{fig:qualitative} \end{figure} Fig.~\ref{fig:qualitative} qualitatively compares the 5 best proposals from each of the methods. OM and RP tend to produce large object proposals (last two rows). The third and fourth row show the likelihood weighted and plain scoring, respectively. Compared to plain scoring, likelihood weighting increases the rank of proposals that appear often in the ddCRP samples. For example, in the last column, fourth row, the plain score gives a high rank for the patch of floor in the lower left corner and the patch of table covering in the lower middle part of the image. These proposal rarely appear in the ddCRP samples. With likelihood weighting (last column, third row), the often appearing proposals on the coffee cup in the middle left part and near the glass in the top left part of the image are preferred as they have a higher likelihood, as also seen from Fig.~\ref{fig:seg_example}. \section{Conclusion} \label{sec:conclusion} We introduced object proposal generation via sampling from a distance dependent Chinese restaurant process posterior on image segmentations. We further estimated a likelihood value for each of the proposals. Our results show that the proposed method achieves state-of-the-art performance, and that the likelihood estimate helps improve performance. Further uses for the likelihood estimates may be found, e.g., in robotics applications. Other future work includes extending the method to RGB-D data, and an analysis of the parameter dependency. \bibliographystyle{splncs03}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The search and study of galaxy populations at very high redshift is one of the most promising research areas of today astrophysics and cosmology. It derives its importance on two different and interrelated aspects: 1) The estimate of the UV photon budget provided by star-forming galaxies and its role on the reionization of the universe at $z>6$; 2) The study of the formation and the physical properties of the first bulding blocks of present-day galaxies. There is observational evidence that the Universe is highly ionized at $z\sim6$ \cite[e.g.][]{Fan2006,Totani2006}, in agreement with the latest WMAP estimates of the Thomson optical depth \citep{Komatsu2010}, although significant uncertainties remains on the homogeneity \cite[e.g.][]{Mesinger2009} and on the exact timeline of the reionization process \cite[e.g.][]{Gallerani2006}. Whether the UV light emitted by star-forming galaxies is capable of reionizing the Universe by these epochs remains an open question that should be answered through the analysis of large samples of high redshift objects. The search for high-redshift star forming galaxies has been carried out so far mainly with renditions of the Lyman Break, or ``drop-out'' technique that has been proved to be extremely efficient at redshift from 2 to 6 \cite[e.g.][]{Steidel1995,Steidel1999,Adelberger2004,Dickinson2004,Giavalisco2004,Ouchi2004,Bouwens2007,Mclure2009}, or through narrow-band studies targeting the Ly$\alpha$ emission \cite[e.g.][]{Iye2006,Kashikawa2006,Ouchi2009b}. The application of the Lyman Break technique at $z>6$ has been performed, at first, in small areas with deep near-IR $J+H$ NICMOS data \cite[e.g.][]{Bouwens2004}, and it has recently acquired momentum thanks to the installation of the WFC3 camera onboard of the Hubble Space Telescope yielding to a sample of tens of faint Lyman Break galaxies (LBGs) \citep{Bouwens2009b,Oesch2009b,Mclure2009b,Bunker2009,Yan2009,Wilkins2009,Wilkins2010}. In the meantime, ground based surveys \cite[][C10 hereafter]{Ouchi2009,Capak2009,Hickey2009,Castellano2010}, along with refined analysis of archival NICMOS observations \citep{Bouwens2010} have expanded the number of bright LBGs known. The basic feature of the high redshift galaxy population that can be analysed through the present datasets is its UV luminosity function (LF). The current picture of the evolution of the UV LF points to a factor of 6-11 decrease in the number density of UV bright galaxies from $z\sim 3$ to $z\sim 6$ \cite[e.g.][]{Stanway2003,Shimasaku2005,Bouwens2006a}, although some uncertainties are still present on the exact amount of evolution in the different parameters of the Schechter function \citep{Dickinson2004,Giavalisco2004,Sawicki2006,Iwata2007,Yoshida2006,Bouwens2006a,Bouwens2007,Beckwith2006}. At redshift above 6, most of the analysis indicates a strong evolution in the LF, mainly through a dimming of the characteristic magnitude $M_{*}$ and/or a decrease of the normalization factor $\phi$ \citep{Bouwens2008,Mclure2009b,Ouchi2009,Yan2009,Castellano2010,Bouwens2010}. The recent WFC3-based analysis by \citet{Oesch2009b} also found evidence for a steep faint-end ($\alpha \sim -1.8$), in agreement with the predictions of theoretical models \citep{Trenti2010,Salvaterra2010}. LBGs searches around lensing clusters have also been performed, finding discrepant results that highlight the many challenges and uncertainties in these investigations \citep{Richard2006,Richard2008,Bradley2008,Bouwens2009,Zheng2009}. Along with improved constraints on the LF at $z\sim7$, the latest analysis of the WFC3 data have also provided a first estimate of the evolution at $z\sim8-9$ that points to a further decrease in the LBG number density, and thus in the total amount of UV photons produced by young stars at these early epochs. The discrepancies among different works, both at $z\sim3-5$ and at $z\sim6-9$ are most probably due to the effect of cosmic variance \cite[e.g.][]{Trenti2008,Robertson2010}, but also to the difficulties in avoiding systematic effects in the different estimates of completeness level, contamination from lower redshift interlopers, volume elements, and redshift distributions in the various samples \citep{Stanway2008b}, all worsened by the known degeneracy among the parameters adopted to fit the LF. The strong decrease observed in the UV emission coming from relatively bright sources seems to imply that reionization cannot be explained on the basis of UV bright galaxies only. An increased number of low luminosity galaxies indicated by the steep faint end of the Schechter LF might play a decisive role in the reionization process. Large and reliable samples of high-z galaxies both at the bright and at the faint end of the LF are thus necessary to shed light on this issue, and, possibly to highlight the need to search for even more intriguing sources of the reionizing radiation with future facilities \cite[see e.g.][]{Venkatesan2003,Madau2004}. Latest surveys have also given the opportunity of analysing the physical properties of high redshift galaxies, whose knowledge is also a decisive factor to understand the very role of these sources in the reionization process. Recent studies have given the first estimates of masses, ages and SFRs for single $z \gtrsim 6.5$ objects, the first constraints on the stellar mass density at these epochs, and have also raised an interesting debate on the possibilty that the first galaxes might be characterized by peculiar properties, like a very low dust content, nearly primordial metallicity or top-heavy stellar initial mass functions \citep{Finkelstein2009,Bouwens2010,Gonzalez2010,Labbe2010,Salvaterra2010,Schaerer2010}. To give an answer to some of the above problems we are using the new VLT IR imager Hawk-I \citep{Pirard2004,Casali2006,Kissler2008}, to conduct a deep, medium area survey in the $Y$ band over four independent pointings, aimed at the detection of relatively bright LBGs at $6.5<z<7.5$. Thanks to the extreme efficiency and large field of view (7.5$\times$7.5 arcmin) of Hawk-I, it is possible to easily reach $Y\sim26.5$ AB at $>5\sigma$ (roughly corresponding to $M_{1500}=-20.5$ at $z=7$), over large areas in a reasonable amount of time (15 hrs). In C10 we discussed the results of the first half of our survey, covering a large fraction of the GOODS-S field, and we estimated a statistically significant ($\sim$99 \% c.l.) decrease, with respect to $z\sim6$, of the number density of UV-bright galaxies. In this paper we present the $z\sim7$ candidates found in the second half of the survey, covering two other independent fields. We will constrain the evolution of the LF combining this new sample with the GOODS one. The paper is organised as follows. In Section 2 we present the imaging set and the multiwavelength catalogue; in Section 3 the LBG selection criteria and the potential interlopers affecting the selection are discussed; in Section 4 we present our final sample of candidate $z$-drop LBGs. In Section 5 we discuss a stacking analysis of all the $z$-drop galaxies found in the four Hawk-I pointings, that are used to constrain the $z>6$ UV LF in Section 6. In Section 7 we derive an upper limit on the number density of very bright $z\sim8$ LBGs. A summary of our methods and results is provided in Section 8. Throughout the whole paper, observed and rest--frame magnitudes are in the AB system, and we adopt the $\Lambda$-CDM concordance model ($H_0=70km/s/Mpc$, $\Omega_M=0.3$, and $\Omega_{\Lambda}=0.7$). \section{Data} \label{dataset} \begin{figure} \centering \includegraphics[width=9.0cm]{fig1.ps} \caption{Colour-composite image of the BDF (left) and NTTDF (right) fields, created using the weighted mean of Hawk-I $Y$, $J$ and $K$ images as red, the FORS2 $Z$ as green, and the weighted mean of FORS1/FORS2 optical images as blue.} \label{fields} \end{figure} \begin{table*} \centering \caption{BDF - Observations} \label{tabBDF} \begin{tabular}{cccccc} \hline Filter & Instr. & Exp. Time (s)& Seeing (arcsec) & Mag. Limit$^a$ \\ \hline V-High& FORS2 & 13800& 0.75 & 29.1 \\ R-Special& FORS2 & 11600 & 0.63 & 29.3\\ I-Bessel& FORS2 & 4800 & 0.70 & 27.8 \\ Z-Gunn& FORS2 & 64800 & 0.59 & 28.6 \\ Y-Open& HAWK-I & 56940 & 0.52 & 28.3 $^b$\\ J-Open& HAWK-I & 18720 & 0.54 & 26.5 \\ Ks-Open& HAWK-I & 30060 & 0.44 & 26.0 \\ \hline \end{tabular} \\ \smallskip \begin{tabular}{l} a - S/N=1\\ b - Y=26.5 at S/N=5 \\ \end{tabular} \\ \end{table*} \begin{table*} \centering \caption{NTTDF - Observations} \label{tabNTTDF} \begin{tabular}{cccccc} \hline Filter & Instr. & Exp. Time (s) & Seeing (arcsec) &Mag. Limit$^a$ \\ \hline U-Bessel& FORS1 & 32876& 0.84& 27.8 \\ B-Bessel& FORS1 & 16064 & 0.56& 28.9 \\ V-Bessel& FORS1 & 10500 & 0.47& 29.0 \\ R-Special& FORS2 & 14000& 0.79 & 28.4 \\ I-Bessel& FORS2 & 7830 & 0.61& 28.0 \\ Z-Gunn& FORS2 & 46386 & 0.60 & 28.4 \\ Y-Open& HAWK-I & 54180 & 0.49 & 28.3$^b$ \\ J-Open& HAWK-I & 14400 & 0.47 & 26.7 \\ Ks-Open& HAWK-I & 24720 & 0.39& 26.3 \\ \hline \end{tabular} \\ \smallskip \begin{tabular}{l} a - S/N=1\\ b - Y=26.5 at S/N=5 \\ \end{tabular} \\ \end{table*} \subsection{Observations} This work is based on deep $Y$--band images obtained with the IR camera Hawk-I at the VLT, and on deep optical FORS2 observations. We use data collected through a dedicated ESO Large Programme in 2008 and 2009. The first set of data, covering two adjacent regions of the GOODS-S field has been presented in C10. Here we present the analysis of two other pointings (Fig.~\ref{fields}), chosen for the wealth of deep, public observations previously exploited by other authors to search for $z\sim4-6$ LBGs: the BDF field at Ra=336.98\textdegree, Dec=-35.17\textdegree \citep{Lehnert2003}, and the New Technology Telescope Deep Field (NTTDF) at Ra=181.36\textdegree, Dec=-7.72\textdegree \citep{Arnouts1999,Fontana2000,Fontana2003}. The total exposure time is 15h49m for BDF and 15h03m for the NTTDF in the $Y$ band. The $Y$ band images were reduced using standard techniques for IR data - flat fielding, sky subtraction among consecutive frames, and final coaddition. The reduction procedure, which is described in detail in our first paper C10, has been specifically designed to enhance the reliability of the images at the faintest fluxes, and to get rid of persistence effects and cross-talk resonances. We determine an FWHM of 0.52 $\pm 0.01$ arcsec ($\simeq$ 4.9 pixels) in the final coadded BDF image and 0.49 $\pm 0.01$ arcsec ($\simeq$ 4.6 pixels) in the NTTDF one. Image zeropoints were computed using the standard stars observed during the same night and at similar airmasses. Reference fluxes were converted to the photometric system and filter set used in this paper, as described in C10. We obtained the absolute r.m.s. maps for each pointing by computing the r.m.s. in each individual image (using the Poisson statistics and the instrumental gain) and propagating self-consistently this r.m.s. over the whole data reduction process. The typical $5\sigma$ magnitude in one arcsec$^2$ is in the range 26.7-26.8 over more than 60\% of the whole image, and $>26.2$ in 85\% of the image - the rest of the images being shallower because of the gaps between the four Hawk-I chips. A wide wavelength coverage is needed to reliably select high redshift LBGs excluding lower redshift interlopers and red and dusty galaxies at intermediate redshift. To this aim we obtained or re-reduced deep observations of both fields ranging from the blue to the near-IR, matching all images to the $Y$ band pixel-size and astrometric solution. Along with the main $Y$--band pointings, we also acquired deep $J$ and $Ks$ Hawk-I observations of both fields. We also obtained $\sim 7$ hours of FORS2 $Z$ band coverage for each field, that we coadded with the already existing FORS2 images \citep{Fontana2003} to reach the required depth. We also re-reduced the archive $U$, $B$, $V$, $R$, $I$ FORS2 and FORS1 observations of the NTTDF, and the FORS2 $R$ and $I$ images of the BDF. Finally, we obtained $\sim 4$ hours of $V$-FORS2 observations on the BDF. The full dataset is presented in Tab. \ref{tabBDF} and Tab. \ref{tabNTTDF}. \subsection{The photometric catalogue} \label{catalogue} \subsubsection{Detection} \label{detection} We obtained the photometric catalogue using the SExtractor code V2.5 \citep{Bertin1996} and the $Y$ band as detection image with the r.m.s. map derived as described above. Since high redshift galaxies are almost unresolved in ground-based images, and SExtractor's \verb|MAG_BEST| are known to underestimate the total flux of faint objects ($Y > 24$ in our case), we chose to use aperture-corrected total magnitudes. We computed aperture magnitudes in a 2 FWHM diameter and corrected them to total magnitudes adopting aperture corrections from bright non-saturated stars in each field. While this choice might give slightly underestimated fluxes for the more extended high redshift candidates, we can easily take into account this systematic through the simulations that we use to estimate the LF (Sect.~ \ref{Montecarlo}) that are based on the observed profile of LBGs with known spectroscopic redshifts $5.5<z<6.2$ \citep{Vanzella2009} in the GOODS-S ACS images. We optimised the SExtractor parameters involved in the detection process through the analysis of a `negative' image as discussed in C10, adopting the set of parameters that minimises the ratio between 'negative' and 'positive' detections at the faint end of the number counts. As expected, we find that the best parameters for faint objects detection on BDF and NTTDF $Y$-band images are the same adopted for the similar set of images over GOODS-South: we require 10 contiguous pixels each at $S/N>0.727$, corresponding to a $2.3\sigma$ detection, and we restrict the analysis to the regions where the r.m.s is less than $\sim$ 1.5 times the lowest value. With this choice of parameters, we do not find any detection on the negative images at $Y<26.2$, and a fraction of negative detections less than 5\% of the real ones at fainter magnitudes. However, \textit{a posteriori}, the latter value overestimates the actual rate of spurious detections. Indeed, all spurious sources should appear as ``drop-out'' candidates with a single-band detection. On the contrary, all the $Y>26.2$ objects in our $z$-drop sample are confirmed by detections in other IR bands. Indeed, as we discuss also in C10, the test on the negative image is probably influenced by non-trivial issues concerning the subtraction of the background or a potential asymmetry in the noise distribution. \subsubsection{The multicolour catalogue}\label{catal} A multiwavelength catalogue containing self-consistent magnitudes in all available bands was built running SExtractor in dual mode using the $Y$-band Hawk-I image as the detection image with the detection parameters indicated above. Aperture fluxes were computed within a 2FWHM aperture and converted to total applying appropriate aperture corrections in each band. The typical $1\sigma$ limiting magnitudes in a 2FWHM aperture are in the range $27.8-29.3$ for the optical bands, $J \sim 26.5-26.7$, and $Ks \sim 26.0-26.3$. The corresponding $1\sigma$ limiting magnitude in the $Z$ band, which is used to define the 'dropout' selection, is $\sim 28.6$ in BDF and $\sim 28.4$ in the NTTDF (see Tab. \ref{tabBDF} and Tab. \ref{tabNTTDF}). For each field we defined the total areas where the image depth is sufficiently homogeneous. The candidates found in this area will be used for the evaluation of the LF. We used $Y$-band detected objects only in the regions selected on the basis of the negative image test explained above. In addition, we also masked borders, CCD defects and noisiest regions in the other images of our data-set. The areas selected in this way correspond to $\sim 71$\% of the $Y$-band coverage in the BDF, and $\sim 56$\% in the NTTDF (due to strong vignetting in the $Z$-band image). As a result, the total area used for $z$-drop detection amounts to ~$71.7$~ arcmin$^2$. We will subtract to this value the fraction of area covered by lower redshift objects ($\sim$9\%) to estimate effective volumes in Sect.~\ref{LF} and Sect.~\ref{z8}. \section{The selection of z$>6.5$ galaxies}\label{Selection} \subsection{The dropout criterion}\label{drop} We select candidate $z>6.5$ galaxies using the ``drop-out'' technique adapted to our filter set and imaging depth. In order to individuate the appropriate selection criteria we estimated the expected colours of high-redshift star forming galaxies (black points in the right panel of Fig.~ \ref{diagram}) on the basis of the models of Charlot and Bruzual 2007 \citep{Bruzual2007a,Bruzual2007b} with the same range of free parameters as in C10: Metallicity: 0.02, 0.2 and 1 $Z_\odot$; age from 0.01 Gyr to the maximal age of the Universe at a given $z$; E(B-V)=0...0.2 \citep{Calzetti2000}. Ly-$\alpha$ rest-frame equivalent width in the range 0-200 \AA. Intergalactic absorption following \citet{Madau1995}. The same range of model parameters will be used as baseline for the MonteCarlo simulations used to estimate the LF in a self consistent way, see Sect. \ref{LF}. As shown in the left panel of Fig.~ \ref{diagram}, galaxies at $z>6.5$ show an increasing $Z-Y$ colour which is due to the sampling within these two filters of the sharp drop shortward of the Lyman-$\alpha$, where most of the photons are absorbed by the intervening HI in the intergalactic medium. The drop in the flux observed shortward of the $Y$ band is analogous to the one used to select star forming galaxies at lower redshifts, like $i$-drops at $z\sim5$, $V$-drops at $z\sim4$ etc.: the major difference with respect to the standard Lyman break technique being that the $Y$ band does not sample the continuum around $1500$ \AA ~ but a region shortward of it, contaminated by both the larger IGM absorption at $z>6$ and by the Lyman-$\alpha$ emission line. These effects can only be accurately accounted for by realistic imaging simulations, as we discuss in detail in C10 and in section ~\ref{Montecarlo} of this paper. Following this test, we choose $Z-Y>1$ as our main selection criterion to select $z>6.5$ galaxies. Given that the $Z$-band observations, as well as the optical ones, used in the present paper are slightly shallower than the GOODS-ACS ones, we limit our selection at $Y<26.5$ instead of the $Y<26.8$ adopted in C10. \begin{figure} \centering \includegraphics[width=9cm]{fig2.ps} \caption{{\it Left}: $Z-Y$ colour of star forming galaxies as a function of redshift. In the upper part, the efficiency curve of the two filters is shown, computed at observed wavelength of a Lyman-$\alpha$ emission at the corresponding redshift. {\it Right}: $Z-Y$ vs. $Y-J$ colour diagram showing the expected colours of LBGs (same as in left panel, black points), passively evolving galaxies (red squares) and reddened starbursts (green circles) at $1.5<z<4$ and cool dwarf stars from the templates of \citet{Tsuji2004} (magenta stars). Galaxy colours are computed according to CB07 models, see text for details on the adopted parameters. In both panels lines indicate the relevant colour selection criteria discussed in Sect. \ref{Selection} } \label{diagram} \end{figure} The selection of $z$-drop galaxies cannot be solely based on the $Z-Y$ colour, since other classes of objects can display a red $Z-Y$ colour similar to that of $z>6.5$ galaxies. Selection criteria, both in the optical and in the IR bands, are thus necessary to individuate a reliable sample of $z\sim7$ galaxies. \subsection{IR colour selection}\label{ircolours} We tailored our IR colour selection to exclude any possible contamination in our $z$-drop sample from known classes of lower redshift objects: i) We modelled passively evolving galaxies and dusty starburst galaxies at $z>1.5$ with a suitable set of spectral synthesis models. We use the same CB07 library as for high redshift galaxies to predict the colours of such objects at $1.5<z<4$, using a combination of short star formation exponential timescales ($0.1-1$ Gyrs) and ages $>1$ Gyr to reproduce passively evolving galaxies, and constant star-forming models with $0.5<E(B-V)<1.5$ \cite[adopting a][extinction law]{Calzetti2000} for the dusty starbursts. As shown in Fig. \ref{diagram} (right panel) these galaxies also show a large IR colour term. To exclude these objects, we adopt the same additional criteria on IR colours as in C10 :~ (Z-Y) $>$ (Y-K); ~(Z-Y) $>$ 0.5+2.0(Y-J); ~(Y-J)$<$ 1.5;~(Y-K)$<$ 2.0. ii) Cool ($T_{eff}<1500$ K), low-mass stars, and substellar objects of the T spectral class have infrared spectra that are dominated by the $CH_{4}$ and $H_2O$ absorption bands and by $H_{2}$ resonant absorption \cite[e.g.][]{Chabrier2005,Burgasser2006} that produce a sharp break in their IR colours. We used the most up-to-date estimate of the T-dwarfs number density (as observed in the $J$ band) of Burgasser et al 2007 to compute the expected number of faint, cool dwarfs in our fields. Adopting an average $Y-J$ colour of 0.8 mags estimated from the catalogue of observed dwarfs compiled by \citet{Leggett2010}, and considering the dependence on galactic latitude as in \citet{Burgasser2004}, we estimated that $\sim$ 0.6 stars of spectral types T0-T8 with $Y<26.5$ are expected to fall in each of our fields. However the exact number of expected contaminating dwarfs depends on the still uncertain parameters constraining the IMF and their spatial distribution inside the disk and the halo of the Galaxy: a pessisimistic estimate (see C10) gives a nearly double surface density of cool dwarfs in our pointings. For this reason we used the synthetic spectral libraries by \citet{Tsuji2004} \cite[see also][]{Tsuji2002,Tsuji2005} to check whether it is possible to define selection criteria discriminating between LBGs and cool dwarfs. As shown in the right panel of Fig. ~\ref{diagram}, cool dwarfs appear redder than $z$-dropouts in the $Y-J$ colour, and the $Y-J$ criterion we adopt allows us to exclude these objects from our selection window. We note that the brown dwarf discovered in the NTTDF by \citet{Cuby1999}, having $Z-Y\sim2$ and no optical detection, is consistently excluded from our high-z sample on the basis of its $Y-J$ colour. iii) Finally, we cross-checked each object selected according to the above criteria against variability, by analysing images acquired at different epochs. The BDF observations have been split in two separated epochs with a 3 months gap (September and December 2009), while the NTTDF have been observed during four runs in January, February, April and May 2009. We verified that all the objects in our sample are clearly detected, and that they have a consistent total flux (within $2\sigma$), in the different epochs. In the NTTDF case, we checked that a detection $> 3\sigma$ of the faintest candidate ($Y\sim26.5$) was possible in the two epochs with larger integration time. We summarise here the full set of colour selection criteria: \begin{eqnarray*} Y &<& 26.5 \\ (Z-Y) &>& 1.0\\ (Z-Y) &>& (Y-K)\\ (Z-Y) &>& 0.5+2.0(Y-J)\\ (Y-J)&<& 1.5\\ (Y-K)&<& 2.0\\ \end{eqnarray*} \subsection{Comparison with the GOODS-ACS dataset} In our analysis of the GOODS-South field we exploited the ACS V2.0 $B$, $V$, $I$, $Z$ observations (M. Giavalisco and the GOODS Team, in preparation) to select $z$-drop galaxies and to exclude lower redshift interlopers showing significant detection in the optical bands. The main concern we have to consider to provide a $z$-drop selection as clean as the one in the GOODS field regards the difference in resolution between FORS2 optical observations of BDF and NTTDF and their corresponding ACS-GOODS images we used to remove interlopers from the colour-selected sample. Indeed, in C10 we found that a sample of galaxies selected with IR criteria only is populated also by faint contaminants showing significant detection in filters covering wavelengths shorter than the redshifted Lyman limit at $z>6$ ($U$, $B$, $V$, $R$, $I$) where high redshift LBGs are not expected to present any flux. These objects are, in most cases, clearly extended, but their spectral energy distributions cannot be reproduced by a straightforward application of the CB07 models. While determining their nature is beyond the scope of the present analysis, we note that they might be faint galaxies with a very blue continuum whose SED is altered by strong emission lines such as in unobscured AGNs, or in star-forming galaxies like the blue compact dwarf galaxies \citep{Izotov2004,Izotov2007} or the ultra strong emission line galaxies \cite[USELs,][]{Hu2009}. Potential contamination of z$\sim 7$ samples due to an unknown class of objects with no optical detection ($<2\sigma$) has also been suggested by \citet{Capak2009}. Their objects are brighter than those found in our fields but display similar colours. Given the unknown nature of this contaminants, at present, the only feasible approach is to adopt more stringent criteria on the optical non-detections. Follow-up spectroscopy of z$\sim 7$ candidates and faint contaminants is needed to fully evaluate the impact of this population on high redshift studies. In our analysis of the GOODS-South field we adopted very strict selection criteria in the blue bands in order to exclude these contaminants, measuring the S/N ratios in small apertures (0.6'') exploiting the high resolution of ACS images. In order to obtain optical selection criteria as effective as the ones used with the GOODS dataset, we performed tests computing S/N ratios and photometry on the GOODS-ACS images degraded and smoothed to the depth/seeing of the BDF and NTTDF corresponding ones. We then re-selected GOODS dropouts on ``mock'' BDF/GOODS and NTTDF/GOODS catalogues built in the same way as the real BDF and NTTDF catalogues. We verified that the criteria already adopted in the GOODS fields are effective in the NTTDF case: $S/N<2\sigma_{S/N}$ in all the optical bands and $<1\sigma_{S/N}$ in at least four of them. In the BDF, given the absence of $U$ and $B$ images and the slightly shallower $I$ imaging, we adopted the conservative criterion $S/N<1\sigma_{S/N}$ in all the optical bands. We verified that this criterion allows us to safely remove all those objects, up to $Y$=26.5, that have been verified to be lower redshift contaminants on the basis of GOODS-ACS and, whenever possible, UDF-ACS photometry. The parameter $\sigma_{S/N}$ indicated above is the r.m.s. of the S/N distribution estimated, as in C10, dropping random apertures in portions of the images free of detected objects. This procedure allows us to take into account the sky noise distribution and the presence of faint, undetected foreground objects at the same time. \section{Detected z$>6.5$ galaxies} Adopting the selection criteria outlined above we find a total of eight candidates, three in the BDF and five in the NTTDF field, whose coordinates, $Y$ magnitudes and $Z-Y$ colours are listed in Tab~\ref{data}. Thumbnails of the candidates are presented in Fig.~\ref{thumb}. We note that two of them are clearly detected, and two others are marginally detected (S/N$\sim2$), in the $Z$ band. Three of the five candidates present in the NTTDF are also detected at S/N$\sim2-4$ in the $J$ and $K$ bands, thanks to the slightly deeper images available for this field (see Tab~\ref{tabNTTDF}). We also verified that each candidate is undetected in the image obtained as the weighted sum of its $V$, $R$ and $I$ observations. \begin{figure*}[!ht] \centering \includegraphics[width=16cm]{fig3.ps} \caption{Thumbnails showing the images of the 8 selected high-redshift candidates in the different observed bands.} \label{thumb} \end{figure*} As a final check we performed a stacking of all the objects in the available images. This test allows us to confirm the non-detection in the optical images, and to obtain a clear detections in the $J$ (S/N$\sim5$) and $K$ band (S/N$\sim4$) stacked images. The stacked object shows an average colour $Z-Y\simeq 1.6$. In the following sections we will combine this sample $z$-drop candidates found in the BDF and NTTDF pointings with the sample discussed in C10, obtained from the two pointings over the GOODS-South field, to find their average properties through a stacking analysis, and to constrain their LF. The GOODS $z$-drop sample includes seven candidates in the range $Y\sim25.5-26.7$, selected through colour selection criteria analogous to the ones outlined above, whose reliability has also been checked on the available IRAC and NICMOS observations. \begin{figure}[!ht] \centering \includegraphics[width=8.6cm]{fig4.ps} \caption{Thumbnails showing the stacked images of the 15 high-redshift candidates selected in the GOODS1, GOODS2, BDF and NTTDF fields. The observed bands are shown in the legends.} \label{thumb_stack} \end{figure} \begin{figure*}[!ht] \centering \includegraphics[width=14cm]{fig5.ps} \caption{Best-fit SED to the stacked photometry, with relevant photometric redshift at $z=6.85$.} \label{sed_stack} \end{figure*} \begin{table} \caption{Candidates in the BDF and NTTDF fields} \label{data} \centering \begin{tabular}{cccccc} \hline ID & R.A. (deg) & DEC. (deg)& Y & Z-Y & S/N (Y)\\ \hline BDF\_521& 336.9444 & -35.1188& 25.86& 2.13& 10.2\\ BDF\_3299& 337.0511 &-35.1665 & 26.15& $>$2.4& 7.8\\ BDF\_5905& 337.0230 & -35.2094& 26.24&1.20 & 7.6\\ NTTDF\_1479& 181.3429 & -7.6813 & 26.12 & 1.97& 8.4\\ NTTDF\_1632& 181.3857 & -7.6835 & 26.47 & 1.61& 6.1 \\ NTTDF\_1917& 181.3212 & -7.6877 & 26.32 & 1.58 & 7.1\\ NTTDF\_6345& 181.4039 & -7.7561 & 25.46 & 1.45& 15.6\\ NTTDF\_6543& 181.3834& -7.7595 & 25.75 & $>$2.6& 12.0\\ \hline \end{tabular} \end{table} \section{Mean properties of Hawk-I $z\sim7$ galaxies}\label{stacked} We perform a weighted mean of the images in the available filters from the $U$ to the $K$ band for all the 15 objects detected in the Hawk-I fields. We did not attempt a similar stacking of the IRAC images, since most of our GOODS candidates are partially or extremely blended with other foreground sources, and the candidates in the other fields are either not covered by IRAC observations, or they are present in shallower exposures with respect to GOODS. We matched the ACS images to the Hawk-I PSF and masked all the foreground objects surrounding the candidates in each image. The stacked object shows an $S/N \gtrsim 5$ detection in the $Z$, $J$ and $K$ bands and a non-detection in all the optical bands, corresponding to an ($optical - Y$) colour of $>4$ magnitudes. We use the magnitudes estimated for the stacked object to find the photometric redshift and physical parameters through our photo-z code \citep{Giallongo1998,Fontana2000} exploiting a $\chi^2$ minimisation procedure to find the best-fitting spectral template to the observed colours among the full CB07 library. While the ACS optical filters used in the GOODS field have different passbands with respect to the FORS2 ones used in BDF and NTTDF fields, this is not a significant concern since they all span a wavelength range where no flux is expected for $z>6$ objects. In turn, the small difference between FORS2 and ACS $Z$-band filters does not provide significant variations in the redshift selection window defined by the $Z-Y$ colour which is the main constraint to the photometric redshift. The resulting SED provides a unique photometric redshift solution at $z=6.85^{+0.20}_{-0.15}$. Relevant thumbnails and SED are shown in Figure~ \ref{thumb_stack} and ~\ref{sed_stack}. Given the absence of IRAC, most physical parameters are largely unconstrained, apart from the E(B-V) parameter whose estimate is mostly based on the $Y-J$ and $Y-K$ colours. We find that our stacked SED is fitted by an $E(B-V)=0.05^{+0.15}_{-0.05}$ at a 68\% confidence level. This value is consistent with the E(B-V) distribution obtained from the analysis of z$\sim7-8$ objects by \citet{Finkelstein2009} and by \citet{Schaerer2010}. Our best fit E(B-V) indicates a low dust content for $z\sim7$ galaxies, in agreement also with the best-fit $A_V$ values found by \citet{Gonzalez2010} and by \citet{Labbe2010} for the mean SED of their $z$-drop samples, and with the blue UV continuum slope measured by \citet{Bouwens2010b}. \section{The evolution of the LF} \label{LF} \subsection{MonteCarlo simulations}\label{Montecarlo} When small galaxy samples are used to constrain the high-redshift LF, it is necessary to exploit detailed imaging simulations to appropriately treat the systematic effects arising from faint object detection, and from the application of colour selection criteria. To this aim we use the CB07 synthetic libraries described in Sect.~\ref{Selection} to produce, for each field, a set of $\sim 8\times10^5$ simulated LBGs with redshift in the range $5.5<z<8.0$ and observed magnitudes computed in the same filter set used for the observations. These galaxies are placed at random positions of the $Y$-band images, and catalogs are extracted exactly as in the original frames. To avoid an excessive crowding in the simulated images, we include only 200 objects each time, after masking the regions of the images where real objects are present. As in C10, we randomly assign to each of our simulated galaxies the light profile of one of the four most distant spectroscopically confirmed LBGs observed with ACS in GOODS \cite[$z=5.5-6.2$,][]{Vanzella2009}, after convolving it with the relevant Hawk-I PSFs. \subsection{Stepwise LF} \label{stepwise} The magnitude range covered by our survey, $Y\simeq25.5-26.7$ roughly corresponds to the UV continuum magnitude range at $M \lesssim M_*$. For this reason, we first perform a binned estimate of the number density of the Hawk-I $z$-drop galaxies through the stepwise method \cite[see, e.g.][]{Bouwens2008}. The stepwise estimate is a non-parametric method based on the assumption that the rest-frame LF of galaxies can be approximated by a binned distribution, where the number density $\phi_i$ in each bin is a free parameter. To evalute also the potential systematics and the effects of observational uncertainties in this kind of estimates, we use two different procedures to compute the stepwise LF. The first one is the procedure commonly adopted in the literature based on the average relation between the observed $Y$ and the UV continuum magnitude at 1500\AA~ ($M_{1500}$), and on an estimate of the completeness in the different UV magnitude bins. The second, more conservative, procedure takes in consideration the uncertainties in the $Y$-$M_{1500}$ conversion due to photometric scatter, to the redshift distribution and to the intrinsic properties of different galaxy models. In a separate work we will combine this stepwise analysis with similar estimates at fainter and brighter magnitudes to determine the Schechter parameters at $z\sim7$ in a self-consistent way (Grazian et al. 2010, in preparation). \begin{figure}[!ht] \centering \includegraphics[width=8cm]{fig6.ps} \caption{Number densities in two rest-frame magnitude intervals estimated for our Hawk-I data set in a stepwise form with a standard $Y$-UV conversion of the observed number counts as discussed in the text (black filled squares), or with a $\chi^2$ method considering also photometric and model uncertainties (black empty circles). Other points are from \citet{Bouwens2010} (NICMOS, red empty squares), \citet{Ouchi2009} (SUBARU, blue empty squares and upper limits) and \citet{Oesch2009b} (WFC3-UDF, magenta empty circles). For a comparison we show the recent determinations of the LF at $z\sim 6$ by \citet{Bouwens2007} (B07, green dot-dashed line) and \citet{Mclure2009} (M09, blue dashed line). The black solid line is the best-fit LF obtained by combining the stepwise points shown in the figure with new determinations of the binned densities from WFC3-UDF and WFC3-ERS data (Grazian et al. in preparation).} \label{Fig_stepwise} \end{figure} \subsubsection{Stepwise LF from the average Y-UV relation} Through a linear regression we compute the average $Y-M_{1500}$ relation at the median redshift of our sample (z=6.8) for the CB07 models of $z$-drop galaxies. We then divide our sample in two bins centered at $M_{1500}=-21.1$ and $M_{1500}=-20.4$, and use the imaging simulations to estimate the completeness of our selection. Finally, we convert the redshift dependent completeness distribution into effective volumes of our survey at these magnitudes. The values of the stepwise LF estimated in this way are reported in Tab.~\ref{Tab_stepwise} and plotted as filled squares in Fig~\ref{Fig_stepwise}, with vertical error bars given by Poisson uncertainties in the number counts. The horizontal error bars indicate the relevant magnitude range of each bin. \begin{table} \caption{Stepwise determination of the UV LF} \label{Tab_stepwise} \centering \begin{tabular}{cc} \hline Mag. Range & $\phi ~ (10^{-4} Mpc^{-3}~ mag^{-1})$\\ \hline $-21.4<M_{1500}<-20.8$& $0.39 \pm 0.20$\\ $-20.8<M_{1500}<-20.0$& $1.81 \pm 0.54$\\ \hline \end{tabular} \end{table} \begin{figure}[!ht] \centering \includegraphics[width=8cm]{fig7.ps} \caption{The normalized distribution of UV continuum magnitudes (estimated from the average $Y-M_{1500}$ relation) for the 15 Hawk-I candidates divided in two bins (dashed histograms). The solid curves show the expected distributions of UV magnitudes for objects in the same observed ranges when photometric uncertainties are taken into account through MonteCarlo simulations.} \label{Fig_scatter} \end{figure} \subsubsection{Introducing photometric uncertainties in the Stepwise LF} A more conservative estimate can be computed assuming a stepwise LF made of three bins in the wider magnitude range $-22.0<M_{1500}<-19.0$. This interval takes into account the photometric scatter and the variation of the $Y-M_{1500}$ relation with redshift and galaxy models (see Fig.\ref{Fig_scatter}). We assume a fixed, constant, reference density $\phi_{ref}$, and we exploit the set of simulations described in Sect.~\ref{Montecarlo} to compute for each field the distribution of observed magnitudes originating in each rest-frame bin for LBGs in the redshift range sampled by our colour selection. The simulated number counts are then scaled to the relevant observed areas and summed together. Finally, we find the combination of binned densities $\phi{_i}=w_i \cdot \phi_{ref}$, that best reproduces the total number counts of our survey, where $w_i$ are multiplicative factors to the reference density that we determine by comparing observed and simulated distributions through a simple $\chi^2$ test. We plot as black empty circles in Fig~\ref{Fig_stepwise} the two bins at $M_{1500}<-19.8$. The third, faintest, bin at $M_{1500}>-19.8$ yields only a conservative upper limit and it is not represented in the figure, but it is anyway necessary in this procedure to consider the effect of Malmquist bias. Vertical error bars indicate the statistical uncertainties given by the $\chi^2$ test. \smallskip The two methods give consistent results, and they are in perfect agreement with other stepwise estimates in the same magnitude range (see Fig.~\ref{Fig_stepwise}). However, the error bars and the relevant magnitude range are much larger when using the $\chi^2$ minimization procedure. While an average conversion from observed to rest-frame magnitudes, along with an estimate of effective volumes, can provide a first order-of-magnitude estimate of the binned number density of LBGs, we emphasize that significant statistical uncertainties can arise due to photometric scatter, and to the different relation between $Y$ and UV continuum magnitudes for different galaxy models and redshifts. \subsection{Maximum Likelihood LF}\label{ML} \begin{figure} \centering \includegraphics[width=9cm]{fig8.ps} \caption{$\chi^2$ contour levels for the $ dlog(\phi)/dz$, $ M_*'$ parameters derived for the Schechter--like LF considering all the four Hawk-I fields. The lower and left axis refer to the evolutionary terms $ M_*'$ and $ dlog(\phi)/dz$ with respect to the best-fit z=6 parameters of \citet{Mclure2009} (black point and errorbars). The upper and right axis refer to the $ M_*$ and $ \phi$ values at the median redshift estimated for our sample (z=6.8). Grey points and errorbars mark the position of the $z\sim7$ best fit parameters by \citet{Ouchi2009} and by \citet{Bouwens2008}. The black solid line indicates the 99\% c.l. region estimated on the basis of the two GOODS-South pointings only (C10).} \label{chi2} \end{figure} We estimate how significant is the evolution of the LF at $z>6$ adopting a maximum likelihood approach. This method allows us to compare the observed number counts to those predicted for different evolving Schechter LFs \citep{Schechter1976} after accounting for the expected systematics in the detection process \cite[e.g.][]{Bouwens2007,Mannucci2007,Mclure2009}. As in C10, we assume that the LF can be described by a Schechter function with parameters $\phi$ and $M_*$ evolving from their value at $z_0=6.0$ \citep{Mclure2009} according to the following parametrisation: $$log(\phi(z)) = log(\phi(z_0)) + dlog(\phi)/dz \cdot (z-z_0)$$ $$M_*(z) = M_*(z_0) + M_*' \cdot (z-z_0)$$ Since our faint limit is close to the expected value of the characteristic luminosity $M_*$, we fix the faint end slope to the value $\alpha=-1.71$ of the $z \sim 6$ LF by \citet{Mclure2009}. We explicitly tested that no appreciable differences are found when fixing $\alpha$ to different values ($\alpha = -1.4, -2.0$). For a broad range of values of the evolving terms $ M_*'$ and $ dlog(\phi)/dz$ (see Fig.~\ref{chi2}) we simulate, under a MonteCarlo approach, the redshift $z$ and UV magnitude $M_{1500}$ for a population of $3\times10^5$ galaxies. These objects are randomly extracted from the larger database of simulated galaxies described in Sect.~ \ref{Montecarlo}, which encompasses a broad range of the physical parameters determining the rest frame photometry, like E(B-V), metallicity, Ly$\alpha$ EW etc. The distributions of $Y$ magnitudes and $Z-Y$ colours for each simulated population are scaled to the observed area in each of the fields and compared to the observed ones with a maximum likelihood test, under the assumption of simple Poissonian statistics. For each of the two distributions, and for each field, we build the likelihood function $\cal{L}$: \begin{equation} {\cal L} = \prod_{i} e^{-N_{exp,i}} \frac{(N_{exp,i})^{N_{obs,i}}}{(N_{obs,i})!} \label{eq:ml} \end{equation} where $N_{obs,i}$ is the observed number of sources in the magnitude (colour) interval $i$, $N_{exp,i}$ is the expected number of sources in the same magnitude (colour) interval, and $\Pi_{i}$ is the product symbol. For each field, we associate to every model a likelihood computed as the product of those obtained for the magnitude and colour distributions separately. We then compute a final likelihood as the product of the GOODS, BDF and NTTDF likelihoods. The colour plot in Fig.~ \ref{chi2} shows the 68\%, 95\%, and 99\% likelihood intervals on the evolutionary terms $ M_*'$ and $ dlog(\phi)/dz$ (left and bottom axes) and for the resulting Schechter parameters at the median redshift z=6.8 of our sample (top and right axes) for the combination of all the four Hawk-I fields. In the same plot, the colour code refers to the $\chi^2$ distribution obtained under the usual assumption $\chi^2=-2.0\cdot ln(\cal{L}) $ ~\cite[e.g.][]{Cash1979}. \textit{We reject at $\gtrsim$ 99\% confidence level the hypothesis that the LF remains constant in both parameters above z=6 ($dlog(\phi)/dz=0$ and $M_*'=dM_*/dz=0$, black point in Fig.~ \ref{chi2})}. In Fig.~ \ref{chi2} it is also shown the 99\% c.l. region on the Schechter evolutionary terms estimated on the basis of the two GOODS-South pointings only (C10). Although the degeneracy between $M_*$ and $\phi$ is still present, the analysis of the BDF and NTTDF fields considerably reduces the allowed parameter space. The region of allowed values for the LF parameters in our final likelihood map points to a pronounced decrease of $\phi$ along with a brightening of $M_*$ with redshift. However, the best-fit values for $M_*$ and $\phi$ at $z\sim7$ derived by \citet{Ouchi2009} and by \citet{Bouwens2008} \cite[see also][]{Bouwens2010c}, indicating a constant or slightly dimming $M_*$, still fall within the 2$\sigma$ region constrained by our maximum likelihood (grey points in Fig.~ \ref{chi2}), and they are consistent with our estimate once the uncertainties are considered. We argue that cosmic variance (see Sect.~\ref{cosmicvariance}) and the limited sample of very bright objects available may explain the discrepancies among different results: in our case, an inspection of the likelihood maps obtained separately on each field shows that the NTTDF, having two bright objects ($Y\sim25.5-25.7$, approximately $M_{1500}\lesssim-21.2$ at $z=6.8$), has a great effect in skewing the global likelihood towards brighter values of $M_*$. We also note that some theoretical models \cite[e.g.][]{Trenti2010,Finlator2010} predict a dimming of $M_*$ with redshift. However, several model parameters are largely unconstrained by the observations, while a large dust extinction might be required to match observed and predicted LFs at the bright end \citep{Lacey2010}. \subsection{Cosmic Variance} \label{cosmicvariance} The effects of cosmic variance are reduced in our case, since our data come from three independent areas, albeit of different sizes (the GOODS-South field being covered by two of the four Hawk-I pointings). We evaluate the possible impact of cosmic variance using the mock catalogues of the Millennium Simulation \citep{kitzbichler2007} in the same way as discussed in C10. For each of the three Hawk-I areas (GOODS-South, BDF, NTTDF), we extract 200 fields of the same size from independent Millennium light-cones, and we apply a corresponding photometric selection criteria on galaxies at $6.5<z<7.4$ (bracketing the peak of our selection window), without any constraint on the distribution of host haloes. We estimate that a cosmic variance of $\sim 21\%$ affects the total number counts of $z$-drop LBGs in our survey. We find that the evolution is still confirmed at a $\gtrsim$ 99\% confidence level by our maximum likelihood approach even allowing a $\sim 21\%$ variation in the total number density. Indeed, after accounting for all the observational effects, we estimate that we would have observed $\sim30$ ~$z$-drops in our survey in the case of a non-evolving LF: a factor of two higher than the observed number. However, while cosmic variance has not a significant effect on our conclusion that the LF strongly evolves from $z\sim7$ to $z\sim6$, it can have a great effect in determining the \textit{form} of this evolution. Cosmic variance is strongly luminosity dependent, and it is as high as 41\% for galaxies brighter than $Y=25.8$ in our survey, thus affecting the determination of the $M_*$ parameter. \subsection{UV luminosity density, SFRD and constraints on cosmic reionization} While the $M_*$ and $\phi$ parameters are highly degenerate, the number density of bright galaxies, i.e. the integral of the bright end of the LF is much better constrained, and so are derived integral quantities such as the UV luminosity density ($\rho_{UV}$) and star formation rate density (SFRD). We conservatively consider the model LFs within the 95\% c.l. region of our likelihood analysis to derive the $\rho_{UV}$ by integrating $L\cdot\Phi(L)$ up to the luminosity corresponding to $M_{1500}=-19.0$. We convert these values in a SFRD following the standard formula by \citet{Madau1998} and applying the extinction correction of \citet{Meurer1999} (considering an average UV slope $\beta=-2.0$). Finally, we use $\rho_{UV}$ to evaluate the emission rate $\dot{N}_{ion}$ of hydrogen ionizing photons per $Mpc^3$ following \citet{Bolton2007}. We consider an escape fraction $f_{esc}=0.2$, a spectral index $\alpha_s=3.0$ and an ionizing emission density at the Lyman limit $\epsilon_g=\rho_{UV}/6.0$. We report in Tab. ~\ref{Tab_parameters} the range of values for $\rho_{UV}$, SFRD and $log(\dot{N}_{ion})$. These values are perfectly consistent with the analogous ones presented in C10 and derived from the LFs in the 68\% c.l. region of the GOODS likelihood. Considering the same integral of the z=6 UV LF of \citet{Mclure2009}, our estimated $\rho_{UV}$ implies a drop of a factor $\sim 3.5$ in the UV luminosity density from z=6. The \textit{lower limit} for the ionization rate required to balance recombination at $z=7$, computed according to \citet{Madau1999} and assuming an HII clumping factor equal to one, is $log(\dot{N}_{rec})=50.1$, which is a factor of two higher than the highest value allowed by our analysis. This demonstrates that, under usual assumptions, bright UV galaxies alone cannot keep the universe reionized at $z\sim7$. By varying the escape fraction, we obtain that values larger than $f_{esc}=0.5$ are required to reconcile the emission rate from bright galaxies with the one required for reionization. This demonstrates that, either UV bright galaxies at $z\sim7$ have different physical properties with respect to lower redshift LBGs, or, most probably, a crucial contribution to the reionization process comes from galaxies at the faint end of the LFs or from other kind of sources. Once different integration limits are taken into account, our estimates are in agreement with the results obtained by \citet{Bouwens2008,Ouchi2009,Gonzalez2010}. \begin{table} \caption{Properties of the $z\sim 7$ population$^a$} \label{Tab_parameters} \centering \begin{tabular}{cc} \hline \hline $\rho_{UV}$& $1.5 ^{+2.1} _{-0.8} ~10^{25}~erg ~s^{-1} ~Hz^{-1}~ Mpc^{-3}$\\ SFRD & $3.2 ^{+3.6} _{-1.9}~ 10^{-3}~ M_\odot ~yr^{-1}~ Mpc^{-3}$ \\ $log(\dot{N}_{ion})$& $49.4 ^{+0.4} _{-0.3} ~Mpc^{-3}$ \\ \hline \end{tabular} \\ \smallskip \begin{tabular}{l} a - LFs in the 95\% c.l. region ($M_{1500}<-19.0$) \\ \end{tabular} \end{table} \section{Constraints on the LBG number density at $z\sim8$}\label{z8} We exploited deep Hawk-I $J$- and $K$-band observations to put an upper limit on the number density $z\gtrsim7.5$, $Y$-drop galaxies in our survey. We used the observations of the BDF and NTTDF fields presented in Sect.~\ref{dataset}, and deep observations of the two GOODS-South pointings obtained both in our program as well as through a similar ESO observing program (Cl\'ement et al. in preparation). We obtained a multicolour catalogue with the $J$-band as detection image using SExtractor in dual mode over the full imaging set presented in this paper and in C10. We used the same detection parameters and 2FWHM apertures adopted for the $Y$-detected catalogue, computing aperture corrected total magnitudes through appropriate corrections in each band. We chose the colour selection criteria in order to isolate galaxies having the Lyman-break sampled by the $Y-J$ colour, and to exclude contamination from lower redshift galaxies on the basis of the expected colours for passive and dusty-starburst galaxies modelled as described in Sect.~\ref{ircolours}: \begin{eqnarray*} (Y-J) &>& 0.8\\ (Y-J)&>& 1.1+0.6 \cdot (J-K)\\ \end{eqnarray*} We also required no detection in the optical bands adopting the same $S/N$ criteria outlined for the selection of $z$-drop galaxies. We limited our selection to $J$=24.5,24.8,25.0 in the BDF, NTTDF and GOODS pointings respectively, up to which we estimate that our catalogues are 100\% complete, and we used the same area chosen for the selection of $z$-drop galaxies in order to avoid the noisiest regions in any image. With these criteria we found no candidate $Y$-drop galaxy in our survey. Considered the average $J-M_{1500}$ relation at the median redshift of our colour selection (z=8), we are probing the $M < M_*$ region of the LF at $M_{1500}\sim -22.5$. We report in Tab.~\ref{Tab_z8} an upper limit on the number density of very bright $Y$-drop LBGs estimated as the inverse of the volume sampled by our survey in the redshift interval $7.5<z<9.0$. \begin{table} \caption{LBG number density at $z\sim8$} \label{Tab_z8} \centering \begin{tabular}{cc} \hline Mag. Range & $\phi ~ (10^{-4} Mpc^{-3})$\\ \hline $M_{1500}<-22.0$& $<0.02$\\ \hline \end{tabular} \end{table} \section{Summary and conclusions} We presented in this work the results of a $Y$--band survey of the two high galactic latitude BDF and NTTDF fields aimed at detecting galaxies at $z \gtrsim 6.5$ and measuring their number density. The survey is based on deep observations obtained under a dedicated ESO Large Programme. We made use of $Y$, $J$, $K$ band observations performed with Hawk-I, the new near-IR camera installed at the VLT, and of FORS2 $Z$-band observations. We matched and combined these data with deep archive FORS1 and FORS2 observations in the $U$, $B$, $V$, $R$, $I$ filters to detect high redshift LBGs under the main criterion $Z-Y>1$, requiring no optical detection and flat $Y-J$ and $Y-K$ colours. The colour selection criteria have been tailored in order to exclude lower redshift passive galaxies and dusty starbursts, Galactic T-dwarfs and galaxies exhibiting large $Z-Y$ colours as well as significant emission in the optical bands, possibly intermediate redshift sources with bright emission lines. As a result, we isolated 8 highly reliable $z$-drop candidates in the magnitude range $Y\simeq 25.5-26.5$ over a total area of 70.1 $arcmin^2$. We combined this $z$-drop sample with the similar one extracted from two pointings over the GOODS-South field comprising seven galaxies at $Y<26.7$. We performed a stacking analysis of the 15 objects to estimate the average properties of $M\sim M_*$ galaxies at $z\gtrsim 6.5$. The photometric redshift of the stacked object is $z=6.85^{+0.20}_{-0.15}$ in perfect agreement with the estimated selection window of our survey. The stacked SED is fitted by an $E(B-V)=0.05^{+0.15}_{-0.05}$ at a 68\% confidence level, indicating a low dust content in agreement with previous analysis of z$\sim7-8$ objects \citep{Finkelstein2009,Schaerer2010,Gonzalez2010}. We then estimated the number density and the LF evolution on the basis of detailed MonteCarlo imaging simulations accounting for all the uncertainties involved in the observations: detection completeness, photometric scatter, and random fluctuations in the S/N measure due to overlapping unresolved sources, or other effects. We first computed a binned estimate of the galaxy number density at $z\sim7$ following two different procedures. The first one, which is based on an average $Y$-$M_{1500}$ relation and on an estimate of the redshift dependent completeness of our selection, is the procedure commonly adopted in the literature. The second method is more conservative, and exploits a $\chi^2$ minimization to compare the observed number counts to those predicted on the basis of MonteCarlo simulations for different combinations of galaxy densities. This second procedure intrinsically considers the uncertainties in the $Y$-$M_{1500}$ conversion due to photometric scatter, to the redshift distribution and to the intrinsic properties of different galaxy models. We find that the two procedures are consistent and they are in agreement with similar analysis from the literature. However, the more conservative procedure highlights that sources of statistical uncertainty are usually underestimated. To assess the degree of evolution of the UV LF at $z>6.0$, we also simulated galaxy populations following different UV Schechter functions with linearly evolving parameters $log(\phi)$ and $M_*$. For each of the four Hawk-I pointings we compared the resulting distributions of simulated magnitudes and colours with the observed ones following a maximum likelihood approach. We find strong evidence of evolution of the LF above z=6: our analysis rules out at a $> 99\%$ confidence level that the LF remains constant in both $\phi$ and $M_*$ above $z=6$. Our likelihood maps for the Schechter parameters indicate a strong evolution in $\phi$ and a brightening of $M_*$ with redshift. However, the detection of two bright objects ($Y\sim 25.5-25.7$, corresponding to $M_{1500}\lesssim-21.2$) in the NTTDF pointing have a major role in skewing the evolution of $M_*$ towards bright values. The two Schechter parameters are, however, highly degenerate and our findings are also consistent within the uncertainties with a milder evolution of $\phi$ and a constant or slightly dimming $M_*$ as indicated by other authors \citep{Bouwens2008,Ouchi2009}. We estimate that the possible effect of cosmic variance is not capable of reconciling the observed number density of $z$-drop galaxies with the one predicted for a non-evolving LF. However, the strong dependence on luminosity of the cosmic variance, and the relatively small magnitude range probed by our survey at $M\lesssim M_*$, can influence the determination of the form of the evolving LF and provide an explanation for the difference between the evolution we determine and other estimates in the literature. The uncertainty and the degeneracy in the $M_*$ and $\phi$ best-fit values are not reflected in a comparable uncertainty in the number density of bright galaxies. We conservatively consider the model LFs within the 95\% c.l. region of our likelihood analysis to derive for galaxies at $M_{1500}<-19.0$ an UV luminosity density $\rho_{UV}= 1.5^{+2.1}_{-0.8} 10^{25} erg ~ s^{-1} ~ Hz^{-1} ~ Mpc^{-3} $, a star formation rate density $SFRD=3.2 ^{+3.6} _{-1.9}~ 10^{-3} M_{\odot} ~yr^{-1} ~ Mpc^{-3} $ and an emission rate of hydrogen ionizing photons $log(\dot{N}_{ion})=49.4 ^{+0.4} _{-0.3} ~Mpc^{-3}$. The UV luminosity density is lower than the corresponding one at $z\sim 6$ by a factor $\sim 3.5$, while $\dot{N}_{ion}$ is lower by at least a factor of $\sim 2$ than the lower limit required for reionization according to \citet{Madau1999}, considering $f_{esc}=0.2$ and an HII clumping factor equal to one. This implies that UV bright galaxies alone cannot reionize the universe, unless their physical parameters are much different from those of lower redshift LBGs (e.g. $f_{esc}>0.5$, harder UV spectrum etc.). Most probably, the crucial contribution to reionization comes from galaxies at the faint end of the LF or from other kind of sources. Finally, we exploit the Hawk-I $J$ and $K$ band observations of our survey to derive an upper limit of $2\cdot10^{-6} Mpc^{-3}$ for the number density of $M \sim -22.5$ LBGs at $z\sim8$ from the non-detection of $Y$-drop galaxies up to $J\sim25$. \begin{acknowledgements} Observations were carried out using the Very Large Telescope at the ESO Paranal Observatory under Programme IDs LP181.A-0717, LP168.A-0485, ID 170.A-0788, ID 181.A-0485, ID 283.A-5052 and the ESO Science Archive under Programme IDs 67.A-0249, 71.A-0584, 73.A-0564, 68.A-0563, 69.A-0539, 70.A-0048, 64.O-0643, 66.A-0572, 68.A-0544, 164.O-0561, 163.N-0210, and 60.A-9120. We acknowledge support from Agenzia Spaziale Italiana. \end{acknowledgements} \bibliographystyle{aa}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} A new era of nuclear matter research is envisaged with the $12$~GeV upgrade of the CEBAF accelerator at the Jefferson Lab in the USA and with the construction of the FAIR facility in Germany. These new facilities will have the exciting potential of implanting low-momentum charmonia and charmed hadrons in an atomic nucleus, like the $J/\Psi$ and $\psi$ mesons and heavy-light charmed mesons such as $D$ and $D^*$. While at JLab charmed hadrons will be produced by scattering electrons off nuclei, at FAIR they will be produced by the annihilation of antiprotons on nuclei. There are several reasons for the excitement, one of the main ones being the opportunity of studying the poorly understood low-energy excitations of gluon degrees of freedom. An example where these excitations play an important role is the propagation of charmonia in matter. Since a charmonium state does not have quarks in common with the nuclear medium, its interactions with the medium necessarily involve the intervention of gluons. Basic interaction mechanisms discussed in the literature have been the excitation of QCD van der Waals forces arising from the exchange of two or more gluons between color-singlet states~\cite{Peskin:1979va,Brodsky:1989jd}, and the excitation of charmed hadronic intermediate states with light quarks created from the vacuum~\cite{Brodsky:1997gh,Ko:2000jx}. Another interesting challenge is to study the properties of charmed $D$ and $D^*$ mesons in medium. The chiral properties of the light quarks that compose these mesons are much more sensitive to the nuclear medium than their companion, heavier charm quarks and therefore they offer the unique opportunity of studying phenomena like the partial restoration of chiral symmetry in nuclear matter. Motivated by such considerations, some very interesting phenomena involving these mesons have been predicted. Amongst these we mention the possible formation of $D(\bar{D})$ meson-nuclear bound states~\cite{Tsushima:1998ru}, enhanced dissociation of $J/\Psi$ meson in nuclear matter (heavy nuclei)~\cite{Sibirtsev:1999jr}, and enhancement of the $D$ and $\bar{D}$ meson production in antiproton-nucleus collisions~\cite{Sibirtsev:1999js}. Ref.~\cite{Voloshin:2007dx} presents a recent review of the properties of charmonium states and compiles a fairly complete list of references on theoretical studies concerning a great variety of physics issues related to these states. On the experimental side, one of the major challenges is to find appropriate kinematical conditions to produce these hadrons essentially at rest, or with small momentum relative to the nucleus, as effects of the nuclear medium are driven by low energy interactions. The original suggestion~\cite{Brodsky:1989jd} was that QCD van der Waals forces arising from multiple gluon exchange would be capable of binding a charmonium state by as much as $400$~MeV in an $A=9$ nucleus. The estimate was based on a variational calculation using a phenomenological ansatz for the charmonium-nucleus potential in the form of a Yukawa potential. Along the same lines but taking into account the distribution of nucleons in the nucleus by folding the charmonium-nucleon Yukawa potential with the nuclear density distribution, Ref.~\cite{Wasson:1991fb} found a maximum of $30$~MeV binding energy in a large nucleus. A somewhat more QCD-oriented estimate was made in Ref.~\cite{Luke:1992tm}. Using a lowest-order multipole expansion for the coupling of multiple gluons to a small-size charmonium bound state~\cite{Peskin:1979va}, it is possible to show on the basis of the operator product expansion that the mass shift of charmonium in nuclear matter is given, in the limit of infinitely heavy charm quark mass, by an expression similar to the usual second-order Stark effect in atomic physics, which depends on the chromo-electric polarizability of the nucleon. Using an estimate~\cite{Peskin:1979va} for the value of this polarizability, the authors of Ref.~\cite{Luke:1992tm} obtained a $10$~MeV binding for $J/\Psi$ in nuclear matter. On the other hand, for the excited charmonium states, a much larger binding energy was obtained, e.g. $700$~MeV for the excited charmonium state $\psi'(2S)$, an admittedly untrustworthy number. Following this same procedure, but keeping the charm quark mass finite and using realistic charmonium bound-state wave-functions, Ref.~\cite{Ko:2000jx} found $8$~MeV binding energy for $J/\Psi$ in nuclear matter, but still over $100$~MeV binding for the charmonium excited states. While an increase in the QCD Stark effect is expected for excited states (because of their larger size), the extreme values for the binding energies for these states found in the literature are widely considered to be unrealistic. The source for such an overestimate is attributed to the breakdown of the multipole expansion for the larger-sized charmonium states. There are some other studies on charmonium interactions with ordinary hadrons and nuclear matter, in particular involving the $J/\Psi$ meson. QCD sum rules studies estimated a $J/\Psi$ mass decrease in nuclear matter ranging from $4$ to $7$ MeV~\cite{Klingl:1998sr,Hayashigaki:1998ey,Kim:2000kj}, while an estimate based on color polarizability~\cite{Sibirtsev:2005ex} gave larger than $21$ MeV. In addition, there are studies of the charmonium-nucleon interaction and of $J/\Psi$ dissociation cross sections based on a one-boson exchange model~\cite{Sibirtsev:2000aw}, effective Lagrangians~\cite{Liu:2001ce,Oh:2007ej} and the quark-model~\cite{Hilbert:2007hc}. In Ref.~\cite{Yokokawa:2006td} the charmonium-hadron interaction was studied in lattice QCD. A first estimate for the mass shifts of charmonium states (we denote charmonium states generically by $\psi$) in nuclear medium arising from the excitation of a pair of $D$ and $D^*$ mesons -- see Fig.~\ref{fig:loop} -- was performed in Ref.~\cite{Ko:2000jx}. Employing a gauged effective Lagrangian for the coupling of $D$ mesons to the charmonia, the mass shifts were found to be positive for $J/\Psi$ and $\psi(3770)$, and negative for $\psi(3660)$ at normal nuclear matter density $\rho_0$. These results were obtained for density-dependent $D$ and $\bar D$ masses that decrease linearly with density, such that at $\rho_0$ they are shifted by $50$~MeV. The loop integral in the self-energy (Fig.~\ref{fig:loop}) is divergent and was regularized using form-factors derived from the $^3P_0$ decay model with quark-model wave functions for $\psi$ and $D$. The positive mass shift is at first sight puzzling, since even with a $50$~MeV reduction of the $D$ masses, the intermediate state is still above threshold for the decay of $J/\Psi$ into a $D\bar D$ pair and so a second-order contribution should be negative. As we shall explain below, this was not realized in the calculation of Ref.~\cite{Ko:2000jx} because of the interplay of the form factor used and the gauged nature of the interaction. \vspace{0.5cm} \begin{figure}[htb] \includegraphics[height=0.15\textheight]{loop.eps} \caption{$DD$-loop contribution to the $J/\Psi$ self-energy. We include also $DD^*$ and $D^*D^*$ contributions.} \label{fig:loop} \end{figure} In the present paper we reanalyze the mass shift of $J/\Psi$ in terms of the excitation of intermediate charmed mesons using effective Lagrangians. In addition to the $D\bar D$ loops, we also include $D \bar D^*$, $D^* \bar D$ and $D^*{\bar D}^*$ loops. The medium dependence of the $D$ and $D^*$ masses is included by an explicit calculation using the quark-meson coupling (QMC) model~\cite{Guichon:1987jp}. The QMC is a quark-based model for nuclear structure which has been very successful in describing nuclear matter saturation properties and has been used to predict a great variety of changes of hadron properties in nuclear medium. A~review of the basic ingredients of the model and a summary of results and predictions can be found in Ref.~\cite{Saito:2005rv}. The paper is organized as follows. In the next Section we present the effective Lagrangians used to calculate the $J/\Psi$ self-energy and give explicit expressions for the contributions of the different intermediate states. In Section~\ref{sec:qmc} we briefly review the QMC description of the $D$ and $D^*$ mesons in nuclear matter and present numerical results for the density dependence of the $D$ and $D^*$ masses. A full set of numerical results for the density dependence of the $J/\Psi$ self-energy is presented in Section~\ref{sec:num}. We show results for the separate contributions of the $D \bar D^*$, $D^* \bar D$ and $D^*{\bar D}^*$ loops and also investigate the sensitivity of our results to the cutoff masses. Our conclusions and perspectives for future work are presented in Section~\ref{sec:concl}. \section{Effective Lagrangians and $J/\Psi$ self-energy} \label{sec:lag} We use the following phenomenological Lagrangian densities for the vertices $J/\Psi$-$D$ and $J/\Psi$-$D^*$ (in the following we denote by $\psi$ the field representing $J/\Psi$): \begin{widetext} \begin{eqnarray} {\mathcal L}_{\psi D D} &=& i g_{\psi D D} \, \psi^\mu \left[\bar D \left(\partial_\mu D\right) - \left(\partial_\mu \bar D\right) D \right] , \label{LpsiDDbar} \\ {\cal L}_{\psi D D^*} &=& \frac{ g_{\psi D D^*}}{m_\psi} \, \varepsilon_{\alpha\beta\mu\nu} \left(\partial^\alpha \psi^\beta\right) \Bigl[\left(\partial^\mu \bar{D}^{*\nu}\right) D + \bar D \left(\partial^\mu D^{*\nu}\right) \Bigr] , \label{LpsiDD*} \\ {\cal L}_{\psi D^* D^*} &=& i g_{\psi D^* D^*} \, \bigl\{ \psi^\mu \left[\left(\partial_\mu \bar{D}^{*\nu}\right) D^*_\nu - {\bar D}^{*\nu}\left(\partial_\mu D^*_\nu\right) \right] \nonumber\\ & &\hspace{3em}+ \left[ \left(\partial_\mu \psi^\nu\right) \bar{D}^*_\nu - \psi^\nu \left(\partial_\mu {\bar D}^*_\nu\right) \right] D^{*\mu} + \, \bar{D}^{*\mu} \left[\psi^\nu \left(\partial_\mu D^*_\nu\right) - \left(\partial_\mu \psi^\nu\right) D^*_\nu \right] \bigr\} . \label{LpsiD*D*} \end{eqnarray} \end{widetext} Our convention for the $D$-meson-field isospin doublets is \begin{eqnarray} \bar D = ({\bar D}^0 \;\;\; D^{-} ), \hspace{1.0cm} {D = \left(\begin{array}{c} D^0 \\ D^+ \end{array}\right)} . \label{doublets} \end{eqnarray} We note that these Lagrangians are an $SU(4)$ extension of light-flavor chiral-symmetric Lagrangians of pseudoscalar and vector mesons. In the light flavor sector, they have been motivated by a local gauge symmetry principle, treating vector mesons either as massive gauge bosons or as dynamically generated gauge bosons. In the first case, there appear contact interactions involving two pseudoscalar and two vector mesons. When extended to the charm sector, in Eq.~(\ref{LpsiDDbar}) for instance, there is an additional term of the form $2 g^2_{\psi D D} \psi^\mu\psi_\mu \bar D D$. In view of the fact that $SU(4)$ flavor symmetry is strongly broken in nature, and in order to stay as close as possible to phenomenology, we use experimental values for the charmed mesons masses and use the empirically known meson coupling constants. For these reasons we choose not to use gauged Lagrangians -- a similar attitude was followed in Ref.~\cite{Lin:1999ve} in a study of hadronic scattering of charmed mesons. However, in order to compare results with Ref.~\cite{Ko:2000jx} and assess the impact of a contact term of the form $\psi \psi D D$, we will also present results for the $J/\Psi$ mass shift including such a term. We are interested in the difference of the in-medium, $m^*_\psi$, and vacuum, $m_\psi$, masses of $J/\Psi$, \begin{equation} \Delta m = m^*_\psi - m_\psi, \label{Delta-m} \end{equation} with the masses obtained from \begin{equation} m^2_\psi = (m^0_\psi)^2 + \Sigma(k^2 = m^2_\psi)\, . \label{m-psi} \end{equation} Here $m^0_\psi$ is the bare mass and $\Sigma(k^2)$ is the total $J/\Psi$ self-energy obtained from the sum of the contributions from the $DD$, $DD^*$ and $D^*D^*$ loops. The in-medium mass, $m^*_\psi$, is obtained likewise, with the self-energy calculated with medium-modified $D$ and $D^*$ meson masses. We take the averaged, equal masses for the neutral and charged $D$ mesons, i.e. $m_{D^0} = m_{D^{\pm}}$ and $m_{D^{*0}} = m_{D^{*\pm}}$. Averaging over the three polarizations of $J/\Psi$, one can write each of the loop contributions to the $J/\Psi$ self-energy $\Sigma_l$, $l=DD,DD^*,D^*D^*$, as \begin{equation} \Sigma_l(m^2_\psi) = - \frac{ g^2_{\psi \, l}}{3\pi^2} \int^\infty_0 dq \, {\bf q}^2 \, F_l({\bf q}^2) \, K_l({\bf q}^2), \label{Sigma-l} \end{equation} where $F_l({\bf q}^2)$ is the product of vertex form-factors (to be discussed later) and the $K_l({\bf q})$ for each loop contribution are given by \begin{eqnarray} K_{DD}({\bf q}^2) &=& \frac{{\bf q}^2}{\omega_D} \left( \frac{{\bf q}^2}{\omega^2_D - m^2_\psi/4} - \xi \right) , \label{KDD} \\ K_{DD^*}({\bf q}^2) &=& \frac{{\bf q}^2 \, \overline{\omega}_D} {\omega_D \omega_{D^*}}\frac{1}{\overline{\omega}^2_D - m^2_\psi/4}, \label{KDD*} \\ K_{D^*D^*}({\bf q}^2) &=& \frac{1}{4 m_\psi \omega_D } \Biggl[\frac{{\cal A}(q^0=\omega_{D^*})}{\omega_{D^*} - m_\psi/2} - \, \frac{{\cal A}(q^0=\omega_{D^*}+m_\psi)}{\omega_{D^*} + m_\psi/2} \Biggr], \label{SigmaD*D*} \end{eqnarray} where $\omega_D = ({\bf q}^2+m^2_D)^{1/2}$, $\omega_{D^*} = ({\bf q}^2+m^2_{D^*})^{1/2}$, $\overline\omega_D = (\omega_D + \omega_{D^*})/2$, $\xi = 0$ for the non-gauged Lagrangian of Eq.~(\ref{LpsiDDbar}) and $\xi = 1$ for the gauged Lagrangian of Ref.~\cite{Ko:2000jx}, and \begin{equation} {\cal A}(q) = \sum^4_{i=1} {\cal A}_i(q), \end{equation} with \begin{eqnarray} {\cal A}_1(q) &=& - 4 q^2 \left\{ 4 - \frac{q^2+(q-k)^2}{m^2_{D^*}} + \frac{[q\cdot(q-k)]^2}{m^4_{D^*}}\right\}, \label{A1} \\ {\cal A}_2(q) &=& 8 \left[q^2 - \frac{q\cdot(q-k)}{m^2_{D^*}}\right] \left[2 + \frac{(q^0)^2}{m^2_{D^*}}\right], \label{A2} \\ {\cal A}_3(q) &=& 8 \left(2q^0 - m_\psi\right) \Biggl\{q^0 - \left(2q^0 - m_\psi\right)\frac{q^2+q\cdot(q-k)}{m^2_{D^*}} \, + q^0 \frac{[q\cdot(q-k)]^2}{m^4_{D^*}}\Biggr\} , \label{A3} \\ {\cal A}_4(q) &=& -8 \left[q^0 - (q^0-m_\psi)\frac{q\cdot(q-k)}{m^2_{D^*}}\right] \, \left[ (q^0 - m_\psi) - q^0 \frac{q\cdot(q-k)}{m^2_{D^*}}\right] . \label{A4} \end{eqnarray} In these last expressions, $q$ and $k$ are four-vectors given by $q=(q^0,{\bf q})$ and $k = (m_\psi,0)$. \section{Quark-meson coupling model and $D$ and $D^*$ mesons in matter} \label{sec:qmc} In this section we briefly review the QMC description of the $D$ and $D^*$ mesons in nuclear matter. Notations and explicit expressions are given in Refs.~\cite{Tsushima:1998ru,Tsushima:2002cc}. The QMC model was created to provide insight into the structure of nuclear matter, starting at the quark level~\cite{Guichon:1987jp,Guichon:1995ue,Saito:2005rv}. Nucleon internal structure was modeled by the MIT bag, while the binding was described by the self-consistent couplings of the confined light quarks ($u,d$) (not $s$ nor heavier quarks) to the scalar-$\sigma$ and vector-$\omega$ meson fields generated by the confined light quarks in the other nucleons. The self-consistent response of the bound light quarks to the mean $\sigma$ field leads to a novel saturation mechanism for nuclear matter, with the enhancement of the lower components of the valence Dirac light quark wave functions. The direct interaction between the light quarks and the scalar $\sigma$ field is a key ingredient of the model, it induces a nucleon {\it scalar polarizability}~\cite{Thomas:2004iw,Chanfray:2006nz} and generates a nonlinear scalar potential (effective nucleon mass), or equivalently a density-dependent ($\sigma$-field dependent) $\sigma$-nucleon coupling. The model has opened tremendous opportunities for studies of the structure of finite nuclei and of hadron properties in a nuclear medium (nuclei) with a model based on the underlying quark degrees of freedom~\cite{Saito:2005rv}. In QMC the Dirac equations for the quarks and antiquarks in nuclear matter, inside the bags of $D$ and $D^*$ mesons, ($q = u$ or $d$, and $c$) neglecting the Coulomb force in nuclear matter, are given by: \begin{widetext} \begin{eqnarray} & &\left[ i \gamma \cdot \partial_x - (m_q - V^q_\sigma) \mp \gamma^0 \left( V^q_\omega + \frac{1}{2} V^q_\rho \right) \right] \left( \begin{array}{c} \psi_u(x) \\ \psi_{\bar{u}}(x) \\ \end{array} \right) = 0, \label{diracu}\\ & &\left[ i \gamma \cdot \partial_x - (m_q - V^q_\sigma) \mp \gamma^0 \left( V^q_\omega - \frac{1}{2} V^q_\rho \right) \right] \left( \begin{array}{c} \psi_d(x) \\ \psi_{\bar{d}}(x) \\ \end{array} \right) = 0, \label{diracd}\\ & &\left[ i \gamma \cdot \partial_x - m_c \right] \psi_c (x)\,\, ({\rm or}\,\, \psi_{\bar{c}}(x)) = 0. \label{diracc} \end{eqnarray} \end{widetext} The (constant) mean-field potentials for a light quarks in nuclear matter are defined by $V^q_\sigma \equiv g^q_\sigma \sigma$, $V^q_\omega \equiv g^q_\omega \omega$ and $V^q_\rho \equiv g^q_\rho b$, with $g^q_\sigma$, $g^q_\omega$ and $g^q_\rho$ the corresponding quark-meson coupling constants. The eigenenergies for the quarks in the $D$ and $D^*$ mesons in units of $1/R_{D,D^*}^*$ are given by, \begin{eqnarray} \left( \begin{array}{c} \epsilon_u \\ \epsilon_{\bar{u}} \end{array} \right) &=& \Omega_q^* \pm R^*_{D,D^*} \left( V^q_\omega + \frac{1}{2} V^q_\rho \right), \\ \left( \begin{array}{c} \epsilon_d \\ \epsilon_{\bar{d}} \end{array} \right) &=& \Omega_q^* \pm R^*_{D,D^*} \left( V^q_\omega - \frac{1}{2} V^q_\rho \right), \\ \epsilon_c &=& \epsilon_{\bar{c}} = \Omega_c. \label{energy} \end{eqnarray} Then, the $D$ and $D^*$ meson masses in a nuclear medium $m^*_{D,D^*}$, are calculated by \begin{eqnarray} & &\hspace*{-2em} m_{D,D^*}^* = \sum_{j=q,\bar{q},c,\bar{c}} \frac{ n_j\Omega_j^* - z_{D,D^*}}{R_{D,D^*}^*} + \frac{4}{3}\pi R_{D,D^*}^{* 3} B, \\ & &\left. \frac{\partial m_{D,D^*}^*}{\partial R_{D,D^*}} \right|_{R_{D,D^*} = R_{D,D^*}^*} = 0, \label{DDsmass} \end{eqnarray} where $\Omega_q^*=\Omega_{\bar{q}}^* =[x_q^2 + (R_{D,D^*}^* m_q^*)^2]^{1/2}\,(q=u,d)$, with $m_q^*=m_q{-}g^q_\sigma \sigma$, $\Omega_c^*=\Omega_{\bar{c}}^*=[x_c^2 + (R_{D,D^*}^* m_c)^2]^{1/2}$, and $x_{q,c}$ being the bag eigenfrequencies. $B$ (=(170.0 MeV)$^4$) is the bag constant, $n_q (n_{\bar{q}})$ and $n_c (n_{\bar{c}})$ are the lowest mode quark (antiquark) numbers for the quark flavors $q$ and $c$ in the $D$ and $D^*$ mesons, respectively, and the $z_{D,D^*}$ parameterize the sum of the center-of-mass and gluon fluctuation effects and are assumed to be independent of density. We choose the values $(m_q, m_c) = (5, 1300)$ MeV for the current quark masses, and $R_N = 0.8$ fm for the bag radius of the nucleon in free space. The quark-meson coupling constants, $g^q_\sigma$, $g^q_\omega$ and $g^q_\rho$, are adjusted to fit the nuclear saturation energy and density of symmetric nuclear matter, and the bulk symmetry energy~\cite{Saito:2005rv}. Exactly the same coupling constants, $g^q_\sigma$, $g^q_\omega$ and $g^q_\rho$, are used for the light quarks in the $D$ and $D^*$ mesons and baryons as in the nucleon. Because of baryon number conservation, no vector potential should contribute to the loop integrals. Then, the vector potentials for the $D$ and $D^*$ mesons should be the same in considering the case of the $D D^*$ mixed loop to cancel out. However, for the $K^+$ meson case, $g^q_\omega$ associated with the vector potential had to be scaled $1.96$ times to reproduce an empirically extracted repulsive potential of about 25 MeV at normal nuclear matter density~\cite{Tsushima:1997df}. The reason is that $K$-mesons may be regarded as pseudo-Goldstone bosons, and they are therefore difficult to describe by naive quark models as is also true for pions. {}For this reason, in earlier work we explored the possibility of also scaling the $g^q_\omega$ strength by a factor $1.96$ for the $D$-mesons~\cite{Tsushima:1998ru,Sibirtsev:1999js}. In the present case, this possibility is excluded by baryon number conservation. As a result, the vector potential does not contribute to the final results. Thus, we may focus on the (scalar) effective masses of $D$ and $D^*$ mesons. The QMC predictions for the in-medium effective masses of these mesons are shown in Fig.~\ref{fig:DDsmass} as a function of nuclear matter density. The net reductions in the masses of the $D$ and $D^*$ are nearly the same as a function of density, as dictated by {\it the light quark number counting rule}~\cite{Tsushima:2002cc}. \begin{figure}[htb] \includegraphics[height=95mm,angle=-90]{DDsmass.ps} \caption{ $D$ and $D^*$ meson (scalar) effective masses as a function of baryon density. } \label{fig:DDsmass} \end{figure} \section{Numerical results} \label{sec:num} Amongst the main ingredients of the present calculation are the phenomenological form factors needed to regularize the self-energy loop integrals in Eq.~(\ref{Sigma-l}). Following previous experience with a similar calculation for the $\rho$ self-energy~\cite{Leinweber:2001ac}, we use a dipole form for the vertex form factors \begin{equation} u_{D,D^*}({\bf q}^2) = \left(\frac{\Lambda_{D,D^*}^2 + m^2_\psi} {\Lambda_{D,D^*}^2 + 4\omega^2_{D,D^*}({\bf q})}\right)^2, \label{uff} \end{equation} so that the $F_l({\bf q}^2)$ in Eq.~(\ref{Sigma-l}) are given by \begin{eqnarray} && F_{DD}({\bf q}^2) = u^2_{D}({\bf q}^2), \\ && F_{DD^*}({\bf q}^2) = u_D({\bf q}^2) \, u_{D^*}({\bf q}^2), \\ && F_{D^*D^*}({\bf q}^2) = u^2_{D^*}({\bf q}^2), \label{ffs} \end{eqnarray} where $\Lambda_{D}$ and $\Lambda_{D^*}$ are cutoff masses. Obviously the main uncertainty here is the value of these cutoff masses. In a simple-minded picture of the vertices the cutoff masses are related to the extension of the overlap region of $J/\Psi$ and $D$ mesons at the vertices and therefore should depend upon the sizes of the wave functions of these mesons. One can have a rough estimate of $\Lambda_{D}$ and $\Lambda_{D^*}$ by using a quark model calculation of the form factors. Using a $^3P_0$ model for quark-pair creation~\cite{LeYaouanc} and Gaussian wave functions for the mesons, the vertex form factor can be written as~\cite{Ko:2000jx} \begin{equation} u_{QM}({\bf q}^2)=e^{-{\bf q}^2/4(\beta^2_D+2\beta^2_\psi)}, \label{ff} \end{equation} where $\beta_D$ and $\beta_\psi$ are the Gaussian size parameters of the $D$ and $J/\Psi$ wave functions. Demanding that the $u({\bf q}^2)$ of Eq.~(\ref{uff}) and $u_{QM}({\bf q}^2)$ have the r.m.s. radii $\langle r^2 \rangle^{1/2}$, with \begin{equation} \langle r^2 \rangle = - 6 \, \frac{d \ln u(q^2)}{dq^2}\Bigg|_{{\bf q}^2=0}, \end{equation} one obtains \begin{equation} \Lambda^2 = 32(\beta^2_D +2 \beta^2_\psi) - 4m^2_D. \label{Lambda} \end{equation} Using $m_D = 1867.2$~MeV and for the $\beta$'s the values used in Ref.~\cite{Ko:2000jx}, $\beta_D = 310$~MeV and $\beta_\psi = 520$~MeV, one obtains $\Lambda_D = 2537$~MeV. Admittedly this is a somewhat rough estimate and it is made solely to obtain an order of magnitude estimate, since we do not expect that Gaussian form factors should be very accurate at high ${\bf q}^2$. In view of this and to gauge uncertainties of our results, we allow the value of $\Lambda_D$ vary in the range $1000~{\rm MeV} \leq \Lambda_D \leq 3000~{\rm MeV}$. Moreover, for simplicity we use $\Lambda_D = \Lambda_{D^*}$. Using $m_{D^*} = 2008.6$ MeV for the average of the vacuum masses of the $D^*$'s, there remain to be fixed the bare $J/\Psi$ mass $m^0_\psi$ and the coupling constants. The bare mass is fixed by fitting the physical mass $m_{J/\Psi} = 3096.9$~MeV using Eq.~(\ref{m-psi}). For the coupling constants we use $g_{\psi DD} = g_{\psi DD^*} = g_{\psi D^*D^*} = 7.64$, which are obtained by invoking vector-meson-dominance and use of isospin symmetry~\cite{Lin:1999ad}. We are now in a position to present the results for the in-medium mass shift $\Delta m$ of $J/\Psi$, defined in Eq.~(\ref{Delta-m}). We calculate the in-medium self-energy using the in-medium $D$ meson mass as given by the QMC model presented in Section~\ref{sec:qmc}. We present results for $\xi = 0$ (no gauge coupling) and for $\xi = 1$ (with gauge coupling). \begin{table}[htb] \begin{tabular}{cc|ccc|c} \hline $\;\Lambda_D\;$ & $\;m^*_{J/\Psi}\;$ & $\;DD\;$ & $\;DD^*\;$ & $\;D^*D^*\;$ & $\;\Delta m\;$ \\ \hline $1000$ & $3081$ & $-3$ & $-2$ & $-11$ & $-16$ \\ $1500$ & $3079$ & $-3.5$ & $-2.5$ & $-12$ & $-18$ \\ $2000$ & $3077$ & $-4$ & $-3$ & $-13$ & $-20$ \\ $3000$ & $3072$ & $-6.5$ & $-5$ & $-12.5$ & $-24$ \\ \hline \end{tabular} \caption{In-medium $J/\Psi$ mass $m^*_{J/\Psi}$ and the individual loop contributions to the mass difference $\Delta m$ at nuclear matter density, for different values of the cutoff $\Lambda_D$, and using the non-gauged Lagrangian -- $\xi = 0$ in Eq.~(\ref{KDD}). All quantities are in MeV. } \label{tab:xi0} \end{table} Initially we present results for $\xi = 0$. In Table~\ref{tab:xi0} we present the in-medium $J/\Psi$ mass $m^*_{J/\Psi}$ and the individual loop contributions to the mass difference $\Delta m$ at nuclear matter density $\rho_0$, for different values of the cutoff mass $\Lambda_D$. First of all, one sees that the net effect of the in-medium mass change of the $D$ mesons gives a negative shift for the $J/\Psi$ mass. The total shift ranges $16$ to $24$~MeV at normal nuclear matter density. The results show in addition that the $D^*D^*$ loop gives the largest contribution of the three. Also, this contribution is rather insensitive to the cutoff mass values used in the form factors. A negative self-energy means that the nuclear mean field provides attraction to $J/\Psi$. The important question is of course whether such an attraction is enough to bind $J/\Psi$ to a large nucleus. A partial answer can be obtained as follows. One knows~\cite{Schiff} that for an attractive spherical well of radius $R$ and depth $V_0$, the condition for the existence of a nonrelativistic $s$-wave bound state of a particle of mass $m$ is \begin{equation} V_0 > \frac{\pi^2 \hbar^2}{8 m R^2}. \label{bound} \end{equation} Using for $m = m^*_{J/\Psi}$ and $R = 5$~fm (radius of a medium-size nucleus), one obtains $V_0 > 1$~MeV. Therefore, the prospects of capturing a $J/\Psi$ if produced almost at rest in a nucleus are quite favorable. Next, we assess the impact of using a gauged Lagrangian for the $DD$ loop on $m^*_{J/\Psi}$ and $\Delta m$. The results are shown in Table~\ref{tab:xi1}. The contribution of the $DD$ loop to $\Delta m$ is still much smaller than the $DD^*$ and $D^*D^*$ contributions, but of opposite sign. The net $J/\Psi$ mass shift is still sizable, varying from $13$~MeV to $18.5$~MeV as the cutoff is varied from $1000$~MeV to $3000$~MeV. The small, positive value of the $DD$ loop contribution is in agreement with the result of Ref.~\cite{Ko:2000jx}. \begin{table}[htb] \begin{tabular}{cc|ccc|c} \hline $\;\Lambda_D\;$ & $\;m^*_{J/\Psi}\;$ & $\;DD\;$ & $\;DD^*\;$ & $\;D^*D^*\;$ & $\;\Delta m\;$ \\ \hline $1000$ & $3084$ & $+1$ & $-2$ & $-12$ & $-13$ \\ $1500$ & $3082$ & $+1$ & $-2.5$ & $-12.5$ & $-14$ \\ $2000$ & $3080$ & $+1$ & $-3$ & $-14$ & $-16$ \\ $3000$ & $3078$ & $+0.5$ & $-5.5$ & $-13.5$ & $-18.5$ \\ \hline \end{tabular} \caption{Same quantities as in Table~\ref{tab:xi0}, but using the gauged Lagrangian -- $\xi = 1$ in Eq.~(\ref{KDD}). } \label{tab:xi1} \end{table} In Figs.~\ref{fig:DD} - \ref{fig:total} we show the separate contributions of the $DD$, $DD^*$ and $D^*D^*$ loops and their sum to the $J/\Psi$ mass shift. As the cutoff mass values increase in the form factors, obviously each loop contribution becomes larger since the integral is divergent, but the increase is less pronounced for the $D^*D^*$ loop. Since the $D^*D^*$ loop gives the largest contribution, it is encouraging that this loop contribution is rather insensitive to the cutoff mass values used. \begin{figure}[htb] \includegraphics[height=95mm,angle=-90]{jpsipot_DD.ps} \caption{Contribution from the $DD$ loop to the difference of the in-medium and vacuum $J/\Psi$ masses $\Delta m$ as a function of nuclear matter density for different values of the cutoff mass $\Lambda_D$.} \label{fig:DD} \end{figure} \begin{figure}[htb] \includegraphics[height=95mm,angle=-90]{jpsipot_DDs.ps} \caption{Contribution from the $DD^*$ loop. See also caption of Fig.~\ref{fig:DD}. } \label{fig:DDs} \end{figure} \begin{figure}[htb] \includegraphics[height=95mm,angle=-90]{jpsipot_DsDs.ps} \caption{Contribution from the $D^*D^*$ loop. See also caption of Fig.~\ref{fig:DD}. } \label{fig:DsDs} \end{figure} \begin{figure}[htb] \includegraphics[height=95mm,angle=-90]{jpsipot_total.ps} \caption{The total contributions of the $DD$, $DD^*$ and $D^*D^*$ loops to the difference of in-medium and vacuum $J/\Psi$ masses $\Delta m$ as a function of nuclear matter density for different values of the cutoff mass $\Lambda$.} \label{fig:total} \end{figure} \section{Conclusions and perspectives} \label{sec:concl} We have used an effective Lagrangian approach to evaluate the $D$ and $D^*$ loop contributions to the mass shift of $J/\Psi$ in cold nuclear matter. Effects of the medium on the $D$ and $D^*$ are calculated using the QMC model, in which effective scalar and vector meson mean fields are coupled to the light $u$ and $d$ quarks in the charmed mesons. There are no free parameters in this QMC calculation since all quark-meson coupling constants and bag parameters are fixed by fitting saturation properties of nuclear matter. The $J/\Psi-D$ coupling constants are taken as determined from vector meson dominance and the cutoff masses are varied over a large range of values. The QMC predicts a $62$~MeV mass drop for the $D$ and $D^*$ mesons at nuclear matter density. This mass drop leads to a corresponding in-medium $J/\Psi$ mass shift varying between $-16$~MeV and $-24$~MeV for cutoff masses within the range of $1000$~MeV and $3000$~MeV. Such a mass shift is large enough to bind a $J/\Psi$ to a nucleus for a $J/\Psi$ produced at low momentum in the rest frame of the nucleus. Although the conclusions of the present calculation are very promising towards the possibility of binding $J/\Psi$ in a nucleus, some issues clearly require further investigation. Amongst the most important ones are the calculation of effective $J/\Psi$ potentials for finite nuclei~\cite{plan} and their momentum dependence, and the inclusion of $D$ and $D^*$ widths. Recent calculations~\cite{tolos} of in-medium $D$ and $D^*$ widths based on meson-exchange models have obtained somewhat contradictory results and further study is required. As emphasized in Refs.~\cite{DN} the lack of experimental information on the free-space interaction of $D$ mesons with nucleons is a major impediment for constraining models and the use of symmetry principles and exploration of the interplay between quark-gluon and baryon-meson degrees of freedom is essential in this respect. Still another issue is the dissociation of $J/\Psi$ in matter by collisions with nucleons and light mesons. This subject has been studied vigorously in the last years using different approaches, like meson exchange~\cite{diss-mex} and quark models~\cite{diss-qm}, QCD sum rules~\cite{Navarra:2001jy}, and the NJL model~\cite{Bourque:2008es}. Finally, we stress the need for a deeper understanding of the role played by color van der Waals forces in the $J/\Psi$ mass shift, particularly in respect with nucleons interacting in a nucleus. \acknowledgments GK thanks the Theory Center of the Jefferson Lab for hospitality and support during a visit when part of this work was done. The work of GK was partially financed by CNPq and FAPESP (Brazilian agencies). Notice: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes. GK and KT would like to acknowledge the hospitality of the CSSM, University of Adelaide, where the final part of the calculation was carried out. This work was also supported by the Australian Research Council through the award of an Australian Laureate Fellowship to AWT.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{ Introduction } {\emph{Introduction}. The ability to manipulate the waveform of optical pulses is essential for numerous applications \cite{keller2003recent} such as spectroscopy and coherent control \cite{goswami2003optical, silberberg2009quantum}, metrology \cite{tanabe2002spatiotemporal} , microscopy \cite{bardeen1999effect} and optical communications \cite{sardesai1998femtosecond}. Any pulse shaping system is constrained by the available bandwidth which sets a limit on the minimal possible pulse duration. This limit, described using a duration-bandwidth product, states that for a given bandwidth the minimal pulse duration is achieved when the spectral phase of the pulse is at most linear \cite{milonni2010laser}. Such a pulse is termed Fourier-transform-limited. However, band-limited signals can actually oscillate locally at an arbitrarily fast rate (concomitant with a decreased amplitude), thus breaking the Fourier-transform limit resolution-wise. This is achieved through an interference phenomenon now known as superoscillation \cite{MVBerry1994}. Superoscillatory functions are known since the early 1920 when attempts were made to produce antennas with extremely narrow radiation patterns \cite{zheludev2008}. Such functions were revived with the introduction of quantum weak measurements \cite{aharonov1988result} which can yield values much larger than the largest eigenvalue of an observable. Berry and Popescu have introduced superoscillations into optics, predicting their use for spatial super-resolution \cite{BerryPopescu2006} which indeed was demonstrated experimentally in several works \cite{FMHuang2009, Zheludev2012, wong2013optical, tang2015ultrabroadband}. Superoscillatory diffraction-free beams \cite{Makris2011, Segev2013} as well as a superoscillating pattern made of accelerating Airy beams were demonstrated \cite{eliezer2016super}. Superoscillations were also used to realize super-narrow optical frequency conversion \cite{Remez15}. In the temporal regime, relatively little has been done with attempts to break the temporal resolution limit. Two experiments done in 2006 and in 2005 used a quadratic spectral phase to break the Fourier resolution limit by $20\%$ \cite{Binhammer2006} and $30\%$ \cite{boyko2005temporal} respectively. A theoretical work discussed breaking the Fourier limit by super-resolution pulse compression techniques \cite{Liu06}. In the radio-frequency regime temporal superoscillations were successfully demonstrated in 2011 \cite{wong2011temporal} and in 2012 \cite{wong2012superoscillatory}. In addition it was suggested that optical temporal superoscillations can be used to overcome absorption in dielectric materials\cite{Eliezer14}. Here we experimentally break the temporal Fourier-transform limit of an ultra-short optical pulse by synthesizing a superoscillating pulse-envelope. In particular we achieve a temporal feature which is approximately four times narrower than a Fourier limited Gaussian pulse having the same bandwidth, while maintaining a visibility (ratio between the amplitudes of the narrow feature and its adjacent fringes) of $29.5\%$. Numerical simulations further demonstrate the ability of such signals to achieve super-resolution in the time domain. \emph{Theory}. We start with a known complex superoscillating function \cite{BerryPopescu2006}: \begin{equation} f_{SO}\left( t \right) = {\left[ {\cos \left(\Omega_0 t \right) + {i}a\sin \left(\Omega_0 t \right)} \right]^N}, \ a>1, \ N \in {\mathbb{N}}^{+} \label{eq:fsox} \end{equation} whose highest Fourier component is $N\Omega_0$, while around $t \approx 0$ it superoscillates $a$ times faster, at the rate of $aN\Omega_0$. We expand the real part of Eq. \ref{eq:fsox} into a cosine series: \begin{eqnarray} &&\mathrm{Re} \{ f_{SO(a,N)}\left( t \right) \} = \frac{1}{2^{N-1}} \sum\limits_{k \in Even}^N {{a^k} {N\choose k}} \times \nonumber \\ \label{eq:RealSO} &&\sum\limits_{l=0}^{N-k} { \sum\limits_{m=0}^k { \left(-1\right)^m {N-k\choose l} {k\choose m} e^{ i \left[ 2\left(l+m\right)-N \right] \Omega_0 t }} } \nonumber \\ &&= \sum_{n=0}^{M=\lfloor \frac{N}{2} \rfloor} { A_{q_n} \cos { \left( q_n \Omega_0 t \right) } } \\ &&q_{n} \equiv 2n + \mu_{N} \;\;\;;\;\;\; \mu_{N} \equiv mod(N,2) \end{eqnarray} to derive an exact set of modal real-valued amplitudes $A_{q_n}$ and frequencies creating a real-valued superoscillatory signal. Here $mod$ is the modulo operation. A set of optical carrier modes with such amplitudes and frequencies would produce a temporal superoscillatory signal, which is quite a challenge as all the modes need to be phase locked as well as harmonics of a given fundamental frequency. It is much easier to create a superoscillation in the envelope of a given pulse which is naturally made of phase-locked modes. An interference between two modes of slightly different frequencies produces a modulated envelope beating at the difference frequency of the two modes. Interference of several beat frequencies suited with the right amplitude ratios and phases would manifest a Super-Oscillating-Beat (SOB) - an envelope with a temporal feature which breaks the temporal Fourier focusing limit. For the following, we construct a SOB signal by first setting in Eq. \ref{eq:RealSO} the parameters $N=3$ ($M=1$) and $a=2$ which gives: \begin{equation} \mathrm{Re} \{ f_{SO(2,3)}(t)\} = -\frac{9}{8}\cos\left(\Omega_0 t\right) + \frac{13}{8}\cos\left(3\Omega_0 t\right) \label{eq:fso23} \end{equation} This signal superoscillates at a rate of $6\Omega_0$. Next, the two cosine modes of Eq. \ref{eq:fso23} are mounted on a signal's envelope according to a procedure outlined in the Supplemental material (section I) which maps each Fourier component to a beat constructed by two close-by frequency components resulting in the following modes' amplitudes and phases: $|A_{q_k}|=\{ 13/8, 9/8, 9/8, 13/8 \}$, $\phi_{q_k}=\{ 0, \pi, \pi, 0\}$, $q_k=\{-3,-1,+1,+3\}$. The beats spectral spacing $\Delta\omega $ is chosen such that $(2M+\mu_N)\Delta\omega$ fits within the available bandwidth. The frequency and time domain theoretical representations of this SOB signal are shown in the supplemental material (section I) and in Fig.\ref{fig:figure3meas}(a) below (in dashed lines). \emph{Results}. Our experimental setup consists of a Ti:Sapphire femtosecond laser oscillator together with a home built 4f pulse shaper and a home-built Frequency-Resolved-Optical Gating (FROG) apparatus \cite{trebino1997measuring} used for pulse characterization (see methods section in the Supplemental material). Generally the SOB signal is prone to dispersion destructing the superoscillation after propagating a dispersion length of $\left({4\pi^2}\right)/\left({GVD \times (N\Delta \omega/2)^2}\right)$ where $GVD={\partial^2 k}/{\partial \omega^2} |_{\omega=\omega_0}$ and $N\Delta \omega/2$ is the bandwidth of the pulse. For the signals that we used, with bandwidth around $8$ nm, the dispersion length in air (in BBO crystal) is in the order of one kilometer (centimeters) which is much longer than the optical path length we used (thickness of the crystal used in the FROG apparatus). Thus the distortions caused by dispersion could be ignored. The experiment went as follows: first, The pulse shaper was used to shape the original spectrum into a Gaussian and a rectangular shape, both with a flat spectral phase. The rectangle full width and the Gaussian full-width at half maximum (FWHM) are $15$ nm ($7$ THz). In the time domain, the transform limited Gaussian pulse feature has a FWHM of $140 \pm 5$ fs while the transform limited Sinc pulse has the primary and secondary lobes FWHM of $169 \pm 5$ fs and $93 \pm 5$ fs correspondingly. These numbers are within $97\%$ ($90\%$) of FWHM of ideal theoretical waveforms for the rectangular (Gaussian) due to experimental imperfections in the waveform synthesis. The spectrogram (time-frequency FROG traces), retrieved spectrum and the temporal waveform of the Gaussian and Sinc pulses are shown in Fig. \ref{fig:figure2meas}.(a) and Fig. \ref{fig:figure2meas}. (b) respectively. \begin{figure*}[htbp] \centerline{\includegraphics[width=0.9\linewidth]{figure2}} \caption{ \textbf{Transform-limited signals and a super-oscillating beat.} Experimental measurements of SHG FROG time-frequency spectrograms (left column) and the retrieved signals in the frequency (middle column) and time (right column) domains for four different signals sharing the same bandwidth (which is delimited by purple dot-dashed lines): (a) transform-limited Gaussian pulse (b) transform limited Sinc pulse (c) highest frequency beat signal (d) super-oscillating optical beat (SOB). The superoscillation is circled with a dot-dashed line. } \label{fig:figure2meas} \end{figure*} Next, the pulse shaper is set to generate a single beat comprised of two modes set $15$ nm apart (Fig. \ref{fig:figure2meas}.(c)). This is a manifestation of the fastest possible single Fourier component within the given bandwidth and it is generally assumed that it gives the narrowest possible temporal features in the form of interference fringes. The fringes FWHM is $112 \pm 5$ fs (which is off by $12\%$ of the corresponding perfect waveform). This FWHM value is $80\%$ of the Gaussian pulse's FWHM and $66\%$ of the Sinc pulse's central lobe FWHM. This is a very general and known result: the resolution available by a (spatial or temporal) double-slit interference is better than possible with a single slit diffraction whose width is equivalent to the double-slit separation. This fact also came to fame with the introduction of Ramsey-fringes in atom interferometry \cite{ramsey1950molecular}. Finally we synthesize the SOB signal given above for which we set the following: modes amplitudes are $|A_{\{-3,-1,+1,+3\}}| = \{ {13}/{8}, {9}/{8}, {9}/{8}, {13}/{8} \} \times A_0 \;$ ($A_0$ is a common amplitude factor,); center frequencies of the modes are: $ \nu_{\{-3,-1,+1,+3\}} = {\omega_{\{-3,-1,+1,+3\}}}/({2\pi}) = \{370.89 ,\; 373.22,\; 375.56,\; 377.89 \} \ THz$; The frequency difference between adjacent modes is $ \Delta\nu = 2.334 \; THz \;\; (\Delta \lambda \approx $5 nm$)$ and the modes phases are $\phi_{\{-3,-1,+1,+3\}} = \{ 0, \pi, \pi, 0 \} $. The modes possess an approximate Gaussian form whose width $\Delta\nu$ is inversely proportional to an overall $0.8 ps$ pulse duration. The SOB spectrogram, frequency and temporal waveforms are shown in (Fig. \ref{fig:figure2meas}.(d)). It is evident that around time zero a superoscillating feature emerges, with a FWHM of $48 \pm 4$ fs (the half-maximum value was taken between the peak maximum and its adjacent minima). The SOB FWHM is approximately twice as narrow as the fringes of the corresponding single beat (double-slit) pattern, three times narrower compared with the transform limited Gaussian pulse and $3.5$ times narrower than the central lobe of the transform-limited Sinc pulse. Note that although both the SOB and single beat signals have spectral content extending beyond the designated spectral width of $15$ nm (due to the finite bandwidth of each mode), it does not make the oscillating features \emph{within} the envelope narrower, and so it is irrelevant to our result. This excess spectral content only limits the overall duration of the entire signal. A comparison between the theoretical ideal SOB signal and the one that was synthesized can be seen in Fig. \ref{fig:figure3meas}(a) where the agreement is quite good, especially for the superoscillating feature. The $87 \pm 5$ fs temporal full-width delimiting the synthesized superoscillation corresponds to a $5.75$ THz local frequency. This local frequency differs by $18\%$ from the theoretical value of $a \times N \times \Delta\nu/2 = 7$ THz, which is twice the corresponding single beat frequency (which by itself corresponds to the fastest Fourier component in the SOB spectrum). The visibility of this SOB is $29.5\%$. Because superoscillations are an interference phenomena they rely on keeping the correct amplitude and phases of their constituting modes. Still, there is some resilience to changes. For example, we have decreased the phase difference between the beam modes by $0.2\pi$ and measured the resulting temporal shape (see Fig. \ref{fig:figure3meas}(b)). In this case the measured FWHM is $72 \pm 5$ fs which is wider by approximately $50\%$ than the full width of the unmodified SOB signal. We note that a theoretical treatment for the sensitivity of superoscillating signals to amplitude and phase changes was made in Ref.\cite{Eliezer14} . An illuminating case is equalizing the modes' phases which completely ruins the superoscillation (Fig. \ref{fig:figure3meas}(c)). Here the spectral phase is linear, resulting in a transform limited pulse for which the overall root-mean-square width is minimized \cite{weiner2000femtosecond}. Thus a linear spectral phase minimizes a global feature of the pulse - its overall width (which is also the case for the examples shown in \ref{fig:figure2meas}(a)-(c)). In contrast, when a super-oscillating function is constructed - the spectral phase is no longer linear - thus the overall width of the pulse is not minimized. What we gain, however, is a fringe (or several fringes) which is (are) narrower than the fringes of a transform limited pulse. In view of this, for super-oscillatory functions, the optimization, instead of being global is a local optimization: narrowing a given fringe while keeping the magnitude of its surrounding side-lobes as low as possible. \begin{figure*}[htbp] \centerline{\includegraphics[width=1.0\linewidth]{figure3}} \caption{ \textbf{Phase modifications of a SOB signal.} Experimentally retrieved waveforms (solid lines) vs. optimal theoretical waveforms (dashed lines) in the frequency domain (Left) and the time domain (right) for three different instances of a SOB signal: (a) original SOB signal (b) modified by lowering the phase difference between beat modes by $0.2\pi$ (c) modified to have a flattened spectral phase. } \label{fig:figure3meas} \end{figure*} For SOB signals, the increase in local frequency comes at the expense of increased side lobes resulting in decreased visibility. For spatial super-resolution the existence of large side lobes sets fundamental limits on the resolving power of the optical system \cite{sales1997fundamental}. However, this limitation becomes irrelevant when the narrow feature interacts with an isolated small enough object, or when the side lobes can be cut by the application of a nonlinear filter. In microscopy this means the use of a pupil close to the scanned object. The pinhole projects the superoscillation into a real Fourier component. Thus the super resolving spot becomes evanescent but without the side-lobes. We would gain resolution (compared to illuminating the pinhole with a plane wave) when the superoscillating spot is smaller than the diameter of the pinhole, while the pinhole still cuts the side lobes. Similarly for the time domain - the effects of the side-lobes can be mitigated when interacting with isolated short events, or when an additional temporal gating is used. Another interesting possibility would be the use of pre-processing for repeated applications of the SOB waveform, while changing one of its parameters, for isolating the effect of the superoscillating feature (see e.g. Ref.\cite{eramo2011method}). Regarding the above mentioned tradeoff it is theoretically possible to continuously tune the SOB waveform between better temporal focusing to better visibility of the superoscillating feature. Most simply this is done by changing $a$ in Eq. \ref{eq:fso} which sets the ratio between the superoscillating frequency to the highest frequency in the spectrum of the signal. This is shown in Fig. \ref{fig:figure4meas}(a) where keeping the same number of modes (with N=3) and their spectral width while continuously changing $a$ between 1 to 6 results in gradually increased temporal focusing and decreased visibility. We realized experimentally three instances of the SOB with different values of $a=\{1.63, 2, 2.5\}$. The SOB with $a=2$ was shown already in Fig. \ref{fig:figure2meas} and Fig. \ref{fig:figure3meas} where the FWHM of the superoscillation was $48 \pm 5$ fs and the visibility was $29.5\%$. Fig. \ref{fig:figure4meas}(b) (Fig. \ref{fig:figure4meas}(c)) shows a SOB signal with $a=1.63$ ($a=2.5$) where the superoscillation FWHM and visibility are $78 \pm 5$ fs ($45 \pm 4$ fs) and $41\%$ ($16.8\%$) respectively. \begin{figure*}[htbp] \centerline{\includegraphics[width=0.95\linewidth]{figure4}} \caption{ \textbf{Tuning the SOB signal between better resolution to better visibility.} (a) Numerical modification of the $a$ parameter in the Frequency domain (left column) and in the Temporal domain (right column). The increase in $a$ results in better temporal resolution of the SOB signal at the expense of visibility. Notice that the super-oscillating portion of the waveform is around time zero. The dashed white lines indicates the $a$ values for which waveforms where experimentally retrieved (solid lines) and compared with optimal theoretical waveforms (dashed lines) as shown in the frequency domain (Left) and the time domain (right) for: (b) $a=1.63$ (c) $a=2.5$. } \label{fig:figure4meas} \end{figure*} Practically, when considering pulse shaping, to get a narrower SOB feature, better resolution and control is needed in the spectral domain to allow for synthesizing the required waveform. In addition, in this work, we have chosen to work with a specific family of superoscillatory functions described with Eq. \ref{eq:fso}. Alternatively it is possible to work with other superoscillatory functions which optimize the duration and amplitude of the superoscillating features \cite{katzav2013yield}. Despite the obvious limitations mentioned above of superoscillatory wave-functions, several recent works already proved experimentally that in microscopy such waveforms can outperform transform limited beams, achieving super-resolution \cite{wong2013optical, Zheludev2012}. Due to the analogy that exists between optical phenomena in the time domain to that in the spatial domain it is expected that having a superoscillatory temporal signal would enable to achieve temporal super-resolution. Such an analogy is used in simulations presented in the Supplemental material (section II) which demonstrate numerically temporal super resolution. To conclude we have applied the concept of superoscillations to the temporal domain of ultra short optical pulses. We experimentally demonstrated a superoscillating optical beat having a temporal fringe which is three times narrower than a Gaussian pulse whose FWHM equals its full bandwidth, breaking the temporal Fourier-transform limit given with transform limited Gaussian pulses by $67\%$ while maintaining visibility of $29.5\%$. Such sub-Fourier focusing could be used for temporal super-resolution and so have important consequences in applications relying on ultra-short pulses such as spectroscopy, nonlinear optics and metrology. \newpage \section{Supplemental material - Constructing a super-oscillating beat (SOB) signal from a superoscillatory signal} Considering the following family of complex superoscillating functions: \begin{equation} f_{SO}\left( t \right) = {\left[ {\cos \left(\Omega_0 t \right) + {i}a\sin \left(\Omega_0 t \right)} \right]^N}, \quad \ a>1, \quad N \in {\mathbb{N}}^{+} \label{eq:fso} \end{equation} It is possible to expand the real part of Eq. \ref{eq:fso} into the following binomial expansion and Fourier cosine series: \begin{eqnarray} &&\mathrm{Re} \{ f_{SO}\left( t \right) \} = \frac{1}{2^{N-1}} \sum\limits_{k \in Even}^N {{a^k} {N\choose k}} \times \nonumber \\ &&\sum\limits_{l=0}^{N-k} { \sum\limits_{m=0}^k { \left(-1\right)^m {N-k\choose l} {k\choose m} e^{ i \left[ 2\left(l+m\right)-N \right] \Omega_0 t }} } \nonumber \\ &&= \sum_{n=0}^{M=\lfloor \frac{N}{2} \rfloor} { A_{q_n} \cos { \left( q_n \Omega_0 t \right) } } \label{eq:ftransform} \\ &&q_{n} \equiv 2n + \mu_{N} \;\;\;;\;\;\; \mu_{N} \equiv mod(N,2) \end{eqnarray} The SOB signal is the sum of $M+1$ beats, where those who have a beat frequency different than zero are composed of two modes having the same amplitude and phase: \begin{eqnarray} &&f_{SOB}(t) = (1-\mu_N) B_0 \cos(\omega_0 t + \phi_0) + \\ &&\sum_{m=-M-\mu_N}^{M+\mu_N} (1-\delta_{m,0}\mu_{N}) B_m \cos \left( \omega_m t + \phi_m\right) \nonumber \\ &\;& B_m = B_{-m}, \;;\; \phi_m = \phi_{-m}, \;;\; \omega_0 \equiv \omega_c \\ &=& 2(1-\mu_N) B_0 \cos(\omega_c t + \phi_0) + \nonumber \\ && 2\sum_{m=1}^{M+\mu_N} B_{m} \cos \left(\left(\frac{\omega_{-m}+\omega_{m}}{2}\right)t + \frac{\phi_{-m}+\phi_{m}}{2} \right) \times \nonumber \\ &&\cos \left(\left(\frac{\omega_{-m}-\omega_{m}}{2}\right)t \right) \label{eq:fsob1} \end{eqnarray} Here $\delta_{i,j}$ is the Kronecker delta function and $\omega_c$ is a carrier frequency within the bandwidth of the pulse. In addition the $B_m$ amplitudes are positive valued. The beats are chosen to have the same mean frequency, while their beat frequencies are integer multiplication of an arbitrary fundamental beat frequency: \begin{eqnarray} \forall m: \; \frac{\omega_m+\omega_{-m}}{2} &=& {\omega_c} \nonumber \\ \frac{\omega_m-\omega_{-m}}{2} &=& \left(\frac{2m-\mu_N}{2}\right)\Delta\omega \label{eq:fsobcond} \end{eqnarray} These reduce Eq. \ref{eq:fsob1} to: \begin{eqnarray} &&f_{SOB}(t) = 2\cos(\omega_c t + \phi_0) \times \\ &&\left( (1-\mu_N) B_0 + \sum_{m=1}^{M+\mu_N} B_{m} \cos \left(\left(\frac{2m-\mu_N}{2}\right) \Delta\omega t + {\phi_{m}} \right) \right) \nonumber \label{eq:fsob11} \end{eqnarray} Here $\cos \left({\omega_c} t +\phi_0\right)$ is the common carrier signal of the beats that oscillates at the frequencies $\left(({2m-\mu_N})/{2}\right) \Delta\omega$. Provided that the beats' amplitudes and phases are determined by Eq. \ref{eq:ftransform} i.e. $B_m \equiv |A_{q_{m-\mu_N}}| \;\;;\;\; \phi_m = ({\pi}/{2}) (1-sgn(A_{q_{m-\mu_N}}))$ (where $sgn$ indicates the sign function), the envelope of $f_{SOB}(t)$ would be superoscillating. While the highest beat frequency of the envelope is bounded by $\left({2M+\mu_N}/{2}\right)\Delta\omega$, still the envelope would locally superoscillate at a higher beat frequency of $a\left({2M+\mu_N}/{2}\right)\Delta\omega$. In practice, the modes constituting the SOB would have some spectral width, inducing a finite envelope width $\sigma_t$ for the temporal SOB signal while essentially not modifying the superoscillating frequency. In this case the SOB signal and its spectrum are given by: \small \begin{eqnarray} &&f_{SOB}(t) = 2 \exp{\left( -\frac{t^2}{2\sigma_t^2} \right)} \cos(\omega_c t + \phi_0) \times \label{eq:fsobFourier} \\ &&\left[ (1-\mu_N) |A_{q_0}| + \sum_{1-\mu_N}^{M} |A_{q_m}| \cos \left(\left(\frac{2m+\mu_N}{2}\right) \Delta\omega t + {\phi_{q_m}} \right) \right] \nonumber \\ \label{eq:fsobFreqDomain} &&F_{SOB}(\omega>0) = \frac{\sqrt{2\pi\sigma_t^2}}{2} \times \\ && \sum_{-M-\mu_N}^{M} |A_{q_k}| \exp{\left( -\frac{1}{2} \sigma_t^2\left(\omega - \left[\omega_c + \left(\frac{2k+\mu_N}{2}\right)\Delta\omega\right]\right)^2 +i{\phi_{q_k}} \right)} \nonumber \\ && A_{q_k}=A_{-q_k} \;\;,\;\; \phi_{q_k}=\phi_{-q_k} \nonumber \end{eqnarray} \normalsize For the main text we have constructed a SOB signal by first setting in Eq. 2 the parameters $N=3$ and $a=2$ which gives: \begin{equation} \mathrm{Re} \{ f_{SO(2,3)}(t)\} = -\frac{9}{8}\cos\left(\Omega_0 t\right) + \frac{13}{8}\cos\left(3\Omega_0 t\right) \label{eq:fso23} \end{equation} The Fourier representation for positive frequencies of this signal is depicted in Fig\ref{fig:figure0theory}.(a) while its time domain form is given in Fig \ref{fig:figure0theory}.(b). Together with this function we also depict a cosine oscillating at the highest Fourier component of the signal $3\Omega_0$, and a cosine oscillating at the superoscillation frequency $6\Omega_0$. It is apparent that the signal superoscillates around time zero. Then, the two cosine modes of Eq. \ref{eq:fso23} are mounted on a signal's envelope according to the procedure outlined above which results in the following modes' amplitudes and phases: $|A_{q_k}|=\{ 13/8, 9/8, 9/8, 13/8 \}$, $\phi_{q_k}=\{ 0, \pi, \pi, 0\}$, $q_k=\{-3,-1,+1,+3\}$. $\Delta\omega $ and $\omega_c$ are chosen such that $(2M+\mu_N)\Delta\omega$ fits within the available bandwidth and $\omega_c$ is the designated carrier frequency. Thus the SOB has been generated. The frequency and time domain representations of this SOB signal are shown in Fig \ref{fig:figure0theory}.(c) and Fig \ref{fig:figure0theory}.(d) respectively. \begin{figure*}[htbp] \centerline{\includegraphics[width=0.9\linewidth]{figure0}} \caption{ \textbf{Construction of a superoscillatory optical beat.} A real valued superoscillatory (SO) function is first defined through its Fourier modes which are harmonic multiples of a fundamental frequency. These modes are then reflected around a central carrier frequency to generate the superoscillating-optical-beat (SOB): a superposition of beat frequencies with a superoscillatory envelope function. (a) The positive frequency components of the SO signal (b) Temporal waveform of the SO signal (thick continuous purple line) together with its fastest Fourier component (dashed red line) and the Fourier component corresponding to the superoscillation (dot-dashed blue line). (c) The positive frequency components of the SOB signal. (d) The temporal waveform of the SOB (continuous blue line) together with a trace of the superoscillating envelope (thick continuous purple line) and the pulse finite envelope (dashed red line) which is due to the finite width of the constituting Fourier modes. } \label{fig:figure0theory} \end{figure*} \section{ Temporal super-resolution with SOB signals } Here we bring the results of numerical simulations applying an analogy with microscopy to demonstrate temporal super-resolution using a SOB signal. The analogy with the spatial case is quite straightforward. A spatial imaging system is described through the convolution of a point-spread-function and the object to be imaged. With a superoscillating point-spread-function super-resolution is achieved \cite{wong2013optical}. In our case - the physical signal to be used in a generic measurement would be an optical polarization vector proportional to the mixing of the SOB signal with a temporal event signal $g(t)$: $P \propto f_{SOB}(t)g(t-\tau)$. Here $\tau$ is the relative delay between the two real-valued signals. If we further assume that the overall interaction length is short, then a slow intensity detector would measure the cross-correlation signal $S(\tau)=\int \left[f_{SOB}(t)g(t-\tau)\right]^2 dt$. We wish to analyze the detection of a temporal double-peak modeled as two separate Gaussian pulses: \begin{equation} g(t) = \left( e^{ -\frac{(t-\frac{1}{2}t_{sep})^2}{2\sigma^2} } + e^{ -\frac{(t+\frac{1}{2}t_{sep})^2}{2\sigma^2} } \right) \cos(\omega_gt) \end{equation} with a carrier frequency $\omega_g$. For the following we fix $\sigma=0.15 \times t_{sep}$. In the simulations we set the carrier frequencies of both the SOB signal and $g(t)$ to zero to factor out the fast oscillations associated with the carrier frequency of the polarization (formally this is equivalent to the application of a low-pass filter to the cross-correlation). We numerically calculated the cross-correlation for various values of $t_{sep}$ and for various SOB signals by modifying the $a$ parameter for two values of the $N$ parameter: $N=3,4$. The SOB signals are normalized by their energy. In Fig. \ref{fig:figure5theory}(a) we see two examples of SOB signals with $N=3,4$ and with $a=3.4, 3.25$ (correspondingly) superimposed with a temporal double peak signal $g(t)$ with some small separation $t_{sep}$. In Fig. \ref{fig:figure5theory}(b) the cross-correlations are given separately for $N=3,4$ for a specific value of $t_{sep}= 0.32 \times \left[{2\pi}/{N\Delta\omega}\right]$ while $a$ is modified. The cross-correlations are shown only around time-delay-zero where the superoscillating feature is interacting with the double-pulse. The curved white lines delimits the range $\tau \in [-T_{SOB}/4, T_{SOB}/4]$ (where $T_{SOB}={4\pi}/({aN\Delta\omega}$) which reflects the temporal delays in which the double pulse interactions with the lobes outside the superoscillatory feature is minimal. The delay between the two straight vertical white lines is equal to $t_{sep}$. The two-pulse structure is resolved when a minima occurs at time-zero of the cross-correlation (for a single pulse we would get a maxima at this location). However the resolving power is really a matter of visibility - how well is this feature observable. We calculate the visibility of the central feature of the cross-correlation (not to be confused with the visibility of the superoscillating feature) for different values of $a$, where the cross-correlation visibility is $|({max(S)-min(S)})/({max(S)+min(S)})|$. The maxima and minima are calculated for the range $\tau \in [-T_{SOB}/4, T_{SOB}/4]$ . The $a$ value where the visibility is maximal is denoted with a horizontal straight white line. The greatest visibility is achieved for $a$ for which the two-pulse separation matches the distance between the closest zeros of the superoscillation feature. This condition is approximately given by: $T_{SOB}/2=t_{sep}$. This happens for both $N=3,4$. Furthermore - when we repeat the calculation of the visibility for different values of $t_{sep}$ we still see this identical behavior. This is shown in Fig. \ref{fig:figure5theory}(c) depicting the visibility as a function of both $t_{sep}$ and $a$. The maximal visibility approximately matches the line $t_{sep} = T_{SOB}/2$ (shown with black dots). The explanation for the fact that there is an optimal value of $a$ for resolving the double-pulse is simple: it is the result of trade-off between higher local frequency associated with higher values of $a$ and lower visibility due to lower ratio of the amplitude of the superoscillations compared to its adjacent side-lobes. In any case, the conclusion is obvious: if the double-pulse separation is shorter than half the period associated with diffraction limited signals, than a superoscillating signal would be better for detecting or resolving it (compared with a transform-limited pulse for which $a=1$), achieving super-resolution in the time-domain. We would like to add that the temporal resolving power is a function of the signal to be resolved. In analogy with imaging - regular microscopes are defined usually by their Modulation Transfer Function (MTF) which shows the visibility when imaging a specific Fourier component. The MTF is irrelevant for microscopes based on super-oscillations, as the power of the later lies in their ability to resolve signals made of a limited number of oscillations (not a Fourier component). If the number of oscillations extends too much into the side-lobes - they would not be resolved. In our case this reflects the cases where $t_{sep}$ is fixed and $a$ is increased too much. As we have seen the performance of the SOB signals would outperform transform-limited signals for cases where the signal to be resolved does not extend into the side-lobes of the SOB signal. This is the temporal counterpart to super-resolution imaging demonstrated experimentally with super-oscillating microscopes \cite{wong2013optical, Zheludev2012}. \begin{figure*}[htbp] \centerline{\includegraphics[width=0.95\linewidth]{figure5}} \caption{ \textbf{Temporal super-resolution with SOB signals} (a) SOB signals (blue line) with $N=3, a=3.4$ (left) $N=4, a=3.25$ (right) superimposed with a temporal double peak signal $g(t)$ (red) with some small separation. (b) Cross-correlation function of the SOB signal together with a double-peak signal with a specific separation $t_{sep}=0.32 \times ({2\pi})/({N\Delta\omega})$, given separately for $N=3$ (left) $N=4$ (right) as a function of time delay and the $a$ parameter. The separation of the two vertical straight lines is $t_{sep}$. The horizontal white line marks the $a$ value for which the visibility of the cross-correlation is maximal in the range $\tau \in [-T_{SOB}/4, T_{SOB}/4]$. This range is delimited between the two curved white lines. (c) Visibility as a function of $t_{sep}$ and $T_{SOB}/2={1}/{a}$ (in the units used in the graph) over the range $\tau \in [-T_{SOB}/4, T_{SOB}/4]$. The maximal visibility approximately matches the line $t_{sep} = T_{SOB}/2$ (shown with black dots). } \label{fig:figure5theory} \end{figure*} \section{Methods} In our experiment, we use a home-made Frequency Resolved Optical Gating (FROG) apparatus and a home-made Pulse Shaper. The FROG was built using a $50:50$ beam splitter, a $50 \mu m$ BBO SHG crystal, a $0.1 \mu m$ step linear motor stage, and an off-axis parabolic mirror having a reflected focal length of $4''$ . The pulse shaper was built using a pair of $35cm$ focal length cylindrical mirrors, and a pair of 1200 ${lines}/{mm}$ holographic gratings. At the Fourier plane we used a 640 pixel, dual-mask Spatial Light Modulator (Jenoptik SLM-S640d). The laser source used in the experiments was Coherent Vitara-T. Fig. \ref{fig:figure1setup} depicts a detailed schematic of our experimental setup. \begin{figure}[ht] \centerline{\includegraphics[width=1.0\linewidth]{setupSchematics}} \caption{ Experimental setup. The pulses emitted by an ultra-fast laser oscillator are shaped in a 4f Fourier domain pulse shaper. The shaped pulses amplitude and phase are retrieved through a measurement in a FROG apparatus. M=Mirror, CM=Cylindrical Mirror, G=Grating, BS=beam Splitter, PM=Off-axis Parabolic Mirror, SHGC=Second-Harmonic-Generation Crystal, B=Beam Blocker. } \label{fig:figure1setup} \end{figure} All FWHM measurements were done using a 2nd order polynomial fit over the retrieved waveforms. Indicated uncertainties in experimentally retrieved values are based on the temporal and spectral resolution of our FROG apparatus.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The goal of this paper is to show the following result. \begin{theorem}\label{string-surj} The map $\pi_\ast \mathrm{MString} \to \pi_\ast \mathrm{tmf}$ induced by the Ando-Hopkins-Rezk orientation is surjective. \end{theorem} This integral result was originally stated as \cite[Theorem 6.25]{hopkins-icm}, but, to the best of our knowledge, no proof has appeared in the literature. In \cite{hopkins-mahowald-orientations}, Hopkins and Mahowald give a proof sketch of Theorem \ref{string-surj} for elements of $\pi_\ast \mathrm{tmf}$ of Adams-Novikov filtration $0$. The analogue of Theorem \ref{string-surj} for $\mathrm{bo}$ (namely, the statement that the map $\pi_\ast \mathrm{MSpin}\to \pi_\ast \mathrm{bo}$ induced by the Atiyah-Bott-Shapiro orientation is surjective) is classical \cite{milnor-spin}. In Section \ref{abs-surj}, we present (as a warmup) a proof of this surjectivity result for $\mathrm{bo}$ via a technique which generalizes to prove Theorem \ref{string-surj}. We construct an $\E{1}$-ring $A$ with an $\E{1}$-map $A\to \mathrm{MSpin}$. The $\E{1}$-ring $A$ is a particular $\E{1}$-Thom spectrum whose mod $2$ homology is given by the polynomial subalgebra $\mathbf{F}_2[\zeta_1^4]$ of the mod $2$ dual Steenrod algebra. The Atiyah-Bott-Shapiro orientation $\mathrm{MSpin}\to \mathrm{bo}$ is an ${E_\infty}$-map, and so the composite $A\to \mathrm{MSpin} \to \mathrm{bo}$ is an $\E{1}$-map. We then prove that the map $\pi_\ast A\to \pi_\ast \mathrm{bo}$ is surjective; this is stronger than the Atiyah-Bott-Shapiro orientation being surjective on homotopy. The argument to prove Theorem \ref{string-surj} follows the same outline: we construct (in Section \ref{B-def}) an $\E{1}$-ring $B$ with an $\E{1}$-map $B\to \mathrm{MString}$. This $\E{1}$-ring $B$ is the height $2$ analogue of the $\E{1}$-ring $A$; this motivated the naming of $B$. We define $B$ as a particular $\E{1}$-Thom spectrum whose mod $2$ homology is given by the polynomial subalgebra $\mathbf{F}_2[\zeta_1^8, \zeta_2^4]$ of the mod $2$ dual Steenrod algebra. The Ando-Hopkins-Rezk orientation \cite{koandtmf} $\mathrm{MString} \to \mathrm{tmf}$ is an ${E_\infty}$-map, and so the composite $B\to \mathrm{MString} \to \mathrm{tmf}$ is an $\E{1}$-map. We then prove the following stronger statement: \begin{theorem}\label{main-thm} The map $\pi_\ast B \to \pi_\ast \mathrm{tmf}$ is surjective. \end{theorem} The map $B\to \mathrm{tmf}$ factors through $\mathrm{MString}$, so Theorem \ref{string-surj} follows. In Section \ref{invert-2}, we prove Theorem \ref{main-thm} after localizing at $3$ (as Theorem \ref{main-thm-invert-2}). In Section \ref{prime-2}, we prove Theorem \ref{main-thm} after localizing at $2$ (as Theorem \ref{main-thm-prime-2}); together, these yield Theorem \ref{string-surj} by Corollary \ref{12-equiv}. Finally, in Section \ref{apps}, we study some applications of Theorem \ref{string-surj}. In particular, we discuss Hirzebruch's prize question \cite[Page 86]{hirzebruch} along the lines of \cite[Corollary 6.26]{hopkins-icm}. We also prove a conjecture of Baker's from \cite{baker-conjecture}. The surjectivity of the Atiyah-Bott-Shapiro orientation $\mathrm{MSpin}\to \mathrm{bo}$ was considerably strengthened by Anderson, Brown, and Peterson in \cite{abp}: they showed that the Atiyah-Bott-Shapiro orientation $\mathrm{MSpin}\to \mathrm{bo}$ in fact admits a spectrum-level splitting. It is a folklore conjecture that the same is true of the Ando-Hopkins-Rezk orientation $\mathrm{MString}\to \mathrm{tmf}$, and there have been multiple investigations in this direction (see, for instance, \cite{laures-k1-local, laures-k2-local}). In forthcoming work \cite{bpn-thom}, we study in detail the relationship between $B$ and $\mathrm{tmf}$ (as well as $A$ and $\mathrm{bo}$). We show that old conjectures of Cohen, Moore, Neisendorfer, Gray, and Mahowald in unstable homotopy theory related to the Cohen-Moore-Neisendorfer theorem, coupled with a conjecture about the centrality of a certain element of $\sigma_2\in \pi_{13}(B)$ (resp. $\sigma_1\in \pi_5(A)$), implies that the Ando-Hopkins-Rezk orientation (resp. the Atiyah-Bott-Shapiro orientation) admits a spectrum level splitting. This provides another proof of Theorem \ref{string-surj}, assuming the truth of these conjectures. \subsection*{Acknowledgements} I'm extremely grateful to Mark Behrens and Peter May for agreeing to work with me this summer and for being fantastic advisors, as well as for arranging my stay at UChicago. I'd like to also thank Stephan Stolz for useful conversations when I visited Notre Dame. I'm also grateful to Andrew Baker, Hood Chatham, Jeremy Hahn, Eleanor McSpirit, and Zhouli Xu for clarifying discussions. Thanks also to Andrew Baker, Peter May, Haynes Miller, Zhouli Xu, and in particular Jeremy Hahn for providing many helpful comments and correcting mistakes, and to Andrew Senger for pointing out the reference \cite{hopkins-mahowald-orientations} after this paper was written. \section{Warmup: surjectivity of the Atiyah-Bott-Shapiro orientation}\label{abs-surj} The goal of this section is to provide a proof of the following classical theorem using techniques which generalize to prove Theorem \ref{string-surj}. \begin{theorem}\label{spin-surj} The map $\pi_\ast \mathrm{MSpin} \to \pi_\ast \mathrm{bo}$ induced by the Atiyah-Bott-Shapiro orientation is surjective. \end{theorem} As mentioned in the introduction, we prove Theorem \ref{spin-surj} by constructing an $\E{1}$-ring $A$ with an $\E{1}$-map $A\to \mathrm{MSpin}$. Composing with the Atiyah-Bott-Shapiro orientation $\mathrm{MSpin}\to \mathrm{bo}$ produces an $\E{1}$-map $A\to \mathrm{MSpin} \to \mathrm{bo}$. We then show the following result, which implies Theorem \ref{spin-surj}. \begin{theorem}\label{A-surj} The map $\pi_\ast A\to \pi_\ast \mathrm{bo}$ is surjective. \end{theorem} \begin{remark} Theorem \ref{A-surj} is an old result of Mahowald's: it appears, for instance, as \cite[Proposition 4.1(c)]{mahowald-some-etaj} and \cite[Proposition 2.2(3)]{hopkins-mahowald-orientations}. Theorem \ref{A-surj} also implies the second part of \cite[Proposition 4.10]{baker-characteristics}. \end{remark} The definition of the $\E{1}$-ring $A$ is as follows. \begin{construction} Let $S^4\to \mathrm{BSpin}$ be a generator of $\pi_4 \mathrm{BSpin} \cong \mathbf{Z}$. Since $\mathrm{BSpin}$ is an infinite loop space, there is an induced map $\Omega S^5\to \mathrm{BSpin}$. Let $A$ denote the Thom spectrum of this map. This is an $\E{1}$-ring with an $\E{1}$-map $A\to \mathrm{MSpin}$. Its $15$-skeleton is shown in Figure \ref{A-15-skeleton}. \end{construction} \begin{remark}\label{univ-property} The image of a generator of $\pi_4 \mathrm{BSpin}$ under the J-homomorphism $\mathrm{BSpin}\to B\mathrm{GL}_1(\S)$ is the Hopf element $\nu\in \pi_4 B\mathrm{GL}_1(\S) \cong \pi_3 \S$. Consequently, $A$ is the Thom spectrum of the map $\Omega S^5\to B\mathrm{GL}_1(\S)$ which detects $\nu$ on the bottom cell $S^4$ of the source. In particular, the universal property of Thom spectra from \cite{barthel-thom} exhibits $A$ as the $\E{1}$-quotient $\S/\!\!/\nu$ of the sphere spectrum by $\nu$. \end{remark} \begin{remark} The spectrum $A$ is ubiquitous in Mahowald's older works \cite{mahowald-thom, mahowald-bo-res, mahowald-imj} (where it is often denoted $X_5$), where its relationship to $\mathrm{bo}$ via the composite $A\to \mathrm{MSpin}\to \mathrm{bo}$ is utilized to great effect. \end{remark} \begin{prop}\label{A-homology} The $\mathrm{BP}_\ast$-algebra $\mathrm{BP}_\ast(A)$ is isomorphic to a polynomial algebra $\mathrm{BP}_\ast[y_2]$, where $|y_2| = 4$. There is a map $A_{(p)}\to \mathrm{BP}$. On $\mathrm{BP}$-homology, the element $y_2$ maps to $t_1^2$ mod decomposables at $p=2$. \end{prop} \begin{proof} The space $\Omega S^5$ has cells only in dimensions divisible by $4$, and hence the same is true of $A$. The Atiyah-Hirzebruch spectral sequence for $\mathrm{BP}_\ast(A)$ therefore collapses at the $E^2$-page, and so $\mathrm{BP}_\ast(A) \cong \mathrm{BP}_\ast[y_2]$, as desired. Since $\pi_\ast \mathrm{BP}$ is concentrated in even degrees, the element $\nu$ vanishes in $\pi_3 \mathrm{BP}$. The universal property of $A$ from Remark \ref{univ-property} therefore produces an $\E{1}$-map $A\to \mathrm{BP}$. The element $\nu$ is detected by $[t_1^2]$ in the $2$-local Adams-Novikov spectral sequence for the sphere (in fact, a choice of representative in the cobar complex is $t_1^2 + v_1 t_1$), so this yields the final sentence of the proposition. \end{proof} \begin{remark}\label{A-equiv} In particular, the map $A\to \mathrm{bo}$ is an equivalence in dimensions $\leq 4$. Proposition \ref{A-homology} implies that $\H_\ast(A; \mathbf{F}_2) \cong \mathbf{F}_2[\zeta_1^4]$; note that this is the $Q_0$-Margolis homology of $\H_\ast(\mathrm{bo};\mathbf{F}_2) \cong \mathbf{F}_2[\zeta_1^4, \zeta_2^2, \zeta_3, \cdots]$. This is sharp: $\pi_5 A$ contains a nontrivial element $\sigma_1$ which maps to zero in $\pi_\ast \mathrm{bo}$. This element is specified up to indeterminacy by the relation $\eta\nu = 0$; see Figure \ref{A-15-skeleton}. \end{remark} \begin{figure} \begin{tikzpicture}[scale=0.75] \draw [fill] (0, 0) circle [radius=0.05]; \draw [fill] (1, 0) circle [radius=0.05]; \draw [fill] (1, 1) circle [radius=0.05]; \draw [fill] (2, 0) circle [radius=0.05]; \draw [fill] (3, 0) circle [radius=0.05]; \draw (0,0) to node[below] {\footnotesize{$\nu$}} (1,0); \draw (1,0) to node[below] {\footnotesize{$2\nu$}} (2,0); \draw (2,0) to node[below] {\footnotesize{$3\nu$}} (3,0); \draw [->] (1,1) to node[left] {\footnotesize{$\eta$}} (1,0); \draw (0,0) to[out=-90,in=-90] node[below] {\footnotesize{$\sigma$}} (2,0); \end{tikzpicture} \caption{The $15$-skeleton of $A$ at the prime $2$ shown horizontally, with $0$-cell on the left. The element $\sigma_1$ is depicted.} \label{A-15-skeleton} \end{figure} We will momentarily prove Theorem \ref{A-surj}. Before doing so, we need to introduce one piece of notation. \begin{notation} Let $M$ be a unital spectrum, and suppose $\alpha,\beta\in \pi_\ast \S$ and $\gamma\in \pi_\ast M$ are elements such that $\alpha\beta = 0$ and $\beta \gamma = 0$ in $M$. Following \cite[Section 7]{baker-may}, the Toda bracket $\langle \alpha, \beta, \gamma \rangle$ will denote the coset of elements of $\pi_{|\alpha| + |\beta| + |\gamma| + 1}(M)$ determined by the subgroup $\mathrm{indet} = \alpha \pi_{|\beta| + |\gamma| + 1}(M) + \pi_{|\alpha| + |\beta| + 1}(\S) \gamma$. The subgroup $\mathrm{indet}$ is the indeterminacy of the bracket $\langle \alpha, \beta, \gamma\rangle$. There is an analogous definition for higher-fold Toda brackets, but we will not elaborate more on this, since we shall not need it. We will use this notation throughout without further comment. \end{notation} \begin{proof}[Proof of Theorem \ref{A-surj}] Because the map $A\to \mathrm{bo}$ is one of $\E{1}$-rings, it suffices to lift all the generators of $\pi_\ast \mathrm{bo}$ to $\pi_\ast A$. We first prove Theorem \ref{A-surj} after inverting $2$. Since $\pi_\ast \mathrm{bo}[1/2] \cong \mathbf{Z}[1/2][u^2]$, where $u^2$ is the square of the Bott element. It follows from Remark \ref{A-equiv} that the polynomial generator $u^2$ of $\pi_\ast \mathrm{bo}[1/2]$ lifts to $\pi_\ast A[1/2]$. It remains to prove Theorem \ref{A-surj} after $2$-localization. Recall that $\pi_\ast \mathrm{bo}$ is polynomially generated by $\eta$ in degree $1$ (which is spherical), $2v_1^2$ in degree $4$, and $v_1^4$ in degree $8$ (subject to some relations). The element $\eta\in \pi_1 \mathrm{bo}_{(2)}$ lifts to $\pi_1 A_{(2)}$ because it is a spherical element and the unit $\S\to \mathrm{bo}_{(2)}$ factors through $A_{(2)}$. It remains to lift the other two generators. Remark \ref{A-equiv} already shows that $2v_1^2$ lifts to $\pi_\ast A_{(2)}$. Alternatively, recall that there is a sole $d_3$-differential $d_3(v_1^2) = \eta^3$ in the Adams-Novikov spectral sequence for $\mathrm{bo}$. This implies that $2v_1^2\in \langle 8, \nu, 1_\mathrm{bo}\rangle$, with indeterminacy $0\pmod{2}$. Since $\nu$ vanishes in $\pi_\ast A$, we find that the bracket $\langle 8, \nu, 1_A\rangle$ is well-defined in $\pi_4 A_{(2)}$. For $v_1^4$, recall that $\sigma = 0$ in $\pi_\ast \mathrm{bo}$, and that $v_1^4\in \langle 16, \sigma, 1_\mathrm{bo}\rangle \subseteq \pi_\ast \mathrm{bo}$ (see \cite[Lemma 7.3]{baker-may}), where the indeterminacy in this bracket is $0\pmod{2}$. Note that $\langle 16, \sigma, 1_\mathrm{bo}\rangle \subseteq \langle 4, 4\sigma, 1_\mathrm{bo}\rangle$. We now observe that the attaching map of the $8$-cell of $A_{(2)}$ is given by $\sigma + \wt{2\nu}$, where $\wt{2\nu} \in \pi_7(C\nu)$ is an element determined by the relation $2\nu^2 = 0\in \pi_6(\S)$; see, for instance, \cite[Lemma 4.7]{baker-characteristics}. In particular, this implies that $4\sigma = 0$ in $\pi_\ast A_{(2)}$, so the bracket $\langle 4, 4\sigma, 1_A\rangle$ is well-defined. It follows from the above discussion that $2v_1^2$ and $v_1^4$ lift to $\pi_\ast A_{(2)}$ up to indeterminacy (and the indeterminacy in $\pi_\ast \mathrm{bo}$ is $0\pmod{2}$). If $2v_1^2 + 4nv_1^2 = 2(2n+1) v_1^2$ lifts to $\pi_\ast A_{(2)}$ for some $n\in \mathbf{Z}_{(2)}$, then so does $2v_1^2$ since $2n+1$ is a $2$-local unit. Arguing similarly for $v_1^4$, it follows that the other two generators of $\pi_\ast \mathrm{bo}_{(2)}$ lift to $\pi_\ast A_{(2)}$, as desired. \end{proof} \section{Defining $B$}\label{B-def} In this section, we will define the $\E{1}$-ring $B$ mentioned in the introduction and study some of its elementary properties. It is the height $2$ analogue of the spectrum $A$ from Section \ref{abs-surj}. We define $B$ as a Thom spectrum whose mod $2$ homology is given by $\mathbf{F}_2[\zeta_1^8, \zeta_2^4]$; notice that this is the $Q_0$-Margolis homology of the mod $2$ homology of $\mathrm{tmf}$. The spectrum $B$ appeared under the name $\ol{X}$ in \cite[Section 10]{hopkins-mahowald-orientations}. We will work \emph{integrally} (i.e., without inverting any primes) unless explicitly mentioned otherwise. \begin{construction}\label{B-constr} There is a fiber sequence $$S^9 = \O(10)/\O(9)\to \mathrm{BO}(9) \to \mathrm{BO}(10).$$ There is an element $f\in \pi_{12} \O(10) \cong \mathbf{Z}/12$, which is sent to $2\nu\in\pi_{12}(S^9) \cong \mathbf{Z}/24$ under the boundary homomorphism in the long exact sequence on homotopy. Define a space $BN$ as the homotopy pullback $$\xymatrix{ S^9 \ar[r] \ar@{=}[d] & BN\ar[r] \ar[d] & S^{13}\ar[d]^-f\\ S^9 \ar[r] & \mathrm{BO}(9) \ar[r] & \mathrm{BO}(10). }$$ Let $N$ be the loop space $\Omega BN$. If $S^9\to \mathrm{B^2 String}$ denotes the generator of $\pi_8 \mathrm{BString}$, then the composite $S^{12} \xrightarrow{2\nu} S^9\to \mathrm{B^2 String}$ is null. The Atiyah-Hirzebruch-Serre spectral sequence shows that the generator of $\mathrm{bstring}^1(S^9)$ extends to $\mathrm{bstring}^1(BN)$, and so there is a map $BN\to \mathrm{B^2 String}$. The induced loop map $N\to \mathrm{BString}$ is given by the map of fiber sequences \begin{equation}\label{fiber-sequence-map} \xymatrix{ N \ar[r] \ar[d] & \Omega S^{13} \ar[r] \ar[d] & S^9 \ar[d]\\ \mathrm{BString} \ar[r] & \ast \ar[r] & \mathrm{B^2 String}. } \end{equation} The Thom spectrum of the map $N\to \mathrm{BString}$ is the $\E{1}$-ring $B$. \end{construction} Note that $B$ is defined integrally, and that it admits an $\E{1}$-map $B\to \mathrm{MString}$ obtained by Thomifying the map $N\to \mathrm{BString}$. \begin{prop}\label{bp-homology} The $\mathrm{BP}_\ast$-algebra $\mathrm{BP}_\ast(B)$ is isomorphic to a polynomial algebra $\mathrm{BP}_\ast[b_4, y_{6}]$, where $|b_4| = 8$ and $|y_6| = 12$. There is a map $B_{(p)}\to \mathrm{BP}$. On $\mathrm{BP}$-homology, the elements $b_4$ and $y_{6}$ map to $t_1^4$ and $t_2^2$ mod decomposables at $p=2$, and $y_{6}$ maps to $t_1^3$ mod decomposables at $p=3$. \end{prop} \begin{proof} There is a fiber sequence \begin{equation}\label{fiber-sequence} \Omega S^9\to N\to \Omega S^{13}. \end{equation} The J-homomorphism $\mathrm{BString}\to B\mathrm{GL}_1(\S)$ gives a map $N\to \mathrm{BString}\to B\mathrm{GL}_1(\S)$. The composite with the map $\Omega S^9\to N$ gives a map $\Omega S^9\to B\mathrm{GL}_1(\S)$. This is the extension of the map $S^8\to B\mathrm{GL}_1(\S)$ detecting $\sigma\in \pi_7(\S)$ along $S^8\to \Omega S^9$. By one of the main theorems of \cite{barthel-thom}, we find that the Thom spectrum of the map $\Omega S^9\to B\mathrm{GL}_1(\S)$ is the $\E{1}$-quotient $\S/\!\!/\sigma$ of the sphere spectrum by $\sigma$. The fiber sequence \eqref{fiber-sequence} exhibits $B$ as the Thom spectrum of a map $\Omega S^{13}\to B\mathrm{GL}_1(\S/\!\!/\sigma)$. The induced map $S^{12}\to B\mathrm{GL}_1(\S/\!\!/\sigma)$ detects an element $\wt{\nu}\in \pi_{11}(\S/\!\!/\sigma)$. This element may be described as follows. The relation $\sigma\nu = 0$ in $\pi_\ast \S$ defines a lift of $\nu\in\pi_3(\S)$ to $\pi_{11}$ of the $15$-skeleton $C\sigma$ of $\S/\!\!/\sigma$; this is the element $\wt{\nu}$. Since $\mathrm{BP}$ is concentrated in even degrees, the element $\sigma$ vanishes in $\pi_\ast \mathrm{BP}$. Consequently, $\wt{\nu}$ is well-defined, and it, too vanishes in $\pi_\ast \mathrm{BP}$. The universal property of Thom spectra from \cite{barthel-thom} then supplies an $\E{1}$-map $B\to \mathrm{BP}$. In particular, the Thom isomorphism says that the $\mathrm{BP}$-homology of $B$ is abstractly isomorphic as an algebra to the $\mathrm{BP}$-homology of $N$. This may in turn be computed by the Atiyah-Hirzebruch spectral sequence. However, the fiber sequence \eqref{fiber-sequence} implies that the homology of $N$ is concentrated in even degrees. Since $\pi_\ast \mathrm{BP}$ is also concentrated in even degrees, this implies that the Atiyah-Hirzebruch spectral sequence calculating $\mathrm{BP}_\ast(B)$ collapses, and we find that $\mathrm{BP}_\ast(B) \cong \mathrm{BP}_\ast[b_4, y_6]$, as desired. The map $B\to \mathrm{BP}$ induces a map $\mathrm{BP}_\ast(B)\to \mathrm{BP}_\ast(\mathrm{BP}) \cong \mathrm{BP}_\ast[t_1, t_2, \cdots]$. The element $\wt{\nu}$ is detected by $[t_2^2]$ in the $2$-local Adams-Novikov spectral sequence for $\S/\!\!/\sigma$, and by $[t_1^3]$ in the $3$-local Adams-Novikov spectral sequence for $\S/\!\!/\sigma$. The element $\sigma\in \pi_7(\S)$ is detected by $[t_1^4]$ in the $2$-local Adams-Novikov spectral sequence for the sphere. This yields the final sentence of the proposition. \end{proof} \begin{remark} Proposition \ref{bp-homology} implies that the mod $2$ homology of $B$ is isomorphic to $\mathbf{F}_2[\zeta_1^8, \zeta_2^4]$; note that this is the $Q_0$-Margolis homology of $\H_\ast(\mathrm{tmf};\mathbf{F}_2) \cong \mathbf{F}_2[\zeta_1^8, \zeta_2^4, \zeta_3^2, \zeta_4, \cdots]$. \end{remark} \begin{remark}\label{same-12} The composite $B\to \mathrm{MString}\to \mathrm{tmf}$ is an $\E{1}$-ring map (since the first map is an $\E{1}$-ring map by construction, and the second is an ${E_\infty}$-ring map by \cite{koandtmf}), and it is an equivalence in dimensions $\leq 12$. This follows from Proposition \ref{bp-homology}. \end{remark} \begin{prop} The map $B\to \mathrm{tmf}$ induces a surjection on homotopy after inverting $6$. \end{prop} \begin{proof} By \cite[Proposition 4.4]{bauer-tmf}, $\pi_\ast \mathrm{tmf}[1/6]$ is a polynomial generator on two generators $c_4$ and $c_6$, in degrees $8$ and $12$, respectively. Since the map $B[1/6]\to \mathrm{tmf}[1/6]$ is an $\E{1}$-map, the map $\pi_\ast B[1/6]\to \pi_\ast \mathrm{tmf}[1/6]$ is a ring map. It therefore suffices to lift the elements $c_4$ and $c_6$ to $\pi_\ast B[1/6]$. This follows from Remark \ref{same-12}. \end{proof} As an immediate consequence, we have: \begin{corollary}\label{12-equiv} If the maps $\pi_\ast B_{(3)}\to \pi_\ast \mathrm{tmf}_{(3)}$ and $\pi_\ast B_{(2)}\to \pi_\ast \mathrm{tmf}_{(2)}$ are surjective, then Theorem \ref{main-thm} is true. \end{corollary} \begin{remark}\label{B-wood} In \cite{bpn-thom}, we show that $B$ is in many ways analogous to $\mathrm{tmf}$. For instance, it satisfies an analogue of the $2$-local Wood equivalence $\mathrm{tmf}_{(2)} \wedge DA_1 \simeq \mathrm{tmf}_1(3)_{(2)}$ from \cite{homologytmf}, where $DA_1$ is a certain $8$-cell complex: the spectrum $B_{(2)} \wedge DA_1$ is a summand of Ravenel's Thom spectrum $X(4)_{(2)}$. (More precisely, it is the summand $T(2)$ of $X(4)_{(2)}$ obtained from the Quillen idempotent, as studied in \cite[Chapter 6.5]{green}.) \end{remark} \section{Theorem \ref{main-thm} after localizing at $3$}\label{invert-2} In Corollary \ref{12-equiv}, we reduced Theorem \ref{main-thm} to showing that the maps $\pi_\ast B_{(3)}\to \pi_\ast \mathrm{tmf}_{(3)}$ and $\pi_\ast B_{(2)}\to \pi_\ast \mathrm{tmf}_{(2)}$ are surjective. Our goal in this section is to study the $3$-local case. We shall prove: \begin{theorem}\label{main-thm-invert-2} The map $\pi_\ast B_{(3)} \to \pi_\ast \mathrm{tmf}_{(3)}$ is surjective on homotopy. \end{theorem} \begin{convention}\label{3-localize} We shall localize at the prime $3$ for the remainder of this section. \end{convention} \subsection{The Adams-Novikov spectral sequence for $\mathrm{tmf}$}\label{anss-tmf} In this section, we review the Adams-Novikov spectral sequence for $\mathrm{tmf}$ at $p=3$; as mentioned in Convention \ref{3-localize}, we shall $3$-localize everywhere. The following result is well-known, and is proved in \cite{bauer-tmf}: \begin{theorem} The $E_2$-page of the descent spectral sequence (isomorphic to the Adams-Novikov spectral sequence) for $\mathrm{tmf}$ is $$\H^\ast(\M_\mathrm{ell};\omega^{\otimes 2\ast}) \cong \mathbf{Z}_3[\alpha, \beta, c_4, c_6, \Delta^{\pm 1}]/I,$$ where $I$ is the ideal generated by the relations $$3\alpha = 3\beta = 0, \ \alpha^2 = 0, \ \alpha c_4 = \beta c_4 = \alpha c_6 = \beta c_6 = 0, \ c_4^3 - c_6^3 = 1728 \Delta.$$ Moreover, $\alpha$ and $\beta$ are in the image of the map of spectral sequences from the Adams-Novikov spectral sequence of the sphere to that of $\mathrm{tmf}$, with preimages $\alpha_1$ and $\beta_1$. \end{theorem} The differentials are all deduced from Toda's relation $\alpha_1 \beta_1^3 = 0$ in $\pi_\ast \S$. There is a $d_5$-differential $d_5(\beta_{3/3}) = \alpha_1 \beta_1^3$ (the ``Toda differential''), where $\beta_{3/3}$ lives in bidegree $(t-s,s) = (34,2)$; see, e.g., \cite[Theorem 4.4.22]{green}. Under the ${E_\infty}$-ring map $\S\to \mathrm{tmf}$, this pushes forward to the same differential in the Adams-Novikov spectral sequence for $\mathrm{tmf}$. Then: \begin{lemma}\label{beta-3-3} There is a relation $\beta_{3/3} = \Delta \beta$ in the $E_2$-page of the Adams-Novikov spectral sequence for $\mathrm{tmf}$. \end{lemma} \begin{proof} We explain how to deduce this from the literature. Multiplication by $\alpha$ is an isomorphism in the Adams-Novikov spectral sequence for both the sphere and $\mathrm{tmf}$ in stem $34$, so it suffices to check that $\alpha \beta_{3/3} = \Delta \alpha\beta$. The class $\alpha_1 \beta_{3/3}$ (resp. $\Delta \alpha \beta$) is a permanent cycle in the Adams-Novikov spectral sequence of the sphere (resp. $\mathrm{tmf}$) by the discussion on \cite[Page 137]{green}. It is known (see \cite[Chapter 13, page 12]{tmf}) that $\Delta \alpha \beta$ detects $\alpha_1 \beta_{3/3}$ in homotopy. To conclude that they are the same on the $E_2$-page of the Adams-Novikov spectral sequence for $\mathrm{tmf}$, it suffices to note that $\alpha_1 \beta_{3/3}$ maps to (a unit multiple of) $\Delta \alpha\beta$, as desired. \end{proof} It follows by naturality that there is a $d_5$-differential $d_5(\Delta \beta) = \alpha \beta^3$, which gives (by $\beta$-linearity): \begin{prop}\label{d5-tmf-3} In the Adams-Novikov spectral sequence for $\mathrm{tmf}$, there is a $d_5$-differential $d_5(\Delta) = \alpha \beta^2$. \end{prop} Since $3\alpha = 0$ in the Adams-Novikov spectral sequence of $\mathrm{tmf}$, we must have $d_5(3\Delta) = 3\alpha\beta^2 = 0$. There are no other possibilities for differentials on $3\Delta$, so it is a permanent cycle. Proposition \ref{d5-tmf-3} shows that there is a Toda bracket $3\Delta\in \langle 3, \alpha, \beta^2\rangle$ in $\pi_\ast \mathrm{tmf}$. This can be expressed by the claim that $3\Delta$ can be expressed a composite $$S^{24} \to \Sigma^{20} C\alpha_1\xrightarrow{\beta^2} \mathrm{tmf},$$ where the first map is of degree $3$ on the top cell. By $\Delta$-linearity, there is also a $d_5$-differential $d_5(\Delta^2) = \alpha \beta^2 \Delta$, so $3\Delta^2$ lives in the $E_6$-page. There are no further possibilities for differentials, so $3\Delta^2$ lives in $\pi_\ast \mathrm{tmf}$. Again, this shows that $3\Delta^2\in \langle 3, \Delta \alpha, \beta^2\rangle$. Finally, we turn to $\Delta^3$. We have $d_5(\Delta^3) = 3\Delta^2 \alpha \beta^2$, so we find that $\Delta^3 \in \langle 3, \Delta^2 \alpha, \beta^2\rangle$. We collect our conclusions in the following: \begin{corollary}\label{bracket-delta} The following is true in $\pi_\ast \mathrm{tmf}$: \begin{itemize} \item $3\Delta\in \langle 3, \alpha, \beta^2\rangle$; \item $3\Delta^2 \in \langle 3, \Delta\alpha, \beta^2\rangle$ \item $\Delta^3 \in \langle 3, \Delta^2\alpha, \beta^2\rangle$. \end{itemize} \end{corollary} \begin{remark}\label{toda-unique-3} The indeterminacy of the above Toda brackets in $\pi_\ast \mathrm{tmf}_{(3)}$ are $3\mathbf{Z}_{(3)}\{3\Delta\}$, $3\mathbf{Z}_{(3)}\{3\Delta^2\}$, and $3\mathbf{Z}_{(3)}\{\Delta^3\}$, respectively. \end{remark} \subsection{The Adams-Novikov spectral sequence for $B$} In this section, we analyze the ring map $B\to \mathrm{tmf}$, and show that the generators of $\pi_\ast \mathrm{tmf}_{(3)}$ lift to $\pi_\ast B_{(3)}$. By Corollary \ref{12-equiv}, this implies Theorem \ref{main-thm-invert-2}. We begin by showing: \begin{prop}\label{delta-lift} There is an element in the $E_2$-page of the Adams-Novikov spectral sequence for $B$ which lifts the element $\Delta$ in the $E_2$-page of the Adams-Novikov spectral sequence for $\mathrm{tmf}$. \end{prop} \begin{proof} To prove the proposition, we begin by recalling the definition of a representative for the element $\Delta$ in the cobar complex computing the $E_2$-page of the Adams-Novikov spectral sequence for $\mathrm{tmf}$. The Hopf algebroid $(\mathrm{BP}_\ast \mathrm{tmf}, \mathrm{BP}_\ast \mathrm{BP}\otimes_{\mathrm{BP}_\ast} \mathrm{BP}_\ast \mathrm{tmf})$ is isomorphic to the elliptic curve Hopf algebroid $(A, \Gamma)$ presenting the moduli stack of cubic curves by \cite[Corollary 5.3]{homologytmf}. Recall from \cite[Page 16]{bauer-tmf} (or \cite[Section III.1]{silverman}) that for an elliptic curve in Weierstrass form \begin{equation}\label{weier} y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6, \end{equation} we can define quantities $$b_2 = a_1^2 + 4a_2, \ b_4 = 2a_4 + a_1 a_3, \ b_6 = a_3^2+ 4a_6, \ b_8 = a_1^2 a_6 + 4a_2 a_6 - a_1 a_3 a_4 + a_2 a_3^2 - a_4^2,$$ which allows us to define elements $$c_4 = b_2^2 - 24 b_4, \ c_6 = -b_2^3 + 36 b_2 b_4 - 216 b_6.$$ The discriminant is $$\Delta = -b_2^2 b_8 - 8 b_4^3- 27 b_6^2 + 9b_2 b_4 b_6.$$ Now, it is known that upon inverting $2$, every elliptic curve in Weierstrass form \eqref{weier} is isomorphic to one of the form \begin{equation}\label{weier-3} y^2 = x^3 + a_2 x^2 + a_4 x. \end{equation} It follows that the elliptic curve Hopf algebroid is isomorphic to a Hopf algebroid of the form $(A',\Gamma') = (\mathbf{Z}[1/2][a_2, a_4], A'[r]/(r^3 + a_2 r^2 + a_4 r))$, where $I$ is some ideal consisting of complicated relations, and where the Hopf algebroid structure can be written down explicitly (as in \cite[Section 3]{bauer-tmf}). A straightforward calculation proves that the discriminant is then \begin{equation}\label{Delta-3} \Delta = a_2^2 b_4^2 - 16 b_4^3. \end{equation} Turning to $B$, recall that $\mathrm{BP}_\ast B \cong \mathrm{BP}_\ast[b_4, y_{6}]$. The map $(\mathrm{BP}_\ast B, \mathrm{BP}_\ast \mathrm{BP} \otimes_{\mathrm{BP}_\ast} \mathrm{BP}_\ast B) \to (A', \Gamma')$ of Hopf algebroids induced by the map $B\to \mathrm{tmf}$ sends $b_4$ to $b_4$ and $y_{6}$ to $a_2 b_4$ mod decomposables. It follows from Equation \eqref{Delta-3} that the element $\Delta$ already exists in the $0$-line of the Adams-Novikov spectral sequence for $B$. Using Sage to calculate the $3$-series of the formal group law of the elliptic curve \eqref{weier-3}, one finds that $v_1$ is $a_2$ up to a $3$-adic unit. We conclude that $$c_4 = 4v_1^2 - 24b_4, \ c_6 = -4v_1^3 - 144 y_{6}.$$ This completes the proof of Proposition \ref{delta-lift}. \end{proof} By Remark \ref{same-12}, the elements $c_4,c_6\in \pi_\ast \mathrm{tmf}$ lift to $\pi_\ast B$. The key to lifting the other elements of $\pi_\ast \mathrm{tmf}$ is the following: \begin{theorem}\label{d5-B} There is a differential $d_5(\Delta) = \alpha \beta^2$ in the Adams-Novikov spectral sequence for $B$. Moreover, $\alpha \beta^2$ vanishes in $\pi_\ast B$, and $3\Delta$ is a permanent cycle. \end{theorem} \begin{proof} The element $\alpha \beta^2$ is detected in filtration $5$ in the Adams-Novikov spectral sequence for the sphere. We first check that there is no class above filtration $5$ in stem $23$ the Adams-Novikov spectral sequence for $B$. In Figure \ref{B-cell}, we depict the $20$-skeleton of $B$. Now, $\alpha \beta^2$ is the first class in filtration $5$ in the Adams-Novikov spectral sequence for the sphere, so there are no classes above filtration $5$ in stem $23$ in the algebraic Atiyah-Hirzebruch spectral sequence (converging to the Adams-Novikov spectral sequence of $B$). Consequently, there are no classes above filtration $5$ in stem $23$ of the Adams-Novikov spectral sequence for $B$. It follows that $\alpha \beta^2$ must be detected in filtration $5$ in the Adams-Novikov spectral sequence for $B$. Moreover, if the $d_5$-differential on $\Delta$ exists, then it is the longest one (and hence $3\Delta$ is a permanent cycle). We now prove the $d_5$-differential. We first claim that there is no nonzero target for a $d_r$-differential on $\Delta$ for $2\leq r\leq 4$. Indeed, such a class must live in bidegree $(t-s,s) = (23,r)$, so we only need to check that there are no classes in that bidegree. Such a class can only possibly come from those permanent cycles in the algebraic Atiyah-Hirzebruch spectral sequence which are supported on stems $23-8 = 15$, $23-12 = 11$, $23-16 = 7$, or $23-20 = 3$ of the Adams-Novikov spectral sequence of the sphere. The only classes in these stems are in Adams-Novikov filtration $1$, so cannot possibly contribute to a class that lives in bidegree $(t-s,s) = (23,r)$ with $2\leq r\leq 4$. Therefore, the first possibility for a differential on $\Delta$ is the $d_5$-differential $d_5(\Delta) = \alpha \beta^2$. The existence of this differential is forced by the same differential in the Adams-Novikov spectral sequence for $\mathrm{tmf}$. Therefore, $\alpha\beta^2$ vanishes in the $E_\infty$-page of the ANSS for $B$; there may, however, be a multiplicative extension causing $\alpha \beta^2$ to be nonzero in $\pi_\ast B$. But multiplicative extensions have to jump filtration, and we established that there are no classes above filtration $5$ in stem $23$ of the Adams-Novikov spectral sequence for $B$. Therefore, $\alpha\beta^2 = 0$ in $\pi_\ast B$, as desired. \begin{figure} \begin{tikzpicture}[scale=0.75] \draw [fill] (0, 0) circle [radius=0.05]; \draw [fill] (2, 0) circle [radius=0.05]; \draw [fill] (3, 0) circle [radius=0.05]; \draw [fill] (4, 0) circle [radius=0.05]; \draw [fill] (5, 0) circle [radius=0.05]; \draw (2,0) to (3,0); \draw (4,0) to (5,0); \draw (0,0) to[out=90,in=90] (2,0); \draw (3,0) to[out=90,in=90] (5,0); \draw (0,0) to[out=-90,in=-90] (4,0); \end{tikzpicture} \caption{Cell structure of the $20$-skeleton of $B$; the bottom cell (in dimension $0$) is on the left; straight lines are $\alpha_1$, and curved lines correspond to $\alpha_2$ and $\alpha_4$, in order of increasing length.} \label{B-cell} \end{figure} \end{proof} \begin{corollary} The elements $3\Delta,3\Delta^2,\Delta^3\in\pi_\ast \mathrm{tmf}$ lift to $\pi_\ast B$. \end{corollary} \begin{proof} Theorem \ref{d5-B} verifies that $3\Delta$ lifts to $\pi_\ast B$ and that the brackets in Corollary \ref{bracket-delta} are well-defined in $\pi_\ast B$. This implies that $3\Delta^2$ and $\Delta^3$ in $\pi_\ast \mathrm{tmf}$ lift to $\pi_\ast B$ up to indeterminacy. Remark \ref{toda-unique-3} tells us the indeterminacy of the brackets in Corollary \ref{bracket-delta}. If $3\Delta^2 + 3n[3\Delta^2] = 3(3n+1)\Delta^2$ (resp. $\Delta^3 + 3n\Delta^3 = (3n+1)\Delta^3$) lifts for some nonzero $n\in \mathbf{Z}_{(3)}$, then so does $3\Delta^2$ (resp. $\Delta^3$) since $3n+1$ is a $3$-local unit. \end{proof} The elements $\alpha$, $\beta$, $c_4$, $c_6$, $3\Delta$, $3\Delta^2$, $\Delta^3$, and $b = \langle \beta^2, \alpha, \alpha\rangle$ (no indeterminacy) generate the homotopy of $\mathrm{tmf}$. Moreover, $\alpha \beta^2 = 0$ in $\pi_\ast B$ and $\alpha^2 = 0$ in the sphere, so $b$ admits a lift to $\pi_\ast B$. Therefore, all generators of $\pi_\ast \mathrm{tmf}$ admit lifts to $\pi_\ast B$; this yields Theorem \ref{main-thm-invert-2}. \section{Theorem \ref{main-thm} after localizing at $2$}\label{prime-2} Our goal in this section is to prove: \begin{theorem}\label{main-thm-prime-2} The map $\pi_\ast B_{(2)} \to \pi_\ast \mathrm{tmf}_{(2)}$ is surjective on homotopy. \end{theorem} Together with Theorem \ref{main-thm-invert-2} and Corollary \ref{12-equiv}, this proves Theorem \ref{main-thm}. \begin{convention} We shall localize at $2$ throughout this section, unless explicitly mentioned otherwise. \end{convention} \subsection{The Adams-Novikov spectral sequence for $\mathrm{tmf}$}\label{anss-tmf-2} In this section, we review the Adams-Novikov spectral sequence for $\mathrm{tmf}$ at $p=2$. The following result is well-known, and is proved in \cite{bauer-tmf} (see also \cite[Proposition 1.4.9]{mark-handbook}): \begin{theorem} The $E_2$-page of the descent spectral sequence (isomorphic to the Adams-Novikov spectral sequence) for $\mathrm{tmf}$ is $$\H^\ast(\M_\mathrm{ell}; \omega^{2\ast}) \cong \mathbf{Z}_{(2)}[c_4, c_6, \Delta^{\pm 1}, \eta, a_1^2 \eta, \nu, \epsilon, \kappa, \ol{\kappa}]/I,$$ where $I$ is the ideal generated by the relations \begin{gather*} 2\eta, \eta \nu, 4\nu, 2\nu^2, \nu^3 = \eta\epsilon, \\ 2\epsilon, \nu\epsilon, \epsilon^2, 2a_1^2 \eta, \nu a_1^2 \eta, \epsilon a_1^2 \eta, (a_1^2 \eta)^2 = c_4 \eta^2, \\ 2\kappa, \eta^2 \kappa, \nu^2 \kappa = 4\ol{\kappa}, \epsilon\kappa, \kappa^2, \kappa a_1^2 \eta, \\ \nu c_4, \nu c_6, \epsilon c_4, \epsilon c_6, a_1^2 \eta c_4 = \eta c_6, a_1^2 \eta c_6 = \eta c_4^2, \\ \kappa c_4, \kappa c_6, \ol{\kappa} c_4 = \eta^4 \Delta, \ol{\kappa} c_6 = \eta^2 (a_1^2 \eta) \Delta, 1728 \Delta = c_4^3 - c_6^2. \end{gather*} \end{theorem} \begin{remark}\label{c4-2c6} The elements $c_4$ and $2c_6$ are permanent cycles. There is a map $\mathrm{tmf} \to \mathrm{tmf}_1(3)$, where the target is complex oriented. The elements $c_4$ and $2c_6$ are nontrivial in $\pi_\ast \mathrm{tmf}_1(3)$. In fact, the image of the map $\mathrm{tmf}\to \mathrm{tmf}_1(3)$ consists of the elements $c_4$, $2c_6$, $c_4 \Delta^k$, and $2c_6 \Delta^k$ for $k\geq 1$, so these elements must be permanent cycles in the Adams-Novikov spectral sequence for $\mathrm{tmf}$. \end{remark} The ANSS for $\mathrm{tmf}$ is essentially determined from Toda's relation $\ol{\kappa} \nu^3 = 0$ in $\pi_{29} \S$. We will explain this statement in the rest of this section. The relation $\ol{\kappa} \nu^3 = 0\in \pi_{29} \S$ is enforced by the differential $d_5(\beta_{6/2}) = \ol{\kappa} \nu^3$ in the ANSS for the sphere (see \cite{isaksen-anss-charts}). Then: \begin{lemma} There is a relation $\beta_{6/2} = \Delta \nu^2$ in the $E_2$-page of the Adams-Novikov spectral sequence for $\mathrm{tmf}$. \end{lemma} This gives the differential $d_5(\Delta \nu^2) = \ol{\kappa} \nu^3$ in the ANSS for $\mathrm{tmf}$. By $\nu$-linearity, we have $d_5(\Delta) = \ol{\kappa}\nu$. Since $4\nu = 0$ in the $E_2$-page of the ANSS, the class $4\Delta$ survives. The relation $4\nu = \eta^3$ forces a $d_7$-differential on $4\Delta$. In summary: \begin{theorem}\label{d5} There are differentials $d_5(\Delta) = \ol{\kappa}\nu$ and $d_7(4\Delta) = \ol{\kappa} \eta^3$ in the ANSS for $\mathrm{tmf}$, and $\ol{\kappa}\nu = 0$ in $\pi_\ast \mathrm{tmf}$. \end{theorem} In particular, since $2\eta = 0$ in the ANSS, $8\Delta$ survives to the $E_8$-page. There are no more differentials, so it is a permanent cycle. Theorem \ref{d5} then shows that there is a Toda bracket $8\Delta \in \langle 8, \nu, \ol{\kappa}\rangle$ in $\pi_\ast \mathrm{tmf}$; this bracket is well-defined since $8\nu = 0$ in $\pi_\ast \S$. This can be expressed by the claim that $8\Delta$ may be expressed as a composite $$S^{24} \to \Sigma^{20} C\nu \xrightarrow{\ol{\kappa}} \mathrm{tmf},$$ where the first map is degree $8$ on the top cell. Similarly, $\Delta \eta \in \langle \eta, \nu, \ol{\kappa}\rangle$ in $\pi_\ast \mathrm{tmf}$; this bracket is well-defined since $\eta\nu = 0$ in $\pi_\ast \S$. Arguing in the same way, and using the spherical relations $2\nu^2 = 0$, $\epsilon\nu = 0$, we find that: \begin{prop}\label{toda-1} The following Toda brackets exist in $\pi_\ast \mathrm{tmf}$: \begin{enumerate} \item $8\Delta \in \langle 8, \nu, \ol{\kappa}\rangle$; \item $\Delta\eta = \langle \eta, \nu, \ol{\kappa}\rangle$; \item $2\Delta\nu = \langle 2\nu, \nu, \ol{\kappa}\rangle$; \item $\Delta\epsilon = \langle \epsilon, \nu, \ol{\kappa}\rangle$; \item $\Delta\eta \kappa = \langle \eta\kappa, \nu, \ol{\kappa}\rangle$; \item $\Delta\eta\ol{\kappa} = \langle \eta\ol{\kappa}, \nu, \ol{\kappa}\rangle$. \end{enumerate} None of these except the first have any indeterminacy. \end{prop} To describe the other elements in $\pi_\ast \mathrm{tmf}$, we adopt a slightly different approach from Section \ref{anss-tmf} --- we will not bother writing down all the generators of $\pi_\ast \mathrm{tmf}$ as Toda brackets of spherical elements unless it is convenient/necessary to do so. This is only to streamline exposition, although one can of course work this out at one's own leisure; see Remark \ref{james}. The $d_5$-differential on $\Delta$ forces a differential $d_5(\Delta^k) = k\Delta^{k-1} \ol{\kappa}\nu$. The $d_7$-differential $d_7(\Delta^4) = \Delta^3\ol{\kappa}\eta^3$ now implies that the classes $\{\Delta^{8k}, 2\Delta^{8k+4}, 4\Delta^{4k+2}, 8\Delta^{2k+1}\}$ survive to the $E_8 = E_9$-page. In fact, these are permanent cycles. A simple induction on $k$ shows: \begin{prop}\label{toda-2} Up to units, we have \begin{enumerate} \item $\Delta^{8k}\in \langle 2, \Delta^{8k-1} \eta^3, \ol{\kappa}\rangle$ with indeterminacy $2\mathbf{Z}_{(2)}\{\Delta^{8k}\}$; \item $2\Delta^{8k+4} \in \langle 2, \Delta^{8k+3} \eta^3, \ol{\kappa}\rangle$ with indeterminacy $2\mathbf{Z}_{(2)}\{2\Delta^{8k+4}\}$; \item $4\Delta^{4k+2} \in \langle 2, 2\Delta^{4k+1} \nu, \ol{\kappa}\rangle$ with indeterminacy $2\mathbf{Z}_{(2)}\{4\Delta^{4k+2}\}$; \item $8\Delta^{2k+1} \in \langle 8, \Delta^{2k} \nu, \ol{\kappa}\rangle$ with indeterminacy $8\mathbf{Z}_{(2)}\{8\Delta^{2k+1}\}$. \end{enumerate} \end{prop} We now turn to the other generators of $\pi_\ast \mathrm{tmf}$, listed in \cite[Figure 1.2]{mark-handbook}. \begin{prop}\label{toda-3} We have the following Toda brackets in $\pi_\ast \mathrm{tmf}$, each without any indeterminacy: \begin{enumerate} \item $\Delta^2 \nu = \langle \nu, 2\nu \Delta, \ol{\kappa}\rangle$; \item $\Delta^4 \eta = \langle \eta, \Delta^3 \eta^3, \ol{\kappa}\rangle$; \item $\Delta^4 \nu = \langle \nu, \Delta^3 \eta^3, \ol{\kappa}\rangle$; \item $\Delta^4 \epsilon = \langle \epsilon, \Delta^3 \eta^3, \ol{\kappa}\rangle$; \item $\Delta^4 \kappa = \langle \kappa, 4\nu, 3\nu, 2\nu, \nu, \ol{\kappa}^4\rangle$; \item $2\Delta^5 \nu = \langle 2\nu, \Delta^4 \nu, \ol{\kappa}\rangle$; \item $\Delta^5 \epsilon = \langle \epsilon, \Delta^4 \nu, \ol{\kappa}\rangle$; \item $\Delta^6\nu = \langle \nu, 2\Delta^5\nu, \ol{\kappa}\rangle$; \end{enumerate} \end{prop} \begin{remark}\label{products} We have excluded those elements which can be derived using the multiplicative structure. All other elements (except for $c_4 \Delta^k$ and $2c_6 \Delta^k$) can be expressed as products of the elements listed in Propositions \ref{toda-1}, \ref{toda-2}, and \ref{toda-3}. Importantly, the proofs of these propositions \emph{only} use $\ol{\kappa}\nu = 0$ in $\pi_\ast \mathrm{tmf}$ (via Theorem \ref{d5}) and multiplicative relations in the sphere. \end{remark} \begin{remark}\label{james} There are a lot of interesting multiplicative extensions, described in \cite[Section 8]{bauer-tmf}, but we will not need them. Each of these relations can be derived essentially only using the $d_5$-differential of Theorem \ref{d5} and the multiplicative structure in the homotopy of the sphere. We can recast these extensions from the following perspective. The spectrum $C\nu$ is the Thom spectrum of the Spin-bundle over $S^4$ determined by the generator of $\pi_4 \mathrm{BSpin}$. Since $\mathrm{BSpin}$ is an infinite loop space, this bundle extends to one over $\Omega S^5$, and hence over the intermediate James constructions $J_k(S^4)$ for all $k\geq 1$. Let $J_k(S^4)^\mu$ denote the Thom spectrum of this bundle, so $J_1(S^4)^\mu = C\nu$. Since $\{J_k(S^4)\}$ forms a filtered $\E{1}$-space, we obtain a map $C\nu^{\wedge k} \to J_k(S^4)^\mu$. Taking the product of $\ol{\kappa}:\Sigma^{20} C\nu\to \mathrm{tmf}$ with itself $k$ times defines a map $$\ol{\kappa}^k: \Sigma^{20k} J_k(S^4)^\mu \to \Sigma^{20k} J_k(S^4)^\mu \wedge \mathrm{tmf} \to \mathrm{tmf}.$$ Suppose $x\in \pi_\ast \S$ lifts to a map $S^{4k + |x|}\to J_k(S^4)^\mu$ which given by $x$ on the top ($4k$-dimensional) cell of $J_k(S^4)^\mu$. Then the composite $S^{4k + |x|}\to J_k(S^4)^\mu \wedge \mathrm{tmf}$ defines an element of the form $x \Delta^k\in \pi_{24k + |x|} \mathrm{tmf}$. For instance, we have: \begin{enumerate} \item $\Delta^2 \nu\in \langle \nu, 2\nu, \nu, \ol{\kappa}^2\rangle$; \item $\Delta^4 \eta\in \langle \eta, 4\nu, 3\nu, 2\nu, \nu, \ol{\kappa}^4\rangle$; \item $\Delta^4\nu\in \langle \nu, 4\nu, 3\nu, 2\nu, \nu, \ol{\kappa}^4\rangle$; \item $\Delta^4 \epsilon\in \langle \epsilon, 4\nu, 3\nu, 2\nu, \nu, \ol{\kappa}^4\rangle$; \item $\Delta^4 \kappa\in \langle \kappa, 4\nu, 3\nu, 2\nu, \nu, \ol{\kappa}^4\rangle$; \item $2\Delta^5 \nu\in \langle 2\nu, 5\nu, 4\nu, 3\nu, 2\nu, \nu, \ol{\kappa}^5\rangle$; \item $\Delta^5 \epsilon\in \langle \epsilon, 5\nu, 4\nu, 3\nu, 2\nu, \nu, \ol{\kappa}^5\rangle$. \end{enumerate} The brackets in (b), (c), and (e) appear in \cite[Corollary 8.7]{bauer-tmf}. The others may also be obtained by arguing as Bauer does: they are consequences of the bracket $\ol{\kappa} = \langle \nu, 2\nu, 3\nu, 4\nu, \nu, \eta\rangle = \langle \nu, 2\nu, 3\nu, 4\nu, \eta, \nu\rangle$ in $\pi_\ast \mathrm{tmf}$ (no indeterminacy), stated as \cite[Lemma 8.6]{bauer-tmf}. \end{remark} \begin{remark} Mark Behrens pointed out to us that Mahowald expected $\ol{\kappa}^7 = 0$ in $\pi_\ast \S_{(2)}$ (it is known that $\ol{\kappa}^6 = 0$ in $\pi_\ast \mathrm{tmf}_{(2)}$). It would be interesting to know whether this is related to the existence of $\Delta^8$ in $\pi_\ast \mathrm{tmf}$ via the approach given in Remark \ref{james}. \end{remark} Finally, we prove Proposition \ref{toda-3}. \begin{proof}[Proof of Proposition \ref{toda-3}] We prove this case-by-case. \begin{enumerate} \item Since $d_5(\Delta^2) = 2\Delta \ol{\kappa}\nu$ and $2\nu^2 = 0$ in the ANSS for the sphere, we find that $\Delta^2 \nu\in \langle \nu, 2\nu \Delta, \ol{\kappa}\rangle$. We provide the argument for indeterminacy in this case, but not for the others since the argument is essentially the same. The indeterminacy lives in $\ol{\kappa}\pi_{31} \mathrm{tmf} + \nu \pi_{48} \mathrm{tmf}$, but $\ol{\kappa}\pi_{31} \mathrm{tmf} \cong \nu \pi_{48} \mathrm{tmf} \cong 0$. \item Since $d_7(\Delta^4) = \Delta^3 \ol{\kappa}\eta^3$, we have $d_7(\Delta^4 \eta) = \Delta^3 \ol{\kappa}\eta^4 = 0$. Therefore, $\Delta^4 \eta\in \langle \eta, \Delta^3 \eta^3, \ol{\kappa}\rangle$. This bracket is well-defined because $\Delta^3 \eta^3 = 4\Delta^3 \nu$ exists in $\pi_\ast \mathrm{tmf}$, $\eta \nu = 0$ in the sphere, and $\ol{\kappa}\eta^3 = 0$ in $\mathrm{tmf}$. \item Similarly, since $d_7(\Delta^4) = \Delta^3 \ol{\kappa}\eta^3$, we have $d_7(\Delta^4\nu) = \Delta^3 \ol{\kappa}\eta^3 \nu = 0$. Therefore $\Delta^4 \nu\in \langle \nu, \Delta^3 \eta^3, \ol{\kappa}\rangle$. This bracket is well-defined because $\Delta^3 \eta^3$ exists in $\pi_\ast \mathrm{tmf}$, $\eta\nu = 0$ in the sphere, and $\ol{\kappa}\eta^3$ vanishes in $\mathrm{tmf}$. \item Similarly, since $d_7(\Delta^4) = \Delta^3 \ol{\kappa}\eta^3$, we have $d_7(\Delta^4\epsilon) = \Delta^3 \ol{\kappa}\eta^3\epsilon = 0$, since $2\epsilon = 0$. Therefore, $\Delta^4 \epsilon\in \langle \epsilon, \Delta^3 \eta^3, \ol{\kappa}\rangle$. This bracket is again well-defined. \item This is in \cite[Corollary 8.7]{bauer-tmf}, where $\Delta^4 \kappa$ is denoted $e[110,2]$. \item Since $d_5(\Delta^5) = 5\Delta^4 \ol{\kappa}\nu$, we have $d_5(2\Delta^5\nu) = 10\Delta^4 \ol{\kappa}\nu^2 = 0$, since $2\nu^2 = 0$. It follows that $2\Delta^5 \nu \in 5\langle 2\nu, \Delta^4 \nu, \ol{\kappa}\rangle$. This is well-defined because $\Delta^4\nu$ lives in $\pi_\ast \mathrm{tmf}$, $2\nu^2 = 0$ in the sphere, and $\ol{\kappa}\nu = 0$ in $\mathrm{tmf}$. \item Similarly, since $d_5(\Delta^5) = 5\Delta^4 \ol{\kappa}\nu$, we have $d_5(\Delta^5\epsilon) = 5\Delta^4 \ol{\kappa}\nu\epsilon = 0$, because $\epsilon\nu = 0$. It follows that $\Delta^5 \epsilon\in 5 \langle \epsilon, \Delta^4 \nu, \ol{\kappa}\rangle$, which is well-defined because $\Delta^4 \nu$ lives in $\pi_\ast \mathrm{tmf}$, $\epsilon\nu = 0$ in the sphere, and $\ol{\kappa}\nu = 0$ in $\mathrm{tmf}$. \item Since $d_5(\Delta^6) = 6\Delta^5 \ol{\kappa}\nu$, we have $d_5(\Delta^6\nu) = 6\Delta^5 \ol{\kappa}\nu^2 = 0$. We therefore have $\Delta^6\nu \in 3\langle \nu, 2\Delta^5 \nu, \ol{\kappa}\rangle$. This is well-defined because $2\nu \Delta^5$ lives in $\pi_\ast \mathrm{tmf}$, $2\nu^2 = 0$ in the sphere, and $\ol{\kappa}\nu = 0$ in $\mathrm{tmf}$. \end{enumerate} \end{proof} \subsection{The Adams-Novikov spectral sequence for $B$}\label{anss-B} In this section, we analyze the ring map $B\to \mathrm{tmf}$, and show that the generators of $\pi_\ast \mathrm{tmf}_{(2)}$ lift to $\pi_\ast B_{(2)}$. Again, we will localize at $p=2$ throughout. We begin by showing: \begin{prop}\label{Delta} There is an element in the $0$-line of the $E_2$-page of the ANSS for $B$ which lifts the element $\Delta$ in the $E_2$-page of the ANSS for $\mathrm{tmf}$. \end{prop} \begin{proof} We begin by recalling a representative for $\Delta$ in the cobar complex for $\mathrm{tmf}$ at $p=2$. Recall from Proposition \ref{delta-lift} that the Hopf algebroid $(\mathrm{BP}_\ast \mathrm{tmf}, \mathrm{BP}_\ast \mathrm{BP}\otimes_{\mathrm{BP}_\ast} \mathrm{BP}_\ast \mathrm{tmf})$ is isomorphic to the elliptic curve Hopf algebroid $(A, \Gamma)$ presenting the moduli stack of cubic curves. As in the $3$-complete setting (studied in Proposition \ref{delta-lift}), it is known that upon $2$-completion, every elliptic curve in Weierstrass form is isomorphic to one of the form $$y^2 + a_1 xy + a_3 y = x^3.$$ Consequently (as in the $3$-complete setting), the elliptic curve Hopf algebroid is isomorphic to a Hopf algebroid of the form $(A', \Gamma') = (\mathbf{Z}_2[a_1, a_3], A'[s,t]/I)$, where $I$ is some ideal consisting of complicated relations, and where the Hopf algebroid structure can be written down explicitly (as in \cite[Section 3]{bauer-tmf}). A straightforward calculation proves that the discriminant is then \begin{equation}\label{Delta-2} \Delta = a_1^3 a_3^3 - 27 a_3^4 = b_4^3 - 27 b_6^2. \end{equation} Turning to $B$, recall that we may identify $\mathrm{BP}_\ast B$ with $\mathrm{BP}_\ast[b_4, y_{6}]$. The map $B\to \mathrm{tmf}$ induces a map $(\mathrm{BP}_\ast B, \mathrm{BP}_\ast \mathrm{BP} \otimes_{\mathrm{BP}_\ast} \mathrm{BP}_\ast B) \to (A', \Gamma')$ of Hopf algebroids that sends $b_4$ to $b_4$ and $y_{6}$ to $b_6$ mod decomposables. It follows from Equation \eqref{Delta-2} that the element $\Delta$ already exists in the $0$-line of the Adams-Novikov spectral sequence for $B$. This finishes the proof of Proposition \ref{Delta}. \end{proof} Since the map $B\to \mathrm{tmf}$ is an equivalence in dimensions $\leq 12$ (Corollary \ref{12-equiv}), the elements $c_4$ and $2c_6$ lift to $\pi_\ast B$. We claim that $c_4 \Delta^k$ and $2c_6 \Delta^k$ live in $\pi_\ast B$; to show this, we argue as in Remark \ref{c4-2c6}. There is a map $B\to B\wedge DA_1 \simeq T(2)$ (see also Remark \ref{B-wood}), and there is a particular complex orientation of $\mathrm{tmf}_1(3)$ exhibiting it as a form of $\BP{2}$, which sits in a commutative diagram $$\xymatrix{ B \ar[r] \ar[d] & T(2) \ar[d] \ar[r] & \mathrm{BP} \ar[dl]\\ \mathrm{tmf} \ar[r] & \mathrm{tmf}_1(3). & }$$ There are choices of indecomposables $v_1$ and $v_2$ producing an isomorphism $\pi_\ast \mathrm{tmf}_1(3) \cong \mathbf{Z}_2[v_1, v_2]$ such that $c_4$ is sent to $v_1^4$ and $\Delta$ is sent to $v_2^4$. The map $T(2) \to \mathrm{tmf}_1(3)$ is surjective on homotopy, since $v_1$ and $v_2$ live in $\pi_\ast T(2)$. Since the elements $c_4$, $2c_6$, $c_4 \Delta^k$, and $2c_6 \Delta^k$ for $k\geq 1$ therefore already live in the homotopy of $T(2)$, we find by the same argument that these elements already live in the homotopy of $B$. We next turn to showing that the other elements of $\pi_\ast \mathrm{tmf}$ lift to $\pi_\ast B$. The following is the $2$-local analogue of Theorem \ref{d5-B}: \begin{theorem}\label{d5-B-2} There are differentials $d_5(\Delta) = \ol{\kappa}\nu$ and $d_7(4\Delta) = \ol{\kappa} \eta^3$ in the ANSS for $B$. Moreover, $\ol{\kappa}\nu = 0$ in $\pi_\ast B$, and $8\Delta$ is a permanent cycle. \end{theorem} \begin{proof} To prove the differentials, first note that the $d_7$-differential follows from the $d_5$-differential via the spherical relation $4\nu = \eta^3$; it therefore suffices to prove the $d_5$-differential. The class $\ol{\kappa}\nu$ lives in bidegree $(23,5)$ in the ANSS for $B$, since it lives in that bidegree in the ANSS for both the sphere and for $\mathrm{tmf}$. We claim that if $\ol{\kappa} \nu^2$ vanishes in $\pi_\ast B$, then the $d_5$-differential follows. It suffices to establish that $d_5(\Delta\nu) = \ol{\kappa} \nu^2$, since the desired $d_5$-differential then follows from $\nu$-linearity. Since $\ol{\kappa}\nu^2$ is the first element of filtration $5$ in the ANSS for the sphere which does not come from an $\eta$-tower on the $\alpha$-family elements (and such $\eta$-towers are truncated by ANSS $d_3$-differentials), there cannot be any differential off it. Moreover, if it is killed on any finite page in the ANSS, then it must in fact be zero in homotopy, since multiplicative extensions have to jump in filtration (and there is nothing of higher filtration). We need to show that $\ol{\kappa}\nu^2$ cannot be the target of a $d_r$-differential for $2\leq r\leq 4$; then the claimed $d_5$-differential on the $E_5$-page is forced by the same differential in the ANSS for $\mathrm{tmf}$. The algebraic Atiyah-Hirzebruch spectral sequence for the ANSS of $B$ implies that the only possibility for a differential is a $d_3$; but the source of any nontrivial $d_3$-differential vanishes when mapped to the ANSS for $\mathrm{tmf}$, so no such $d_3$-differential can exist. We now show that $\ol{\kappa} \nu^2$ vanishes in $\pi_\ast B$. For this, we argue as in \cite[Proposition 8.1]{hopkins-mahowald-eo2}. Namely, \cite[Lemma 8.2]{hopkins-mahowald-eo2} states that $\ol{\kappa} \nu^2 \in \langle \eta_4 \sigma, \eta, 2\rangle$. Recall that $\eta_4 = h_1 h_4$; by \cite[Table 21]{more-stable}, there is a $\sigma$-extension from $h_1 h_4$ to $h_4 c_0$. There is no indeterminacy in the above Toda bracket, so $\ol{\kappa} \nu^2$ will vanish if we show that $h_4 c_0$ vanishes in $\pi_{23}(B)$. In fact, it vanishes in the $E_2$-page of the ANSS for $B$: since the attaching map of the $8$-cell of $B$ is $\sigma$, the $\sigma$-extension on $\eta_4$ implies that $h_4 c_0$ is killed in the algebraic Atiyah-Hirzebruch spectral sequence for the ANSS of $B$ by a $d_1$-differential off the ANSS class $h_1 h_4$ supported on the cell in dimension $8$. \end{proof} Finally: \begin{proof}[Proof of Theorem \ref{main-thm-prime-2}] Theorem \ref{d5-B-2} implies that $8\Delta$ lifts to $\pi_\ast B$, and that all the brackets in $\pi_\ast \mathrm{tmf}$ in Propositions \ref{toda-1}, \ref{toda-2}, and \ref{toda-3} are well-defined in $\pi_\ast B$. The elements of $\pi_\ast \mathrm{tmf}$ in those propositions for which the bracket has no indeterminacy therefore lift to $\pi_\ast B$. By Remark \ref{products}, all that remains is to show that the constant multiples of the powers of $\Delta$ which live in $\pi_\ast \mathrm{tmf}$ in fact lift to $\pi_\ast B$. Theorem \ref{d5-B-2} implies that they lift up to indeterminacy, and this indeterminacy is specified in Proposition \ref{toda-2}. If $\Delta^{8k} + 2n\Delta^{8k} = (2n+1) \Delta^{8k}$ lifts for some $n\in \mathbf{Z}_{(2)}$, then so does $\Delta^{8k}$ since $2n+1$ is a $2$-local unit. Similarly, one finds that $2\Delta^{8k+4}$, $4\Delta^{4k+2}$, and $8\Delta^{2k+1}$ also lift to $\pi_\ast B$, as desired. \end{proof} \begin{remark}\label{ass-B} We briefly look at the Adams spectral sequence for $B$. The Steenrod module structure of the $20$-skeleton of $B$ is as in Figure \ref{B-cell}; since we are at the prime $2$, straight lines are $\mathrm{Sq}^4$, and curved lines correspond to $\mathrm{Sq}^8$ and $\mathrm{Sq}^{16}$, in order of increasing length. Using this, we can calculate the Adams spectral sequence in small dimensions. The Adams charts below were created with Hood Chatham's Ext calculator, and the Steenrod module file for $B$ in this range can be found at \url{http://www.mit.edu/~sanathd/input-B-leq-24-prime-2}. The $E_2$-page for $B$ in the first few dimensions is shown in Figure \ref{B-ass-2}; there are no classes in higher Adams filtration in stem $23$. The red class is $g = \ol{\kappa}$, and the purple lines are $d_2$-differentials. The differential on the class in stem $23$ already exists in the Adams spectral sequence for the sphere as $d_2(i) = h_0 Pd_0$. The other classes in stem $23$ except for the one in filtration $9$ are permanent cycles, and there is no multiplicative extension causing any of them to be $\ol{\kappa}\nu$ on homotopy. As shown in Figure \ref{B-ass-2-E3}, there is also a $d_3$-differential on the leftmost class $x_{24,1}^{(0)}$ in bidegree $(24,6)$ (which supports a $h_0$-tower) to the class in bidegree $(23,9)$; the class $h_0 x_{24,1}^{(0)}$ is a permanent cycle in the ASS for $B$ which is sent to $8\Delta$ in the ASS for $\mathrm{tmf}$. The class in bidegree $(25,5)$ is a permanent cycle in the ASS for $B$ which is sent to $\Delta\eta$ in the ASS for $\mathrm{tmf}$. \end{remark} \begin{remark}\label{connections} We now compare the approach of this paper with that of \cite{hopkins-mahowald-orientations}, where the $\E{1}$-ring $B$ was constructed under the name $\ol{X}$. The special case of our Theorem \ref{main-thm} for elements in $\pi_\ast \mathrm{tmf}$ of ANSS filtration $0$ is stated as \cite[Theorem 11.1]{hopkins-mahowald-orientations}, where a proof is only sketched. First, their Proposition 11.2 is a combination of our Theorem \ref{d5-B} and Theorem \ref{d5-B-2}. Secondly, their proof proceeds by calculating the mod $2$ Adams spectral sequence of $B$ in dimensions $\leq 24$ to show that $\ol{\kappa} \nu$ vanishes in the $2$-local homotopy of $B$. Their argument does not seem to resolve potential multiplicative extensions: as Figure \ref{B-ass-2-E3} shows, there are two possibilities for multiplicative extensions in the Adams spectral sequence which could make $\ol{\kappa}\nu$ nonzero in $\pi_\ast B_{(2)}$. (Namely, the classes in bidegrees $(23,6)$ and $(23,7)$ could represent $\ol{\kappa}\nu$.) Thirdly, Remark \ref{james} essentially gives a proof of their Lemma 11.5, which seems to appear without proof. \end{remark} \begin{figure} \includegraphics[scale=0.375]{B-image-gh2.png} \caption{$E_2$-page of the Adams spectral sequence for $B$. The class highlighted in red is $\ol{\kappa}$.} \label{B-ass-2} \end{figure} \begin{figure} \includegraphics[scale=0.375]{B-image-gh2-E3.png} \caption{$E_3$-page of the Adams spectral sequence for $B$. The class highlighted in red is $\ol{\kappa}$. There are no differentials in this range from the $E_4$-page onwards.} \label{B-ass-2-E3} \end{figure} \section{Applications}\label{apps} In this section, we study some applications of Theorem \ref{string-surj} and Theorem \ref{main-thm}. \subsection{A conjecture of Baker's} In \cite{baker-conjecture}, Baker constructed a certain collection of ${E_\infty}$-ring spectra $Mj_r$ with ${E_\infty}$-ring maps $Mj_1\to \H\mathbf{Z}$, $Mj_2\to \mathrm{bo}$, and $Mj_3\to \mathrm{tmf}$. He conjectured in \cite[Conjecture 6.2]{baker-conjecture} that the map $\pi_\ast Mj_3\to \pi_\ast \mathrm{tmf}$ is surjective. In this section, we show that this conjecture follows from Theorem \ref{main-thm}. We begin by recalling the definition of the ${E_\infty}$-ring spectra $Mj_r$. \begin{definition} Let $\mathrm{BO}\langle 2^r\rangle^{[2^{r+1}-1]}$ denote the $(2^{r+1}-1)$-skeleton of $\mathrm{BO}\langle 2^r\rangle$. Since $\mathrm{BO}\langle 2^r\rangle$ is an infinite loop space, the skeletal inclusion $\mathrm{BO}\langle 2^r\rangle^{[2^{r+1}-1]}\to \mathrm{BO}\langle 2^r\rangle$ induces a map $\Omega^\infty \Sigma^\infty \mathrm{BO}\langle 2^r\rangle^{[2^{r+1}-1]}\to \mathrm{BO}\langle 2^r\rangle$. The Thom spectrum of this map is the ${E_\infty}$-ring $Mj_r$. \end{definition} There is an evident ${E_\infty}$-map $Mj_r\to \mathrm{MO}\langle 2^r\rangle$, which in the case $r = 3$ defines an ${E_\infty}$-map $Mj_3\to \mathrm{MString}$. The following result proves the aforementioned conjecture of Baker's as an application of Theorem \ref{main-thm}: \begin{prop}[{\cite[Conjecture 6.2]{baker-conjecture}}] The composite $Mj_3\to \mathrm{MString} \to \mathrm{tmf}$ is surjective on homotopy, where $\mathrm{MString}\to \mathrm{tmf}$ is the Ando-Hopkins-Rezk orientation. \end{prop} \begin{proof} By Theorem \ref{main-thm}, it suffices to show that the map $B\to \mathrm{MString}$ factors through a map $B\to Mj_3$. Since $Mj_3$ is the Thom spectrum of a bundle over $\Omega^\infty \Sigma^\infty \mathrm{BString}^{[15]}$, this in turn follows from the existence of a map $N\to \Omega^\infty \Sigma^\infty \mathrm{BString}^{[15]}$ factoring $N\to \mathrm{BString}$. Recall that the map $N\to \mathrm{BString}$ was constructed via the map \eqref{fiber-sequence-map} of fiber sequences. The map $S^9\to \mathrm{B^2 String}$ factors as $S^9\to \Omega^{\infty-1} \Sigma^\infty \mathrm{BString}^{[15]}$, and so the map of fiber sequences in \eqref{fiber-sequence-map} factors as \begin{equation*} \xymatrix{ N \ar[r] \ar[d]_-\exists & \Omega S^{13} \ar[r] \ar[d] & S^9 \ar[d]\\ \Omega^\infty \Sigma^\infty \mathrm{BString}^{[15]} \ar[r] \ar[d] & \ast \ar[r] \ar[d] & \Omega^{\infty-1} \Sigma^\infty \mathrm{BString}^{[15]} \ar[d]\\ \mathrm{BString} \ar[r] & \ast \ar[r] & \mathrm{B^2 String}, } \end{equation*} as desired. \end{proof} \subsection{Hirzebruch's prize question} Another application of Theorem \ref{string-surj} was stated as \cite[Corollary 6.26]{hopkins-icm}, and provides an answer to Hirzebruch's prize question \cite[Page 86]{hirzebruch}. See also \cite{hopkins-mahowald-orientations}. \begin{corollary}\label{prize} There exists a $24$-dimensional compact smooth string manifold $M$ with $\hat{A}(M)=1$ and $\hat{A}(M,\tau_M\otimes \mathbf{C})=0$. \end{corollary} \begin{proof} By the discussion on \cite[Page 86]{hirzebruch}, the conditions on the $\hat{A}$-genus of $M$ are equivalent to the Witten genus of $M$ being $c_4^3 - 744 \Delta = \Delta(j - 744)$, where $j$ is the $j$-function. Let $M^8_0$ denote the Kervaire-Milnor almost parallelizable $8$-manifold; then, the $8$-manifold $-M^8_0 - 224 \mathbf{H}P^2$ (whose string cobordism class we will denote by $[N_{c_4}]$, where $N_{c_4}$ is the explicit manifold representative above) admits a string structure by \cite[Lemma 15]{laures-k1-local}. The map $\mathrm{tmf} \to \mathrm{bo}$ sends $c_4 \in \pi_8 \mathrm{tmf}$ to $v_1^4\in \pi_8 \mathrm{bo}$. By Lemma \ref{commute-lemma}, there is a commutative diagram: \begin{equation}\label{string-commute} \xymatrix{ \mathrm{MString} \ar[r] \ar[d] & \mathrm{MSpin} \ar[d]\\ \mathrm{tmf} \ar[r] & \mathrm{bo},} \end{equation} where the left vertical map is the Ando-Hopkins-Rezk orientation and the right vertical map is the Atiyah-Bott-Shapiro orientation. Consequently, the Witten genus of $-M^8_0 - 224 \mathbf{H}P^2$ is $c_4$. By Theorem \ref{string-surj}, the element $24\Delta\in \pi_{24} \mathrm{tmf}$ lifts to a class $[N_\Delta]$ in $\pi_{24} \mathrm{MString}$, where $N_\Delta$ is any manifold representative. Since $744\Delta = 31\cdot 24\Delta$, we conclude that the string cobordism class of the $24$-dimensional compact oriented smooth string manifold $N_{c_4}^3 - 31 N_\Delta$ has Witten genus $c_4^3 - 744 \Delta$, as desired. \end{proof} The proof of Corollary \ref{prize} utilized the following lemma. \begin{lemma}\label{commute-lemma} The diagram \eqref{string-commute} commutes, where the left vertical map is the Ando-Hopkins-Rezk orientation and the right vertical map is the Atiyah-Bott-Shapiro orientation. \end{lemma} \begin{proof} We need to show that the composite $\mathrm{MString}\to \mathrm{tmf} \to \mathrm{bo}$ comes from the Atiyah-Bott-Shapiro orientation. By \cite[Corollary 7.12]{koandtmf}, it suffices to show that this composite has the same characteristic series as the restriction of the $\hat{A}$-genus to string manifolds. There is an isomorphism $\pi_\ast \mathrm{bo}\otimes \mathbf{Q} \cong \mathbf{Z}[\beta^2]$, where $\beta^2$ lives in degree $4$ and is the square of the Bott element. Moreover, $\pi_\ast \mathrm{tmf} \otimes \mathbf{Q}$ is isomorphic to the ring of rational modular forms (of weight given by half the degree in $\pi_\ast \mathrm{tmf}\otimes \mathbf{Q}$) by \cite[Proposition 4.4]{bauer-tmf}. The map $\pi_\ast \mathrm{tmf} \otimes \mathbf{Q}\to \pi_\ast \mathrm{bo}\otimes \mathbf{Q}$ sends a modular form of weight $k$ with $q$-expansion $f(q) = \sum a_n q^n$ to the element $a_0 (\beta^2)^k\in \pi_{2k}\mathrm{bo}\otimes \mathbf{Q}$. Consequently, the composite $\pi_\ast\mathrm{MString}\to \pi_\ast \mathrm{tmf} \otimes \mathbf{Q} \to \pi_\ast \mathrm{bo}\otimes \mathbf{Q}$ sends a string manifold $M$ to the constant term of the $q$-expansion of its Witten genus. The lemma will therefore follow if this constant term is the $\hat{A}$-genus of $M$, but this follows from the discussion on \cite[Page 84]{hirzebruch}. \end{proof} \begin{remark} The modular form $c_4^3 - 744 \Delta$ is $\theta_{\Lambda_{24}} - 24 \Delta$, where $\Lambda_{24}$ is the $24$-dimensional Leech lattice and $\theta_{\Lambda_{24}}$ is its theta function. \end{remark} \begin{remark}\label{monster-action} The original motivation for Hirzebruch's prize question was to relate the geometry of the $24$-dimensional string manifold $M$ of Corollary \ref{prize} to representations of the monster group by constructing an action of the monster group on $M$. The question of constructing this action remains unresolved. \end{remark} \begin{remark} The disussion on \cite[Page 86]{hirzebruch} implies that $\hat{A}(N_\Delta) = 0$ and $\hat{A}(N_\Delta, \tau_{N_\Delta}\otimes \mathbf{C}) = 24$. It follows from \cite[Theorem A]{stolz-scalar} that $N_\Delta$ (which we may assume is simply-connected by surgery) admits a metric with positive scalar curvature. Since the Witten genus of $N_\Delta$ is nonzero, Stolz's conjecture in \cite{stolz-ricci} would imply that it does not admit a metric of positive-definite Ricci curvature. We do not know whether Stolz's conjecture holds in this particular case. Note, however, that there are examples of non-simply-connected manifolds which admit positive scalar curvature metrics but no metrics of positive-definite Ricci curvature: as pointed out to us by Stolz, a connected sum of lens spaces of dimension at least $3$ gives such a manifold. \end{remark} Corollary \ref{prize} may be generalized in the following manner. Recall the following definition from \cite[Section 2.3]{ono-web}. Let $j_1(z) = j(z) - 744$, and define $j_n(z)$ for $n\geq 2$ via $nT_n(j_1(z))$, where $T_n$ is the weight zero Hecke operator, acting on $f(z)$ via $$T_n f(z) = \sum_{d|n, ad = n} \sum_{b=0}^{d-1} f\left(\frac{az+b}{d}\right).$$ By \cite[Proposition 2.13]{ono-web}, $j_n(z)$ is a monic integral polynomial in $j(z)$ of degree $n$; for instance, $$j_2(z) = j(z)^2 - 1488 j(z) + 159768, \ j_3(z) = j(z)^3 - 2232 j(z)^2 + 1069956 j(z) - 36866976.$$ The functions $j_n(z)$ for $n\geq 0$ (where $j_0(z) = 1$) form a basis for the complex vector space of weakly holomorphic modular forms of weight $0$, and appear in the denominator formula for the monster Lie algebra. They may be defined by Faber polynomials on $j$. The generalization of Corollary \ref{prize} is as follows. \begin{prop}\label{generalization} For all $n\geq 0$, there is a $24n$-dimensional compact smooth string manifold $M^{24n}$ whose Witten genus is $\Delta^n j_n(z)$. \end{prop} \begin{remark} By arguing as in \cite[Pages 86-87]{hirzebruch}, we find that the twisted $\hat{A}$-genera of bundles over $M^{24n}$ constructed from the complexified tangent bundle of $M$ are integral linear combinations of dimensions of irreducible representations of the monster group; for instance, $\hat{A}(M^{48}; \Sym^2(\tau_M\otimes \mathbf{C}))$ is the coefficient of $q^2$ in $\Delta^2 j_2(z)$, which is $2\times (21296876 + 196883 + 1)$. More generally, $\hat{A}(M^{24n}; \Sym^2(\tau_M\otimes \mathbf{C}))$ is an integral linear combination of the dimensions of the $n$ smallest irreducible representations of the monster group. In light of Hirzebruch's original motivation for his prize question (see Remark \ref{monster-action}), it seems reasonable to conjecture that the $24n$-dimensional string manifold $M^{24n}$ admits an action of the monster group by diffeomorphisms. \end{remark} \begin{remark} It would be interesting to know if there is an analogue of Proposition \ref{generalization} for other McKay-Thompson series. \end{remark} Before providing the proof, we need the following result. \begin{theorem}\label{tmf-htpy} A modular form $f$ is in the image of the boundary homomorphism $\pi_\ast \mathrm{tmf} \to \mathrm{MF}_\ast$ in the Adams-Novikov spectral sequence if and only if it is expressible as an integral linear combination of monomials of the form $a_{ijk} c_4^i c_6^j \Delta^k$ with $i,k\geq 0$ and $j=0,1$, where $$a_{ijk} = \begin{cases} 1 & i>0,j=0\\ 2 & j=1\\ 24/\gcd(24,k) & i,j=0. \end{cases}$$ \end{theorem} \begin{proof} This is \cite[Proposition 4.6]{hopkins-icm}, proved in \cite{bauer-tmf}. \end{proof} \begin{proof}[Proof of Proposition \ref{generalization}] We have $$\Delta^n j_n(z) = \sum_{0\leq k\leq n} \alpha_k j(z)^k \Delta^n = \sum_{0\leq k\leq n} \alpha_k c_4^{3k} \Delta^{n-k},$$ for some integers $\alpha_k$ (where $\alpha_n = 1$). By Theorem \ref{string-surj} and Theorem \ref{tmf-htpy}, it suffices to show that the constant term $\alpha_0$ of $j_n(z)$ (when expanded as a monic integral polynomial in $j(z)$) is a multiple of $24/\gcd(24,n)$. The $j$-function vanishes on a primitive third root of unity, so $\alpha_0 = j_n(\omega)$. Its generating function is $$\sum_{n\geq 0} j_n(\omega) q^n = -\frac{j'(z)}{j(z)} = \frac{c_6}{c_4},$$ where $q = e^{2\pi i z}$ and $\omega$ is a primitive third root of unity. Let $m\geq 1$; we claim that the coefficients $a_{4,m}$ and $a_{6,m}$ of $q^m$ in the $q$-expansion for $c_4$ and $c_6$ (respectively) are divisible by $24/\gcd(24,m)$. Indeed, the expression for their $q$-expansion shows that $a_{4,n} = -240\sigma_3(n)$ and $a_{6,n} = 504\sigma_5(n)$, and both $240$ and $504$ are already divisible by $24$. Since the coefficient of $q^m$ in $1/c_4$ can be expressed as an integral linear combination of the $a_{4,k}$, it follows that the coefficient of $q^m$ for $m\geq 1$ in $c_6/c_4$ (which is $j_m(\omega)$) is divisible by $24$, and hence by $24/\gcd(24,m)$, as desired. \end{proof} \bibliographystyle{alpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} For a group $G$ let $G'$ be the commutator subgroup. For an element $g \in G'$ the \emph{commutator length} ($\mathrm{cl}(g)$) denotes the minimal number of commutators needed to express $g$ as their product. We define the \emph{stable commutator length (scl)} via $\mathrm{scl}(g) = \lim_{n \to \infty} \mathrm{cl}(g^n)/n$. Stable commutator length is well studied and has geometric meaning: Let $X$ be a topological space, let $\gamma$ be a loop in $X$ and let $[\gamma]$ be the conjugacy class in $\pi_1(X)$ corresponding to $\gamma$. Then both $\mathrm{cl}([\gamma])$ and $\mathrm{scl}([\gamma])$ measure the minimal complexity of an orientable surface needed to bound $\gamma$. The theory of these invariants is developed by Calegari in \cite{calegari:scl}. A group $G$ is said to have a \emph{gap in stable commutator length} if there is a constant $C>0$ such that either $\mathrm{scl}(g) = 0$ or $\mathrm{scl}(g) \geq C$ for every non-trivial $g \in G'$. If $G$ is non-abelian, such a constant necessarily satisfies $C \leq 1/2$. Similarly we may define gaps in scl for classes of groups. Many classes of ``negatively curved'' groups have a gap in scl; see Subsection \ref{subsec:spectral gaps of scl}. A common way of establishing gaps in $\mathrm{scl}$ is by constructing \emph{quasimorphisms} and using \emph{Bavard's Duality Theorem} (see \cite{bavard}): For an element $g \in G'$, \[ \mathrm{scl}(g) = \sup_{\bar{\phi} \in \mathcal{Q}(G)} \frac{\bar{\phi}(g)}{2 D(\bar{\phi}) } \] where $\mathcal{Q}(G)$ is the space of \emph{homogeneous quasimorphisms} and $D(\bar{\phi})$ is the \emph{defect of $\bar{\phi}$}; see Subsection \ref{subsec:quasimorphisms and Bavard's Duality} for the definitions and the precise statement. Though it is known that for every element $g \in G'$ the supremum in Bavard's Duality Theorem is obtained by so-called \emph{extremal quasimorphism} these maps are only known explicitly in special cases and are hard to construct; see \cite{calegari:extremal} and \cite{calegari:isometric}. In the first part of this paper, we will construct a family of extremal quasimorphisms on non-abelian free groups. Let $\mathbb{F}_2 = \langle \texttt{a}, \texttt{b} \rangle$ be the free group on generators $\texttt{a}$ and $\texttt{b}$ and let $w \in \mathbb{F}_2$ be such that it does not conjugate into $\langle \texttt{a} \rangle$ or $\langle \texttt{b} \rangle$. Then we will construct a homogeneous quasimorphism $\bar{\phi}$ such that $\bar{\phi}(w) \geq 1$ and $D(\bar{\phi})\leq 1$. This realises the well-known gap of $1/2$ in the case of non-abelian free groups. Our approach is as follows: instead of constructing more complicated quasimorphisms $\bar{\phi}$ we first ``simplify'' the element $w$. This simplification is formalised by functions $\Phi \colon G \to \mathcal{A} \subset \mathbb{F}_2$, called \emph{letter-quasimorphisms}; see Definition \ref{defn:letter quasihomomorphism}. Here $\mathcal{A}$ denotes the set of \emph{alternating words} in $\mathbb{F}_2 = \langle \texttt{a}, \texttt{b} \rangle$ with the generators $\texttt{a}$ and $\texttt{b}$. These are words where each letter alternates between $\{ \texttt{a}, \texttt{a}^{-1} \}$ and $\{ \texttt{b}, \texttt{b}^{-1} \}$. Letter-quasimorphisms are a special case of quasimorphisms between arbitrary groups defined by Hartnick--Schweitzer \cite{hartnick-schweitzer}. After this simplification, the extremal quasimorphisms on $G$ are obtained by pulling back most basic quasimorphisms $\mathbb{F}_2 \to \mathbb{R}$ via such letter-quasimorphisms $G \to \mathcal{A} \subset \mathbb{F}_2$. We further deduce that such quasimorphisms are induced by a circle action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ by examining the defect and using Theorem \ref{thm:ghys} due to Ghys; see also \cite{ghys}. We show: \begin{reptheorem}{thm:main} Let $G$ be a group, $g \in G$ and suppose that there is a letter-quasimorphism $\Phi \colon G \to \mathcal{A}$ such that $\Phi(g)$ is non-trivial and $\Phi(g^n) = \Phi(g)^n$ for all $n \in \mathbb{N}$. Then there is an explicit homogeneous quasimorphism $\bar{\phi} \colon G \to \mathbb{R}$ such that $\bar{\phi}(g) \geq 1$ and $D(\bar{\phi}) \leq 1$. If $G$ is countable then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ such that $[\delta^1 \bar{\phi}]=\rho^*\mathrm{eu}^\mathbb{R}_b \in \mathrm{H}^2_b(G,\mathbb{R})$, for $\mathrm{eu}^\mathbb{R}_b$ the real bounded Euler class. \end{reptheorem} By Bavard's Duality Theorem it is immediate that if such an element $g$ additionally lies in $G'$, then $\mathrm{scl}(g) \geq 1/2$. We state Theorem \ref{thm:main} separately as it may also be applied in other cases than the ones presented in this paper; see Remark \ref{rmk:chen-heuer}. Many groups $G$ have the property that for any element $g \in G'$ there is a letter-quasimorphism $\Phi_g \colon G \to \mathcal{A}$ such that $\Phi_g(g^n) = \Phi_g(g)^n$ where $\Phi_g(g) \in \mathcal{A}$ is non-trivial. We will see that residually free groups and right-angled Artin groups have this property. Note the similarities of this property with being \emph{residually free}; see Remark \ref{rmk:criterion for gaps}. In the second part of this paper we apply Theorem \ref{thm:main} to amalgamated free products using left-orders. A subgroup $H < G$ is called \emph{left-relatively convex} if there is an order on the left cosets $G/H$ which is invariant under left multiplication by $G$. We will construct letter-quasimorphisms $G \to \mathcal{A} \subset \mathbb{F}_2$ using the sign of these orders. We deduce: \begin{reptheorem}{thm:amalgamation} Let $A, B, C$ be groups, $\kappa_A \colon C \hookrightarrow A$ and $\kappa_B \colon C \hookrightarrow B$ injections and suppose both $\kappa_A(C) < A$ and $\kappa_B(C) < B$ are left-relatively convex. If $g \in A \star_C B$ does not conjugate into one of the factors then there is a homogeneous quasimorphism $\bar{\phi} \colon A \star_C B \to \mathbb{R}$ such that $\bar{\phi}(g) \geq 1$ and $D(\bar{\phi}) \leq 1$. If $G$ is countable then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ such that $[\delta^1 \bar{\phi}]=\rho^*\mathrm{eu}^\mathbb{R}_b \in \mathrm{H}^2_b(G,\mathbb{R})$, for $\mathrm{eu}^\mathbb{R}_b$ the real bounded Euler class. \end{reptheorem} It is possible to generalise Theorem \ref{thm:amalgamation} to graphs of groups; see Remark \ref{rmk:chen-heuer}. Again by Bavard's Duality Theorem we infer that any such $g$ which also lies in the commutator subgroup satisfies $\mathrm{scl}(g) \geq 1/2$. We apply this to right-angled Artin groups using the work of \cite{convexsub}. This way we prove: \begin{reptheorem}{thm:raags and scl} Every non-trivial element $g \in G'$ in the commutator subgroup of a right-angled Artin group $G$ satisfies $\mathrm{scl}(g) \geq 1/2$. This bound is sharp. \end{reptheorem} This is an improvement of the bound previously found in \cite{raags1} and \cite{raags2} who deduced a general bound of $1/24$ and a bound of $1/20$ if the right-angled Artin group is two dimensional. Every subgroup of a right-angled Artin group will inherit this bound. Such groups are now known to be an extremely rich class, following the theory of special cube complexes. See \cite{wise}, \cite{haglund-wise}, \cite{agol}, \cite{martin1} and \cite{martin2}. Stable commutator length may serve as an invariant to distinguish virtually special from special cube complexes. We collect some properties of the quasimorphisms constructed in this paper. \begin{itemize} \item The quasimorphisms are induced by circle actions $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ even though we do not construct the explicit action $\rho$. In particular, for every $e \not = g \in F'$ where $F$ is a non-abelian free group and $\mathrm{scl}(g) = 1/2$ there is an \emph{extremal} quasimorphism $\bar{\phi} \colon F \to \mathbb{R}$ induced by a circle action. It is unknown if for an arbitrary element $g \in F'$ there is an action of $F$ on the circle such that the induced quasimorphism is extremal with respect to $g$. \item There are relatively few quasimorphisms needed to obtain the $1/2$ bound in Theorem \ref{thm:raags and scl}. Let $G$ be a right-angled Artin group. Analysis of the constructions show that there is a sequence $\mathcal{S}_N \subset \mathcal{Q}(G)$ of nested sets of homogeneous quasimorphisms such that for every non-trivial cyclically reduced element $g$ of length less than $N$ there is some $\bar{\phi} \in \mathcal{S}_N$ such that $\bar{\phi}(g) \geq 1$ and $D(\bar{\phi}) \leq 1$. We see that $|\mathcal{S}_N| = O(N)$ and the rate-constant only depends on the number of generators of the right-angled Artin group. \item We obtain gap results even for elements which are not in the commutator subgroup. This suggests that it may be interesting to use Bavard's Dualtiy Theorem as a generalisation of stable commutator length to an invariant of general group elements $g \in G$. That is to study the supremum of $\bar{\phi}(g) / 2$ where $\bar{\phi}$ ranges over all homogeneous quasimorphisms with $D(\bar{\phi}) = 1$ which vanish or are bounded on a fixed generating set. In \cite{calegari:ziggurats} the authors studied this supremum over all homogeneous quasimorphisms induced by circle actions. They could prove that this supremum has certain qualitative similarities to the experimental values observed for $\mathrm{scl}$. This includes the experimental phenomenon that values with low denominators appear more frequently in $\mathrm{scl}$. \end{itemize} \subsection{Organisation} In Section \ref{sec:QM and Bavard} we introduce notation, definitions and basic or well established results on stable commutator length, quasimorphisms and Bavard's Duality Theorem. In Section \ref{sec:letter-thin triples and alpha, beta} we introduce \emph{letter-thin triples} which are a special type of triples $(x_1,x_2,x_3)$ of alternating elements $x_1, x_2, x_3 \in \mathcal{A}$. These will be crucial in estimating the defect of the quasimorphisms constructed in this paper. We will define maps $\alpha, \beta \colon \mathcal{A} \to \mathcal{A}$, which we show to respect letter-thin triples in Lemma \ref{lemma:alpha keeps thin.}. In Section \ref{sec:gaps via Letter-Quasimorphisms} we define and study \emph{letter-quasimorphisms} which are maps from arbitrary groups to alternating words of the free group. We deduce Theorem \ref{thm:main} which serves as a criterion for $\mathrm{scl}$-gaps of $1/2$ using these letter-quasimorphisms. Section \ref{sec:Left orders and convex subgroups} recalls some results of \cite{convexsub} on left relatively convex subgroups and orders on groups. Using the sign of these orders we are able to deduce $1/2$ gaps for amalgamated free products in Section \ref{sec:amalgamation}; see Theorem \ref{thm:amalgamation}. We show the $1/2$ gaps for right-angled Artin groups in Section \ref{sec:RAAGs and scl}; see Theorem \ref{thm:raags and scl}. \subsection*{Acknowledgements} I would like to thank my supervisor, Martin Bridson, for his help, support and guidance, and Ric Wade for his very helpful comments. I would further like to thank the referee for carefully reading the paper and recommending helpful improvements. Moreover, I would like to thank the Isaac Newton Institute for Mathematical Sciences in Cambridge for support and hospitality during the programme \emph{Non-Positive Curvature Group Actions and Cohomology} where work on this paper was undertaken. I would like to thank Danny Calegari for a stimulating conversation at the Isaac Newton Institute and Max Forester for pointing out errors in a previous version of this paper. This work was supported by EPSRC grant no EP/K032208/1. The author is also supported by the Oxford-Cocker Scholarship. \section{Quasimorphisms and Bavard's Duality Theorem} \label{sec:QM and Bavard} In Subsection \ref{subsec:quasimorphisms and Bavard's Duality} we give basic properties and definitions of stable commutator length and Bavard's Duality Theorem. In Subsection \ref{subsec:spectral gaps of scl} we collect some known results on (spectral) gaps in stable commutator length. In Subsections \ref{subsec:generalised qm} we define generalised quasimorphisms and in Subsection \ref{subsec:brooks and 2-brooks} the well known Brooks quasimorphisms. \subsection{Quasimorphisms and Bavard's Duality Theorem} \label{subsec:quasimorphisms and Bavard's Duality} For what follows Greek letters ($\alpha$, $\beta$) will denote generic functions, upper-case Roman letters ($A$, $B$) will denote generic groups, lower-case Roman letters ($a,b$) generic group elements and code-font ($\texttt{a}$, $\texttt{b}$) will denote letters in a free group. We will stick to this notation unless it is mathematical convention to do otherwise. Let $G$ be a group. For two elements $g,h \in G$ the \emph{commutator} is defined via $[g,h] = g h g^{-1} h^{-1}$ and the group generated by all such commutators is called the \emph{commutator subgroup} of $G$ and is denoted by $G'$. For an element $g \in G'$ we set \[ \mathrm{cl}(g) = \min \{ k \mid g = \prod_{i=1}^k [g_i,h_i]; g_i, h_i \in G \} \] the \emph{commutator length of $g$}. Note that $\mathrm{cl}$ is subadditive and hence the limit \[ \mathrm{scl}(g) = \lim_{n \to \infty} \frac{\mathrm{cl}(g^n)}{n} \] exists and is called \emph{stable commutator length (scl)}. See \cite{calegari:scl} for a comprehensive reference on scl. Calegari showed that in non-abelian free groups scl can be computed efficiently in polynomial time and is rational. For a group $G$, the set of possible values of $\mathrm{scl}$ is not fully understood, even for non-abelian free groups. See Subsection \ref{subsec:spectral gaps of scl} for a discussion on gaps in $\mathrm{scl}$. We note the following basic property: \begin{prop} $\mathrm{scl}$ is monotone and characteristic. That is, for any group homomorphism $\theta \colon G \to H$ and any $g \in G$ we have $\mathrm{scl}(g) \geq \mathrm{scl}( \theta(g))$. If $\theta$ is an automorphism, then $\mathrm{scl}(g) = \mathrm{scl}( \theta(g))$. \end{prop} A \emph{quasimorphism} is a map $\phi \colon G \to \mathbb{R}$ such that there is a constant $D$, such that for all $g, h \in G$, $|\phi(g) + \phi(h) - \phi(gh) | \leq D$. The infimum of all such $D$ is called the \emph{defect} of $\phi$ and denoted by $D(\phi)$. Note that quasimorphisms form a vector space under pointwise addition and scalar multiplication. A quasimorphism $\bar{\phi}$ is said to be \emph{homogeneous} if $\bar{\phi}(g^n) = n \bar{\phi}(g)$ for all $n \in \mathbb{Z}$, $g \in G$. In particular, $\bar{\phi}$ is \emph{alternating}, i.e. $\bar{\phi}(g^{-1}) = - \bar{\phi}(g)$ for all $g \in G$. Every quasimorphism $\phi \colon G \to \mathbb{R}$ is boundedly close to a unique homogeneous quasimorphism $\bar{\phi} \colon G \to \mathbb{R}$ defined via \[ \bar{\phi}(g) := \lim_{n \to \infty} \frac{\phi(g^n)}{n} \] and we call $\bar{\phi}$ the \emph{homogenisation} of $\phi$. Homogeneous quasimorphisms on $G$ form a vector space, denoted by $\mathcal{Q}(G)$. \begin{prop} \label{prop:defect of homogenisation doubles} Let $\phi \colon G \to \mathbb{R}$ be a quasimorphism and let be $\bar{\phi}$ be its homogenisation. Then $D(\bar{\phi}) \leq 2 D(\phi)$. \end{prop} See Lemma 2.58 of \cite{calegari:scl} for a proof. For what follows we will \emph{always} decorate homogeneous quasimorphisms with a bar-symbol, even if they are not explicitly induced by a non-homogeneous quasimorphism. We refer the reader to \cite{frigerio} and \cite{calegari:scl} for references on quasimorphisms and stable commutator length. If $g_1$ and $g_2$ lie in the same conjugacy class of $G$ then $\bar{\phi}(g_1) = \bar{\phi}(g_2)$, hence homogeneous quasimorphisms are class functions. The key ingredient to calculate gaps in stable commutator length is Bavard's Duality Theorem: \begin{thm} \label{thm:Bavards duality} \cite{bavard} Let $G$ be a group and let $g \in G'$. Then \[ \mathrm{scl}(g) = \sup_{\bar{\phi} \in \mathcal{Q}(G)} \frac{|\bar{\phi}(g)|}{2 D(\bar{\phi})}. \] \end{thm} See \cite{calegari:scl} for a proof and a generalisation of this statement. This theorem allows us to estimate stable commutator length using (homogeneous) quasimorphisms. It can be shown that the supremum in Bavard's Duality Theorem is obtained. That is, for every element $g \in G'$ there is a homogeneous quasimorphism $\bar{\phi}$ with $D(\bar{\phi}) = 1$ such that $\mathrm{scl}(g) = \bar{\phi}(g)/2$. These quasimorphisms are called \emph{extremal} and were studied in \cite{calegari:extremal}. \subsection{(Spectral) Gaps in scl} \label{subsec:spectral gaps of scl} It was shown by \cite{DH} that every non-trivial element $w \in \mathbb{F}_n'$ in the commutator subgroup of a non-abelian free group satisfies that $\mathrm{scl}(w) \geq 1/2$ and that every non-trivial commutator $[w_1,w_2] \in \mathbb{F}_n$ satisfies $\mathrm{scl}([w_1,w_2]) = 1/2$. Using the monotonicity of scl we may conclude that for an arbitrary group $G$ every commutator $[g_1,g_2] \in G'$ satisfies $\mathrm{scl}([g_1,g_2]) \leq 1/2$. On the other hand, some elements $g \in G'$ satisfy $\mathrm{scl}(g) = 0$ for trivial reasons, for example if they are torsions or a positive power of this element is conjugate to a negative power of this element. We call the infimum of $\{ scl(g) > 0 \mid g \in G' \}$ the \emph{gap of $\mathrm{scl}$}, often called the \emph{spectral gap}, and say that a group \emph{has a gap in scl} if this number is positive. Many classes of ``negatively curved'' groups have a gap in scl. \begin{itemize} \item Residually free groups have a gap of exactly $1/2$ by Duncan and Howie \cite{DH}. \item Mapping class groups of closed orientable surfaces, possibly with punctures, have a gap depending on the surface; see \cite{BBF}. \item Hyperbolic groups have a gap which depends on the hyperbolicity constant and the number of generators; see \cite{calegari_fujiwara}. \item Some classes of groups may not have a uniform gap but the first accumulation point on conjugatcy classes of positive $\mathrm{scl}$ may be uniformly bounded away from zero. For example for non-elementary, torsion-free hyperbolic groups and for the fundamental groups of closed hyperbolic manifolds this accumulation point is at least $1/12$; see Theorem B of \cite{calegari_fujiwara} and see Theorem 3.11 of \cite{calegari:scl}. \item Sometimes, one may control $\mathrm{scl}$ on certain generic group elements. If $G = G_1 \star G_2$ is the free product of two torsion-free groups $G_1$ and $G_2$ and $g \in G'$ does not conjugate into one of the factors, then $\mathrm{scl}(g) \geq 1/2$; see \cite{chen} and \cite{Ivanov-Klyachko}. Similarly, if $G = A \star_C B$ and $g \in G'$ does not conjugate into one of the factors and such that $C g C$ does not contain a copy of any conjugate of $g^{-1}$ then $\mathrm{scl}(g) \geq 1/12$. See Theorem D of \cite{calegari_fujiwara} for the first proof of this gap and \cite{scl_in_bs_groups} for the sharp gap and a generalisation to graphs of groups. \item Baumslag--Solitar groups have a sharp uniform gap of $1/12$; see \cite{scl_in_bs_groups}. \end{itemize} Note that this list is not meant to be comprehensive. By monotinicity, having a gap in scl may serve as an obstruction for group embeddings. If $H$ and $G$ are non-abelian groups with $H \hookrightarrow G$ and $C > 0$ is such that every non-trivial element $g \in G'$ satisfies $\mathrm{scl}(g) \geq C$ then so does every non-trivial element of $H'$. \subsection{Generalised Quasimorphisms} \label{subsec:generalised qm} It is possible to generalise quasimorphisms $\phi \colon G \to \mathbb{R}$ to maps $\Phi \colon G \to H$ for $G,H$ \emph{arbitrary groups}. Two quite different proposals for such a generalisation come from Fujiwara--Kapovich (\cite{fuji-kapo}) and Hartnick--Schweitzer (\cite{hartnick-schweitzer}). Whereas the former maps are quite restrictive, the latter type of maps are very rich. The ``letter-quasimorphisms'' defined in this paper will be quasimorphisms as defined by Hartnick--Schweitzer as shown at the end of Subsection \ref{subsec:letter-quasimorphisms and well-behaved letter quasimorphisms}. Adapting the definition of \cite{hartnick-schweitzer} we call a map $\Phi \colon G \to H$ between arbitrary groups a \emph{quasimorphism} if for every (ordinary) quasimorphism $\alpha \colon H \to \mathbb{R}$, $\alpha \circ \Phi \colon G \to \mathbb{R}$, i.e. the pullback of $\alpha$ to $G$ via $\Phi$, defines a quasimorphism on $G$. A map $\phi \colon G \to \mathbb{R}$ is a quasimorphism in the sense of Hartnick--Schweitzer if and only if it is an ordinary quasimorphism. The quasimorphisms $G \to \mathbb{R}$ constructed in this paper will be all pullbacks of the most basic quasimorphisms $\mathbb{F}_2 \to \mathbb{R}$ via letter-quasimorphisms $G \to \mathcal{A} \subset \mathbb{F}_2$; see Remark \ref{rmk:quasimorphisms are pullback of hs qm}. \subsection{Brooks Quasimorphisms} \label{subsec:brooks and 2-brooks} For what follows $\mathbb{F}_2$ will denote the group on two generators $\texttt{a}$ and $\texttt{b}$. A word $w = \texttt{x}_1 \cdots \texttt{x}_k \in F(\{ \texttt{a}, \texttt{b} \} ) = \mathbb{F}_2$ is called \emph{reduced} if it has no backtracking. Unless stated otherwise \emph{we will always assume that elements in the free group are represented by reduced words}. A sub-letter $\texttt{x}_i$ is called a \emph{power of $\texttt{a}$ (or $\texttt{b}$)} if $\texttt{x}_i \in \{ \texttt{a}, \texttt{a}^{-1} \}$ (or $\texttt{x}_i \in \{ \texttt{b}, \texttt{b}^{-1} \}$). Furthermore, $w$ is called \emph{alternating} if the letters of $w$ alternate between an element in $\{ \texttt{a}, \texttt{a}^{-1} \}$ and an element in $\{ \texttt{b}, \texttt{b}^{-1} \}$. The set of alternating words of $\mathbb{F}_2 = \langle \texttt{a}, \texttt{b} \rangle$ is denoted by $\mathcal{A}$. A word $v = \texttt{y}_1 \cdots \texttt{y}_l$ is called \emph{subword} of $w = \texttt{x}_1 \cdots \texttt{x}_k$ if $l \leq k$ and there is an $n \in \{0, \ldots, k-l \}$ such that $\texttt{y}_i = \texttt{x}_{i+n}$ for every $i \in \{ 1, \ldots, l \}$. Let $w \in \mathbb{F}_2$, $g \in \mathbb{F}_2$ be arbitrary reduced words. Let $\nu_w(g)$ be the number of (possibly overlapping) subwords of $w$ in the reduced word $g$. Then the function \[ \eta_w = \nu_w - \nu_{w^{-1}} \] is a quasimorphism, called \emph{Brooks quasimorphism}. These maps were introduced by Brooks in \cite{brooks} to show that the vector space of (homogeneous) quasimorphisms of the free group is infinite dimensional. Observe that for a letter $\texttt{x}$, the map $\eta_\texttt{x}$ is a homomorphism. Brooks quasimorphisms have been vastly generalised to other cases and groups; see \cite{EF} and \cite{me}. Let $g,h \in \mathbb{F}_2$ and let $(c_1, c_2, c_3)$ be reduced words such that $g = c_1^{-1} c_2$, $h= c_2^{-1} c_3$, $h^{-1} g^{-1} = c_3^{-1} c_1$ are reduced words. Then it is easy to see that the value $\eta_w(g) + \eta_w(h) - \eta_w(gh)$ only depends on the first $|w|-1$ letters of the words $c_1, c_2, c_3$, hence the defect is indeed finite. There is an extremal Brooks quasimorphism to the basic commutator $[ \texttt{a}, \texttt{b} ]$, namely $\eta_{\texttt{a} \texttt{b}} - \eta_{ \texttt{b} \texttt{a} }$. This and homomorphisms will be the only Brooks quasimorphisms occurring in this paper. \begin{exmp} \label{exmp: extemal brooks quasimorphisms on free group} Consider $[\texttt{a}, \texttt{b}]$, the commutator of the letters $\texttt{a}$ and $\texttt{b}$. Then it is easy to see that the quasimorphism $\eta_0 = \eta_{\texttt{a} \texttt{b}} - \eta_{\texttt{b} \texttt{a}}$ satisfies that $\eta_0([\texttt{a},\texttt{b}])=\bar{\eta_0}([\texttt{a},\texttt{b}])=2$, $D(\eta_0)=1$ and $D(\bar{\eta}_0) = 2$. As usual, $\bar{\eta}_0$ denotes the homogenisation of $\eta_0$. By Bavard's Duality Theorem (\ref{thm:Bavards duality}) we may estimate $\mathrm{scl}([\texttt{a},\texttt{b} ]) \geq \bar{\eta}([\texttt{a}, \texttt{b}])/2 D(\bar{\eta}) = 1/2$ and, as $\mathrm{scl}([\texttt{a},\texttt{b}]) \leq 1/2$ (see Subsection \ref{subsec:spectral gaps of scl}), we conclude $\mathrm{scl}([\texttt{a},\texttt{b}])=1/2$ and see that $\bar{\eta}_0$ is extremal. \end{exmp} \subsection{Bounded Cohomology} We define (bounded) cohomology of discrete groups and state its basic properties. We refer the reader to \cite{frigerio} for a thorough treatment of the bounded cohomology of discrete groups. Let $G$ be a group, let $V$ be a $\mathbb{Z} G$-module and set $C^n(G, V) = \{ f \colon G^n \rightarrow V \}$. For what follows, $V = \mathbb{Z}$ or $V = \mathbb{R}$ and we think of $V$ as a $\mathbb{Z} G$-module with trivial action. Let $\| \cdot \|_\infty$ be the $l^\infty$-norm on $C^n(G, \mathbb{R})$ and set \[ C^n_b(G, V) = \{ f \in C^n(G,V) \mid \|f \|_\infty < \infty \} \subset C^n(G,V) \] Define the well-known coboundary maps for the inhomogeneous resolution $\delta^n \colon C^n(G, V) \rightarrow C^{n+1}(G, V)$ via \begin{equ*}{rl} \delta^n(f) (g_1, \ldots, g_{n+1}) = &f(g_2, \ldots, g_{n+1}) + \sum_{i=1}^n (-1)^i f(g_1, \ldots, g_i g_{i+1}, \ldots, g_{n+1}) + \cdots \\ &\cdots (-1)^{n+1} f(g_1, \ldots, g_n) \end{equ*} and note that $\delta^n$ restricts to $\delta^n \colon C^n_b(G,V) \to C^{n+1}_b(G,V)$. Set \begin{align*} Z^n_{(b)}(G,V) &= \textrm{ker} \big( \delta^n \colon C_{(b)}^n(G, V) \to C_{(b)}^{n+1}(G,V) \big) \\ B^n_{(b)}(G,V) &= \textrm{im} \left( \delta^{n-1} \colon C_{(b)}^{n-1}(G, V) \to C_{(b)}^n(G,V) \right) \end{align*} the (bounded) cocycles $Z^n_{(b)}(G,V)$ and the (bounded) coboundaries $B^n_{(b)}(G,V)$. Then $\mathrm{H}^n(G,V) = Z^n(G,V) / B^n(G,V)$ is called the \emph{ordinary cohomology} and $\mathrm{H}_b^n(G,V) = Z_b^n(G,V) / B_b^n(G,V)$ is called the \emph{bounded cohomology} of $G$ with coefficients in $V$. The embedding $C_b^n(G, \mathbb{R}) \hookrightarrow C^n_(G, \mathbb{R})$ induces a map $c^n \colon \mathrm{H}_b^n(G,V) \to \mathrm{H}^n(G,V)$ called the \emph{comparison map}. Let $\phi \colon G \to \mathbb{R}$ be a quasimorphism. Then $\delta^1 \phi \in C^2_b(G,\mathbb{R})$ is a bounded $2$-cocycle and hence induces a class $[\delta^1 \phi] \in H^2_b(G, \mathbb{R})$. These classes are exactly the classes which lie in the kernel of the comparison map $c^2 \colon \mathrm{H}_b^2(G,\mathbb{R}) \to \mathrm{H}^2(G,\mathbb{R})$ described above. (Bounded) Cohomology is functorial in both slots: Any homomorphism $\alpha \colon G \to H$ induces a well defined map $\alpha^* \colon \mathrm{H}_{(b)}^n(H,V) \to \mathrm{H}_{(b)}^n(G,V)$ on (bounded) cohomology by pulling back cocycles on $H$ to cocycles on $G$ via $\alpha$. The map $\mathbb{Z} \to \mathbb{R}$ induces a \emph{change of coefficients} map $\mathrm{H}_{(b)}^n(G,\mathbb{Z}) \to \mathrm{H}_{(b)}^n(G,\mathbb{R})$. \subsection{Bounded Cocycles via Actions on the Circle and Vice Versa} This subsection states a classical correspondence between bounded cohomology and circle actions developed by Ghys; see \cite{ghys}. Also, see \cite{cicle_quasimorph_modern} for a thorough treatment of this topic. Let $\mathrm{Homeo}^+(S^1)$ be the group of orientation preserving actions on the circle and let \[ \mathrm{Homeo}^+_\mathbb{Z}(\mathbb{R}) = \{ f \in \mathrm{Homeo}^+(\mathbb{R}) \mid \forall n \in \mathbb{Z}, x \in \mathbb{R}: f(x+n) = f(x)+n \} \] the subgroup of the orientation preserving homeomorphisms of the real line that commutes with the integers. By identifying $S^1 \cong \mathbb{R} / \mathbb{Z}$ we obtain a surjection $\pi \colon \mathrm{Homeo}^+_\mathbb{Z}(\mathbb{R}) \to \mathrm{Homeo}^+(S^1)$. The kernel of $\pi$ is isomorphic to $\mathbb{Z}$ via $\iota \colon n \mapsto f_n$ with $f_n \colon x \mapsto x+n$ and lies in the center of $\mathrm{Homeo}^+_\mathbb{Z}(\mathbb{R})$. Hence \[ \begin{tikzcd} 0 \arrow[r] & \mathbb{Z} \arrow[r,"\iota"] & \mathrm{Homeo}^+_\mathbb{Z}(\mathbb{R}) \arrow[r,"\pi"] & \mathrm{Homeo}^+(S^1) \arrow[r] \arrow[l, bend right, "\sigma"] & 1 \end{tikzcd} \] is a central extension and hence corresponds to a class $\mathrm{eu} \in\mathrm{H}^2(\mathrm{Homeo}^+(S^1), \mathbb{Z})$ the \emph{Euler-class}. This class is represented by the cocycle $\omega \colon (g,h) \mapsto \sigma(g) \sigma(h) \sigma(gh)^{-1} \in \mathbb{Z}$ by identifying $\mathbb{Z}$ with $\mathrm{ker}(\pi)=\mathrm{im}(\iota)$ and where $\sigma$ is any set-theoretic section $\sigma \colon \mathrm{Homeo}^+(S^1) \to \mathrm{Homeo}^+_\mathbb{Z}(\mathbb{R})$. Let $\sigma_b$ be the unique section such that $\sigma_b(f)(0) \in [0,1)$. Then $\omega_b(g,h) = \sigma_b(g) \sigma_b(h) \sigma_b(gh)^{-1}$ satisfies that for all $g,h \in G$, $\omega_b(g,h) \in \{ 0,1 \}$ and hence is $\omega_b$ is a \emph{bounded} cocycle. We call the class $\mathrm{eu}_b = [\omega_b] \in \mathrm{H}^2_b(\mathrm{Homeo}^+(S^1), \mathbb{Z})$ the \emph{bounded Euler class}. See \cite{me_extensions} for the correspondence of group extensions and bounded cohomology. The image of $\mathrm{eu}_b$ under the change of coefficients $\mathrm{H}^2_b(\mathrm{Homeo}^+(S_1), \mathbb{Z}) \to \mathrm{H}^2_b(\mathrm{Homeo}^+(S_1), \mathbb{R})$ is called the \emph{real bounded Euler class} and denoted by $\mathrm{eu}^\mathbb{R}_b$. Any action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ induces a bounded class via $\rho^*\mathrm{eu}_b \in \mathrm{H}^2_b(G,\mathbb{Z})$ (resp. $\rho^*\mathrm{eu}_b^\mathbb{R} \in \mathrm{H}^2_b(G, \mathbb{R})$). Ghys (\cite{ghys}) showed that two actions $\rho_1, \rho_2 \colon G \to \mathrm{Homeo}^+(S_1)$ are \emph{semi-conjugate} if and only if $\rho_1^*\mathrm{eu}_b=\rho_2^*\mathrm{eu}_b \in \mathrm{H}^2_b(G,\mathbb{Z})$. See \cite{cicle_quasimorph_modern} for a precise definition of semi-conjugacy. Similarly, we have $\rho^* \mathrm{eu}^\mathbb{R}_b = 0 \in \mathrm{H}^2_b(G,\mathbb{R})$ if and only if $\rho$ is semi-conjugate to an action by rotations. The class $\rho^*\mathrm{eu}_b \in \mathrm{H}^2_b(G, \mathbb{Z})$ may be represented by a cocycle $\rho^*\omega_b \in Z^2_b(G, \mathbb{Z})$ such that for every $g,h \in G$, $\rho^*\omega_b(g,h) \in \{0,1 \}$. Surprisingly, a converse statement holds: \begin{thm} \label{thm:ghys} \footnote{See \cite{ghys}, see also Theorem 1.3 of \cite{cicle_quasimorph_modern}} Let $G$ be a discrete countable group and let $[\omega] \in H^2_b(G,\mathbb{Z})$ be a class represented by a cocycle $\omega$, such that for all $g,h \in G$, $\omega(g,h) \in \{ 0, 1 \}$. Then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ such that $\rho^*\mathrm{eu}_b = [\omega] \in \mathrm{H}_b^2(G, \mathbb{Z})$. \end{thm} This allows us to show that certain quasimorphisms are induced by a circle action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ without explicitly constructing $\rho$. \section{Letter-Thin Triples and the Maps $\alpha$ and $\beta$} \label{sec:letter-thin triples and alpha, beta} The set of alternating words $\mathcal{A} \subset \mathbb{F}_2$ is the set of all words in the letters $\texttt{a}$ and $\texttt{b}$ where the letters alternate between $\{ \texttt{a}, \texttt{a}^{-1} \}$ and $\{ \texttt{b}, \texttt{b}^{-1} \}$. For example, $\texttt{a} \texttt{b} \texttt{a}^{-1} \texttt{b}^{-1}$ is an alternating word but $\texttt{a} \texttt{b} \texttt{b} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{b}^{-1}$ is not. We will define maps $\alpha, \beta \colon \mathcal{A} \to \mathcal{A}$ and develop their basic properties in Subsection \ref{subsec:alpha and beta}. We also define a version of these maps on $\bar{\mathcal{A}}_0$, which are conjugatcy classes of \emph{even}-length words of $\mathcal{A}$ to understand how $\alpha, \beta$ behave on powers; see Proposition \ref{prop:powers of alpha, beta}. In Subsection \ref{subsec:letter-thin and alpha and beta} we define certain triples $(x_1,x_2,x_3)$ where $x_1,x_2,x_3 \in \mathcal{A}$ called \emph{letter-thin triples}. We think of them as the sides of (thin) triangles; see Figure \ref{fig:triangles}. Note that such triples are not triangles in the usual sense, i.e. the sides $x_1,x_2,x_3$ do \emph{not} correspond to the geodesics between three points in some metric space like a Cayley graph. Letter-thin triples will be crucial in estimating the defect of the quasimorphisms we construct in this paper. We will see that $\alpha$ and $\beta$ map letter-thin triples to letter-thin triples in Lemma \ref{lemma:alpha keeps thin.}, which is the main technical result of this paper. In Subsection \ref{subsec:brooks qm, homomorphisms and letter-thin} we see that basic Brooks quasimorphisms and homomorphisms behave well on letter-thin triples. We usually prove the properties we state for $\alpha, \beta$ just for $\alpha$ and note that all properties may be deduced analogously for $\beta$ by interchanging $\texttt{a}$ and $\texttt{b}$; see Proposition \ref{prop:cutting words}, (\ref{prop-case:interchange a b}). \subsection{The Maps $\alpha$ and $\beta$, Definition and Properties} \label{subsec:alpha and beta} We will describe two maps $\alpha, \beta \colon \mathcal{A} \to \mathcal{A}$ sending alternating words to alternating words. Define $\mathcal{S}_\texttt{a}^+, \mathcal{S}_\texttt{a}^- \subset \mathcal{A}$ as \begin{align*} \mathcal{S}_\texttt{a}^+ & = \{ \texttt{a} \texttt{y}_1 \texttt{a} \cdots \texttt{a} \texttt{y}_l \texttt{a} \mid \texttt{y}_i \in \{ \texttt{b}, \texttt{b}^{-1} \}, l \in \mathbb{N} \} \\ \mathcal{S}_\texttt{a}^- & = \{ \texttt{a}^{-1} \texttt{y}_1 \texttt{a}^{-1} \cdots \texttt{a}^{-1} \texttt{y}_l \texttt{a}^{-1} \mid \texttt{y}_i \in \{ \texttt{b}, \texttt{b}^{-1} \}, l \in \mathbb{N} \} \end{align*} that is, $\mathcal{S}_\texttt{a}^+$ is the set of alternating words which start and end in $\texttt{a}$ and don't contain the letter $\texttt{a}^{-1}$ and $\mathcal{S}_\texttt{a}^-$ is the set of alternating words which start and end in $\texttt{a}^{-1}$ and don't contain the letter $\texttt{a}$. Note that we assume $0 \in \mathbb{N}$, i.e. $\texttt{a} \in \mathcal{S}_\texttt{a}^+$ and $\texttt{a}^{-1} \in \mathcal{S}_\texttt{a}^-$. Analogously we define the sets $\mathcal{S}_\texttt{b}^+ \subset \mathcal{A}$ and $\mathcal{S}_\texttt{b}^- \subset \mathcal{A}$ as \begin{align*} \mathcal{S}_\texttt{b}^+ & = \{ \texttt{b} \texttt{x}_1 \texttt{b} \cdots \texttt{b} \texttt{x}_l \texttt{b} \mid \texttt{x}_i \in \{ \texttt{a}, \texttt{a}^{-1} \}, l \in \mathbb{N} \} \\ \mathcal{S}_\texttt{b}^- & = \{ \texttt{b}^{-1} \texttt{x}_1 \texttt{b}^{-1} \cdots \texttt{b}^{-1} \texttt{x}_l \texttt{b}^{-1} \mid \texttt{x}_i \in \{ \texttt{a}, \texttt{a}^{-1} \}, l \in \mathbb{N} \} \end{align*} and observe that $\texttt{b} \in \mathcal{S}_\texttt{b}^+$ and $\texttt{b}^{-1} \in \mathcal{S}_\texttt{b}^-$. We will decompose arbitrary words $w \in \mathcal{A}$ as a \emph{unique} product of elements in $\{ \texttt{b}, \texttt{b}^{-1} \}$ and $\mathcal{S}_\texttt{a}^+ \cup \mathcal{S}_\texttt{a}^-$: \begin{prop} \label{prop: a decomposition} Let $w \in \mathcal{A}$ be an alternating word. Then there are $\texttt{y}_0, \ldots, \texttt{y}_l$ and $s_1, \ldots, s_l$ such that \[ w = \texttt{y}_0 s_1 \texttt{y}_1 s_2 \cdots \texttt{y}_{l-1} s_l \texttt{y}_l \] where $\texttt{y}_i \in \{\texttt{b}, \texttt{b}^{-1} \}$ except that $\texttt{y}_0$ and/or $\texttt{y}_l$ may be empty and $s_i \in \mathcal{S}_\texttt{a}^+ \cup \mathcal{S}_\texttt{a}^-$. Moreover, $s_i$ alternates between $\mathcal{S}_\texttt{a}^+$ and $\mathcal{S}_\texttt{a}^-$, i.e. there is no $i \in \{ 1, \ldots, l-1 \}$ such that $s_i, s_{i+1} \in \mathcal{S}_\texttt{a}^+$ or $s_i, s_{i+1} \in \mathcal{S}_\texttt{a}^-$. This expression is unique. \end{prop} We will call this way of writing $w$ the \emph{ $\texttt{a}$-decomposition of $w$}. Analogously, we may also write $w \in \mathcal{A}$ as \[ w = \texttt{x}_0 t_1 \texttt{x}_1 t_2 \cdots \texttt{x}_{l-1} t_l \texttt{x}_l \] (possibly with a different $l$), where $\texttt{x}_i \in \{ \texttt{a}, \texttt{a}^{-1} \}$ except that $\texttt{x}_0$ and / or $\texttt{x}_l$ may be empty and $t_i \in \mathcal{S}_\texttt{b}^+ \cup \mathcal{S}_\texttt{b}^-$ where $t_i$ alternate between $\mathcal{S}_\texttt{b}^+$ and $\mathcal{S}_\texttt{b}^-$. We will call this way of writing $w$ the \emph{$\texttt{b}$-decomposition of $w$}. \begin{proof} (of Proposition \ref{prop: a decomposition}) Let $w \in \mathcal{A}$ be an alternating word. Since $\texttt{a} \in \mathcal{S}_\texttt{a}^+$ and $\texttt{a}^{-1} \in \mathcal{S}_\texttt{a}^-$, we may always find some $s_i \in \mathcal{S}_\texttt{a}^+ \cup \mathcal{S}_\texttt{a}^-$ and some $\texttt{y}_i \in \{ \texttt{b}, \texttt{b}^{-1} \}$ such that \[ w = \texttt{y}_0 s_1 \texttt{y}_1 s_2 \cdots \texttt{y}_{n-1} s_n \texttt{y}_n \] with possibly $\texttt{y}_n$ and / or $\texttt{y}_0$ empty. Now let $m$ be the minimal $n$ of all such products representing $w$ i.e. \[ w = \texttt{y}_0 s_1 \texttt{y}_1 s_2 \cdots \texttt{y}_{m-1} s_m \texttt{y}_m. \] Suppose there is an $i \in \{ 1, \ldots, m-1 \}$ such that $s_i,s_{i+1} \in \mathcal{S}_\texttt{a}^+$ (resp. $s_i,s_{i+1} \in \mathcal{S}_\texttt{a}^-$). Set $s' = s_i \texttt{y}_i s_{i+1}$ and note that $s' \in \mathcal{S}_\texttt{a}^+$ (resp. $s' \in \mathcal{S}_\texttt{a}^-$). Then \[ w = \texttt{y}_0 s_1 \texttt{y}_1 s_2 \cdots \texttt{y}_{i-1} s' \texttt{y}_{i+1} \cdots \texttt{y}_{m-1} s_m \texttt{y}_m \] which would contradict the minimality of $m$. Hence all $s_i$ alternate between $\mathcal{S}_\texttt{a}^+$ and $\mathcal{S}_\texttt{a}^-$. By comparing two such expressions we see that such an expression is further unique. \end{proof} \begin{defn} \label{defn:alpha and beta} Let $w \in \mathcal{A}$ and let $w = \texttt{y}_0 s_1 \cdots \texttt{y}_{l-1} s_l \texttt{y}_{l}$ be the $\texttt{a}$-decomposition of $w$. Then $\alpha \colon \mathcal{A} \to \mathcal{A}$ is defined via \[ \alpha \colon w \mapsto \texttt{y}_0 \texttt{x}_1 \texttt{y}_1 \texttt{x}_2 \cdots \texttt{y}_{l-1} \texttt{x}_l \texttt{y}_{l} \] with $\texttt{x}_i = \texttt{a}$ if $s_i \in \mathcal{S}^+_\texttt{a}$ and $\texttt{x}_i = \texttt{a}^{-1}$ if $s_i \in \mathcal{S}^-_{\texttt{a}}$. Analogously suppose that $w = \texttt{x}_0 t_1 \texttt{x}_1 t_2 \cdots \texttt{x}_{l-1} t_l \texttt{x}_l$ is the $\texttt{b}$-decomposition of $w$, with $l$ possibly different from above. We define the map $\beta \colon \mathcal{A} \to \mathcal{A}$ via \[ \beta \colon w \mapsto \texttt{x}_0 \texttt{y}_1 \texttt{x}_1 \texttt{y}_2 \cdots \texttt{x}_{l-1} \texttt{y}_l \texttt{x}_{l} \] with $\texttt{y}_i = \texttt{b}$ if $t_i \in \mathcal{S}^+_\texttt{b}$ and $\texttt{y}_i = \texttt{b}^{-1}$ if $t_i \in \mathcal{S}^-_{\texttt{b}}$. \end{defn} \begin{exmp} Let $w = \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \texttt{b} \texttt{a} \texttt{b} \texttt{a}^{-1}$. Then the $\texttt{a}$-decomposition of $w$ is \[ w = \texttt{b} s_1 \texttt{b}^{-1} s_2 \texttt{b} s_3 \texttt{b} s_4 \] where $s_1 = \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b} \texttt{a} \in \mathcal{S}^+_\texttt{a}$, $s_2 = \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \in \mathcal{S}_\texttt{a}^-$, $s_3 = \texttt{a} \in \mathcal{S}^+_\texttt{a}$ and $s_4 = \texttt{a}^{-1} \in \mathcal{S}^+_\texttt{a}$. Hence \[ \alpha(w) = \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} \texttt{b} \texttt{a}^{-1}. \] Observe that then $\alpha(\alpha(w)) = \alpha(w)$. The $\texttt{b}$-decomposition of $\alpha(w)$ is \[ \alpha(w) = t_1 \texttt{a} t_2 \texttt{a}^{-1} t_3 \texttt{a}^{-1} \] where $t_1 = \texttt{b} \in \mathcal{S}_\texttt{b}^+$, $t_2 = \texttt{b}^{-1} \in \mathcal{S}_\texttt{b}^-$ and $t_3 = \texttt{b} \texttt{a} \texttt{b} \in \mathcal{S}_\texttt{b}^+$. Hence \[ \beta(\alpha(w)) = \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \] and similarly, we may see that $\alpha(\beta(\alpha(w))) = \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} = [\texttt{b}, \texttt{a}]$. Then both $\alpha([\texttt{b}, \texttt{a}]) = [\texttt{b}, \texttt{a}]$ and $\beta([\texttt{b}, \texttt{a}])= [\texttt{b},\texttt{a}]$. We will formalise and use this behaviour later; see Proposition \ref{prop:cutting words} and Proposition \ref{prop:alpha on conjugacy classes decreases}. \end{exmp} The images of $\alpha$ and $\beta$ are obviously contained in the set of alternating words. Moreover, as the $s_i$ in the previous definition all alternate between $\mathcal{S}^+_{\texttt{a}}$ and $\mathcal{S}^-_{\texttt{a}}$, none of the consecutive $\texttt{x}_i$ have the same sign in the image of $\alpha$ and no consecutive $\texttt{y}_i$ have the same sign in the image of $\beta$. \begin{prop} \label{prop:cutting words} The maps $\alpha, \beta \colon \mathcal{A} \to \mathcal{A}$ have the following properties: \begin{enumerate} \item \label{prop-case:alpha alternating} For every $w \in \mathcal{A}$, $\alpha(w^{-1}) = \alpha(w)^{-1}$ and $\beta(w^{-1}) = \beta(w)^{-1}$ \item \label{prop-case:interchange a b} $\psi \circ \alpha = \beta \circ \psi$ and $\psi \circ \beta = \alpha \circ \psi$, where $\psi \colon \mathbb{F}_2 \to \mathbb{F}_2$ is the automorphism defined via $\psi \colon \texttt{a} \mapsto \texttt{b}, \texttt{b} \mapsto \texttt{a}$. \item For any $w \in \mathcal{A}$, $\alpha(\alpha(w)) = \alpha(w)$. Moreover, $|\alpha(w)| \leq |w|$ with equality if and only if $\alpha(w) = w$. The analogous statement holds for $\beta$. \item \label{prop:cases,splitting} Let $v_1 \texttt{x} v_2$ be an alternating word with $v_1, v_2 \in \mathcal{A}$ and $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$. Then $\alpha(v_1 \texttt{x} v_2)$ is equal in $\mathbb{F}_2$ to the element represented by the non-reduced word $\alpha(v_1 \texttt{x}) \texttt{x}^{-1} \alpha(\texttt{x} v_2)$. The analogous statement holds for $\beta$. \end{enumerate} \end{prop} \begin{proof} To see $(1)$, note that if $w = \texttt{y}_0 s_1 \texttt{y}_1 \cdots \texttt{y}_{l-1} s_l \texttt{y}_{l}$ is the $\texttt{a}$-decomposition of $w$, then \[ \texttt{y}_l^{-1} s_l^{-1} \texttt{y}_{l-1}^{-1} \cdots \texttt{y}_1^{-1} s_1^{-1} \texttt{y}_0^{-1} \] is the $\texttt{a}$-decomposition of $w^{-1}$. As $s_i^{-1} \in \mathcal{S}_\texttt{a}^+$ if and only if $s_i \in \mathcal{S}_\texttt{a}^-$ and $s_i^{-1} \in \mathcal{S}_\texttt{a}^-$ if and only if $s_i \in \mathcal{S}_\texttt{a}^+$ we can conclude that $\alpha(w^{-1}) = \alpha(w)^{-1}$. The analogous argument holds for $\beta$. Point $(2)$ is evident from the symmetric way $\alpha$ and $\beta$ have been defined. To see $(3)$, note that $\alpha$ replaces each of the subwords $s_i$ by letters $\texttt{a}$ or $\texttt{a}^{-1}$. These have size strictly less than $|s_i|$ unless $s_i$ is the letter $\texttt{a}$ or $\texttt{a}^{-1}$ already. This shows $|\alpha(w)| \leq |w|$ with equality only if $\alpha(w) = w$ and it also shows that $\alpha \circ \alpha = \alpha$. For (\ref{prop:cases,splitting}) suppose that the $\texttt{a}$-decomposition of $v_1 \texttt{x}$ is $\texttt{y}^1_0 s^1_1 \texttt{y}^1_1 \cdots \texttt{y}^1_{l_1-1} s^1_{l_1}$ and the $\texttt{a}$-decomposition of $\texttt{x} v_2$ is $s^2_1 \texttt{y}^2_1 \cdots \texttt{y}^1_{l_1-1} s^2_{l_2} \texttt{y}^2_{l_2}$. Both, $s^1_{l_1}$ and $s^2_1$ lie in the same set $S_\texttt{a}^+$ or $S_\texttt{a}^-$ depending if $\texttt{x} = \texttt{a}$ or $\texttt{x} = \texttt{a}^{-1}$. Without loss of generality assume that $\texttt{x} = \texttt{a}$. The $\texttt{a}$-decomposition of $v_1 \texttt{x} v_2$ may be seen to be $\texttt{y}^1_0 s^1_1 \texttt{y}^1_1 \cdots \texttt{y}^1_{l_1-1} s \texttt{y}^2_1 \cdots \texttt{y}^2_{l_2-1} s^2_{l_2} \texttt{y}^2_{l_2}$ where $s \in S_\texttt{a}^+$ is equal to $s^1_{l_1} \texttt{a}^{-1} s^2_1$ in $\mathbb{F}_2$. Hence $\alpha(v_1 \texttt{a}) = \texttt{y}^1_0 \texttt{x}^1_1 \texttt{y}^1_1 \cdots \texttt{y}^1_{l_1-1} \texttt{a}$, $\alpha(\texttt{a} v_2) = \texttt{a} \texttt{y}^2_1 \cdots \texttt{y}^2_{l_2-1} \texttt{x}^2_{l_2} \texttt{y}^2_{l_2}$ and \[ \alpha(v_1 \texttt{x} v_2) = \texttt{y}^1_0 \texttt{x}^1_1 \texttt{y}^1_1 \cdots \texttt{y}^1_{l_1-1} \texttt{a} \texttt{y}^2_1 \cdots \texttt{y}^2_{l_2-1} \texttt{x}^2_{l_2} \texttt{y}^2_{l_2}. \] Comparing terms finishes the proposition. \end{proof} To study how the maps $\alpha, \beta \colon \mathcal{A} \to \mathcal{A}$ behave on powers of elements we need to define a version of them on conjugacy classes. Let $\bar{\mathcal{A}}_0$ be the set conjugacy classes of even length alternating words. Note that then necessarily every two representatives $w_1,w_2 \in \mathcal{A}$ of the same conjugacy class in $\bar{\mathcal{A}}_0$ are equal up to cyclic permutation of the letters. This is, there are elements $v_1, v_2 \in \mathcal{A}$ such that $w_1 = v_1 v_2$ and $w_2 = v_2 v_1$ as reduced words. Hence every representative $v \in \mathcal{A}$ of an element in $\bar{\mathcal{A}}_0$ is automatically reduced. \begin{rmk} \label{rmk:on conjugacy classes for acl} Every reduced representative $w \in \mathcal{A}$ of a class in $\bar{\mathcal{A}}_0$ has the same length. Every homogeneous quasimorphism $\bar{\phi} \colon \mathbb{F}_2 \to \mathbb{R}$ depends only on conjugacy classes and hence induces a well-defined map $\bar{\phi} \colon \bar{\mathcal{A}}_0 \to \mathbb{R}$. We say that an element $[w] \in \bar{\mathcal{A}}_0$ \emph{lies in the commutator subgroup} if one (and hence any) representative $w$ of $[w]$ lies in the commutator subgroup of $\mathbb{F}_2$. \end{rmk} \begin{defn} \label{defn:maps alpha bar and beta bar} Define the map $\bar{\alpha} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ as follows: Let $[w] \in \bar{\mathcal{A}}_0$. If $[w] = e$ set $\bar{\alpha}([w]) = e$. Else choose a representative $w \in \mathcal{A}$ of $[w]$ that starts with a power of $\texttt{a}$ and, as $w$ has even length, ends in a power of $\texttt{b}$. Suppose that $w$ starts with the letter $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and write $w= \texttt{x} w'$ for $w' \in \mathcal{A}$ such that $\texttt{x} w'$ is reduced. Then define $\bar{\alpha} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ via \[ \bar{\alpha} \colon [w] \mapsto [\alpha(\texttt{x} w' \texttt{x}) \texttt{x}^{-1}] \in \bar{\mathcal{A}}_0. \] Define $\bar{\beta} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ analogously: For every element $[w] \in \bar{\mathcal{A}}_0$ choose a representative $w \in \mathcal{A}$ which starts with the letter $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$ and write $w = \texttt{y} w'$. Then define $\bar{\beta} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ via \[ \bar{\beta} \colon [w] \mapsto [\beta(\texttt{y} w' \texttt{y} ) \texttt{y}^{-1} ] \in \bar{\mathcal{A}}_0. \] \end{defn} To see that $\bar{\alpha}, \bar{\beta} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ are well-defined, suppose that $w_1, w_2 \in \mathcal{A}$ are both even alternating words which start in a power of $\texttt{a}$ and both represent the same element $[w_1] = [w_2] \in \bar{\mathcal{A}}_0$. Let $\texttt{x}_1, \texttt{x}_2 \in \{ \texttt{a}, \texttt{a}^{-1} \}$ be the first letters of $w_1$ and $w_2$. Then there are elements $v_1, v_2 \in \mathcal{A}$ such that $w_1 = \texttt{x}_1 v_1 \texttt{x}_2 v_2$ as a reduced word and $w_2 = \texttt{x}_2 v_2 \texttt{x}_1 v_1$. Then, by $(3)$ of Proposition \ref{prop:cutting words}, \begin{align*} \alpha(w_1 \texttt{x}_1) \texttt{x}_1^{-1} = \alpha(\texttt{x}_1 v_1 \texttt{x}_2 v_2 \texttt{x}_1) \texttt{x}_1^{-1} &= \alpha(\texttt{x}_1 v_1 \texttt{x}_2) \texttt{x}_2^{-1} \alpha(\texttt{x}_2 v_2 \texttt{x}_1) \texttt{x}_1^{-1} \\ \alpha(w_2 \texttt{x}_2) \texttt{x}_2^{-1} =\alpha(\texttt{x}_2 v_2 \texttt{x}_1 v_1 \texttt{x}_2) \texttt{x}_2^{-1} &= \alpha(\texttt{x}_2 v_1 \texttt{x}_1) \texttt{x}_1^{-1} \alpha(\texttt{x}_1 v_1 \texttt{x}_2) \texttt{x}_2^{-1} \end{align*} which are conjugate in $\mathbb{F}_2$ and so $[\alpha(w_1 \texttt{x}_1) \texttt{x}_1^{-1}] = [\alpha(w_2 \texttt{x}_2) \texttt{x}_2^{-1}]$. This shows that $\bar{\alpha}$ is well defined and analogously that $\bar{\beta}$ is well defined. The definition of $\bar{\alpha}$ given above is useful for performing calculations. However, there is a more geometric way to think about $\bar{\alpha}$ and $\bar{\beta}$ analogous to the definition of $\alpha$ and $\beta$. A common way to depict conjugacy classes in the free group is via labels on a circle: Let $w = \texttt{z}_1 \cdots \texttt{z}_n \in \mathbb{F}_2$ be a cyclically reduced word in the letters $\texttt{z}_i$. Then $w$ labels a circle by cyclically labelling the sides of the circle counterclockwise by $\texttt{z}_1, \texttt{z}_2, \ldots, \texttt{z}_n$ so that $\texttt{z}_n$ is next to $\texttt{z}_1$ on the circle. Two cyclically reduced words $w \in \mathbb{F}_2$ then yield the same labelling up to rotation if and only if they define the same conjugacy class. Let $[w] \in \bar{\mathcal{A}}_0$ be a conjugacy class of a word $w \in \mathcal{A}$ of even length that contains both at least one $\texttt{a}$ and one $\texttt{a}^{-1}$ as a subword. We may similarly define an $\texttt{a}$-decomposition of such a cyclic labelling. One may show that in this geometric model the maps $\bar{\alpha}$ (resp. $\bar{\beta}$) can then be defined just like for $\alpha$ and $\beta$ by replacing the words in $\mathcal{S}_\texttt{a}^+$ by $\texttt{a}$ and the words in $\mathcal{S}_\texttt{a}^-$ by $\texttt{a}^-$. If $[w] \in \bar{\mathcal{A}}_0$ does not contain both $\texttt{a}$ and $\texttt{a}^{-1}$ as subwords then $\bar{\alpha}([w])=e$ in both cases. \begin{figure} \centering \subfloat[]{\includegraphics[width=0.6\textwidth]{cg.pdf} \label{fig:circle general}} \\ \subfloat[]{\includegraphics[width=0.3\textwidth]{ct.pdf} \label{fig:circle trivial}} \caption{Visulizing $\bar{\alpha}$: Conjugacy classes $[w]$ correspond to cyclic labels of a circle. One may define a $\texttt{a}$-decomposition and $\bar{\alpha}$ on such labels except when $[w]$ does not contain $\texttt{a}$ or $\texttt{a}^{-1}$ as a subword. See Example \ref{exmp:alpha bar}} \label{fig:circles and conjugacy classes} \end{figure} Consider the following example: \begin{exmp} \label{exmp:alpha bar} Let $w = \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b} \in \mathcal{A}$. Its conjugacy class is depicted in Figure \ref{fig:circles and conjugacy classes}. We observe that $w$ starts with $\texttt{a}$ and set $w' = \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b}$ so that $w = \texttt{a} w'$. By Definition \ref{defn:maps alpha bar and beta bar}, $\bar{\alpha}([w]) = [\alpha(\texttt{a} w' \texttt{a}) \texttt{a}^{-1}] = [\left( \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \right) \texttt{a}^{-1} ] = [\texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} \texttt{b}^{-1}]$. However, we could have also done an $\texttt{a}$-decomposition of the elements on a circle as pictured in Figure \ref{fig:circles and conjugacy classes} ($\textrm{A}$) with $s_1 = \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b} \texttt{a} \in \mathcal{S}_\texttt{a}^+$ and $s_2 = \texttt{a}^{-1} \texttt{b} \texttt{a}^{-1} \in \mathcal{S}_\texttt{a}^-$ and obtained the same result. Similarly, let $w = \texttt{a} \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b}$. It's conjugacy class is represented by a cyclic labelling of a circle in Figure \ref{fig:circles and conjugacy classes} (\textrm{B}). The first letter of $w$ is $\texttt{a}$. Set $w' = \texttt{b} \texttt{a} \texttt{b}^{-1} \texttt{a} \texttt{b}$ so that $w = \texttt{a} w'$. The $\texttt{a}$-decomposition of $\texttt{a} w' \texttt{a} = s_1 \in \mathcal{S}_{\texttt{a}}^+$. Hence $\bar{\alpha}([w]) = [\alpha(\texttt{a} w' \texttt{a}) \texttt{a}^{-1}] = [ \left( \texttt{a} \right) \texttt{a}^{-1}] = [e] \in \bar{\mathcal{A}}_0$. \end{exmp} \begin{prop} \label{prop:alpha on conjugacy classes decreases} Let $\bar{\alpha}, \bar{\beta} \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ be defined as above and let $[w] \in \bar{\mathcal{A}}_0$. Then $|\bar{\alpha}([w])| \leq |[w]|$ with equality if and only if $\bar{\alpha}([w]) = [w]$. The analogous statement holds for $\bar{\beta}$. If $[w]$ is a non-trivial class in the commutator subgroup of $\mathbb{F}_2$ then $\bar{\alpha}([w])$ and $\bar{\beta}([w])$ are non-trivial. If $\bar{\alpha}([w]) = [w] = \bar{\beta}([w])$ then $[w]$ may be represented by $w = [\texttt{a}, \texttt{b}]^n$ for $n \in \mathbb{Z}$. \end{prop} \begin{proof} To see that $\bar{\alpha}$, $\bar{\beta}$ decrease length unless they fix classes is the same argument as in the proof of Proposition \ref{prop:cutting words}. If $[w]$ is a non-trivial class in the commutator subgroup of $\mathbb{F}_2$ then there is a reduced representative $w$ such that $w = \texttt{a} v_1 \texttt{a}^{-1} v_2$ for some appropriate $v_1,v_2 \in \mathcal{A}$ and we see that $\bar{\alpha}([w])$ is non-trivial as it also contains the subletters $\texttt{a}$ and $\texttt{a}^{-1}$. If $w \in \mathcal{A}$ is a representative such that $\bar{\alpha}$ fixes $[w]$ then $w$ has to be of the form $w = \prod_{i=1}^k \texttt{a} \texttt{y}_i \texttt{a}^{-1} \texttt{y}'_i$ for some $\texttt{y}_i,\texttt{y}'_i \in \{ \texttt{b}, \texttt{b}^{-1} \}$, $k \geq 1$ and similarly, if $\bar{\beta}$ fixes a class then the a representative has to be of the form $w = \prod_{i=1}^k \texttt{x}_i \texttt{b} \texttt{x}'_i \texttt{b}^{-1}$ for some $\texttt{x}_i, \texttt{x}'_i \in \{ \texttt{a}, \texttt{a}^{-1} \}$, $k \geq 1$. Comparing both yields the statement. \end{proof} \begin{prop} \label{prop:powers of alpha, beta} Assume that $w \in \mathcal{A}$ is non-empty, has even length and that $c_1, c_2 \in \mathcal{A}$ are words such that $c_1 w c_2 \in \mathcal{A}$ is again an alternating word. Then there are words $d_1, d_2, w' \in \mathcal{A}$ such that $\alpha(c_1 w^n c_2) = d_1 w'^{n-1} d_2 \in \mathcal{A}$ for all $n \geq 1$ as reduced words where $w'$ has even length and $[w'] = \bar{\alpha}([w]) \in \bar{\mathcal{A}}_0$. If $w$ lies in the commutator subgroup then $w'$ is non-empty. The analogous statement holds for $\beta$. \end{prop} \begin{proof} If $w \in \mathcal{A}$ does not contain both a positive and a negative power of $\texttt{a}$, the statement follows by an easy calculation. Note that this is the case if and only if $\bar{\alpha}([w]) = [e]$. Otherwise $w$ contains at least one sub-letter $\texttt{a}$ and one sub-letter $\texttt{a}^{-1}$. This is the case if $w$ lies in the commutator subgroup. Suppose without loss of generality that $w = v_1 \texttt{a} v_2 \texttt{a}^{-1} v_3$ as a reduced word for some $v_1, v_2, v_3 \in \mathcal{A}$. By multiple applications of Proposition \ref{prop:cutting words}, we see that \begin{align*} \alpha(c_1 w^n c_2) &= \alpha(c_2 \left( v_1 \texttt{a} v_2 \texttt{a}^{-1} v_3 \right)^n c_2) \\ &= \alpha(c_1 v_1 \texttt{a}) \texttt{a}^{-1} \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 (v_1 \texttt{a} v_2 \texttt{a}^{-1} v_3)^{n-1} c_2) \\ &= \alpha(c_1 v_1 \texttt{a}) \texttt{a}^{-1} \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1} \alpha( \texttt{a} v_2 \texttt{a}^{-1} v_3 (v_1 \texttt{a} v_2 \texttt{a}^{-1} v_3)^{n-2} c_2) \\ &= \alpha(c_1 v_1 \texttt{a}) \texttt{a}^{-1} \left( \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1} \right)^2 \alpha( \texttt{a} v_2 \texttt{a}^{-1} v_3 (v_1 \texttt{a} v_2 \texttt{a}^{-1} v_3)^{n-3} c_2) \\ &= \cdots \\ &= \alpha(c_1 v_1 \texttt{a}) \texttt{a}^{-1} (\alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1} )^{n-1} \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 c_2) \end{align*} as non-reduced elements in the free group. Then we define $d_1$, $d_2$ and $w'$ to be the reduced representative of \[ \alpha(c_1 v_1 \texttt{a}) \texttt{a}^{-1}, \text{ } \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 c_2) \text{ and } \alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1} \] respectively. Moreover, $\alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a})$ is a reduced alternating word which starts and ends in $\texttt{a}$ and contains the $\texttt{a}^{-1}$ as a sub-letter. If follows that $w'$, the reduced representative of $\alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1}$, starts with $\texttt{a}$, contains $\texttt{a}^{-1}$ and ends with a power of $\texttt{b}$, so $w'$ is non-empty. Further observe that $\bar{\alpha}([\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1])$ is represented by $\alpha(\texttt{a} v_2 \texttt{a}^{-1} v_3 v_1 \texttt{a}) \texttt{a}^{-1}$ and hence $[w'] = \bar{\alpha}(w)$. \end{proof} \subsection{Letter-Thin Triples, $\alpha$ and $\beta$} \label{subsec:letter-thin and alpha and beta} In order to streamline proofs later and ease notation we define an equivalence relation on triples $(x_1,x_2,x_3)$. We think of such a triple as the sides of a (thin) triangle. We stress that the $x_i$ are not actually the side of triangles in some metric space; see Figure \ref{fig:triangles}. Here, we study a special type of triples, namely \emph{letter-thin triples} in Definition \ref{defn:letter-thin}. \begin{defn} \label{defn:equivalent triples} Let $(x_1,x_2,x_3)$ be a triple of elements in $\mathbb{F}_2$ and let $\phi \colon \mathbb{F}_2 \to \mathbb{F}_2$ be a set-theoretic function. We will understand by $\phi(x_1,x_2,x_3)$ the triple $(\phi(x_1), \phi(x_2), \phi(x_3))$. We define $\sim$ to be the equivalence relation on triples generated by \begin{enumerate} \item[(i)] $(x_1,x_2,x_3) \sim (x_2, x_3, x_1)$ \item[(ii)] $(x_1,x_2,x_3) \sim (x_3^{-1}, x_2^{-1}, x_1^{-1})$ \item[(iii)] $(x_1,x_2,x_3) \sim \phi_\texttt{a}(x_1, x_2, x_3)$, where $\phi_\texttt{a} \colon \mathbb{F}_2 \to \mathbb{F}_2$ is the automorphism defined via $\texttt{a} \mapsto \texttt{a}^{-1}$ and $\texttt{b} \mapsto \texttt{b}$. \item[(iv)] $(x_1,x_2,x_3) \sim \phi_\texttt{b}(x_1, x_2, x_3)$, where $\phi_\texttt{b} \colon \mathbb{F}_2 \to \mathbb{F}_2$ is the automorphism defined via $\texttt{a} \mapsto \texttt{a}$ and $\texttt{b} \mapsto \texttt{b}^{-1}$. \end{enumerate} for all $x_1,x_2,x_3 \in \mathbb{F}_2$ and say that $(x_1,x_2,x_3)$ is \emph{equivalent} to $(y_1, y_2, y_3)$ if $(x_1,x_2,x_3) \sim (y_1, y_2, y_3)$ under this relation. \end{defn} Imagining $(x_1,x_2,x_3)$ as labelling the sides of a triangle, two triples are equivalent if they may be obtained from each other by a sequence of rotations $(i)$, flips $(ii)$ or by changing the signs of its labels $(iii)$ \& $(iv)$. \begin{prop} \label{prop:alpha respects equivalence} Let $x_1,x_2,x_3,y_1,y_2,y_3 \in \mathbb{F}_2$ such that $(x_1,x_2,x_3) \sim (y_1, y_2, y_3)$. Then if $x_1,x_2,x_3 \in \mathcal{A}$ also $y_1,y_2,y_3 \in \mathcal{A}$. Moreover, in this case $\alpha(x_1,x_2,x_3) \sim \alpha(y_1,y_2,y_3)$ and $\beta(x_1,x_2,x_3) \sim \beta(y_1, y_2, y_3)$. \end{prop} \begin{proof} The first part is clear from the definitions. Note that $\alpha$ commutes both with ``rotating the side'' $(i)$ and taking inverses $(ii)$ as $\alpha$ satisfies that $\alpha(w^{-1}) = \alpha(w)^{-1}$ for $w \in \mathcal{A}$. Let $w = \texttt{y}_0 s_1 \texttt{y}_1 \cdots \texttt{y}_{k-1} s_k \texttt{y}_k$ be the $\texttt{a}$-decomposition of $w$ (see Definition \ref{defn:alpha and beta}), where $\texttt{y}_i \in \{ \texttt{b}, \texttt{b}^{-1} \}$ and $s_i \in \mathcal{S}^+_\texttt{a} \cup \mathcal{S}^-_\texttt{a}$ alternates between $\mathcal{S}^+_\texttt{a}$ and $\mathcal{S}^-_\texttt{a}$. Then \[ \phi_\texttt{a}(w) = \texttt{y}_0 \phi_\texttt{a}(s_1) \texttt{y}_1 \cdots \texttt{y}_{k-1} \phi_\texttt{a}(s_k) \texttt{y}_k \] where $\phi(s_i) \in \mathcal{S}^+_\texttt{a}$ if and only if $s_i \in \mathcal{S}^-_\texttt{a}$ and $\phi(s_i) \in \mathcal{S}^-_\texttt{a}$ if and only if $s_i \in \mathcal{S}^+_\texttt{a}$. So $\alpha(\phi_\texttt{a}(w)) = \phi_\texttt{a}(\alpha(w))$ and hence $\alpha \circ \phi_\texttt{a}(x_1, x_2, x_3)$ is equivalent to $\alpha(x_1,x_2,x_3)$. Similarly, $\phi_\texttt{b}(w) = \phi_\texttt{b}(\texttt{y}_0) \phi_\texttt{b}(s_1) \phi_\texttt{b}(\texttt{y}_1) \cdots \phi_\texttt{b}(\texttt{y}_{k-1}) \phi_\texttt{b}(s_k) \phi_\texttt{b}(\texttt{y}_k)$ where both $\phi_\texttt{b}(s_i)$ and $s_i$ lie in the same set $\mathcal{S}_\texttt{a}^+$ or $\mathcal{S}_\texttt{a}^-$. We see that once more, $\alpha(\phi_\texttt{b}(w)) = \phi_\texttt{b}(\alpha(w))$ and hence also $\alpha \circ \phi_\texttt{b}(x_1,x_2,x_3)$ is equivalent to $\alpha(x_1,x_2,x_3)$. Analogously, we see the statement for $\beta$. \end{proof} For a visualisation of the following definition we refer the reader to Figure \ref{fig:triangles}. \begin{defn} \label{defn:letter-thin} Let $x_1, x_2, x_3 \in \mathcal{A}$ be \emph{alternating} elements. The triple $(x_1, x_2, x_3)$ is called \emph{letter-thin triple} in one of the following cases: \begin{itemize} \item[ $\text{[T1]}$ ] There are (possibly trivial) elements $c_1, c_2, c_3 \in \mathcal{A}$ such that \begin{itemize} \item[$\text{[T1$\at$]}$] $ (x_1,x_2,x_3) \sim (c_1^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1} \texttt{a} c_3, c_3^{-1} \texttt{a}^{-1} c_1)$ or \item[$\text{[T1$\bt$]}$] $ (x_1,x_2,x_3) \sim (c_1^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1} \texttt{b} c_3, c_3^{-1} \texttt{b}^{-1} c_1)$ \end{itemize} where all words are required to be reduced. \item[ $\text{[T2]}$ ] There are (possibly trivial) elements $c_1, c_2 \in \mathcal{A}$ such that \begin{itemize} \item[$\text{[T2$\at$]}$] $(x_1,x_2,x_3) \sim (c_1^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1}, \texttt{b} c_1)$ or \item[$\text{[T2$\bt$]}$] $(x_1,x_2,x_3) \sim (c_1^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1}, \texttt{a} c_1)$ \end{itemize} where all words are required to be reduced. \end{itemize} In all cases, $\sim$ denotes the equivalence of triples of Definition \ref{defn:equivalent triples}. We say that a letter-thin triple $(x_1,x_2,x_3)$ is \emph{of type $\text{[T1$\at$]}$, $\text{[T1$\bt$]}$, $\text{[T2$\at$]}$} or \emph{$\text{[T2$\bt$]}$} if it is equivalent to the corresponding triple above. \end{defn} Note for example in the representatives of $\text{[T1$\at$]}$ above, necessarily $c_1$, $c_3$ are either empty or their first letter is a power of $\texttt{b}$. Similarly, $c_2$ is either empty or its first letter is a power of $\texttt{a}$, else the $x_i$ would not be alternating. Note that for any letter-thin triple $(x_1,x_2,x_3)$ of type $\text{[T1$\at$]}$ we may always find elements $d_1, d_2, d_3 \in \mathcal{A}$ with first letter a power of $\texttt{b}$ such that \begin{align} \label{equ:maybe letter thin} (x_1,x_2,x_3) = (d_1^{-1} \texttt{x}_1 d_2, d_2^{-1} \texttt{x}_2 d_3, d_3^{-1} \texttt{x}_3 d_1) \end{align} where $\texttt{x}_i \in \{ \texttt{a}, \texttt{a}^{-1} \}$ are such that \emph{not all of $\texttt{x}_1$, $\texttt{x}_2$ and $\texttt{x}_3$ are equal} i.e. have the same parity. As we consider the triples only up to equivalence one may wonder if we can assume that any triple as in Equation (\ref{equ:maybe letter thin}) such that not all of $d_i$ are empty is letter-thin of type $\text{[T1$\at$]}$. However, this is not the case: As $\texttt{x}_1$, $\texttt{x}_2$, $\texttt{x}_3$ do not all have the same parity, there is exactly one $i$ such that $\texttt{x}_i = \texttt{x}_{i+1}$ where indices are considered$\mod 3$. Then one may see that $(x_1,x_2,x_3)$ is of type $\text{[T1$\at$]}$ \emph{if and only if} $d_{i+1}$ is non-trivial. For example, $(d_1^{-1} \texttt{a}, \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} d_1)$ is \emph{not} letter-thin for any $d_1, d_3 \in \mathcal{A}$ empty or starting with a power of $\texttt{b}$. \begin{exmp} $(\texttt{a}, \texttt{a}, \texttt{a}^{-1})$ is not letter-thin and by the previous discussion also the triple $(\texttt{b}^{-1} \texttt{a}^{-1} , \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} \texttt{a} \texttt{b})$ is not letter-thin. However, $(\texttt{b}^{-1} \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} \texttt{a}^{-1}, \texttt{a} \texttt{b})$ \emph{is} letter-thin. To see this, note that \begin{equ*}{rcl} (\texttt{b}^{-1} \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} \texttt{a}^{-1}, \texttt{a} \texttt{b}) &\overset{(iii)}{\sim}& (\texttt{b}^{-1} \texttt{a} \texttt{b}, \texttt{b}^{-1} \texttt{a}, \texttt{a}^{-1} \texttt{b}) \\ & = & (c_1^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1} \texttt{a} c_3, c_3^{-1} \texttt{a}^{-1} c_1) \end{equ*} for $c_1 = \texttt{b}$, $c_2 = e$ and $c_3 = e$ and where $\overset{(iii)}{\sim}$ denotes the equivalence $(iii)$ of the definition of '$\sim$'; see Definition \ref{defn:equivalent triples}. \end{exmp} Note that by definition, if $(x_1,x_2,x_3)$ is letter-thin then \emph{all $x_1, x_2, x_3$ are alternating words}. See Figure \ref{fig:triangles} for the explanation of the name \emph{letter-thin triple}: First consider elements $g,h \in \mathbb{F}_2 = \langle \texttt{a}, \texttt{b} \rangle$. The triple $(g,h,(gh)^{-1})$ corresponds to sides of a geodesic triangle in the Cayley graph $\textrm{Cay}(\mathbb{F}_2, \{ \texttt{a}, \texttt{b} \})$ with endpoints $e, g, gh$. Note further that there are words $c_1, c_2, c_3 \in \mathbb{F}_2$ such that $g = c_1^{-1} c_2$, $h = c_2^{-1} c_3$, $(gh)^{-1} = c_3^{-1} c_1$ and all these expressions are freely reduced. A \emph{letter-thin} triple $(x_1,x_2,x_3)$ is such that each $x_i$ is in addition alternating and corresponds \emph{almost} to the sides of a geodesic triangle in a Cayley graph, apart from one letter $r \in \{ \texttt{a}, \texttt{b} \}$ in the ``middle'' of the triangle. Figure \ref{fig:triangles} (\textrm{B}) corresponds to case $\text{[T1]}$ of Definition \ref{defn:letter-thin}, Figure \ref{fig:triangles} (\textrm{C}) corresponds to case $\text{[T2]}$ of Definition \ref{defn:letter-thin}. These letter-thin triples $(x_1,x_2,x_3)$ do \emph{not} label sides of triangles in a Cayley graph or any other metric space. \begin{figure} \centering \subfloat[]{\includegraphics[width=0.3\textwidth]{thin.pdf} \label{fig:thin triangle}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{ltg.pdf} \label{fig:letter-thin triple}} \hfill \subfloat[]{\includegraphics[width=0.3\textwidth]{ltd.pdf} \label{fig:letter-thin degenerate}} \caption{Different ``triangles'': (\textrm{A}) arises as a generic thin triangle in the Cayley graph $\mathrm{Cay}(\mathbb{F}_2, \{ \texttt{a}, \texttt{b} \})$ of the free group with standard generating set. Figures (\textrm{B}) and (\textrm{C}) correspond to letter-thin triples $\text{[T1$\at$]}$, $\text{[T2$\at$]}$. The grey dotted circles indicate the part of the letter-thin triples which can not be empty. These letter-thin triples do \emph{not} live in a Cayley graph or any well-known metric space.}. \label{fig:triangles} \end{figure} Observe that $(x_1,x_2,x_3)$ is letter-thin if and only if $\psi(x_1,x_2,x_3)$ is letter-thin for $\psi$ defined as in Proposition \ref{prop:cutting words} (\ref{prop-case:interchange a b}) i.e. $\psi$ is the automorphism $\psi \colon \mathbb{F}_2 \to \mathbb{F}_2$ defined via $\psi \colon \texttt{a} \mapsto \texttt{b}$ and $\psi \colon \texttt{b} \mapsto \texttt{a}$. The maps $\alpha$ and $\beta$ respect letter-thin triples: \begin{lemma} \label{lemma:alpha keeps thin.} If $(x_1,x_2,x_3)$ is letter-thin. Then both $\alpha(x_1, x_2, x_3)$ and $\beta(x_1,x_2,x_3)$ are letter-thin. \end{lemma} \begin{proof} We will proceed as follows: Let $(x_1,x_2,x_3)$ be a letter-thin triple. By Proposition \ref{prop:alpha respects equivalence} it is enough to check that $\alpha(x_1,x_2,x_3)$ is letter-thin for one representative of the equivalence class. Hence it suffices to check that $\alpha(x_1, x_2, x_3)$ is letter thin for \begin{enumerate} \item Type $\text{[T1$\at$]}$: $(x_1,x_2,x_3) = (c_1^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1} \texttt{a} c_3, c_3^{-1} \texttt{a}^{-1} c_1)$ \item Type $\text{[T1$\bt$]}$: $(x_1,x_2,x_3) = (c_1^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1} \texttt{b} c_3, c_3^{-1} \texttt{b}^{-1} c_1)$ \item Type $\text{[T2$\at$]}$: $(x_1,x_2,x_3) = (c_1^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1}, \texttt{b} c_1)$ \item Type $\text{[T2$\bt$]}$: $(x_1,x_2,x_3) = (c_1^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1}, \texttt{a} c_1)$ \end{enumerate} By symmetry, this will show the analogous statement for $\beta$. Proposition \ref{prop:cutting words}, (\ref{prop:cases,splitting}) allows us to compute $\alpha$ piecewise i.e. after each occurrence of a letter $\texttt{a}$ or $\texttt{a}^{-1}$ in a reduced word. For any reduced word $c \in \mathcal{A}$ starting with a power of $\texttt{b}$ or being empty, we will write $c_+$ for the reduced word represented by $\texttt{a}^{-1} \alpha(\texttt{a} c)$, which itself is not reduced since $\alpha(\texttt{a} c)$ starts with an $\texttt{a}$. Similarly, we will write $c_-$ for the reduced word represented by $\texttt{a} \alpha(\texttt{a}^{-1} c)$. Note that $c_+$ and $c_-$ are either empty or their first letter is a power of $\texttt{b}$, as $\alpha(\texttt{a}^{\pm} c)$ is alternating. If $c$ is a word which already has a subscript, say $c_i$, then we will write $c_{i,+}$ and $c_{i,-}$, respectively. We consider each of the above cases independently. For letter-thin triples $(x_1,x_2,x_3)$ of type $\text{[T1$\at$]}$ we compute $\alpha(x_1,x_2,x_3)$ and we will state exactly which equivalences $(i)$, $(ii)$, $(iii)$ and $(iv)$ of Definition \ref{defn:equivalent triples} are needed to obtain one of the representatives for $\text{[T1$\at$]}$, $\text{[T1$\bt$]}$, $\text{[T2$\at$]}$ and $\text{[T2$\bt$]}$ of letter-thin triples as in Definition \ref{defn:letter-thin}. For letter-thin triples $(x_1,x_2,x_3)$ of type $\text{[T1$\bt$]}$, $\text{[T2$\at$]}$ and $\text{[T2$\bt$]}$ we will just state the type of $\alpha(x_1,x_2,x_3)$ without explicitly giving the equivalence. \begin{enumerate} \item Type $\text{[T1$\at$]}$: Suppose $(x_1,x_2,x_3) = (c_1^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1} \texttt{a} c_3, c_3^{-1} \texttt{a}^{-1} c_1)$. As $(x_1,x_2,x_3)$ are alternating $c_2$ is either empty or starts with a positive or a negative power of $\texttt{a}$. We consider these cases separately: \begin{itemize} \item $c_2$ is empty. In this case we compute using Proposition \ref{prop:cutting words}, \begin{align*} \alpha(c_1^{-1} \texttt{a} \texttt{b}) &= \alpha(c_1^{-1} \texttt{a}) \texttt{a}^{-1} \alpha( \texttt{a} \texttt{b}) = \alpha( \texttt{a}^{-1} c_1)^{-1} \texttt{b} = (\texttt{a}^{-1} c_{1,-})^{-1} \texttt{b} = (c_{1,-})^{-1} \texttt{a} \texttt{b} \\ \alpha(\texttt{b}^{-1} \texttt{a} c_3) &= \alpha(\texttt{b}^{-1} \texttt{a}) \texttt{a}^{-1} \alpha(\texttt{a} c_3) = \texttt{b}^{-1} \texttt{a} c_{3,+} \\ \alpha(c_3^{-1} \texttt{a}^{-1} c_1) &= \alpha(c_3^{-1} \texttt{a}^{-1}) \texttt{a} \alpha(\texttt{a}^{-1} c_1) = \alpha(\texttt{a} c_3)^{-1} c_{1,-} = (\texttt{a} c_{3,+})^{-1} c_{1,-} = (c_{3,+})^{-1} \texttt{a}^{-1} c_{1,-} \end{align*} and hence \[ \alpha(x_1, x_2, x_3)=((c_{1,-})^{-1} \texttt{a} \texttt{b}, \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} c_{1,-}) \] which is of type $\text{[T1$\at$]}$. Indeed, for $c_1' = c_{1,-}$, $c_2'=e$ and $c_3' = c_{3,+}$ we see that \[ \alpha(x_1, x_2, x_3) = ({c'_1}^{-1} \texttt{a} \texttt{b} c'_2, {c'_2}^{-1} \texttt{b}^{-1} \texttt{a} c'_3, {c'_3}^{-1} \texttt{a}^{-1} c'_1). \] and hence $\alpha(x_1,x_2,x_3)$ is of type $\text{[T1$\at$]}$. \item $c_2 = \texttt{a} d_2$ where, $d_2 \in \mathcal{A}$. \[ \alpha(x_1, x_2, x_3)=((c_{1,-})^{-1} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} c_{1,-}) \] which is of type $\text{[T2$\bt$]}$ if $c_{1,-}$ is trivial and of type $\text{[T1$\bt$]}$ else. To see this we distinguish between three different cases: \begin{itemize} \item $c_{1,-}$ is trivial: Then \begin{align*} \alpha(x_1, x_2, x_3) &=(\texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1}) \\ &\overset{(i)}{\sim} ((d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1}, \texttt{a} d_{2,+}) \\ &\overset{(iv)}{\sim} (\phi_b(d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_{3,+}, \phi_b(c_{3,+})^{-1} \texttt{a}^{-1}, \texttt{a} \phi_b(d_{2,+})) \\ &= ({c'_1}^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c'_2, {c'_2}^{-1} \texttt{a}^{-1}, \texttt{a} c'_1) \end{align*} for $c'_1 = \phi_b(d_{2,+})^{-1}$ and $c'_2 = c_{3,+}$ and hence of type $\text{[T2$\bt$]}$. Here $\sim$ denotes the equivalences on triples defined in Definition \ref{defn:equivalent triples} with the corresponding numbering $(i) - (iv)$. \item $c_{1,-}$ is non-trivial and starts with first letter $\texttt{b}$. Then define $d_1$ via $c_{1,-} = \texttt{b} d_1$. Hence $\alpha(x_1, x_2, x_3)$ equals: \begin{equ*}{ccl} & &(d_1^{-1} \texttt{b}^{-1} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} \texttt{b} d_1) \\ &\overset{(iv)}{\sim} &(\phi_b(d_1)^{-1} \texttt{b} \texttt{a} \phi_b(d_{2,+}), \phi_b(d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} \phi_b(c_{3,+}), \phi_b(c_{3,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \phi_b(d_1)) \\ &= &({c'_1}^{-1} \texttt{b} \texttt{a} c'_2, {c'_2}^{-1} \texttt{a}^{-1} \texttt{b} c'_3, {c'_3}^{-1} \texttt{b}^{-1} c'_1) \end{equ*} for $c'_1 = \phi_b(d_1)$, $c'_2 = \phi_b(d_{2,+})$, $c'_3 = \texttt{a} \phi_b(c_{3,+})$ and hence is of type $\text{[T1$\bt$]}$. \item $c_{1,-}$ is non-trivial and starts with first letter $\texttt{b}^{-1}$. Then define $d_1$ via $c_{1,-} = \texttt{b}^{-1} d_1$. Hence $\alpha(x_1, x_2, x_3)$ equals: \begin{equ*}{ccl} & &(d_1^{-1} \texttt{b} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} d_1) \\ & \overset{(ii)}{\sim} & d_1^{-1} \texttt{b} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} d_1) \\ & = & ({c'_1}^{-1} \texttt{b} \texttt{a} c'_2, {c'_2}^{-1} \texttt{a}^{-1} \texttt{b} c'_3, {c'_3}^{-1} \texttt{b}^{-1} c'_1) \end{equ*} for $c'_1 = d_1$, $c'_2 = c_{3,+}$, $c'_3 = \texttt{a} d_{2,+}$ and hence of type $\text{[T1$\bt$]}$. \end{itemize} \item $c_2 = \texttt{a}^{-1} d_2$ where $d_2 \in \mathcal{A}$. \[ \alpha(x_1, x_2, x_3)=((c_{1,-})^{-1} \texttt{a} \texttt{b} \texttt{a}^{-1} d_{2,-}, (d_{2,-})^{-1} \texttt{a} c_{3,+}, (c_{3,+})^{-1} \texttt{a}^{-1} c_{1,-}) \] which is of type $\text{[T1$\bt$]}$ if $c_{3,+}$ is non-trivial and of type $\text{[T2$\bt$]}$, else. This can be seen analogously to the previous case. \end{itemize} \item Type $\text{[T1$\bt$]}$: Suppose $(x_1,x_2,x_3) = (c_1^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1} \texttt{b} c_3, c_3^{-1} \texttt{b}^{-1} c_1)$. Up to equivalence, there are the following sub-cases: \begin{itemize} \item Both of $c_1, c_3$ are empty. Then \[ \alpha(x_1, x_2, x_3)= ( \texttt{b} \texttt{a} c_{2,+}, (c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} ) \] which is of type $\text{[T1$\bt$]}$ \item $c_1$ is not empty, $c_3$ is empty. Then either \begin{itemize} \item $c_1 = \texttt{a} d_1$. In this case \[ \alpha(x_1, x_2, x_3)= ((d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} \texttt{a} d_{1,+}) \] which is of type $\text{[T1$\bt$]}$ \item $c_1 = \texttt{a}^{-1} d_1$. In this case \[ \alpha(x_1, x_2, x_3)= ((d_{1,-})^{-1} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}, \texttt{b}^{-1} \texttt{a}^{-1} d_{1,+}) \] which is of type $\text{[T1$\at$]}$. \end{itemize} \item $c_1$ is empty and $c_3$ is not. Then either \begin{itemize} \item $c_3 = \texttt{a} d_3$, in which case \[ \alpha(x_1, x_2, x_3)= ( \texttt{b} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} d_{3,+}, (d_{3,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1}) \] which is of type $\text{[T1$\bt$]}$. \item $c_3 = \texttt{a}^{-1} d_3$, in which case \[ \alpha(x_1, x_2, x_3)= ( \texttt{b} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} d_{3,-}, (d_{3,-})^{-1} \texttt{a} \texttt{b}^{-1}) \] which is of type $\text{[T1$\at$]}$. \end{itemize} \item Both of $c_1, c_3$ are non-empty. Then either \begin{itemize} \item $c_1 = \texttt{a} d_1$, $c_3 = \texttt{a} d_3$. In this case \[ \alpha(x_1, x_2, x_3)= ( (d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} d_{3,+}, (d_{3,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} d_{1,+}) \] which is of type $\text{[T1$\bt$]}$. \item $c_1 = \texttt{a} d_1$, $c_3 = \texttt{a}^{-1} d_3$. In this case \[ \alpha(x_1, x_2, x_3)= ( (d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} d_{3,-}, (d_{3,-})^{-1} \texttt{a} d_{1,+}) \] which is of type $\text{[T1$\bt$]}$ if $d_{3,-}$ is non-trivial, and of type $\text{[T2$\bt$]}$, else. \item $c_1 = \texttt{a}^{-1} d_1$, $c_3 = \texttt{a} d_3$. In this case \[ \alpha(x_1, x_2, x_3)= ( (d_{1,-})^{-1} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} d_{3,+}, (d_{3,+})^{-1} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\bt$]}$ if $d_{1,-}$ is non-trivial and of type $\text{[T2$\bt$]}$, else. \item $c_1 = \texttt{a}^{-1} d_1$, $c_3 = \texttt{a}^{-1} d_3$. In this case \[ \alpha(x_1, x_2, x_3)= ( (d_{1,-})^{-1} \texttt{a} c_{2,+},(c_{2,+})^{-1} \texttt{a}^{-1} \ d_{3,-}, (d_{3,-})^{-1} \texttt{a} \texttt{b}^{-1} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\bt$]}$ if $c_{2,+}$ is non-trivial and of type $\text{[T2$\bt$]}$, else. \end{itemize} \end{itemize} \item Type $\text{[T2$\at$]}$: Suppose $(x_1,x_2,x_3) = (c_1^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b} c_2, c_2^{-1} \texttt{b}^{-1}, \texttt{b} c_1)$. We distinguish between the following cases \begin{itemize} \item Both of $c_1, c_2$ are empty. Then \[ \alpha(x_1,x_2,x_3) = (\texttt{b}^{-1} \texttt{a} \texttt{b}, \texttt{b}^{-1}, \texttt{b}) \] which is of type $\text{[T2$\at$]}$. \item One of $c_1, c_2$ is empty. Up to equivalence and changing indices we may assume that $c_2$ is empty. Then either \begin{itemize} \item $c_1 = \texttt{a} d_1$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b}, \texttt{b}^{-1}, \texttt{b} \texttt{a} d_{1,+}) \] which is of type $\text{[T2$\at$]}$ or \item $c_1 = \texttt{a}^{-1} d_1$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,-})^{-1} \texttt{a} \texttt{b}, \texttt{b}^{-1}, \texttt{b} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\bt$]}$. \end{itemize} \item Both of $c_1, c_2$ are non-empty. Then either \begin{itemize} \item $c_1 = \texttt{a} d_1$, $c_2 = \texttt{a} d_2$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1}, \texttt{b} \texttt{a} d_{1,+}) \] which is of type $\text{[T1$\bt$]}$ or \item $c_1 = \texttt{a} d_1$, $c_2 = \texttt{a}^{-1} d_2$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b} \texttt{a}^{-1} d_{2,-}, (d_{2,-})^{-1} \texttt{a} \texttt{b}^{-1}, \texttt{b} \texttt{a} d_{1,+}) \] which is of type $\text{[T2$\at$]}$ or \item $c_1 = \texttt{a}^{-1} d_1$, $c_2 = \texttt{a} d_2$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,-})^{-1} \texttt{a} d_{2,+}, (d_{2,+})^{-1} \texttt{a}^{-1} \texttt{b}^{-1}, \texttt{b} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\at$]}$ or \item $c_1 = \texttt{a}^{-1} d_1$, $c_2 = \texttt{a}^{-1} d_2$ in which case \[ \alpha(x_1,x_2,x_3) = ((d_{1,-})^{-1} \texttt{a} \texttt{b} \texttt{a}^{-1} d_{2,-}, (d_{2,-})^{-1} \texttt{a} \texttt{b}^{-1}, \texttt{b} \texttt{a}^{-1} d_{1,-}) \] which is of type $\text{[T1$\bt$]}$. \end{itemize} \end{itemize} \item Type $\text{[T2$\bt$]}$: Suppose $(x_1,x_2,x_3) = (c_1^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_2, c_2^{-1} \texttt{a}^{-1}, \texttt{a} c_1)$. We see that \[ \alpha(x_1,x_2,x_3) = ((c_{1,+})^{-1} \texttt{a}^{-1} \texttt{b} \texttt{a} c_{2,+}, (c_{2,+})^{-1} \texttt{a}^{-1}, \texttt{a} c_{1,+} ) \] which is of type $\text{[T2$\bt$]}$. \end{enumerate} This concludes the proof of Lemma \ref{lemma:alpha keeps thin.}. \end{proof} \subsection{Brooks Quasimorphisms, Homomorphisms and Letter-Thin Triples} \label{subsec:brooks qm, homomorphisms and letter-thin} For what follows we want to study how the Brooks quasimorphism $\eta_0 = \eta_{\texttt{a} \texttt{b}} - \eta_{\texttt{b} \texttt{a}}$ defined in Example \ref{exmp: extemal brooks quasimorphisms on free group} or certain homomorphisms behave on letter-thin triples. This will be done in Propositions \ref{prop:letter thin triples and two quasimorphisms} and \ref{prop: letter thin triples and homomorphisms}, respectively. \begin{prop} \label{prop:letter thin triples and two quasimorphisms} Let $\eta_0 = \eta_{\texttt{a} \texttt{b}} - \eta_{\texttt{b} \texttt{a}} \colon \mathbb{F}_2 \to \mathbb{Z}$ be as above. Then \[ |\eta_0(x_1) + \eta_0(x_2) + \eta_0(x_3)| = 1 \] for every letter-thin triple $(x_1,x_2,x_3)$. In particular $\eta_0(x_1)+\eta_0(x_2)+\eta_0(x_3) \in \{ -1, +1 \}$. \end{prop} \begin{proof} First note that if $w = w_1 w_2 \in \mathbb{F}_2$ as a reduced word and if $\texttt{z}_1$ is the last letter of $w_1$ and $\texttt{z}_2$ is the first letter of $w$, then \begin{align} \label{equ:split up words brooks} \eta_0(w) &= \eta_0(w_1) + \eta_0(\texttt{z}_1 \texttt{z}_2) + \eta_0(w_2). \end{align} Let $(x_1,x_2,x_3)$ be a triple. Note that the value \[ |\eta_0(x_1) + \eta_0(x_2) + \eta_0(x_3)| \] is invariant under the equivalences $(i)$ and $(ii)$ of Definition \ref{defn:equivalent triples}. Up to these equivalences we see that any letter-thin triple $(x_1,x_2,x_3)$ is equivalent via $(i)$ and $(ii)$ to the following: \begin{itemize} \item Type $\text{[T1$\at$]}$: $(c_1^{-1} \texttt{x} \texttt{y} c_2, c_2^{-1} \texttt{y}^{-1} \texttt{x} c_3, c_3^{-1} \texttt{x}^{-1} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. If $c_i$ is empty set $\texttt{z}_i = e$. Else let $\texttt{z}_i$ be the first letter of $c_i$. Then, by using successively Equation (\ref{equ:split up words brooks}) we see that \begin{align*} \eta_0(x_1) &= \eta_0(c_1^{-1}) + \eta_0(\texttt{z}_1^{-1} \texttt{x}) + \eta_0(\texttt{x} \texttt{y}) + \eta_0(\texttt{y} \texttt{z}_2) + \eta_0(c_2) \\ \eta_0(x_2) &= \eta_0(c_2^{-1}) + \eta_0(\texttt{z}_2^{-1} \texttt{y}^{-1}) + \eta_0(\texttt{y}^{-1} \texttt{x}) + \eta_0(\texttt{x} \texttt{z}_3) + \eta_0(c_3) \\ \eta_0(x_3) &= \eta_0(c_3^{-1}) + \eta_0(\texttt{z}_3^{-1} \texttt{x}^{-1}) + \eta_0(\texttt{x}^{-1} \texttt{z}_1) + \eta_0(c_1) \end{align*} Using that $\eta_0(c^{-1}) = - \eta_0(c)$ for any $c \in \mathbb{F}_2$ we see that \[ |\eta_0(x_1) + \eta_0(x_2) + \eta_0(x_3)| = |\eta_0(\texttt{x} \texttt{y}) + \eta_0(\texttt{y}^{-1} \texttt{x})| \] and hence we see that for any choice $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$, $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$ \[ |\eta_0(x_1) + \eta_0(x_2) + \eta_0(x_3)| = 1. \] \item Type $\text{[T1$\bt$]}$: $(c_1^{-1} \texttt{y} \texttt{x} c_2, c_2^{-1} \texttt{x}^{-1} \texttt{y} c_3, c_3^{-1} \texttt{y}^{-1} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. This case is analogous to the previous case. \item Type $\text{[T2$\at$]}$: $(c_1^{-1} \texttt{y}^{-1} \texttt{x} \texttt{y} c_2, c_2^{-1} \texttt{y}^{-1}, \texttt{y} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. Again, if $c_i$ is empty set $\texttt{z}_i = e$. Else let $\texttt{z}_i$ be the first letter of $c_i$. By successively using Equation (\ref{equ:split up words brooks}) we see that \begin{align*} \eta_0(x_1) &= \eta_0(c_1^{-1}) + \eta_0(\texttt{z}_1^{-1} \texttt{y}^{-1}) + \eta_0(\texttt{y}^{-1} \texttt{x}) + \eta_0(\texttt{x} \texttt{y}) + \eta_0(\texttt{y} \texttt{z}_2) + \eta_0(c_2) \\ \eta_0(x_2) &= \eta_0(c_2^{-1}) + \eta_0(\texttt{z}_2^{-1} \texttt{y}^{-1}) \\ \eta_0(x_3) &= \eta_0(\texttt{y} \texttt{z}_1) + \eta_0(c_1) \end{align*} and again we observe that \begin{align*} |\eta_0(x_1) + \eta_0(x_2) + \eta_0(x_3)| &= |\eta_0(\texttt{y}^{-1} \texttt{x}) + \eta_0(\texttt{x} \texttt{y})| = 1 \end{align*} for any choice of $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$, $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. \item Type $\text{[T2$\bt$]}$: $(c_1^{-1} \texttt{x}^{-1} \texttt{y} \texttt{x} \texttt{b} c_2, c_2^{-1} \texttt{x}^{-1}, \texttt{x} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. This case is analogous to the previous case. \end{itemize} \end{proof} Recall that $\eta_\texttt{x} \colon \mathbb{F}_2 \to \mathbb{Z}$ denotes the homomorphism which counts the letter $\texttt{x}$. \begin{prop} \label{prop: letter thin triples and homomorphisms} Let $\eta = \eta_\texttt{x} + \eta_\texttt{y} \colon \mathbb{F}_2 \to \mathbb{Z}$ for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ or $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. Then \[ |\eta(x_1) + \eta(x_2) + \eta(x_3)| = 1 \] for any $(x_1,x_2,x_3)$ letter-thin. In particular $\eta(x_1) + \eta(x_2) + \eta(x_3) \in \{ -1, +1 \}$. \end{prop} \begin{proof} Let $\eta$ be as in the proposition and suppose that $(x_1,x_2,x_3)$ is letter-thin. Just like in the proof of the previous proposition we will consider the four different types of letter thin triples up to equivalences $(i)$ and $(ii)$ of Definition \ref{defn:equivalent triples}. \begin{itemize} \item Type $\text{[T1$\at$]}$: $(c_1^{-1} \texttt{x} \texttt{y} c_2, c_2^{-1} \texttt{y}^{-1} \texttt{x} c_3, c_3^{-1} \texttt{x}^{-1} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. We directly calculate, using that $\eta$ is a homomorphism: \begin{align*} \eta(x_1) &= \eta(c_1^{-1} \texttt{x} \texttt{y} c_2) = -\eta(c_1) + \eta(\texttt{x}) + \eta(\texttt{y}) + \eta(c_2) \\ \eta(x_2) &= \eta(c_2^{-1} \texttt{y}^{-1} \texttt{x} c_3) = -\eta(c_2) - \eta( \texttt{y}) + \eta(\texttt{x}) + \eta(c_3) \\ \eta(x_3) &= \eta(c_3^{-1} \texttt{x}^{-1} c_1) = - \eta(c_3) - \eta(\texttt{x}) + \eta(c_1) \end{align*} and hence \[ |\eta(x_1)+ \eta(x_2) + \eta(x_3)| = |\eta(\texttt{x})| = 1 \] for any $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$. \item Type $\text{[T1$\bt$]}$: $(c_1^{-1} \texttt{y} \texttt{x} c_2, c_2^{-1} \texttt{x}^{-1} \texttt{y} c_3, c_3^{-1} \texttt{y}^{-1} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. This case is analogous to the previous case. \item Type $\text{[T2$\at$]}$: $(c_1^{-1} \texttt{y}^{-1} \texttt{x} \texttt{y} c_2, c_2^{-1} \texttt{y}^{-1}, \texttt{y} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. Again we calculate \begin{align*} \eta(x_1) &= \eta(c_1^{-1} \texttt{y}^{-1} \texttt{x} \texttt{y} c_2) = - \eta(c_1) - \eta(\texttt{y}) + \eta(\texttt{x}) + \eta(\texttt{y}) + \eta(c_2) \\ \eta(x_2) &= \eta(c_2^{-1} \texttt{y}^{-1}) = -\eta(c_2) - \eta(\texttt{y}) \\ \eta(x_3) &= \eta(\texttt{y} c_1) = \eta(\texttt{y})+\eta(c_1) \end{align*} and hence again \[ |\eta(x_1)+ \eta(x_2) + \eta(x_3)| = |\eta(\texttt{x})| = 1 \] for any $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$. \item Type $\text{[T2$\bt$]}$: $(c_1^{-1} \texttt{x}^{-1} \texttt{y} \texttt{x} \texttt{b} c_2, c_2^{-1} \texttt{x}^{-1}, \texttt{x} c_1)$, for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$. This case is analogous to the previous case. \end{itemize} \end{proof} \section{Gaps via Letter-Quasimorphisms} \label{sec:gaps via Letter-Quasimorphisms} The aim of this section is to define letter-quasimorphisms and deduce the criterion for $1/2$ gaps in $\mathrm{scl}$. There will be two types of letter-quasimorphisms: \emph{(general) letter-quasimorphisms} (Definition~\ref{defn:letter quasihomomorphism}) and \emph{well-behaved letter-quasimorphisms} (Definition \ref{defn:well behaved letter quasimorphisms}). The former is useful for application, the latter will be useful for proofs. For each letter-quasimorphism $\Phi \colon G \to \mathcal{A}$ there will be an associated well-behaved letter-quasimorphism $\tilde{\Phi} \colon G \to \mathcal{A}$ where $\tilde{\Phi}(g)$ is obtained from $\Phi(g)$ by modifying its beginning and its end; see Proposition \ref{prop:every letter-qm induces well behaved}. \subsection{Letter-Quasimorphisms and Well-Behaved Letter-Quasimorphisms} \label{subsec:letter-quasimorphisms and well-behaved letter quasimorphisms} As always $\mathcal{A}$ denotes the set of alternating words of $\mathbb{F}_2$ in the generators $\texttt{a}$ and $\texttt{b}$. \begin{defn} \label{defn:letter quasihomomorphism} Let $G$ be a group. We say that $\Phi \colon G \to \mathcal{A}$ is a \emph{letter-quasimorphism} if $\Phi$ is alternating, i.e. $\Phi(g^{-1}) = \Phi(g)^{-1}$ for every $g \in G$ and if for every $g,h \in G$ one of the following holds: \begin{enumerate} \item \label{defn:letter-qm:thin} $\Phi(g) \Phi(h) \Phi(g h)^{-1} = e$, or \item \label{defn:letter-qm:general} there are elements $c_1, c_2, c_3 \in \mathcal{A}$ and letters $\texttt{x}_1, \texttt{x}_2, \texttt{x}_3$ such that either $\texttt{x}_1, \texttt{x}_2, \texttt{x}_3 \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and $\texttt{x}_1 \texttt{x}_2 \texttt{x}_3 \in \{ \texttt{a}, \texttt{a}^{-1} \}$ or $\texttt{x}_1, \texttt{x}_2, \texttt{x}_3 \in \{ \texttt{b}, \texttt{b}^{-1} \}$ and $\texttt{x}_1 \texttt{x}_2 \texttt{x}_3 \in \{ \texttt{b}, \texttt{b}^{-1} \}$ which satisfy that $\Phi(g) = c_1^{-1} \texttt{x}_1 c_2$, $\Phi(h) = c_2^{-1} \texttt{x}_2 c_3$ and $\Phi(g h)^{-1} = c_3^{-1} \texttt{x}_3 c_1$ as freely reduced alternating words. \end{enumerate} \end{defn} The motivating example for letter-quasimorphisms is the following: \begin{exmp} \label{exmp:letter quasimorphisms on free group} Consider the map $\Phi \colon \mathbb{F}_2 \to \mathcal{A}$ defined as follows. Suppose that $w \in \mathbb{F}_2$ has reduced representation $\texttt{a}^{n_1} \texttt{b}^{m_1} \cdots \texttt{a}^{n_k} \texttt{b}^{m_k}$ with all $n_i, m_i \in \mathbb{Z}$ where all but possibly $n_1$ and / or $m_k$ are non-zero. Then set \[ \Phi(w) = \texttt{a}^{\textrm{sign}(n_1)} \texttt{b}^{\textrm{sign}(m_1)} \cdots \texttt{a}^{\textrm{sign}(n_k)} \texttt{b}^{\textrm{sign}(m_k)} \] where $\textrm{sign} \colon \mathbb{Z} \to \{ +1, 0, -1 \}$ is defined as usual. This may be seen to be a letter-quasimorphism and will be vastly generalised to amalgamated free products; see Lemma \ref{lemma:amalgamated yields letter-quasimorphism}. Observe that for any group $G$ and any homomorphism $\Omega \colon G \to \mathbb{F}_2$ the map $\Phi \circ \Omega \colon G \to \mathcal{A}$ is a letter-quasimorphism. Suppose that $G$ is \emph{residually free}. Then for ever non-trivial element $g \in G$ there is a homomorphism $\Omega_g \colon G \to \mathbb{F}_2$ such that $\Omega_g(g) \in \mathbb{F}_2$ is nontrivial. By applying a suitable automorphism on $\mathbb{F}_2$ to $\Omega_g$ we may assume that $\Omega_g(g)$ starts in a power of $\texttt{a}$ and ends in a power of $\texttt{b}$. Then $\Phi_g := \Phi \circ \Omega_g$ is a letter quasimorphism such that $\Phi_g(g)$ is nontrivial and such that $\Phi_g(g^n) = \Phi_g(g)^n$. \end{exmp} \begin{defn} \label{defn:well behaved letter quasimorphisms} We will call triples $(x_1,x_2,x_3)$ \emph{degenerate} if they are equivalent to a triple $(w, w^{-1}, e)$ for some $w \in \mathcal{A}$. Let $G$ be a group. A map $\Psi \colon G \to \mathcal{A}$ is called \emph{well-behaved letter-quasimorphism} if $\Psi$ is alternating, i.e. $\Psi(g^{-1}) = \Psi(g)^{-1}$ for every $g \in G$, and for all $g,h \in G$, the triple \[ (\Psi(g), \Psi(h), \Psi(gh)^{-1}) \] is either letter-thin (see Definition \ref{defn:letter-thin}) or degenerate. \end{defn} \begin{rmk} \label{prop: alpha and beta preserve well-behaved letter-quasimorphisms} Note that a triple $(x_1,x_2,x_3)$ is degenerate if and only if there is some $w \in \mathcal{A}$ such that $(x_1,x_2,x_3)$ equals $(w,w^{-1},e)$, $(w,e, w^{-1})$ or $(e,w,w^{-1})$. Note that if $\Phi \colon G \to \mathcal{A}$ is a well-behaved letter-quasimorphism then also $\alpha \circ \Phi \colon G \to \mathcal{A}$ and $\beta \circ \Phi \colon G \to \mathcal{A}$ are well-behaved letter-quasimorphisms. This follows immediately from Lemma \ref{lemma:alpha keeps thin.} and the fact that $\alpha$ (resp. $\beta$) satisfies $\alpha(w^{-1}) = \alpha(w)^{-1})$ (resp. $\beta(w^{-1}) = \beta(w)^{-1})$) for any $w \in \mathcal{A}$. \end{rmk} It is easy to see that every well-behaved letter-quasimorphism is also a letter-quasimorphism. The contrary does not hold. The map $\Phi \colon \mathbb{F}_2 \to \mathcal{A}$ described in Example \ref{exmp:letter quasimorphisms on free group} is a letter-quasimorphism but not a well-behaved letter-quasimorphism. For example for $g= \texttt{a}$, $h = \texttt{a}$ we obtain $(\Phi(g), \Phi(h), \Phi(h^{-1} g^{-1})) = (\texttt{a}, \texttt{a}, \texttt{a}^{-1})$, which is neither letter-thin nor degenerate. However, we may assign to each letter-quasimorphism $\Phi$ a well-behaved letter-quasimorphism $\tilde{\Phi}$. This will be done by pre-composing $\Phi$ with a map $w \mapsto \tilde{w}$ defined as follows. Set $\tilde{w} = e$ whenever $w \in \{ \texttt{a}, e, \texttt{a}^{-1} \}$. Else let $\texttt{z}_s$ be the first and $\texttt{z}_e$ be the last letter of $w \in \mathcal{A}$. Define $\tilde{w}$ as the reduced element in $\mathbb{F}_2$ freely equal to $\tilde{w} := \zeta_s(\texttt{z}_s) w \zeta_e(\texttt{z}_e)$ where \[ \zeta_s(\texttt{z}) = \begin{cases} e & \text{ if } \texttt{z}=\texttt{a} \\ \texttt{a} & \text{ if } \texttt{z} = \texttt{b} \text{ or } \texttt{b}^{-1} \\ \texttt{a}^2 & \text{ if } \texttt{z} = \texttt{a}^{-1} \end{cases} \] and \[ \zeta_e(\texttt{z}) = \begin{cases} e & \text{ if } \texttt{z} = \texttt{a}^{-1} \\ \texttt{a}^{-1} & \text{ if } \texttt{z} = \texttt{b} \text{ or } \texttt{b}^{-1} \\ \texttt{a}^{-2} & \text{ if } \texttt{z} = \texttt{a}. \end{cases} \] The key point is that $\tilde{w}$ starts with $\texttt{a}$ and ends with $\texttt{a}^{-1}$, unless $w \in \{ \texttt{a}, e, \texttt{a}^{-1} \}$. Observe that $\zeta_e(\texttt{z})^{-1} = \zeta_s(\texttt{z})$, and hence the map $w \mapsto \tilde{w}$ is alternating, i.e. $\widetilde{w^{-1}}= \tilde{w}^{-1}$. For example, $\texttt{a} \mapsto e$, $\texttt{a} \texttt{b} \texttt{a}^{-1} \mapsto \texttt{a} \texttt{b} \texttt{a}^{-1}$ and $\texttt{a}^{-1} \texttt{b} \texttt{a} \texttt{b} \texttt{a} \mapsto \texttt{a} \texttt{b} \texttt{a} \texttt{b} \texttt{a}^{-1}$. If $\Phi \colon G \to \mathcal{A}$ is a letter-quasimorphism then we define $\tilde{\Phi} \colon G \to \mathcal{A}$ via $\tilde{\Phi}(g) := \widetilde{\Phi(g)}$. \begin{prop} \label{prop:every letter-qm induces well behaved} If $\Phi \colon G \to \mathcal{A}$ is a letter-quasimorphism then $\tilde \Phi \colon G \to \mathcal{A}$ is a well-behaved letter-quasimorphism, called the \emph{associated} well-behaved letter-quasimorphism. \end{prop} \begin{proof} As $w \mapsto \tilde{w}$ commutes with taking inverses, if $\Phi$ is alternating then so is $\tilde{\Phi}$. In what follows we will use the following easy to check claim. \begin{claim} Let $(x_1,x_2,x_3)$ be an arbitrary triple obtained from $(y_1,y_2,y_3)$ by applying a sequence of the equivalences $(i)$ and $(ii)$ of Definition \ref{defn:equivalent triples}. Then $(\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) \sim (\tilde{y}_1,\tilde{y}_2,\tilde{y}_3)$. In this case we say that the triples $(x_1,x_2,x_3)$ and $(y_1,y_2,y_3)$ are \emph{equivalent up to rotation and inverses}. \end{claim} Let $g,h \in G$. We wish to show that $(\tilde{\Phi}(g), \tilde{\Phi}(h), \tilde{\Phi}(gh)^{-1})$ is a letter-thin triple or degenerate, i.e. equivalent to $(w, w^{-1}, e)$ for some $w \in \mathcal{A}$. If $(\Phi(g), \Phi(h), \Phi(gh)^{-1})$ is equivalent up to rotation and inverses to $(u_1,u_2,u_3)$ the above claim implies that it suffices to check that $(\tilde{u}_1,\tilde{u}_2,\tilde{u}_3)$ is either letter-thin or equivalent to $(w, w^{-1}, e)$. First suppose that $g,h$ are as in Case (\ref{defn:letter-qm:thin}) of Definition \ref{defn:letter quasihomomorphism} i.e. $\Phi(g) \Phi(h) \Phi(gh)^{-1} = e$. If one of $\Phi(g)$, $\Phi(h)$ and $\Phi(gh)$ are trivial then the two other elements are inverses. Hence, up to rotation and taking inverses we may assume that \[ (\Phi(g), \Phi(h), \Phi(gh)^{-1})=(u,u^{-1},e) \] for some $u \in \mathcal{A}$. Hence $(\tilde{u}, \tilde{u}^{-1}, e)$ is degenerate. If none of $\Phi(g)$, $\Phi(h)$ and $\Phi(gh)^{-1}$ are trivial then, as $\Phi$ maps to alternating elements, there are elements $u_1, u_2$ such that $u_1$ ends in a power of $\texttt{a}$ and $u_2$ starts in a power of $\texttt{b}$, such that $(\Phi(g), \Phi(h), \Phi(gh))$ is equivalent up to rotation and taking inverses to $(u_1, u_2, u_3)$ where $u_3 = u_2^{-1} u_1^{-1}$ as a reduced word. Further, write $u_1 = u_1' \texttt{x}$ as a reduced word for $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$ and an appropriate word $u_1' \in \mathcal{A}$. If $u_1'$ is empty, then $\tilde{u}_1=e$. Let $\texttt{z}_2$ be the last letter of $u_2$. Then \[ (\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (e, \texttt{a} u_2 \zeta_e(\texttt{z}_2), \zeta_e(\texttt{z}_2)^{-1} u_2^{-1} \texttt{a}^{-1} ) \] which is equivalent to $(w, w^{-1}, e)$ for $w = \texttt{a} u_2 \zeta_e(\texttt{z}_2)$. If $u_1'$ is non-empty, let $\texttt{z}_1$ be the first letter of $u_1'$ and as before let $\texttt{z}_2$ be the last letter of $u_2$. Then \[ (\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (\zeta_s(\texttt{z}_1) u_1' \texttt{a}^{-1} , \texttt{a} u_2 \zeta_e(\texttt{z}_2), \zeta_e(\texttt{z}_2)^{-1} u_2^{-1} \texttt{x}^{-1} u'^{-1}_1 \zeta_s(\texttt{z}_1)^{-1} ) \] which can be seen to be letter-thin of type $\text{[T1$\at$]}$. This shows that $(\tilde{\Phi}(g), \tilde{\Phi}(h), \tilde{\Phi}(gh)^{-1})$ is letter-thin or degenerate if $\Phi(g) \Phi(h) \Phi(gh)^{-1} = e$. Hence, suppose that $g,h$ are as in Case (\ref{defn:letter-qm:general}) of Definition \ref{defn:letter quasihomomorphism}. Then $(\Phi(g), \Phi(h), \Phi(gh))$ is equivalent up to rotation and inverses to \[ (u_1,u_2,u_3) = (c_1^{-1} \texttt{x} c_2, c_2^{-1} \texttt{x} c_3, c_3^{-1} \texttt{x}^{-1} c_1) \] for $\texttt{x} \in \{ \texttt{a}, \texttt{b} \}$ where $c_1,c_2,c_3 \in \mathcal{A}$ are \emph{arbitrary} i.e. we do not assume that $c_2$ is non-empty as in Definition \ref{defn:letter-thin}. First, suppose that $\texttt{x} = \texttt{b}$. Define \[ d_i = \begin{cases} c_i \zeta_e(\texttt{z}_i) & \text{ if } c_i \not = e \\ \texttt{a}^{-1} & \text{ else} \end{cases} \] where $\texttt{z}_i$ is the last letter of $c_i$. We may see then, that \[ (\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (d_1^{-1} \texttt{b} d_2, d_2^{-1} \texttt{b} d_3, d_3^{-1} \texttt{b}^{-1} d_1) \] which is letter thin of type $\text{[T1$\bt$]}$ as all $d_i$'s are non trivial. Hence, suppose that $\texttt{x} = \texttt{a}$. For what follows, if $c_i$ is non-empty, we will denote by $\texttt{z}_i$ the last letter of $c_i$ and let $d_i$ be the freely reduced word represented by $c_i \zeta_e(\texttt{z}_i)$. Observe that if $c_i$ is non-empty then so is $d_i$. There are the following cases: \begin{itemize} \item[(i)] $c_1 \not = e$, $c_2 \not = e$, $c_3 \not = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (d_1^{-1} \texttt{a} d_2, d_2^{-1} \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} d_1) $ \item[(ii)] $c_1 \not = e$, $c_2 \not = e$, $c_3 = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (d_1^{-1} \texttt{a} d_2, d_2^{-1} \texttt{a}^{-1} , \texttt{a} d_1) $ \item[(iii)] $c_1 \not = e$, $c_2 = e$, $c_3 \not = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (d_1^{-1} \texttt{a}^{-1} , \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} d_1) $ \item[(iv)] $c_1 = e$, $c_2 \not = e$, $c_3 \not = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (\texttt{a} d_2, d_2^{-1} \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} ) $ \item[(v)] $c_1 \not = e$, $c_2 = e$, $c_3 = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (d_1^{-1} \texttt{a}, e , \texttt{a}^{-1} d_1) $ \item[(vi)] $c_1 = e$, $c_2 \not = e$, $c_3 = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = ( \texttt{a} d_2, d_2^{-1} \texttt{a}^{-1} , e ) $ \item[(vii)] $c_1 = e$, $c_2 = e$, $c_3 \not = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = ( e, \texttt{a} d_3, d_3^{-1} \texttt{a}^{-1} ) $ \item[(viii)] $c_1 = e$, $c_2 = e$, $c_3 = e$: Then $(\tilde{u}_1, \tilde{u}_2, \tilde{u}_3) = (e, e, e) $ \end{itemize} and cases $(i)-(iv)$ can be seen to be letter-thin of type $\text{[T1$\at$]}$ and cases $(v)-(viii)$ can be seen to be degenerate. This completes the proof. \end{proof} Both letter-quasimorphisms and well-behaved letter-quasimorphisms are examples of \emph{quasimorphism} in the sense of Hartnick--Schweitzer \cite{hartnick-schweitzer}; see Subsection \ref{subsec:generalised qm}. Let $\Phi$ be a letter-quasimorphism and let $\bar{\eta} \colon \mathbb{F}_2 \to \mathbb{R}$ be an ordinary homogeneous quasimorphism with defect $D$ which vanishes on the generators $\texttt{a}, \texttt{b}$. We wish to calculate the defect of $\bar{\eta}\circ \Phi$. Fix $g,h \in G$. If $\Phi(g) \Phi(h) = \Phi(gh)$, then \[ |\bar{\eta} \circ \Phi(g) + \bar{\eta} \circ \Phi(h) - \bar{\eta} \circ \Phi(gh) | \leq D \] Else, up to rotating the factors we see that \[ (\Phi(g), \Phi(h), \Phi(gh)^{-1}) = (d_1^{-1} \texttt{x} d_2, d_2^{-1} d_3, d_3^{-1} d_1) \] for some appropriate $d_1,d_2,d_3 \in \mathcal{A}$, $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1}, \texttt{b}, \texttt{b}^{-1} \}$. Then, as $\bar{\eta}$ is homogeneous $\bar{\eta}(d_1^{-1} \texttt{x} d_2) = \bar{\eta}(\texttt{x} d_2 d_1^{-1})$ and hence $|\bar{\eta}(\texttt{x} d_2 d_1^{-1})- \bar{\eta}(d_2 d_1^{-1})| \leq D$ as we assumed that $\bar{\eta}$ vanishes on the generators. Then we may estimate \[ |\bar{\eta} \circ \Phi(g) + \bar{\eta} \circ \Phi(h) + \bar{\eta} \circ \Phi(gh)^{-1} | = |\bar{\eta}(d_1^{-1} \texttt{x} d_2) + \bar{\eta}(d_2^{-1} d_3) + \bar{\eta}(d_3^{-1} d_1)| \leq 4D \] and after homogenisation of $\phi = \bar{\eta} \circ \Phi(g)$ we estimate that $D(\bar{\phi}) \leq 8D$ using that homogenisation at most doubles the defect; see Proposition \ref{prop:defect of homogenisation doubles}. Hence if $\Phi(g) \in \mathbb{F}_2'$ is such that $\Phi(g^n) = w^n$ for some non-trivial $w \in \mathcal{A}$ which also lies in the commutator subgroup $\mathbb{F}'$ and $\eta \colon \mathbb{F}_2 \to \mathbb{R}$ is homogenous and extremal to $\Phi(g)$ with defect $1$ then, by Bavard, \[ \mathrm{scl}(g) \geq \frac{\bar{\phi}(g)}{16} \geq \frac{\bar{\eta}(\Phi(g))}{16} = \frac{\mathrm{scl}(\Phi(g))}{8} \] and in particular $\mathrm{scl}(g) \geq 1/16$. This is already a good estimate but we see that we can do much better; see Theorem \ref{thm:main}. We will see that this notion is much more flexible than homomorphisms. There are groups $G$ such that for every non-trivial element $g \in G'$ there is a letter-quasimorphisms $\Phi$ such that $\Phi(g)$ is non-trivial. This may be possible even if the group $G$ is not residually free, for example if $G$ is a right-angled Artin group; see Section \ref{sec:RAAGs and scl}. \subsection{Main Theorem} We now deduce our main criterion for $1/2$-gaps in $\mathrm{scl}$: \begin{thm} \label{thm:main} Let $G$ be a group and let $g_0 \in G$. Suppose there is a letter-quasimorphism $\Phi \colon G \to \mathcal{A}$ such that $\Phi(g_0)$ is non-trivial and that $\Phi(g_0^n) = \Phi(g_0)^n$ for all $n \in \mathbb{N}$. Then there is an explicit homogeneous quasimorphism $\bar{\phi} \colon G \to \mathbb{R}$ with $D(\bar{\phi}) \leq 1$ such that $\bar{\phi}(g_0) \geq 1$. If $g_0 \in G'$, then $\textrm{scl}(g_0) \geq 1/2$. If $G$ is countable then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ such that $[\delta^1 \bar{\phi}]=\rho^*\mathrm{eu}^\mathbb{R}_b \in \mathrm{H}^2_b(G,\mathbb{R})$, for $\mathrm{eu}^\mathbb{R}_b$ the real bounded Euler class. \end{thm} In particular, the $\Phi(g_0) \in \mathcal{A}$ of the Theorem has to be alternating and of \emph{even length}, else $\Phi(g_0)^n$ would not be an alternating word. \begin{proof} Let $\Phi \colon G \to \mathcal{A}$ be the letter-quasimorphism as in the theorem and let $\tilde{\Phi} \colon G \to \mathcal{A}$ be the associated well behaved letter-quasimorphism described above. As $\tilde \Phi(g_0)$ is obtained from $\Phi(g_0)$ by just possibly changing the beginning and the end of the word $\Phi(g_0)$, it is easy to see that there are words $c_1, c_2, w \in \mathcal{A}$ such that $\tilde \Phi(g_0^n) = c_1^{-1} w^{n-1} c_2$ as a freely reduced word for all $n \geq 1$. Consider the sequence $\gamma_i$ of maps $\gamma_i \colon \mathcal{A} \to \mathcal{A}$ defined via $\gamma_0 = id$, $\gamma_{2k+1} = (\alpha \circ \beta )^k \circ \alpha$ and $\gamma_{2k} = (\beta \circ \alpha)^k$ and note that $\gamma_i$ is either $\alpha \circ \gamma_{i-1}$ or $\beta \circ \gamma_{i-1}$; see Definition \ref{defn:alpha and beta}. Analogously define the sequence $\bar{\gamma}_i \colon \bar{\mathcal{A}}_0 \to \bar{\mathcal{A}}_0$ of maps via $\bar{\gamma}_0 = id$, $\bar{\gamma}_{2k+1} = (\bar{\alpha} \circ \bar{\beta} )^k \circ \bar{\alpha}$ and $\bar{\gamma}_{2k} = (\bar{\beta} \circ \bar{\alpha})^k$ and note that every $\bar{\gamma}_i$ is either $\bar{\alpha} \circ \bar{\gamma}_{i-1}$ or $\bar{\beta} \circ \bar{\gamma}_{i-1}$; see Definition \ref{defn:maps alpha bar and beta bar}. For every letter-thin triple $(x_1, x_2, x_3)$ also $\gamma_i(x_1, x_2, x_3)$ is letter-thin by multiple applications of Lemma \ref{lemma:alpha keeps thin.}. Furthermore, if $(x_1, x_2, x_3)$ is a degenerate triple as in Definition \ref{defn:well behaved letter quasimorphisms}, then also $\gamma_i(x_1,x_2,x_3)$ is a degenerate triple as $\gamma_i$ satisfies $\gamma_i(x^{-1}) = \gamma_i(x)^{-1}$ for all $x \in \mathcal{A}$. Let $w$ be as above and consider the sequence $\bar{\gamma}_i(w) \in \bar{\mathcal{A}}_0$ of conjugacy classes in $\bar{\mathcal{A}}_0$. By Proposition \ref{prop:alpha on conjugacy classes decreases}, if $\bar{\gamma}_i(w)$ is a non-trivial equivalence class in the commutator subgroup then $\bar{\gamma}_{i+1}(w)$ either is non-trivial and has strictly smaller word-length or $\bar{\gamma}_{i}(w) = \bar{\gamma}_{i+1}(w)$; see also Remark \ref{rmk:on conjugacy classes for acl}. Hence, there are the following cases: \begin{itemize} \item For all $i \in \mathbb{N}$, $\bar{\gamma}_i(w)$ lies in $\mathbb{F}_2'$, the commutator subgroup. Then, there is an $N$ such that $\bar{\gamma}_{N}(w) = \bar{\gamma}_{N+i}(w)$ for all $i \in \mathbb{N}$. Both $\bar{\alpha}$ and $\bar{\beta}$ then fix the class $\bar{\gamma}_N(w)$. By Proposition \ref{prop:alpha on conjugacy classes decreases}, $\bar{\gamma}_N(w)$ may be represented by $[\texttt{a}, \texttt{b}]^k$ for $k \in \mathbb{Z} \backslash \{ 0 \}$. Hence, the quasimorphism $\eta_0 = \eta_{\texttt{a} \texttt{b}} - \eta_{\texttt{b} \texttt{a}}$ studied in Example \ref{exmp: extemal brooks quasimorphisms on free group} and Proposition \ref{prop:letter thin triples and two quasimorphisms}, satisfies that $|\bar{\eta}_0(\bar{\gamma}_N(w))| \geq 2$. Define $\psi \colon G \to \mathbb{Z}$ via \[ \psi(g) := \begin{cases} \eta_0 \circ \gamma_N \circ \tilde \Phi(g) & \text{ if } \gamma_N \circ \tilde \Phi(g) \not = e \\ 1 & \text{ else} \end{cases} \] and observe that if $\gamma_N \circ \tilde \Phi(g)$ is non-trivial, then $\psi(g^{-1}) = - \psi(g)$. By multiple applications of Proposition \ref{prop:powers of alpha, beta}, we see that there are some elements $d_1, d_2, w' \in \mathcal{A}$ such that $\gamma_N \circ \tilde \Phi(g^n) = d_1 w'^{n-K} d_2$ for all $n \geq K$, for $K \leq N+1$ and $[w'] = \bar{\gamma}_N([w])$. We see that \begin{align*} |\bar{\psi}(g_0)| &= \lim_{n \to \infty} |\psi(g_0^n)|/n \\ &= \lim_{n \to \infty} |\eta_0 \circ \gamma_N \circ \tilde \Phi(g_0^n)| / n \\ &= \lim_{n \to \infty} |\eta_0 (d_1 w'^{n-K} d_2)|/n \\ &= |\bar{\eta_0} (\bar{\gamma}_N([w]))| \geq 2. \end{align*} By multiple applications of Lemma \ref{lemma:alpha keeps thin.} and the fact that $\alpha(w^{-1}) = \alpha(w)^{-1}$, $\beta(w^{-1}) = \beta(w)^{-1}$ and $\alpha(e)=e=\beta(e)$ we see that $\gamma_N \circ \tilde \Phi$ is a well-behaved letter-quasimorphism. Let $g,h \in G$. We wish to compute the defect $| \psi(g) + \psi(h) - \psi(gh) |$. To ease notation define $(x_1,x_2,x_3)$ as the triple \begin{align*} (x_1,x_2,x_3) = (\gamma_N \circ \tilde \Phi(g), \gamma_N \circ \tilde \Phi(h), \gamma_N \circ \tilde \Phi(gh)^{-1}) \end{align*} which is either letter-thin or degenerate as $\gamma_N \circ \tilde \Phi$ is a well-behaved letter-quasimorphism. If $(x_1,x_2,x_3)$ letter-thin then none of its components $x_i$ are empty. Hence \begin{equ*}{rcl} |\psi(g)+\psi(h)-\psi(gh)| &=& |\psi(g)+\psi(h)+\psi(h^{-1} g^{-1})| \\ &=& |\eta_0(x_1)+ \eta_0(x_2) + \eta_0(x_3)| \\ &=& 1 \end{equ*} by Proposition \ref{prop:letter thin triples and two quasimorphisms}. Suppose that $(x_1,x_2,x_3)$ is degenerate. Then one may see that $(x_1,x_2,x_3)$ equals $(v, v^{-1}, e)$, $(v,e, v^{-1})$ or $(e, v, v^{-1})$ for some $v \in \mathcal{A}$. Using that $-\eta_0(v) = \eta_0(v^{-1})$ for $e \not = v \in \mathcal{A}$ we see that two terms of $\psi(g) + \psi(h) - \psi(gh)$ will cancel and for the other will be $1$. Hence, $|\psi(g) + \psi(h) - \psi(gh)| =1$. Finally, if $(x_1,x_2,x_3) = (e,e,e)$ then $\psi(g) + \psi(h) - \psi(gh) = 1$. In particular we see that for any $g,h \in G$, $\psi(g) + \psi(h) - \psi(gh) \in \{ 1, -1 \}$, so $\psi$ is a quasimorphism. Moreover, by possibly changing the sign of $\psi$ we may assume that $\bar{\psi}(g_0) \geq 2$. \item Otherwise, let $N \in \mathbb{N}$ be the smallest integer such that $\bar{\gamma}_N(w) \not \in \mathbb{F}_2'$. Then $\bar{\gamma}_N(w) \in \mathcal{A}$ is represented by a non-trivial even word which is not in the commutator. Hence \[ |\eta_\texttt{a}(\bar{\gamma}_N(w))| + |\eta_\texttt{b}(\bar{\gamma}_N(w))| \geq 2 \] where $\eta_\texttt{a} \colon \mathbb{F}_2 \to \mathbb{Z}$ (resp. $\eta_\texttt{b} \colon \mathbb{F}_2 \to \mathbb{Z}$) denotes the homomorphism counting the letter $\texttt{a}$ (resp. $\texttt{b}$). Observe that homomorphisms are already homogenised. There is some $\eta = \eta_\texttt{x} + \eta_\texttt{y}$ where $\texttt{x} \in \{ \texttt{a}, \texttt{a}^{-1} \}$, $\texttt{y} \in \{ \texttt{b}, \texttt{b}^{-1} \}$ such that $\eta(\bar{\gamma}_N(w)) \geq 2$. As before, define $\psi \colon G \to \mathbb{Z}$ via \[ \psi(g) := \begin{cases} \eta\circ \gamma_N \circ \tilde \Phi(g) & \text{ if } \gamma_N \circ \tilde \Phi(g) \not = e \\ 1 & \text{ else}. \end{cases} \] By a similar argument as above we see that $\bar{\psi}(g_0) \geq 2$. Again, the triple \begin{align*} (x_1,x_2,x_3) = (\gamma_N \circ \tilde \Phi(g), \gamma_N \circ \tilde \Phi(h), \gamma_N \circ \tilde \Phi(h^{-1} g^{-1})) \end{align*} is either letter-thin or degenerate. By the same argument as in the previous case and using Proposition \ref{prop: letter thin triples and homomorphisms} we conclude that for any $g,h \in G$, $|\psi(g) + \psi(h) - \psi(gh)|=1$, so $\psi$ is a quasimorphism. In particular we see that for any $g,h \in G$, $\psi(g) + \psi(h) - \psi(gh) \in \{ 1, -1 \}$. \end{itemize} In both cases, set \[ \phi(g) := \frac{\psi(g)+1}{2}. \] Then we see that, for any $g,h \in G$, \[ \delta^1 \phi(g,h) = \phi(g) + \phi(h) - \phi(gh) = \frac{\psi(g) + \psi(h) - \psi(gh)+1}{2} \in \{ 0, 1 \}. \] Hence, by Theorem \ref{thm:ghys} due to Ghys (see also \cite{ghys}), there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ on the circle such that $\rho^*\mathrm{eu}_b = [\delta^1 \phi] \in \mathrm{H}^2_b(G,\mathbb{Z})$ and hence $\rho^*\mathrm{eu}^\mathbb{R}_b = [\delta^1 \bar{\phi}] \in \mathrm{H}^2_b(G, \mathbb{R})$. Here, $\mathrm{eu}_b$ (resp. $\mathrm{eu}_b^\mathbb{R}$) denotes the (real) bounded Euler class. Moreover, we observe that $\bar{\phi}(g) = \bar{\psi}(g)/2$, for $\bar{\phi}$ the homogenisation of $\phi$. Furthermore, as $D(\psi) = 1$ we estimate by Proposition \ref{prop:defect of homogenisation doubles} that $D(\bar{\psi}) \leq 2$ and hence $D(\bar{\phi}) \leq 1$. We conclude that there is a quasimorphism $\phi \colon G \to \mathbb{R}$ with homogenisation $\bar{\phi}$ such that $D(\bar{\phi}) \leq 1$, $\bar{\phi}(g_0) \geq 1$. If $G$ is countable then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ with $[\delta^1 \phi] = \rho^*\mathrm{eu}_b^\mathbb{R} \in \mathrm{H}^2_b(G,\mathbb{R})$ where $\mathrm{eu}_b^\mathbb{R}$ is the real bounded Euler class. \end{proof} Applying Theorem \ref{thm:main} to Example \ref{exmp:letter quasimorphisms on free group} we recover that in every residually free group $G$, every non-trivial element $g \in G'$ has stable commutator length at least $1/2$. This gap is realised by a quasimorphism induced by a circle action which has not been known previously. As said in the introduction we think of letter-quasimorphisms as simplifications of elements. Sometimes information about $w$ can not be recovered by $\Phi(w)$. For example for the word $w = \texttt{a} \texttt{b} \texttt{a}^{-1} \texttt{b}^{-1} \texttt{a} \texttt{b}^{-3} \texttt{a}^{-1} \texttt{b}^3$, we may compute\footnote{These calculations are done with \texttt{scallop}, see \cite{scallop}} $\mathrm{scl}(w) = 3/4$ but $\mathrm{scl}(\Phi(w)) = 1/2$. This example may be generalised: Pick an alternating word $w \in \mathcal{A}$ that starts and ends in a power of $\texttt{b}$. Then $[\texttt{a}, w] \in \mathcal{A}$ and $\mathrm{scl}([\texttt{a}, w]) = 1/2$. Then for any choice of words $v_1, v_2 \in \mathbb{F}_2$ such that $\Phi(v_1) = w$, $\Phi(v_2) = w^{-1}$ and such that $v = \texttt{a} v_1 \texttt{a}^{-1} v_2 \in \mathbb{F}_2'$ we have that $\Phi(v) = [\texttt{a}, w]$. However, $\mathrm{scl}(v)$ is experimentally arbitrarily large. \begin{rmk} \label{rmk:quasimorphisms are pullback of hs qm} As pointed out in the proof all of $\gamma_i \circ \tilde \Phi$ are well-behaved letter-quasimorphisms for any $i \in \mathbb{N}$. The quasimorphisms $\psi$ defined in the proof are then pullbacks of the quasimorphism $\eta_0 = \eta_{\texttt{a} \texttt{b}} - \eta_{\texttt{b} \texttt{a}}$ or homomorphisms $\eta = \eta_\texttt{x} + \eta_\texttt{y}$ via these well-behaved letter-quasimorphisms $\gamma_i \circ \tilde{\Phi} \colon G \to \mathcal{A} \subset \mathbb{F}_2$. \end{rmk} \begin{rmk} \label{rmk:criterion for gaps} In light of Theorem \ref{thm:Bavards duality}, a criterion for groups to have the optimal $\mathrm{scl}$-gap of $1/2$ may hence be as follows: \begin{center} \emph{Let $G$ be a non-abelian group. If for every non-trivial element $g \in G'$ there is a letter-quasimorphism $\Phi \colon G \to \mathcal{A}$ such that $\Phi(g^n) = \Phi(g)^n$ where $\Phi(g)$ is non-trivial. Then $G$ has a gap of $1/2$ in stable commutator length.} \end{center} By Example \ref{exmp:letter quasimorphisms on free group} residually free groups have this property and the criterion has some qualitative similarities to being residually free. We will later see that also non-residually free groups, like right-angled Artin groups, have this property; see Section \ref{sec:RAAGs and scl}. \end{rmk} \section{Left Orders and Left-Relatively Convex Subgroups} \label{sec:Left orders and convex subgroups} For what follows we will use the notation and conventions of \cite{convexsub}. We further emphasise that nothing in this section is original work. An order $\prec$ on a set $\mathcal{X}$ is a subset of $\mathcal{X} \times \mathcal{X}$ where we stress that a pair $(x,y) \in \mathcal{X} \times \mathcal{X}$ is in this subset by writing $x \prec y$. Furthermore, the following holds: \begin{itemize} \item For all $x, y \in \mathcal{X}$ either $x \prec y$ or $y \prec x$. We have $x \prec y$ and $y \prec x$ if and only if $x = y$. \item For all $x, y, z \in \mathcal{X}$ such that $x \prec y$ and $y \prec z$ we have $x \prec z$. \end{itemize} A set $\mathcal{X}$ with a left group action has a \emph{$G$-invariant order} if for all $g \in G$, $x_1,x_2 \in \mathcal{X}$, $x_1 \prec x_2$ implies that $g.x_1 \prec g.x_2$. A group $G$ is said to be \emph{left orderable} if the set $G$ has a $G$-invariant order with respect to its left action on itself. A subgroup $H < G$ is said to be \emph{left relatively convex} in $G$ if the $G$-set $G/H$ has some $G$-invariant order. Note that this definition is valid even if $G$ itself is \emph{not} left-orderable. If $G$ itself is orderable, then this is equivalent to the following: There is an order $\prec$ on $G$ such that for every $h_1, h_2 \in H$ and $g \in G$ with $h_1 \prec g \prec h_2$ we may conclude $g \in H$. In this case we simply say that $H$ is convex in $G$. As $e \in H$, this means that $H$ is a neighbourhood of $e$. It is not hard to see that left relatively convex is transitive: \begin{prop} \label{prop:left-relatively convex is transitive} \footnote{See Section 2 of \cite{convexsub}} Let $K < H < G$ be groups. Then $G/K$ is $G$-orderable such that $H/K$ is convex if and only if $G/H$ is $G$-orderable and $H/K$ is $H$-orderable. \end{prop} An easy example of a pair $H < G$ such that $H$ is left relatively convex in $G$ is $\mathbb{Z} < \mathbb{Z}^2$ embedded in the second coordinate via the standard lexicographic order. Similarly, every subgroup $G < \mathbb{Z} \times G$ embedded via the second coordinate, is left relatively convex for an arbitrary group $G$. Every generator of a non-abelian free group generates a left relatively convex subgroup in the total group; see \cite{DH}. In fact, \cite{convexsub} show that each maximal cyclic subgroup of a right-angled Artin group is left relatively convex. We wish to state the main Theorem of \cite{convexsub}. For this let $\mathrm{T}$ denote an \emph{oriented} simplicial tree, with vertices $\mathrm{V}(\mathrm{T})$ and edges $\mathrm{E}(\mathrm{T})$ and two maps $\iota, \tau \colon \mathrm{E}(\mathrm{T}) \to \mathrm{V}(\mathrm{T})$ assigning to each oriented edge its initial and terminal vertex respectively. Suppose that $G$ acts on $\mathrm{T}$ and denote by $G_v$ (resp. $G_e$) the stabilisers of a vertex $v \in \mathrm{V}(\mathrm{T})$ (resp. an edge $e \in \mathrm{E}(\mathrm{T})$). Note that stabilisers of an edge $e$ naturally embed into $G_{\iota(e)}$ and $G_{\tau(e)}$. \begin{thm} \footnote{Theorem 14 of \cite{convexsub}} Suppose that $\mathrm{T}$ is a left G-tree such that, for each $\mathrm{T}$-edge $e$, $G_e$ is left relatively convex in $G_{\iota(e)}$ and in $G_{\tau(e)}$. Then, for each $v \in \mathrm{V}(\mathrm{T})$, $G_v$ is left relatively convex in $G$. Moreover, if there exists some $v \in \mathrm{V}(\mathrm{T})$ such that $G_v$ is left orderable, then $G$ is left orderable. \end{thm} We deduce the following corollary, see Example 19 of \cite{convexsub} using Bass--Serre Theory. \begin{corr} \label{corr:orders of amalgamations} Let $A, B$ and $C$ be groups and let $\kappa_A \colon C \hookrightarrow A$ and $\kappa_B \colon C \hookrightarrow B$ be injections and let $G = A \star_C B$ be the corresponding amalgamated free product (see Section \ref{sec:amalgamation}). If $\kappa_A(C)$ is left relatively convex in $A$ and $\kappa_B(C)$ is left relatively convex in $B$, then $A$ and $B$ are left relatively convex in $G$. \end{corr} Let $H<G$ be a left relatively convex subgroup and let $\prec$ be a $G$-invariant order of $G/H$. we define the \emph{sign-function} $\textrm{sign} \colon G \to \{ -1, 0, 1 \}$ on representatives $g \in G$ of cosets in $G/H$ via \[ \textrm{sign}(g) = \begin{cases} +1 & \text{ if } gH \succ H \\ 0 & \text{ if } g \in H \\ -1 & \text{ if } gH \prec H \end{cases} \] \begin{prop} \label{prop:orders well defined} Let $H < G$ be a left relatively convex subgroup and let $\prec$ be the $G$-invariant order of $G/H$. Then the sign-function with respect to $\prec$ on elements in $G$ is independent under left or right multiplication by elements of $H$. That is for every $g \in G \smallsetminus H$ and for every $h \in H$, $\textrm{sign}(h g) = \textrm{sign}(g) = \textrm{sign}(g h)$. \end{prop} \begin{proof} Clearly $\textrm{sign}(gh) = \textrm{sign}(g)$ as both $g$ and $gh$ define the same coset. On the other hand, if $h g H \succ H$ then by left multiplication $g H \succ H$ and similarly if $h g H \prec H$ then $g H \prec H$, so $\textrm{sign}(hg) = \textrm{sign}(g)$. \end{proof} \section{Amalgamted Free Products} \label{sec:amalgamation} Let $A, B, C$ be groups and let $\kappa_A \colon C \hookrightarrow A$, $\kappa_B \colon C \hookrightarrow B$ be injections. The \emph{amalgamated free product} $G =A \star_C B$ with respect to $\kappa_A$ and $\kappa_B$ it the group via \[ G = A \star_C B = A \star B / \langle \langle \kappa_A(c)^{-1} \kappa_B(c) \mid c \in C \rangle \rangle. \] It is a well-known fact that the homomorphism $A \to A \star_C B$ (resp. $B \to A \star_C B$) defined by mapping $a \in A$ (resp. $b \in B$) to the corresponding element $a \in G$ (resp. $b \in G$) is \emph{injective} and that $C$ embeds in $G$ via these injections. See \cite{serre} for a reference. Every element $g \in G$ with $g \in G \smallsetminus C $ may be written as a product \[ g = d_1 \cdots d_k \] such that all of $d_i$ are either in $A \smallsetminus \kappa_A(C)$ or in $B \smallsetminus \kappa_B(C)$ and alternate between both. Furthermore for any other such expression \[ g = d'_1 \cdots d'_{k'} \] one may deduce that $k'=k$ and that there are elements $c_i \in C$, $i \in \{ 1, \ldots, k-1 \}$ such that $d'_1 = d_1 c_1$, $d'_i = c_{i-1}^{-1} d_i c_i$ and $d'_k = c_{k-1} d_k$. For what follows, let $\prec_A$ (resp. $\prec_B)$ be a left order on $A/\kappa_A(C)$ (resp. $B/\kappa_B(C)$) and let $\textrm{sign}_A$ (resp. $\textrm{sign}_B$) be its sign on $A$ (resp. $B$). We define the map $\Phi \colon G \to \mathcal{A}$ as follows: If $g \in C$ set $\Phi(g) = e$. Else let $g = d_1 \cdots d_k$ be the normal form described above. Then, set \[ \Phi(g) = \prod_{i=1}^k \Phi(d_i) \] where we define \[ \Phi(d_i) = \begin{cases} \texttt{a}^{ \textrm{sign}_A(d_i)} & \text{ if } d_i \in A \smallsetminus \kappa_A(C) \\ \texttt{b}^{ \textrm{sign}_B(d_i)} & \text{ if } d_i \in B \smallsetminus \kappa_B(C) \end{cases} \] and we note that $\Phi$ is well defined. To see this let $d'_1 \cdots d'_k$ be another normal form for $g$ and let $c_i \in C$ for $i \in \{0, \ldots, k+1 \}$ be such that $d'_i = c_{i-1}^{-1} d_i c_i$ with $c_0=c_{k+1}=e$. Then \[ \textrm{sign}(d_i) = \textrm{sign}(c_{i-1}^{-1} d_i) = \textrm{sign}(c_{i-1}^{-1} d_i c_i) = \textrm{sign}(d'_i) \] by Proposition \ref{prop:orders well defined} and ``$\textrm{sign}$'' either ``$\textrm{sign}_A$'' or ``$\textrm{sign}_B$''. We claim that: \begin{lemma} \label{lemma:amalgamated yields letter-quasimorphism} Let $G = A \star_C B$ and $\Phi \colon G \to \mathcal{A}$ be as above. Then $\Phi$ is a letter-quasimorphism. \end{lemma} We will prove this by giving another description of $\Phi$ in terms of paths in the Bass--Serre tree associated to the amalgamated free product $G = A \star_C B$: Let $\mathrm{T}$ be the tree with vertex set $\mathrm{V} (\mathrm{T}) = \{ g A \mid g \in G \} \sqcup \{ g B \mid g \in G \}$ and oriented edges \[ \mathrm{E}(\mathrm{T}) = \{ (g A, g B) \mid g \in G \} \sqcup \{ (g B, g A) \mid g \in G \} \subset \mathrm{V}(\mathrm{T}) \times \mathrm{V}(\mathrm{T}) \] We define $\iota, \tau \colon \mathrm{E}(\mathrm{T}) \to \mathrm{V}(\mathrm{T})$ via $\iota((g A, g B)) = g A$, $\tau((g A, g B))= g B$ and similarly, $\iota(g B, g A) = g B$, $\tau(g B, g A)= g A$. Moreover, we set $(g A, g B)^{-1} = (g B, g A)$ and $(g B, g A)^{-1} = (g A, g B)$. It is well-known that $\mathrm{T}$ is indeed a connected tree. $G$ acts on $\mathrm{T}$ by left multiplication. We have that $\mathrm{Stab}_G(g A) = g A g^{-1} < G$, respectively $\mathrm{Stab}_G(h B) = h B h^{-1} < G$, $\mathrm{Stab}_G(g A, g B) = g C g^{-1}$ and $\mathrm{Stab}_G(g B, g A) = g C g^{-1}$ A \emph{reduced path of edges} is a sequence $\wp = (e_1, \ldots e_n)$, $e_i \in \mathrm{E}(\mathrm{T})$ such that $\tau(e_i)=\iota(e_{i+1})$ for every $i \in \{ 1, \ldots, n-1 \}$, without backtracking. We call $n$ the \emph{length of the path}. For what follows, $\mathcal{P}$ will be the set of all paths of edges. We define the following map $\Xi \colon \mathcal{P} \to \mathcal{A}$ assigning an alternating word to each path of edges. Let $\wp \in \mathcal{P}$. If $\wp$ has length $1$, then set $\Xi(\wp) :=e$. Else, suppose that $\wp$ has length $2$, i.e. $\wp = (e_1,e_2)$. Suppose that $e_1 = (g_1 A, g_1 B)$ and $e_2 = (g_2 B, g_2 A)$ and note that $g_1 B = g_2 B$. In particular, $g_1^{-1} g_2 \in B$. Set $\Xi(\wp)=\Xi((e_1,e_2)) = \texttt{b}^{\textrm{sign}_B(g_1^{-1} g_2)}$. Similarly, if $e_1 = (g_1 B, g_1 A)$ and $e_2 = (g_2 A, g_2 B)$ note that $g_1 A = g_2 A$ and set $\Xi(\wp) = \Xi((e_1,e_2)) = \texttt{a}^{\textrm{sign}_A(g_1^{-1} g_2)}$. Finally, for an arbitrary paths $\wp = (e_1, \ldots, e_n)$ set $\Xi(\wp) = \Xi(e_1,e_2) \cdot \Xi(e_2, e_3) \cdots \Xi(e_{n-2}, e_{n-1})\cdot \Xi(e_{n-1}, e_n)$. Note that $\Xi$ is well defined. To see this, note that the stabilizer of any edge $(g A, g B)$ (resp. $(g B, g A)$) is $g C g^{-1}$. Hence, if $(g A, g B) = (g' A, g' B)$ (resp. $(g B, g A) = (g' B, g' A)$) there is a $c \in C$ such that $g c = g'$. If $(e_1,e_2)$ is a path of edges such that without loss of generality $e_1 = (g_1 A, g_1 B)= (g'_1 A, g'_1 B)$ and $e_2 = (g_2 A, g_2 B)=(g'_2 A, g'_2 B)$ then there are $c_1,c_2$ such that $g_1 = g'_1 c_1$ and $g_2 = g'_2 c_2$. Hence \[ \textrm{sign}_B(g_1^{-1} g_2) = \textrm{sign}_B(c_1^{-1} {g'_1}^{-1} g'_2 c_2) = \textrm{sign}_B({g'_1}^{-1} g'_2) \] by Proposition \ref{prop:orders well defined}. Define the \emph{inverse of a path} $\wp = (e_1, \ldots, e_n)$ as $\wp^{-1} := (e_n^{-1}, \ldots, e_1^{-1} )$. We see that $\Xi(\wp^{-1}) = \Xi(\wp)^{-1}$ using that $\textrm{sign}(g^{-1}) = - \textrm{sign}(g)$. We collect some further properties of $\Xi$. We note that if $\wp \in \mathcal{P}$ is a path then so is $^g \wp$, where $^g \wp$ denotes the image of $\wp$ under the action of $g \in G$. \begin{prop} \label{prop:properties of xi} $\Xi \colon \mathcal{P} \to \mathcal{A}$ has the following properties: \begin{itemize} \item[(i)] For any $\wp \in \mathcal{P}$ and $g \in G$ we have $\Xi(^g \wp) = \Xi(\wp)$. \item[(ii)] Let $\wp_1, \wp_2$ be two paths of edges such that the last edge in $\wp_1$ is $e_1$, the first edge of $\wp_2$ is $e_2$ such that $\tau(e_1)=\iota(e_2)$ and such that $e_1 \not = e_2^{-1}$. Then $\Xi(\wp_1 \cdot \wp_2) = \Xi(\wp_1) \Xi(e_1, e_2) \Xi(\wp_2)$ as reduced words, where $\wp_1 \cdot \wp_2$ denotes the concatenation of paths. \item[(iii)] Let $g \in G$ and let $\wp(g)$ be the unique path of edges from one of edges $\{ (A,B), (B, A) \}$ to one of the edges $\{ (g A, g B), (g B, g A) \}$. Then $\Xi(\wp(g)) = \Phi(g)$, for $\Phi$ as above. \end{itemize} \end{prop} \begin{proof} To see $(i)$ note that for any path $(e_1, e_2)$ with $e_1 = (g_1 A, g_1 B)$ and $e_2 = (g_2 B, g_2 A)$ we have \[ \Xi(e_1, e_2) = \texttt{b}^{\textrm{sign}(g_1^{-1} g_2)} = \texttt{b}^{\textrm{sign}(g_1^{-1} g^{-1} g g_2)} = \Xi(^g(e_1,e_2)) \] and the same argument holds for paths with $e_1=(g_1 B, g_1 A)$ and $e_2 = (g_2 A, g_2 B)$. Point $(ii)$ is immediate from the definition. To see $(iii)$, without loss of generality assume that the normal form of $g$ is $g = a_1 b_1 \cdots a_k b_k$. Then \[ \wp(g) = (B, A),(a_1 A, a_1 B),(a_1 b_1 B, a_1 b_1 A), \ldots, (g B, g A) \] and comparing $\Xi(\wp(g))$ with $\Phi(g)$ yields $(iii)$. \end{proof} We can now prove Lemma \ref{lemma:amalgamated yields letter-quasimorphism}: \begin{proof} Let $g,h \in G$. First, suppose that the midpoints of \begin{align} \label{equ:midpoint} \{ (A,B), (B,A) \} \text{, } \{ (gA,gB), (gB,gA) \} \text{ and } \{ (ghA,ghB), (ghB,ghA) \} \end{align} lie on a common geodesic segment in $\mathrm{T}$. If the midpoint of $\{ (gA,gB), (gB,gA) \}$ lies in the middle of this segment then there are paths $\wp_1$ and $\wp_2$ such that $\wp(g) = \wp_1 \cdot e$, $^g \wp(h) = e \cdot \wp_2$ and $\wp(gh) = \wp_1 \cdot e \cdot \wp_2$ for $e$ either $(gA, gB)$ or $(gB, gA)$. We see that in this case $\Xi(\wp_1 \cdot e) \cdot \Xi(e \cdot \wp_2) = \Xi(\wp_1 \cdot e \cdot \wp_2)$ as reduced words in $\mathcal{A}$ and hence $\Phi(g) \Phi(h) = \Phi(gh)$. Analogously we see that $\Phi(g) \Phi(h) = \Phi(gh)$ when the midpoint of $\{ (A,B), (B,A) \}$ or $\{ (ghA,ghB), (ghB,ghA) \}$ lies in the middle of this segment. Hence in this case $\Phi$, $g,h \in G$ are as in $(1)$ of Definition \ref{defn:letter quasihomomorphism}. Now suppose that the midpoints in (\ref{equ:midpoint}) do not lie on a common geodesic segment. Then there are non-trivial paths $\wp_1, \wp_2, \wp_3 \in \mathcal{P}$ with initial edges $e_1, e_2, e_3$ satisfying $\iota(e_1)=\iota(e_2)=\iota(e_3)$ and $e_i \not = e_j$ for $i \not = j$ such that \[ \wp(g) = \wp_1^{-1} \cdot \wp_2 \text{ , } ^g \wp(h) = \wp_2^{-1} \cdot \wp_3 \text{ , and } ^{gh} \wp(((gh)^{-1}) = \wp_3^{-1} \cdot \wp_1. \] By Proposition \ref{prop:properties of xi} we infer that \begin{align*} \Phi(g) &= c_1^{-1} \Xi(e_1^{-1}, e_2) c_2 \\ \Phi(h) &= c_2^{-1} \Xi(e_2^{-1}, e_3) c_3 \\ \Phi(gh)^{-1} &= c_3^{-1} \Xi(e_3^{-1}, e_1) c_1 \end{align*} for $c_i = \Xi(p_i)$, $i \in \{1,2,3 \}$. Without loss of generality assume that $e_i = (g_i A, g_i B)$, the case $e_i = (g_i B, g_i A)$ is analogous. Then \begin{align*} \Phi(g) &= c_1^{-1} \texttt{x}_1 c_2 \\ \Phi(h) &= c_2^{-1} \texttt{x}_2 c_3 \\ \Phi(gh)^{-1} &= c_3^{-1} \texttt{x}_3 c_1 \end{align*} \[ \texttt{x}_1= \texttt{b}^{\textrm{sign}_B(g_1^{-1} g_2)} \text{, } \texttt{x}_2 = \texttt{b}^{\textrm{sign}_B(g_2^{-1} g_3)} \text{, and } \texttt{x}_3 = \texttt{b}^{\textrm{sign}_B(g_3^{-1} g_1)} \] We claim that $\textrm{sign}_B(g_1^{-1} g_2) + \textrm{sign}_B(g_2^{-1} g_3) + \textrm{sign}_B(g_3^{-1} g_1) \in \{ -1, +1 \}$. To see this, note that all of the signs are either $\{ +1, -1 \}$ as the edges $e_i$ were assumed to be distinct. Suppose that $\textrm{sign}_B(g_1^{-1} g_2)= \textrm{sign}_B(g_2^{-1} g_3)= \textrm{sign}_B(g_3^{-1} g_1)=1$. Then $g_1^{-1} g_2 C \succ C$, hence $g_3^{-1} g_2 C = (g_3^{-1} g_1) g_1^{-1} g_2 C \succ g_3^{-1} g_1 C \succ C$, so $\textrm{sign}_B(g_3^{-1} g_2) = 1$ and hence $\textrm{sign}_B(g_2^{-1} g_3)=-1$, contradiction. Similarly, not all signs can be negative. Hence indeed $\textrm{sign}_B(g_1^{-1} g_2) + \textrm{sign}_B(g_2^{-1} g_3) + \textrm{sign}_B(g_3^{-1} g_1) \in \{ -1, +1 \}$ and so $\texttt{x}_1 \texttt{x}_2 \texttt{x}_3 \in \{ \texttt{b}, \texttt{b}^{-1} \}$. This shows that $\Phi$ is as in $(2)$ of Definition \ref{defn:letter quasihomomorphism}, hence $\Phi$ is a letter-quasimorphism. \end{proof} \begin{thm} \label{thm:amalgamation} Let $A, B, C$ be groups and $\kappa_A \colon C \hookrightarrow A$, $\kappa_B \colon C \hookrightarrow B$ be injections such that both $\kappa_A(C)$ and $\kappa_B(C)$ are left relatively convex subgroup of $A$ resp. $B$. Let $G = A \star_C B$ be the amalgamated free product for this data. Then for every element $g_0 \in G$ which does not conjugate into $A$ or $B$, there is a homogeneous quasimorphism $\bar{\phi} \colon G \to \mathbb{R}$ such that $\bar{\phi}(g_0) \geq 1$, $D(\bar{\phi}) \leq 1$ and $\bar{\phi}$ vanishes on $A$ and $B$. If $g_0 \in G'$, then $scl(g_0) \geq 1/2$. If $G$ is countable then there is an action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$ such that $[\delta^1 \bar{\phi}]=\rho^*\mathrm{eu}^\mathbb{R}_b \in \mathrm{H}^2_b(G,\mathbb{R})$, for $\mathrm{eu}^\mathbb{R}_b$ the real bounded Euler class. \end{thm} \begin{rmk} \label{rmk:chen-heuer} The methods developed in this paper may be modified to obtain similar gap results for HNN-extensions and graphs of groups, as well gap results for certain one-relator groups. A generalisation of this and direct proofs of these results using both quasimorphisms and surface mappings will appear in the forthcoming preprint \cite{chen_heuer}. \end{rmk} The existence of a uniform gap was known before; see \cite{calegari_fujiwara} and Subsection \ref{subsec:spectral gaps of scl}. \begin{proof} Let $g_0 \in G$ be as in the Theorem. Then, if $g_0$ does not conjugate into $A$ or $B$ we may conjugate $g_0$ by an element $g_1 \in G$ such that \[ g' = g_1 g_0 g_1^{-1} = a_1 b_1 \cdots a_k b_k \] for \emph{all of} $a_i \in A \smallsetminus \kappa_A(C)$ and $b_i \in B \smallsetminus \kappa_B(C)$. It follows that $\Phi(g')=w$ is a non-empty alternating word of even length and that $\Phi({g'}^n) = w^n$ for $n \in \mathbb{N}$. By Theorem \ref{thm:main} there is a homogeneous quasimorphism $\bar{\phi} \colon G \to \mathbb{R}$ with $D(\bar{\phi}) \leq 1$ and $1 \leq \bar{\phi}(g_0) = \bar{\phi(g')}$ using that homogeneous quasimorphisms are invariant under conjugation. If $G$ is countable then this quasimorphism $\bar{\phi}$ is moreover induced by a circle action $\rho \colon G \to \mathrm{Homeo}^+(S^1)$. \end{proof} \section{Right-Angled Artin Groups} \label{sec:RAAGs and scl} In this section all graphs will be simplicial, i.e. do not contain multiple edges between two vertices or loops. Let $\Gamma$ be a finite simplicial graph with vertices $\mathrm{V}(\Gamma)$ and edges $\mathrm{E}(\Gamma)$. Given a subset $\Lambda \subset \mathrm{V}(\Gamma)$ the \emph{full subgraph on $\Lambda$ in $\Gamma$} is the graph with vertices $\Lambda$ where two elements $v,w \in \Lambda$ are connected by an edge if and only if they are connected in $\Gamma$. For a vertex $v \in \Gamma$, the \emph{link of $v$} is the full subgraph of the set $\{w \mid (v,w) \in \mathrm{E}(\Gamma) \}$ in $\Gamma$ and denoted by $\textrm{Lk}(v)$. The \emph{closed star} is the full subgraph of $\textrm{Lk}(v) \cup \{ v \}$ in $\Gamma$ and denoted by $\textrm{St}(v)$. The \emph{right-angled Artin group} or \emph{RAAG} on $\Gamma$ is the group $\mathrm{A}(\Gamma)$ with group presentation \[ \mathrm{A}(\Gamma) = \langle \mathrm{V}(\Gamma) \mid [v, w]; (v,w) \in \mathrm{E}(\Gamma) \rangle \] A word $w$ in the generators $\mathrm{V}(\Gamma)$ representing an element $[w] \in \mathrm{A}(\Gamma)$ is called \emph{reduced} if it has minimal word length among all words representing $[w]$. A word $w$ is said to be cyclically reduced if it has minimal word length among all of its conjugates. The \emph{support} of an element $g \in \mathrm{A}(\Gamma)$ is the set of vertices that appear in a reduced word representing $g$. It is well-known that the support is well-defined. Let $\Gamma$ be a finite simplicial graph, let $\mathrm{A}(\Gamma)$ be the right-angled Artin group of $\Gamma$ and let $v \in \Gamma$. Then $\mathrm{A}(\Gamma)$ can be thought of as an amalgamated free product of $\mathrm{A}(\textrm{St}(v))$ and $A(\Gamma \backslash \{ v \} )$ where the common subgroup is $\mathrm{A}(\textrm{Lk}(v))$. i.e. \[ \mathrm{A}(\Gamma) = \mathrm{A}(\textrm{St}(v)) \star_{\mathrm{A}(\textrm{Lk}(v))} \mathrm{A}(\Gamma \backslash \{v \}). \] This will be used both in the proof of Theorem \ref{thm:raags and scl} and for induction arguments. \begin{prop} \label{prop: convex subgroups of raartin groups} (Section 4 of \cite{convexsub}) Let $\Lambda \subset \Gamma$ be a full subgraph of $\Gamma$. Then $\mathrm{A}(\Lambda) < \mathrm{A}(\Gamma)$ induced by the embedding, is a left relatively convex subgroup. \end{prop} \begin{proof} We follow the proof of \cite{convexsub}. We may induct on the following statement: For any $\Gamma$ of size at most $k$ and every full subgraph $\Lambda \subset \Gamma$, $\mathrm{A}(\Lambda)$ is left relatively convex in $\mathrm{A}(\Gamma)$. For $k=2$ this is just the case of free-abelian and non-abelian free groups mentioned before. Assume the statement is true for all $n \leq k$. Let $\Gamma$ be a graph with $k+1$ vertices and let $\Lambda \subset \Gamma$ be a full subgraph. If $\Lambda = \Gamma$ there is nothing to show. Else pick $v \in \mathrm{V}(\Gamma) \backslash \mathrm{V}(\Lambda)$ and set $\Gamma'$ to be the full subgraph in $\Gamma$ on the vertices $\mathrm{V}(\Gamma) \backslash \{ v \}$. Hence $\Lambda \subset \Gamma' \subset \Gamma$ with $\Gamma'$ of size $k$. We wish to show that $\mathrm{A}(\Gamma') < \mathrm{A}(\Gamma)$ is a left-relatively convex subgroup. Consider the amalgamation \[ \mathrm{A}(\Gamma) = \mathrm{A}(\textrm{St}(v)) \star_{\mathrm{A}(\textrm{Lk}(v))} \mathrm{A}(\Gamma') \] By induction, $\mathrm{A}(\textrm{Lk}(v)) < \mathrm{A}(\Gamma')$ is a left relatively convex subgroup. Also $\mathrm{A}(\textrm{Lk}(v)) < \mathrm{A}(\textrm{St}(v))$ is a left relatively convex subgroup as $\mathrm{A}(\textrm{St}(v)) = \langle v \rangle \times \mathrm{A}(\textrm{Lk}(v))$. We may use Corollary \ref{corr:orders of amalgamations} to see that $\mathrm{A}(\Gamma') < \mathrm{A}(\Gamma)$ is a left relatively convex subgroup. By induction hypothesis, $\mathrm{A}(\Lambda) < \mathrm{A}(\Gamma')$ is a left-relatively convex subgroup and by transitivity $\mathrm{A}(\Lambda) < \mathrm{A}(\Gamma')$ is a left relatively convex subgroup. \end{proof} We deduce: \begin{thm} \label{thm:quasimorphisms on raags} Let $g \in \mathrm{A}(\Gamma)$ be an element in an right-angled Artin group $\mathrm{A}(\Gamma)$ such that $g_0$ does not conjugate into a subgroup of a clique of $\Gamma$. Then there is a homogeneous quasimorphism $\bar{\phi}$ which vanishes on the generators $\mathrm{V}(\Gamma)$ such that $\bar{\phi}(g_0) \geq 1$ and $D(\bar{\phi}) \leq 1$. Moreover, there is an action $\rho \colon \mathrm{A}(\Gamma) \to \mathrm{Homeo}^+(S^1)$ such that $[\delta^1 \bar{\phi}]=\rho^*\mathrm{eu}^\mathbb{R}_b \in \mathrm{H}^2_b(G,\mathbb{R})$, for $\mathrm{eu}^\mathbb{R}_b$ the real bounded Euler class. \end{thm} Observe that no non-trivial element in the commutator subgroup of a right-angled Artin group conjugates into a clique. An application of Bavard's Duality Theorem \ref{thm:Bavards duality} yields: \begin{thm} \label{thm:raags and scl} Let $g_0$ be a non-trivial element in the commutator subgroup of a right-angled Artin group. Then $\mathrm{scl}(g_0) \geq 1/2$. This bound is sharp. \end{thm} \begin{proof} (of Theorem \ref{thm:quasimorphisms on raags}) Let $g \in \mathrm{A}(\Gamma)$ be such an element. We may suppose that $g$ is cyclically reduced, as homogeneous quasimorphisms are invariant under conjugation. Choose a vertex $v$ in the support of $g$ such that there is another vertex $w$ in the support of $g$ which is non-adjacent to $v$. Such a vertex exists as $g$ does not conjugate into a clique. Write $\mathrm{A}(\Gamma)$ as \[ \mathrm{A}(\Gamma) = \mathrm{A}(\textrm{St}(v)) \star_{\mathrm{A}(\textrm{Lk}(v))} \mathrm{A}(\Gamma \backslash \{v \}) \] and observe that $g$ does not conjugate into any factor of this amalgamation as both $v$ and $w$ are in the support of $g$. By Proposition \ref{prop: convex subgroups of raartin groups}, both $\mathrm{A}(\textrm{Lk}(v)) < \mathrm{A}(\textrm{St}(v))$ and $\mathrm{A}(\textrm{Lk}(v)) < \mathrm{A}(\Gamma \backslash \{v \})$ are left relatively convex subgroups. We conclude using Theorem \ref{thm:amalgamation}. Commutators in $\mathrm{A}(\Gamma)$ have $\mathrm{scl}$ at most $1/2$. Hence this bound is sharp. \end{proof} \bibliographystyle{alpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Parton densities summarize the structure of the nucleon probed in high--momentum transfer processes such as deep--inelastic lepton--nucleon scattering and production of high--mass systems (jets, heavy particles) in nucleon--nucleon collisions. They are defined in the context of a factorization procedure, by which the cross section of these processes is separated into a short--distance quark/gluon subprocess, calculable in perturbative QCD, and the distribution of the partons in the initial state, and thus represent long--distance, low--energy characteristics of the nucleon. As such, they are governed by the same low--energy dynamics which determines other nucleon observables like the vector and axial couplings (to which they are related by the partonic sum rules), form factors, meson--nucleon couplings \textit{etc.} Of particular interest are the charge ($u - \bar u, d - \bar d$) and flavor ($u - d, \bar u - \bar d$) non--singlet quark densities, which exhibit only weak scale dependence and are of non--perturbative origin; they represent quasi--observables which directly probes the QCD quark structure of the nucleon at low resolution scales. The long--distance behavior of strong interactions at low energies is governed by the spontaneous breaking of chiral symmetry in QCD. The Goldstone boson nature of the pion explains its small mass on the hadronic scale and requires its coupling to other hadrons to vanish in the long--wavelength limit. The resulting ``chiral dynamics'' gives rise to a number of distinctive phenomena at distance scales $\sim 1/M_\pi$, such as the $\pi\pi, \pi N$ and $NN$ interaction at large distances, the pion pole in the axial current matrix element, \textit{etc.} An important question is how chiral dynamics affects the nucleon's parton densities, and whether one can see any signs of chiral effects in observables of high--momentum transfer processes. The prime candidate for an effect of chiral dynamics in parton densities has been the flavor asymmetry of the light antiquark densities in the nucleon. Measurements of the proton--neutron structure function difference in inclusive deep--inelastic scattering \cite{Amaudruz:1991at}, semi--inclusive meson production \cite{Ackerstaff:1998sr}, and particularly Drell--Yan pair production \cite{Baldit:1994jk,Hawker:1998ty,Towell:2001nh} have unambiguously shown that $\left[\bar d - \bar u\right] (x) > 0$ in the proton for $x < 0.3$, and have partly mapped the $x$--dependence of the asymmetry; see Refs.~\cite{Kumano:1997cy} for a review of earlier experimental results. The basic picture is that the ``bare'' proton can make a transition to a virtual state containing a pion, and fluctuations $p \rightarrow n \pi^+$ are more likely than $p \rightarrow \Delta^{++} \pi^-$, resulting in an excess of $\pi^+$ over $\pi^-$ in the ``dressed'' proton. Following the original prediction of Ref.~\cite{Thomas:1983fh}, which included only the nucleon intermediate state, this idea was implemented in a variety of dynamical models, which incorporate finite--size effects through various types of hadronic form factors associated with the $\pi N N$ and $\pi N \Delta$ vertices; see Refs.~\cite{Kumano:1997cy} for a review of the extensive literature. It was noted long ago \cite{FMS} that in order to reproduce the fast decrease of the observed asymmetry with $x$ one needs $\pi NN$ form factors much softer than those commonly used in meson exchange parametrizations of the $NN$ interaction \cite{Machleidt:hj}. However, even with such soft form factors the pion transverse momenta in the nucleon generally extend up to values $\gg M_\pi$ \cite{Koepf:1995yh}. This raises the question to what extent such models actually describe long--distance effects associated with soft pion exchange (momenta $\sim M_\pi$), and what part of their predictions is simply a parametrization of short--distance dynamics which should more naturally be described in terms of non-hadronic degrees of freedom. More generally, one faces the question how to formulate the concept of the ``pion cloud'' in the nucleon's partonic structure \cite{Sullivan:1971kd} in a manner consistent with chiral dynamics in QCD. A framework which allows one to address these questions in a systematic fashion is the transverse coordinate (or impact parameter) representation, in which the distribution of partons is studied as a function of the longitudinal momentum fraction, $x$, and the transverse distance, $b$, of the parton from the transverse center--of--momentum of the nucleon \cite{Burkardt:2002hr,Pobylitsa:2002iu}. In this representation, chiral dynamics can be associated with a distinct component of the partonic structure, located at $x \lesssim M_\pi / M_N$ and $b \sim 1/M_\pi$. In a previous work \cite{Strikman:2003gz} we have shown that in the gluon density this large--distance component is sizable and causes the nucleon's average gluonic transverse size, $\langle b^2 \rangle_g$ to grow if $x$ drops below $M_\pi / M_N$, in agreement with the $t$--slopes observed in exclusive $J/\psi$ photo-- and electroproduction at HERA \cite{Aktas:2005xu,Chekanov:2004mw}, FNAL \cite{Binkley:1981kv}, and experiments at lower energies. Essential in this is the fact that in the gluon density (more generally, in any isoscalar parton density) the pion cloud contributions from $N$ and $\Delta$ intermediate states have the same sign and add constructively. The special role of the $\Delta$ compared to other excited baryon states is supported by the fact that in the large--$N_c$ limit of QCD the $N$ and $\Delta$ are degenerate and enter on an equal footing. In this article we perform a comprehensive study of the chiral large--distance component of the nucleon's partonic structure, considering both its contribution to the total quark/antiquark/gluon densities and to the nucleon's average partonic transverse size. The method we use to calculate this component is phenomenological pion exchange formulated in the impact parameter representation, restricted to the region of large transverse distances. A physical lower limit in $b$ for $\pi B \; (B = N, \Delta)$ configurations in the nucleon wave function is set by the transverse ``core'' radius, estimated from the nucleon's axial form factor, $R_{\rm core} = 0.55 \, \textrm{fm}$, and we explicitly demonstrate the universal character of the pionic contributions in the region $b > R_{\rm core}$. This formulation preserves the basic physical picture of the ``pion cloud'' model of the nucleon's sea quark distributions, while restricting its application to the region actually governed by chiral dynamics. In fact, our study serves both a conceptual and a practical purpose. First, we want to establish in which region of transverse distances the results of the traditional pion cloud model are model--independent and can be associated with large--distance chiral dynamics. Second, we want to employ this model to actually calculate the universal large--distance component and study its properties. A preliminary account of our study of the flavor asymmetry $\bar d - \bar u$ was presented in Ref.~\cite{Strikman:2008wb}. The investigation reported here proceeds in several steps. In Sec.~\ref{sec:chiral}, we develop the theory of large--distance contributions to the partonic structure from a general, model--independent perspective. We outline the parametric region of $\pi B$ configurations in the nucleon wave function, the properties of the $b$--dependent momentum distribution of pions in the nucleon, its large--$b$ asymptotics, and the convolution formulas for the nucleon parton densities. In Sec.~\ref{sec:model}, we investigate the phenomenological pion cloud model in the impact parameter representation, and demonstrate that at large $b$ its predictions become independent of the $\pi N B$ form factors modeling the short--distance dynamics. We also comment on the extension of this model to $SU(3)$ flavor. In Sec.~\ref{sec:decomposition} we then apply this model to calculate the large--distance contributions to the sea quark distributions in the nucleon, including the isovector $(\bar d - \bar u)$ and isoscalar $(\bar u + \bar d)$ light quark sea, the strange sea ($s, \bar s$), and the $SU(3)$--flavor symmetry breaking asymmetry $(\bar u + \bar d - 2\bar s)$. We compare the calculated large--distance contributions to empirical parametrizations of the parton densities and thus indirectly infer the contribution from the short--distance region (``core''), which cannot be calculated in a model--independent way. In the course of this we see how the restriction to large $b$ solves several problems inherent in the traditional pion cloud model which formally allows for pionic configurations also at small impact parameters. In Sec.~\ref{sec:size} we consider the large--distance contributions to the nucleon's partonic transverse size $\langle b^2 \rangle$, which is accessible experimentally through the $t$--slope of hard exclusive processes $\gamma^\ast N \rightarrow M + N \, (M = \text{meson}, \gamma, \textit{etc.})$. Because of the emphasis on large distances this quantity is calculable in a practically model--independent manner and represents a clean probe of chiral dynamics in the partonic structure. Specifically, we show that at $x \sim 10^{-2}$ the large--distance contribution to the nucleon's singlet quark transverse size, $\langle b^2 \rangle_{q + \bar q}$, is larger than that to the gluonic size, $\langle b^2 \rangle_{g}$, which is consistent with the observation of a larger $t$--slope in deeply--virtual Compton scattering \cite{Aaron:2007cz,Chekanov:2008vy} than in exclusive $J/\psi$ production at HERA \cite{Aktas:2005xu,Chekanov:2004mw}. In Sec.~\ref{sec:largenc} we discuss the correspondence of the phenomenological pion exchange contribution to the nucleon parton densities with the large--$N_c$ limit of QCD. In particular, we show that the large--distance contributions obtained from pion exchange reproduce the general $N_c$--scaling of parton densities in QCD, thanks to the degeneracy of $N$ and $\Delta$ intermediate states in the large--$N_c$ limit. This re-affirms the need to include intermediate $\Delta$ states on the same footing as the nucleon, and shows that the phenomenological large--distance contributions considered here are a legitimate part of the nucleons partonic structure in large--$N_c$ QCD. Finally, in Sec.~\ref{sec:smallx} we focus on the physical limitations to the picture of individual $\pi B$ configurations at small $x$, arising from the non--chiral growth of the transverse sizes due to diffusion, and from chiral corrections to the structure of the pion. We also comment on the role of chiral dynamics at large longitudinal distances. Our summary and outlook are presented in Sec.~\ref{sec:summary}. The two appendices present technical material related to the meson--nucleon coupling constants for $SU(3)$ flavor symmetry, and the numerical evaluation of the $b$--dependent pion momentum distributions in the nucleon. In the context of our studies of the strange sea quark distributions, $s(x)$ and $\bar s(x)$, and the $SU(3)$ flavor symmetry--breaking asymmetry, $\left[ \bar u + \bar d - 2\bar s\right] (x)$, we consider also contributions from configurations containing $SU(3)$ octet mesons ($K\Lambda, K\Sigma, K\Sigma^\ast, \eta N$) to the nucleon's partonic structure at large distances. While such high--mass configurations are not governed by chiral dynamics and treated at a purely phenomenological level, it is interesting to compare their large--distance tails with those of chiral contributions from pions. We note that the issue of the strange sea in the nucleon ($s, \bar s$) and the question of possibly different $x$--distributions of $s$ and $\bar s$ has acquired new urgency following the results of the NuTeV experiment in semi--inclusive charged--current neutrino DIS, which can discriminate between $s$ and $\bar s$ via the process $W^+ + s \rightarrow c$ \cite{Goncharov:2001qe,Mason:2007zz}. Chiral contributions to the nucleon's parton densities have been studied extensively within chiral perturbation theory \cite{Chen:2001eg,Arndt:2001ye}, mostly with the aim of extrapolating lattice QCD results obtained at large pion masses toward lower values \cite{Detmold:2001jb}. Chiral perturbation theory was also applied to GPDs, including the impact parameter representation \cite{Belitsky:2002jp,Ando:2006sk,Diehl:2006ya,Kivel:2002ia}. Compared to these calculations, which use methods of effective field theory based on a power--counting scheme, we take here a more pragmatic approach. We study the pion distribution in the nucleon in a phenomenological approach which incorporates the finite bare nucleon size through form factors, and investigate numerically in which region the results become insensitive to the form factors and can be attributed to universal chiral dynamics \footnote{The approach taken here bears some similarity to the use of finite--size regulators in the chiral extrapolation of lattice QCD results \cite{Young:2005tr}.}. In this approach we maintain exact relativistic kinematics (physical pion and nucleon masses) and calculate distributions of finite support, which are then analyzed in the different parametric regions and matched with the asymptotic ``chiral'' predictions. This also allows us to deal with the strong cancellations between contributions from $N$ and $\Delta$ intermediate states in the isovector quark densities, which are difficult to accommodate within a power counting scheme. In fact, the cancellation becomes exact in the large--$N_c$ limit of QCD and ensures the proper $1/N_c$ counting required of the isovector antiquark distribution in QCD \cite{Strikman:2003gz}. In this study we focus on chiral large--distance contributions to the nucleon's partonic structure at moderately small momentum fractions, $x \gtrsim 10^{-2}$, which arise from individual $\pi B \, (B = N, \Delta)$ configurations in the nucleon wave function. When extending the discussion toward smaller $x$, several effects need to be taken into account which potentially modify this picture. One is diffusion in the partonic wave function, which causes the transverse size of the nucleon's partonic configurations to grow at small $x$ (however, this effect is suppressed at large $Q^2$). Another effect are possible chiral corrections to the structure of the pion itself, which were recently studied in an approach based on resummation of chiral perturbation theory in the leading logarithmic approximation \cite{Kivel:2008ry}. We discuss the limitations to the applicability of the picture of individual $\pi B$ configurations in Secs.~\ref{subsec:diffusion} and \ref{subsec:chiral_pion}. We also comment on the role of $\pi B$ configurations at large longitudinal separations and arbitrary transverse distances, and point out that there may be a window for a chiral regime at $x \gtrsim \sim 10^{-2}$; at smaller $x$ coherence effects become dominant; see Sec.~\ref{subsec:longitudinal}. A detailed investigation of this new regime will be the subject of a separate study. \section{Chiral dynamics and partonic structure} \label{sec:chiral} \subsection{Parametric region of chiral component} \label{subsec:parametric} As the first step of our study we want to delineate the parametric region where parton densities are governed by chiral dynamics and establish its numerical limits, as imposed by other, non--chiral physical scales. The primary object of our discussion is the pion longitudinal momentum and transverse coordinate distribution in a fast--moving nucleon, $f_\pi (y, b)$, where $y$ is the pion momentum fraction. Here we introduce this concept heuristically, appealing to its obvious physical meaning; its precise definition in terms of GPDs and its region of applicability will be elaborated in the following. Chiral dynamics generally governs contributions to nucleon observables from large distances, of the order $1/M_\pi$, which is assumed here to be much larger than all other hadronic length scales in question. These contributions result from exchange of ``soft'' pions in the nucleon rest frame; in the time--ordered formulation of relativistic dynamics these are pions with energies $E_\pi \sim M_\pi$ and momenta $|\bm{k}_\pi| \sim M_\pi$. Chiral symmetry provides that such pions couple weakly to the nucleon and to each other, so that their effects can be computed perturbatively. Boosting these weakly interacting pion--nucleon configurations to a frame in which the nucleon is moving with large velocity, we find that they correspond to longitudinal pion momentum fractions of the order \footnote{In invariant perturbation theory soft pions have virtualities $-k_\pi^2 \sim M_\pi^2$, and Eq.~(\ref{y_chiral}) results from the condition that the minimum pion virtuality which is kinematically required for a given longitudinal momentum fraction, $y$, be at most of the order $M_\pi^2$; \textit{cf}.\ Eq.~(\ref{t_min}) below.} \begin{equation} y \;\; \sim \;\; M_\pi / M_N . \label{y_chiral} \end{equation} At the same time, the soft pions' transverse momenta, which are not affected by the boost, correspond to transverse distances of the order \begin{equation} b \;\; \sim \;\; 1/M_\pi . \label{b_chiral} \end{equation} Together, Eqs.~(\ref{y_chiral}) and (\ref{b_chiral}) determine the parametric region where the pion distribution in the fast--moving nucleon is governed by chiral dynamics, and the soft pion can be regarded as a ``parton'' in the nucleon's wave function in the usual sense (see Fig.~\ref{fig:chiral}). The condition Eq.~(\ref{y_chiral}) implies that the pion momentum fraction in the nucleon is parametrically small, $y \ll 1$, \textit{i.e.}, the soft pion is a ``slow'' parton. As a consequence, one can generally neglect the recoil of the spectator system and identify the distance $b$ with the separation of the pion from the transverse center--of--momentum of the spectator system, $r = b/(1 - y)$ \footnote{The relation between the distance of a constituent from the transverse center--of--momentum, $b$, and its distance from the center--of--momentum of the spectator system, $r$, can easily be derived for the case of a non--interacting system, by starting from the well--known expression for the center--of--mass in the rest frame and performing a boost to large velocity, taking into account that for the non-interacting system the longitudinal momentum fractions are given by the ratios of the constituent masses to the total mass of the system. A more formal derivation, based on the light--cone components of the energy--momentum tensor, can be found in Ref.~\cite{Burkardt:2002hr}.} \cite{Burkardt:2002hr}. This circumstance greatly simplifies the spatial interpretation of chiral contributions to the parton densities. \begin{figure} \includegraphics[width=.32\textwidth]{fpi.eps} \caption[]{Parametric region where the pion distribution in the nucleon is governed by chiral dynamics. The variables are the pion longitudinal momentum fraction, $y$, and transverse position, $b$.} \label{fig:chiral} \end{figure} Pionic configurations in the nucleon wave function are physically meaningful only if the transverse separation of the pion and the spectator system is larger than the sum of the intrinsic ``non--chiral'' sizes of these objects. This basic fact imposes a limit on the applicability of chiral dynamics, even though the dynamics itself may not change dramatically at the limiting distance. In order to make the picture of Fig.~\ref{fig:chiral} quantitative, we have to estimate down to which values of $b$ the concept of pionic configurations is applicable. The transverse size of the ``core'' in the nucleon's partonic wave function in the valence region ($x \gtrsim 10^{-1}$) can be estimated from the transverse axial charge radius of the nucleon, which does not receive contributions from the pion cloud \cite{Frankfurt:2002ka,Strikman:2003gz}: \begin{equation} \langle b^2 \rangle_{\rm axial} \;\; = \;\; {\textstyle\frac{2}{3}} \langle r^2 \rangle_{\rm axial} \;\; \approx \;\; 0.3 \, \textrm{fm}^2 , \label{b2_axial} \end{equation} where the factor $2/3$ results from converting the 3--dimensional charge radius in the rest frame into the 2--dimensional transverse charge radius in the frame where the nucleon is moving fast. Identifying the core radius with the transverse RMS radius, we obtain \begin{equation} R_{\rm core} \;\; = \;\; \left[\langle b^2 \rangle_{\rm axial} \right]^{1/2} \;\; \approx \;\; 0.55 \, \textrm{fm} . \label{b_core} \end{equation} Equation~(\ref{b_core}) imposes a numerical lower limit for the pion impact parameter, $b$, in pionic configurations. Note that this number represents a rough estimate, as the interpretation of RMS radius in terms of a ``size'' depends on the shape of the transverse distribution of partons in the core. A more refined estimate, which takes into account the intrinsic transverse size of the pion as well as the effect of the recoil of the spectator system, is obtained by requiring that $b/(1 - y) > (R_{\rm core}^2 + R_{\pi}^2)^{1/2}$. Assuming that $R_{\pi}^2$ ranges between zero and $R_{\rm core}^2$, and anticipating that the typical $y$--values in the pion distribution at $b \sim R_{\rm core}$ are $y = (1-2) \times M_\pi / M_N \sim 0.2$, we obtain $b > 0.44 - 0.62 \, \text{fm}$, in good agreement with the estimate of Eq.~(\ref{b_core}). When considering the nucleon's partonic structure at small $x$ ($< 10^{-2}$) the above estimate of the nucleon core size needs to be modified to account for the non-chiral growth due to diffusion in the partonic wave function. Also, in this region the transverse size of the pion itself can grow due to chiral corrections. These effects will be discussed separately in Secs.~\ref{subsec:diffusion} and \ref{subsec:chiral_pion}. Chiral dynamics produces also configurations in the fast--moving nucleon characterized by large longitudinal separations of the pion and the spectator system, \begin{equation} l \;\; \sim \;\; 1/M_\pi , \end{equation} with no restriction on $b$. The relevance of these configurations for the nucleon's partonic structure cannot be ascertained without detailed consideration of the effective longitudinal sizes of the subsystems and possible coherence effects, and will be discussed in Sec.~\ref{subsec:longitudinal}. In the following we limit ourselves to chiral contributions at large transverse distances. \subsection{Pion distribution in the nucleon} \label{subsec:pion_distribution} In its region of applicability defined by Eqs.~(\ref{y_chiral}) and (\ref{b_chiral}), the $b$--dependent pion ``parton'' distribution can be calculated as the transverse Fourier transform of the ``pion GPD'' in the nucleon. The latter is defined as the transition matrix element of the operator measuring the number density of pions with longitudinal momentum fraction $y$ in the fast--moving nucleon, integrated over the pion transverse momenta, and with a transverse momentum transfer $\bm{\Delta}_\perp$ to the nucleon (see Fig.~\ref{fig:gpdpi}a): \begin{eqnarray} \lefteqn{ \int\!\frac{d^3 k}{(2\pi)^3} \; \delta (y - k_\parallel / P) } && \nonumber \\ &\times& \langle \bm{p}_2 | \, a_{\pi, a}^\dagger (\bm{k} + \bm{\Delta}/2) \, a_{\pi, a} (\bm{k} - \bm{\Delta}/2) \, | \bm{p}_1 \rangle_{P \rightarrow \infty} \nonumber \\ &=& (2\pi)^3 \, (2 P) \, \delta^{(3)}(\bm{p}_2 - \bm{p}_1 + \bm{\Delta}) \; H_\pi (y, t) , \label{H_pi_number} \end{eqnarray} where $p_{1\parallel} = P \rightarrow \infty, \, \Delta_\parallel = 0$, and \begin{equation} t \;\; \equiv -\bm{\Delta}_\perp^2 . \end{equation} Here $a_{\pi, a}^\dagger$ and $a_{\pi, a}$ denote the pion creation and annihilation operators, and the sum over isospin projections (subscript $a$) is implied. Eq.~(\ref{H_pi_number}) refers to the helicity--conserving component of the nucleon transition matrix element ($\lambda_2 = \lambda_1$), and $H_\pi (y, t)$ is the corresponding GPD; the helicity--flip GPD is defined in analogously but will not be needed in the present investigation. In terms of the pion GPD the transverse coordinate distribution is then obtained as ($b \equiv |\bm{b}|$) \begin{eqnarray} f_\pi (y, b) &=& \int\frac{d^2 \Delta_\perp}{(2\pi )^2} \; e^{-i (\bm{\Delta}_\perp \bm{b})} \; H_\pi (y, t) . \label{f_pi_fourier} \end{eqnarray} We note that a manifestly covariant definition of the pion GPD, as the matrix element of a pionic light--ray operator between nucleon states, was given in Ref.~\cite{Strikman:2003gz}; the equivalence of that definition to Eq.~(\ref{H_pi_number}) is shown by going to the frame where the nucleon is moving fast and expanding the pion fields in creation and annihilation operators. \begin{figure} \includegraphics[width=.4\textwidth]{gpdpi.eps} \caption[]{The pion GPD in the nucleon. (a) Transition matrix element of the density of pions with longitudinal momentum fraction $y \sim M_\pi / M_N$ and transverse momentum transfer $|\bm{\Delta}_\perp | \sim M_\pi$, Eq.~(\ref{H_pi_number}). (b) Invariants used in modeling finite--size effects with form factors. $t_{1, 2}$ are the pion virtualities in the invariant formulation, Eq.~(\ref{t_12}); $s_{1, 2}$ the invariant masses of the $\pi B$ systems in the time--ordered formulation, Eq.~(\ref{s_12}).} \label{fig:gpdpi} \end{figure} The pion GPD in the nucleon implies summation over all relevant baryonic intermediate states. Because the pion wavelength is assumed to be large compared to the typical nucleon/baryon radius, only the lowest--mass excitations can effectively contribute to the GPD in the region of Eqs.~(\ref{y_chiral}) and (\ref{b_chiral}). We therefore retain only the $N$ and $\Delta$ intermediate states in the sum: \begin{eqnarray} H_\pi &=& H_{\pi N} + H_{\pi \Delta} , \\ f_\pi &=& f_{\pi N} + f_{\pi \Delta} . \end{eqnarray} The inclusion of the $\Delta$, whose mass splitting with the nucleon introduces a non-chiral scale which is numerically comparable to the pion mass, represents a slight departure from strict chiral dynamics but is justified by the numerical importance of this contribution; \textit{cf.}\ the discussion of the $N_c \rightarrow \infty$ limit in QCD in Sec.~\ref{sec:largenc}. To study the properties of the $b$--dependent pion distribution at large distances we need a dynamical model which allows us to calculate the pion GPD in the relevant region of momenta. Here we follow a heuristic approach and start from the simplest possible system of pointlike pions and nucleons interacting according to a phenomenological Lagrangian. We shall see later how this definition can be amended to incorporate finite--size effects. In which region the results should be regarded as physical in the light of the discussion in Sec~.\ref{subsec:parametric} will be the matter of the following investigations. The pion GPD in the nucleon can be calculated using invariant perturbation theory, by evaluating the matrix element in Eq.~(\ref{H_pi_number}), or, equivalently, the matrix element of the pionic light--ray operator of Ref.~\cite{Strikman:2003gz}, using the Feynman rules for pointlike $\pi N$ interactions; see Ref.~\cite{Strikman:2003gz} for details. The resulting Feynman integral is computed by introducing light--cone coordinates and performing the integral over the ``minus'' (energy) component of the loop momentum using Cauchy's theorem. Closing the contour around the pole of the propagator of the spectator baryon, one arrives at a representation in which the spectator is on mass--shell, and the emitted and absorbed pion are off mass--shell, with virtualities \footnote{In Ref.~\cite{Strikman:2003gz} the pion virtualities were denoted by $-s_\pm$. Here we denote them by $t_{1, 2}$, reserving $s_{1, 2}$ for the invariant masses of the $\pi B$ system, Eq.~(\ref{s_12}).} \begin{equation} t_{1,2} \;\; \equiv \;\; k_{1, 2}^2 \;\; = \;\; - (\bm{k}_\perp \mp \bar y \bm{\Delta}_\perp / 2)^2 / {\bar y} \, + \, t_{{\rm min}} \label{t_12} \end{equation} (see Fig.~\ref{fig:gpdpi}b). Here $\bm{k}_\perp$ is the transverse momentum of the spectator baryon, \begin{equation} \bar y \;\; \equiv \;\; 1 - y , \end{equation} and \begin{equation} t_{{\rm min}} \;\; \equiv \;\; - \left[ y^2 M_N^2 + y (M_B^2 - M_N^2 ) \right] / {\bar y} \label{t_min} \end{equation} is the minimum virtuality required by kinematics for a given pion momentum fraction, $y$. The $\pi N$ and $\pi \Delta$ GPDs are then obtained as \begin{eqnarray} H_{\pi N} (y, t) &=& 3 g_{\pi NN}^2 \; I_8 (y, t; M_\pi, M_N) , \label{H_pi_N_from_I} \\ H_{\pi \Delta} (y, t) &=& 2 g_{\pi N\Delta}^2 \; I_{10} (y, t; M_\pi, M_\Delta) . \label{H_pi_Delta_from_I} \end{eqnarray} Here $g_{\pi NN}$ and $g_{\pi N\Delta}$ are the coupling constants in the conventions of Ref.~\cite{Strikman:2003gz} and Appendix~\ref{app:su3}, and the distributions are the isoscalar pion GPDs, corresponding to the sum of $\pi^+, \pi^-$ and $\pi^0$ distributions in the proton; \textit{cf.}\ Eq.~(\ref{H_pi_number}). The functions $I_8$ and $I_{10}$ denote the basic transverse momentum integrals arising in the calculation of the meson distribution with intermediate octet and decuplet baryons, \begin{eqnarray} \lefteqn{I_{8, 10} (y, t; M_\pi, M_B)} && \nonumber \\ &\equiv& \frac{y}{4\pi\bar y} \; \int\frac{d^2 k_\perp}{(2\pi )^2} \frac{\phi_{8, 10}} {(t_1 - M_\pi^2) (t_2 - M_\pi^2)} , \label{I_8_10} \end{eqnarray} where \begin{eqnarray} \phi_8 &\equiv& {\displaystyle \frac{1}{2}} \, \left[ -t_1 - t_2 + \bar y \, t + 2 (M_B - M_N)^2 \right], \label{phi_8} \\[2ex] \phi_{10} &\equiv& {\displaystyle \frac{1}{24 \, M_N^2 \, M_\Delta^2}} \left[ 2 M_\Delta^2 (-t_1 - t_2 + t ) \right. \nonumber \\[0ex] && \left. + \; (M_N^2 - M_\Delta^2 -t_1) (M_N^2 - M_\Delta^2 -t_2) \right] \nonumber \\[0ex] && \left. \times \; \left[ 2 (M_\Delta + M_N)^2 - t_1 - t_2 + \bar y \, t \right] . \right. \label{phi_10} \end{eqnarray} Note that while the $t_{1, 2}$ of Eq.~(\ref{t_12}) depend on the vector $\bm{\Delta}_\perp$, the integral Eq.~(\ref{I_8_10}) depends only on $t \equiv -\bm{\Delta}_\perp^2$ because of rotational invariance in transverse space. As it stands, the transverse momentum integral in Eqs.~(\ref{I_8_10})--(\ref{phi_10}) is divergent. This divergence is related to short--distance contributions in the pointlike particle approximation and does not affect the chiral long--distance behavior of the $b$--dependent distribution. Several ways of regularizing this divergence and extracting the chiral contribution will be discussed in the following. The pion GPD can equivalently be evaluated in time--ordered perturbation theory, where Fig.~\ref{fig:gpdpi}a is interpreted as a process where the fast--moving nucleon (momentum $P \gg M_N$) makes a transition to a $\pi B$ intermediate state, in which we evaluate the operator measuring the density of pions with longitudinal momentum $yP$, and then back to a nucleon state whose transverse momentum differs from the original one by $\bm{\Delta}_\perp$. In this formulation the intermediate particles are on mass--shell, but the energies of the $\pi B$ states before and after the operator are different from that of the initial/final nucleon state. The invariant masses of the intermediate states, which are directly proportional to the energies, are given by \begin{eqnarray} s_{1, 2} &=& (k_{1, 2} + p_B)^2 \\ &=& \frac{(\bm{k}_\perp \mp \bm{\Delta}_\perp / 2)^2 + M_\pi^2}{y} + \frac{\bm{k}_\perp^2 + M_B^2}{\bar y} - \frac{\bm{\Delta}_\perp^2}{4} \;\;\;\;\; \label{s_12} \end{eqnarray} (see Fig.~\ref{fig:gpdpi}b). The connection to the invariant formulation is established by noting that, for given $y$ and $\bm{k}_\perp$, \begin{equation} \Delta s_{1, 2} \;\; \equiv \;\; s_{1, 2} - M_N^2 \;\; = \;\; \frac{M_\pi^2 - t_{1, 2}}{y} , \label{correspondence} \end{equation} whence the denominators in Eq.~(\ref{I_8_10}) can also be interpreted as ``energy denominators.'' The minimum value of the invariant mass difference, $\Delta s_{\rm min}$, for given momentum fraction $y$ can be obtained by substituting $t_{1, 2}$ by $t_{\rm min}$, Eq.~(\ref{t_min}). Both the invariant and the time--ordered formulation will be useful for discussing the properties of the chiral long--distance contribution following from Eqs.~(\ref{I_8_10})--(\ref{phi_10}). \subsection{Large--$b$ asymptotics} \label{subsec:large_b} It is instructive to consider the asymptotic behavior of the distribution of pions for $b \rightarrow \infty$ and fixed $y$. It is determined by the leading branch cut singularity of the GPD in the $t$--channel and can be calculated by applying the Cutkosky rules to the Feynman graphs of Fig.~\ref{fig:gpdpi} with pointlike vertices \cite{Strikman:2003gz}. The asymptotic behavior is of the form \begin{equation} f_{\pi B} (y, b) \;\; \propto \;\; \frac{e^{\displaystyle -\kappa_{B} b}}{\kappa_{B} b} , \label{exponential_largeb} \end{equation} where $B = N, \Delta, \ldots$ denotes the intermediate baryon; the expression applies in principle also to higher--mass states, \textit{cf.}\ the discussion below. The decay constant, $\kappa_{B}$, depends on the pion momentum fraction, $y$, and is directly related to the minimum pion virtuality, Eq.~(\ref{t_min}), in the invariant formulation, or the minimum invariant mass difference in the time--ordered formulation, \textit{cf}.\ Eq.~(\ref{correspondence}): \begin{equation} \kappa_B \;\; = \;\; 2 \left( \frac{M_\pi^2 - t_{\rm min}}{\bar y} \right)^{1/2}. \label{kappa_B_virtuality} \end{equation} To exhibit the $y$--dependence of the decay constant in the parametric region of chiral dynamics, $y \sim M_\pi / M_N$, Eq.~(\ref{y_chiral}), we set \begin{equation} y \;\; = \;\; \eta \, M_\pi / M_N , \end{equation} where the scaling variable, $\eta$, is generally of order unity. Substituting Eq.~(\ref{t_min}) into Eq.~(\ref{kappa_B_virtuality}) and dropping terms suppressed by powers of $M_\pi / M_N$, we obtain \begin{equation} \kappa_{B} \;\; = \;\; 2 \left[ (1 + \eta^2) M_\pi^2 + \eta \frac{(M_B^2 - M_N^2) M_\pi}{M_N} \right]^{1/2} . \label{kappa_B_eta} \end{equation} This result has several interesting implications: \begin{itemize} \item[(a)] For the nucleon intermediate state ($B = N$) the second term is zero, and one has $\kappa_{N} \propto M_\pi$ with a coefficient of order unity and depending on $\eta$. In this case the $b$--distribution exhibits a ``Yukawa tail'' with a $y$--dependent range of the order $1/M_\pi$, as expected. \item[(b)] For a higher--mass intermediate state ($B \neq N$) the decay constant is determined by competition of the chiral scale, $M_\pi^2$, and the non--chiral scale, $(M_B^2 - M_N^2) M_\pi / M_N$. The larger the $N$--$B$ mass splitting, the smaller $\eta$ has to be for the chiral scale to dominate. This effect suppresses the contribution of higher--mass baryons to $f_{\pi} (y, b)$ at large $b$ and finite $\eta$. Note also that the pre-exponential factor, which is not shown in Eq.~(\ref{exponential_largeb}) for brevity, vanishes $\propto y$ for $y \rightarrow 0$ \cite{Strikman:2003gz}. \item[(c)] For $\eta \rightarrow 0$ one finds $\kappa_{B} \rightarrow 2 M_\pi$ irrespective of the $N$--$B$ mass splitting. In this limit the transverse ``Yukawa tail'' has the range one would naively expect from the analogy with the 3--dimensional situation. However, this limit is purely formal, as this region makes a vanishing contribution to the nucleon's partonic structure at moderate $x$; \textit{cf}.\ the discussion in Secs.~\ref{subsec:contribution_nucleon} and \ref{sec:smallx} below. \end{itemize} For pion momentum fractions parametrically of order unity, $y \sim 1$, Eq.~(\ref{kappa_B_virtuality}) gives a decay constant of the order $\kappa_{B} \sim M_N$. An exponential decay with range $\sim 1/M_N$ is not a chiral contribution to the pion distribution, as is expected, because the values of $y$ lie outside the parametric region of Eq.~(\ref{y_chiral}). In sum, the large--$b$ asymptotic behavior obtained from the naive pion distribution with pointlike $\pi N$ couplings fully supports the general arguments of Sec.~\ref{subsec:parametric} concerning the parametric region of the chiral component. One notes that the characteristic transverse range of the chiral contribution of the pion distribution, $1/(2 M_\pi) = 0.71 \, \text{fm}$, is numerically not substantially larger than our estimate of the non--chiral ``core'' size, Eq.~(\ref{b_core}). This shows that an effective field theory approach to chiral dynamics, which implicitly assumes that the core has zero size and builds up its structure by counter terms, is not practical here, and underscores the rationale for our phenomenological approach, where finite--size effects are included explicitly. \subsection{Contribution to nucleon parton densities} \label{subsec:contribution_nucleon} The chiral contribution to the nucleon's parton densities is obtained as the convolution of the pion momentum distribution in the nucleon with the relevant parton distribution in the pion. For the gluon, the isoscalar quark/antiquark, and the isovector quark/antiquark densities it takes the form \footnote{Equation~(46) of Ref.~\cite{Strikman:2003gz} incorrectly writes the convolution formula for the antiquark flavor asymmetry with the antiquark density in the pion rather than the valence quark density. The correct expression is with the valence quark density, \textit{cf.}\ Eq.~(\ref{conv_isovector}). This does not affect the conclusions about the large--$N_c$ behavior presented in Ref.~\cite{Strikman:2003gz}, which was the sole point of the discussion there.} \begin{eqnarray} \lefteqn{g(x, b)_{\rm chiral}} && \nonumber \\ &=& \int_x^1 \frac{dy}{y} \; \left[ f_{\pi N} + f_{\pi \Delta}\right] (y, b) \; g_\pi (z), \label{conv_gluon} \\[3ex] \lefteqn{ \left[u + d \right] (x, b)_{\rm chiral} \;\; = \;\; \left[ \bar u + \bar d \right] (x, b)_{\rm chiral} } && \nonumber \\ &=& \int_x^1 \frac{dy}{y} \; \left[ f_{\pi N} + f_{\pi \Delta}\right] (y, b) \; q_\pi^{\text{tot}} (z), \label{conv_isoscalar} \\[3ex] \lefteqn{ \left[u - d \right] (x, b)_{\rm chiral} \;\; = \;\; \left[ \bar d - \bar u \right] (x, b)_{\rm chiral} } && \nonumber \\ &=& \int_x^1 \frac{dy}{y} \; \left[ {\textstyle \frac{2}{3} f_{\pi N} - \frac{1}{3} f_{\pi \Delta}} \right] (y, b) \; q_\pi^{\text{val}} (z) , \label{conv_isovector} \end{eqnarray} where \begin{equation} z \;\; \equiv \;\; x/y \label{z_def} \end{equation} is the parton momentum fraction in the pion. Here $f_{\pi N}$ and $f_{\pi \Delta}$ are the isoscalar pion distributions (sum of $\pi^+, \pi^-$ and $\pi^0$) with $N$ and $\Delta$ intermediate states in the conventions of Refs.~\cite{Koepf:1995yh,Strikman:2003gz} and Appendix~\ref{app:su3}; the isovector nature of the asymmetry, Eq.~(\ref{conv_isovector}), is encoded in the numerical prefactors. The functions $g_\pi$, $q_\pi^{\text{tot}}$, and $q_\pi^{\text{val}}$ are the gluon, isoscalar (total), and isovector (valence) quark/antiquark densities in the pion, \begin{eqnarray} q_\pi^{\text{tot}} (z) &=& \left[ \bar u + \bar d \right]_{\pi\pm, \pi 0} (z) \;\; = \;\; \left[ u + d \right]_{\pi\pm, \pi 0} (z) \nonumber \\ &=& {\textstyle\frac{1}{2}} \left[ u + \bar u + d + \bar d \right]_{\pi\pm, \pi 0} (z) , \\[2ex] q_\pi^{\text{val}} (z) &=& \pm \left[ \bar d - \bar u \right]_{\pi\pm} (z) \;\; = \;\; \pm \left[ u - d \right]_{\pi\pm} (z) \nonumber \\ &=& \pm {\textstyle\frac{1}{2}} \left[ u - \bar u - d + \bar d \right]_{\pi\pm} (z) ; \label{q_pi_val} \end{eqnarray} the latter is normalized as \begin{equation} \int_0^1 dz \, q_\pi^{\text{val}} (z) \;\; = \;\; 1 . \label{valence_normalization} \end{equation} The $\pi^0$ does not have a valence distribution because of charge conjugation invariance, and we assume isospin symmetry. Note that the parton densities in the pion, as well as the result of the convolution integrals in Eqs.~(\ref{conv_gluon})--(\ref{conv_isovector}), depend on the resolution scale; we have suppressed this dependence for brevity. The convolution formulas for the strange antiquark density and the $SU(3)$--flavor symmetry breaking asymmetry will be given in Sec.~\ref{sec:decomposition}. The expressions in Eqs.~(\ref{conv_gluon})--(\ref{conv_isovector}) apply to parton momentum fractions of the order $x \sim M_\pi / M_N$ but otherwise not exceptionally small, and transverse distances $b \sim 1/M_\pi$. In deriving them we have assumed that the ``decay'' of the pion into partons happens locally on the transverse distance scale of the chiral $b$--distribution, $b \sim 1/M_\pi$ (see Fig.~\ref{fig:chiral}). This is justified parametrically, as for the values of $x$ under consideration the parton momentum fraction in the pion does not reach small values ($x < z < 1$ in the convolution integral) and one can neglect chiral effects which cause the size of the pion itself to grow at small $z$. To see in which region of $x$ the chiral contribution to the isovector antiquark density is localized, it is convenient to write the convolution formula Eq.~(\ref{conv_isovector}) in the form \begin{eqnarray} x \left[ \bar d - \bar u \right] (x, b)_{\rm chiral} &=& \int_x^1 dy \; \left[ {\textstyle \frac{2}{3} f_{\pi N} - \frac{1}{3} f_{\pi \Delta}} \right] (y, b) \nonumber \\[1ex] &\times& \; z q_\pi^{\text{val}} (z) , \label{conv_isovector_x} \end{eqnarray} where we have multiplied both sides of Eq.~(\ref{conv_isovector}) by $x$ and used Eq.~(\ref{z_def}) on the right--hand side. Now both functions in the integrand vanish for small arguments: $f_{\pi B} (y) \rightarrow 0$ for $y \rightarrow 0$, and $z q_\pi^{\text{val}} (z) \rightarrow 0$ for $z \rightarrow 0$. Noting that the valence distribution $z q_\pi^{\text{val}} (z)$ is localized around $z \sim 1/2$ at low scales, and that the pion momentum distribution is centered around $y \sim M_\pi / M_N$, we conclude that the convolution produces a sea quark distribution in the nucleon centered around values $x = yz \sim (1/2) \times M_\pi / M_N$, in agreement with the general expectation. The same argument applies to the bulk of the chiral isoscalar density, Eq.~(\ref{conv_isoscalar}), which arises mainly from the valence quark content of the pion; only at very small $x$ the non-valence quarks in the pion produce a distinct contribution. Note also that the valence quark density in the pion at $z \sim 1/2$ is generated mostly by relatively small--size configurations in the pion, justifying our approximation of neglecting the intrinsic transverse size of the pion in the convolution integrals. One immediately sees from Eqs.~(\ref{conv_isoscalar}) and (\ref{conv_isovector}) that the chiral large--distance component is larger in the isoscalar than in the isovector quark distributions, because the $N$ and $\Delta$ contributions add in the isoscalar sector, Eq.~(\ref{conv_isoscalar}), while they partly cancel in the isovector sector, Eq.~(\ref{conv_isovector}) \cite{Koepf:1995yh}. This is contrary to the general expectation that chiral effects manifest themselves mostly in the sea quark flavor asymmetry $\bar d - \bar u$. The cancellation between $N$ and $\Delta$ contributions in the isovector case becomes perfect in the large--$N_c$ limit of QCD and restores the proper $N_c$ scaling of the isovector distributions; see Sec.~\ref{sec:largenc}. In principle one can use the asymptotic expressions for the pion distribution in the nucleon, Eqs.~(\ref{exponential_largeb}) and (\ref{kappa_B_virtuality}), to do a numerical estimate of the large--distance contribution to the nucleon parton densities based on Eqs.~(\ref{conv_gluon})--(\ref{conv_isovector}). This approach was taken in Ref.~\cite{Strikman:2003gz} to estimate the chiral contribution to the nucleon's gluonic transverse size, $\langle b^2 \rangle_g$, proportional to the $b^2$--weighted integral of the impact--parameter dependent gluon density. Because of the weighting with $b^2$ this quantity emphasizes large transverse distances, and the estimates of the $b$--integrated chiral contribution are relatively insensitive to the lower limit in $b$ imposed in the integral (see also Sec.~\ref{sec:size}). In the present investigation we are interested in the antiquark densities \textit{per se} (not weighted with $b^2$), where there is no such enhancement of large distances, and estimates of the chiral contribution are more sensitive to the lower limit in $b$. We therefore approach this problem differently, by analyzing the phenomenological pion cloud model (which incorporates finite--size effects) and establishing down to which $b$ the numerical predictions are insensitive to the short--distance cutoff (Sec.~\ref{sec:model}). The numerical evaluation of the long--distance contribution based on Eqs.~(\ref{conv_gluon})--(\ref{conv_isovector}) will then be done based on the results of this investigation (Secs.~\ref{sec:decomposition} and \ref{sec:size}). \section{Pion cloud model in impact parameter representation} \label{sec:model} \subsection{Modeling finite--size effects} For a quantitative study of the chiral large--distance component in the nucleon's partonic structure we need a dynamical model which allows us to compute the distribution of pions beyond its leading asymptotic behavior. In addition, we must address the question down to which values of $b$ numerical study of this component is meaningful, in the sense that it is not overwhelmed by short--distance contributions unrelated to chiral dynamics. Ultimately, this question can only be answered in a dynamical model which smoothly ``interpolates'' between the chiral long--distance regime and the effective short--distance dynamics. Here we study this question in the framework of the phenomenological pion cloud model, where the short--distance dynamics is not treated explicitly, but modeled by form factors implementing a finite hadronic size unrelated to chiral dynamics. This study serves two purposes --- it establishes what part of the predictions of the traditional pion cloud model actually arises from the long--distance region governed by chiral dynamics, and it offers a practical way of computing this universal long--distance contribution. In the phenomenological pion cloud model, the pion GPD in the nucleon is defined by the graph of Fig.~\ref{fig:gpdpi}, \textit{cf.}\ Eqs.~(\ref{I_8_10})--(\ref{phi_10}), in which now form factors are associated with the $\pi N B$ vertices, rendering the transverse momentum integral explicitly finite. Two different schemes to implement these form factors are commonly used and have extensively been discussed in the literature. One, based on the invariant formulation in which the spectator baryon in on mass--shell, restricts the virtualities of the exchanged pions by inserting in Eq.~(\ref{I_8_10}) a form factor \begin{equation} \mathcal{F}\left( \frac{M_\pi^2 - t_{1, 2}}{\Lambda^2_{\text{virt}}} \right) \label{ff_virt} \end{equation} for each $\pi N B$ vertex (see Fig.~\ref{fig:gpdpi}b). Here $\mathcal{F}(a)$ denotes a function of finite range which vanishes for $a \rightarrow \infty$; for example, an exponential, $\exp(-a)$, or the dipole form factor, $(1 + a)^{-2}$. These form factors can be compared to those in the well--known meson exchange parametrizations of the $NN$ interaction, where the exchanged pion is regarded as a virtual particle \cite{Machleidt:hj}. The other scheme, based on the time--ordered formulation, restricts the invariant mass of the $\pi B$ systems in the intermediate states by form factors of the type \cite{Zoller:1991cb} \begin{equation} \mathcal{F}\left( \frac{s_{1, 2} - M_N^2} {\Lambda^2_{\text{inv.\ mass}}} \right) \label{ff_inv} \end{equation} (see Fig.~\ref{fig:gpdpi}b). An advantage of this scheme is that it preserves the momentum sum rule in the transition $N \rightarrow \pi B$, \textit{i.e.}, the longitudinal momentum distribution of the baryon $B$ in the nucleon is given by $f_{\pi B} (1 - y)$ for $B = N, \Delta$ \cite{Zoller:1991cb,Melnitchouk:1992yd}. The relation between the two different cutoff schemes can easily be derived from Eq.~(\ref{correspondence}). Effectively, \begin{equation} \Lambda^2_{\text{virt}} \;\; = \;\; y \, \Lambda^2_{\text{inv.\ mass}} , \end{equation} \textit{i.e.}, a constant invariant mass cutoff amounts to a $y$--dependent virtuality cutoff which tends to zero as $y \rightarrow 0$. In the traditional formulation of the pion cloud model, without restriction to the large--$b$ region, the two schemes lead to rather different pion momentum distributions. The distributions at large $b$ and $y \sim M_\pi / M_N$, however, are dominated by vanishing pion virtualities \textit{viz.}\ invariant mass differences, so that the results in the two schemes become effectively equivalent, up to small finite renormalization effects. In the following numerical studies we shall employ the virtuality cutoff as used in Ref.~\cite{Koepf:1995yh}; the equivalence of the two schemes for our purposes will be demonstrated explicitly in Sec.~\ref{subsec:effective}. We emphasize that we are interested in the pion cloud model with form factors only as a means to identify the chiral large--distance contribution and delineate the region where it is universal and independent of the form factors. We do not consider those aspects of the model related to the fitting of data without restriction to large distances (tuning of cutoff parameters, $\pi N B$ couplings, \textit{etc.}); those have been discussed extensively in the literature reviewed in Refs.~\cite{Kumano:1997cy}. \subsection{Universality at large $b$} We first consider the dependence of the pion distribution in the nucleon on the impact parameter, $b$. Specifically, we want to demonstrate that it reproduces the ``universal'' chiral behavior Eq.~(\ref{exponential_largeb}) at large $b$, and investigate for which values of $b$ the distribution is substantially modified by the form factors. To this end we calculate the pion GPD by numerical evaluation of the loop integral, Eq.~(\ref{I_8_10}), with a virtuality cutoff of the type of Eq.~(\ref{ff_virt}), and perform the transformation to the impact parameter representation according to Eq.~(\ref{f_pi_fourier}); useful formulas for the numerical calculation are collected in Appendix~\ref{app:evaluation}. Figure~\ref{fig:fb} shows $f_{\pi N}(y, b)$ obtained with an exponential form factor ($\Lambda_{\text{virt}} = 1.0\, \text{GeV}$, a typical value in traditional applications of the pion cloud model), as a function of $b$ for $y = 0.07$ and 0.3, which is 1/2 and 2 times $M_\pi / M_N$, respectively. Also shown are the distributions obtained with pointlike particles (no form factors), in which the loop integral was regularized by subtraction at $\bm{\Delta}_\perp^2 = 0$; this subtraction of a $\bm{\Delta}_\perp^2$--independent term in the GPD corresponds to a modification of the impact parameter distribution by a delta function term $\propto \delta^{(2)}(\bm{b})$, which is ``invisible'' at finite $b$ \cite{Strikman:2003gz}. One sees that for $b \gtrsim 0.5 \, \textrm{fm}$ the results of the two calculations coincide, showing that in this region the pion distribution is not sensitive to the form factors. Comparison of different functional forms of the form factor (exponential, dipole) also supports this conclusion. Furthermore, we note that for large $b$ both distributions in Fig.~\ref{fig:fb} exhibit the universal asymptotic behavior derived earlier \cite{Strikman:2003gz}. \begin{figure} \includegraphics[width=.48\textwidth]{fb_area.eps} \caption[]{The transverse spatial distribution of pions in the nucleon, $f_{\pi N} (y, b)$, as a function of $b$, for values $y = 0.07$ and 0.3. Shown is the radial distribution $2\pi b \, f_{\pi N} (y, b)$, whose integral over $b$ (area under the curve) gives the pion momentum distribution. \textit{Solid lines:} Pion cloud model with virtuality cutoff (exponential form factor, $\Lambda_{\pi N} = 1.0 \, \textrm{GeV}$) \cite{Koepf:1995yh}. \textit{Dashed line:} Distribution for pointlike particles, regulated by subtraction at $\bm{\Delta}_\perp^2 = 0$; the integral over $b$ does not exist in this case. The estimated ``core'' radius, Eq.~(\ref{b_core}), is marked by an arrow.} \label{fig:fb} \end{figure} It is interesting that the $b$--value where in Fig.~\ref{fig:fb} the ``universal'' behavior of $f_{\pi N} (y, b)$ sets in is numerically close to the transverse radius of the nucleon's ``core,'' inferred earlier from independent considerations, $R_{\rm core} \approx 0.55 \, \text{fm}$, \textit{cf.}\ Eq.~(\ref{b_core}). This shows that the pion cloud model can safely be used to compute the large--$b$ parton densities over the entire region defined by Eq.~(\ref{b_core}). \begin{figure} \includegraphics[width=.48\textwidth]{virt_ally.eps} \caption[]{The median pion virtuality in the unregularized integral, Eqs.~(\ref{I_8_10})---(\ref{phi_10}), as a function of $b$, for $y = 0.07$ (solid line) and $0.3$ (dotted line). It is defined as the value of the virtuality cutoff, $\Lambda^2_{\text{virt}}$, for which $f_{\pi N} (y, b)$ reaches half of its value for $\Lambda^2_{\text{virt}} \rightarrow \infty$, corresponding to the unregularized integral.} \label{fig:virt} \end{figure} Figure~\ref{fig:virt} illustrates the connection between the transverse distance and the pion virtualities in Eq.~(\ref{I_8_10}) from a different perspective. Shown there is the median pion virtuality in the unregularized loop integral, defined as the value of the virtuality cutoff, $\Lambda^2_{\text{virt}}$, for which the regularized $f_{\pi N} (y, b)$ reaches half of its value for $\Lambda^2_{\text{virt}} \rightarrow \infty$; the latter coincides with the value obtained by regularization through subtraction. The function $f_{\pi N} (y, b)$ is always positive when evaluated with an exponential virtuality cutoff, and monotonously decreasing as a function of $\Lambda^2_{\text{virt}}$, so that the median value of $\Lambda^2_{\text{virt}}$ provides a sensible measure of the average virtualities in the integral Eq.~(\ref{I_8_10}) for given $y$ and $b$. One sees that the average pion virtualities in the loop strongly decrease with increasing $b$, indicating the approach to the universal chiral region. We recall that the leading asymptotic behavior at $b \rightarrow \infty$ is determined by quasi--on--shell pions, \textit{cf.}\ the derivation in Sec.~\ref{subsec:large_b}. \subsection{Effective pion momentum distribution} \label{subsec:effective} \begin{figure} \includegraphics[width=.48\textwidth]{fy_virt_inv.eps} \caption[]{Effective momentum distribution of pions in $\pi N$ configurations with impact parameters $b > b_0$, Eq.~(\ref{fy_bint}), in the pion cloud model. \textit{Solid lines:} Distributions obtained with a virtuality cutoff, Eq.~(\ref{ff_virt}) (exponential form factor, $\Lambda_{\text{virt}} = 1.0 \, \textrm{GeV}$), for $b_0 = 0$ (full integral), $b_0 = 0.55 \, \text{fm}$ and $b_0 = 1.1 \, \text{fm}$. \textit{Dashed lines:} Same for distributions obtained with an invariant mass cutoff, Eq.~(\ref{ff_virt}) (exponential form factor, $\Lambda_{\text{virt}} = 1.66 \, \textrm{GeV}$). The value of $\Lambda_{\text{virt}}$ was chosen such that it produces the same total number of pions ($y$--integral) for the full distribution as the given virtuality cutoff. The value $y = M_\pi / M_N$ is indicated by an arrow.} \label{fig:fy} \end{figure} We now want to investigate the distribution of pions at large transverse distances as a function of the momentum fraction, $y$. In keeping with our general line of approach, we do this by studying how the momentum distribution of the pion cloud model with form factors is modified when a restriction on the minimum $b$ is imposed. We define the effective momentum distribution of pions with $b > b_0$ as the integral \begin{equation} \int d^2 b \; \Theta (b > b_0) \; f_{\pi B} (y, b) \hspace{3em} (B = N, \Delta); \label{fy_bint} \end{equation} for $b_0 = 0$ we recover the momentum distribution of pions in the traditional usage of the pion cloud model. Figure~\ref{fig:fy} (solid lines) shows the $b$--integrated distribution Eq.~(\ref{fy_bint}), obtained with an exponential virtuality cutoff ($\Lambda_{\text{virt}} = 1.0 \, \text{GeV}$), for $b_0 = 0$ (full integral) as well as $b_0 = 0.55 \, \text{fm}$ and $1.1 \, \text{fm}$, corresponding to 1 and 2 times the phenomenological core radius, $R_{\text{core}}$. One sees that the restriction to large $b$--values strongly suppresses large pion momentum fractions and shifts the strength of the distribution toward values of the order $y \sim M_\pi / M_N$, in agreement with the general expectations formulated in Sec.~\ref{sec:chiral}. From the perspective of the traditional pion cloud model, the results of Fig.~\ref{fig:fy} show that less than half of the pions in that model arise from the region $b > R_{\text{core}}$, where the pion cloud can be regarded as a distinct component of the nucleon wave function Also shown in Figure~\ref{fig:fy} (dashed lines) are the corresponding distributions obtained with an invariant mass cutoff, Eq.~(\ref{ff_inv}). For the sake of comparison the cutoff parameter $\Lambda^2_{\text{inv}}$ was chosen here such that it gives the same total number of pions ($y$--integral) for the ``full'' distributions in which no restriction on $b$ is imposed ($b_0 = 0$); the value of $\Lambda_{\text{inv}} = 1.66 \, \text{GeV}$ thus obtained is within the range considered in phenomenological applications of the pion cloud model \cite{Melnitchouk:1998rv}. One sees that the full distributions are quite different for the virtuality and the invariant mass cutoff, as dictated by the relation (\ref{correspondence}). However, when restricted to large $b$ the $y$--distributions in the two regularization schemes become more and more alike, as their strength shifts toward values of the order $y \sim M_\pi / M_N$. This explicitly demonstrates the equivalence of the virtuality and the invariant mass regularization in the context of our approach, as announced above. \subsection{Extension to $SU(3)$ flavor} \begin{figure}[b] \includegraphics[width=.48\textwidth]{fk_part.eps} \caption[]{Effective momentum distribution of kaons in $K\Lambda$ configurations with impact parameters $b > b_0$, \textit{cf.}\ Eq.~(\ref{fy_bint}), in the meson cloud model with virtuality cutoff (exponential form factor, $\Lambda_{\text{virt}} = 1.0 \, \text{GeV}$). \textit{Solid line:} $b_0 = 0$ (full integral). \textit{Dashed lines:} $b_0 = 0.55 \, \text{fm}$ and $b_0 = 1.1 \, \text{fm}$.} \label{fig:fk_part} \end{figure} In our studies of the strange sea and the $SU(3)$--breaking flavor asymmetry below we shall consider also the contributions from $K$ and $\eta$ mesons to the sea quark distributions at large distances. Because the masses of these mesons are numerically comparable to the typical hadronic mass scale (as given, say, by the vector meson mass), their contributions to the partonic structure of the nucleon cannot be associated with chiral dynamics, even at large transverse distances. Still, in the context of the present discussion of the pion cloud model, it is instructive to study the distribution of $K$ and $\eta$ in the impact parameter representation, and contrast it with that of the $\pi$. The pseudoscalar octet meson couplings to the nucleon, as determined by $SU(3)$ flavor symmetry, and the definition of their impact parameter--dependent momentum distributions are summarized in Appendix~\ref{app:su3}. Significant contributions come only from the $K\Lambda$ and $K\Sigma^\ast$ channels. The large--$b$ behavior of these distributions is formally governed by the asymptotic expression, Eqs.~(\ref{exponential_largeb}) and (\ref{kappa_B_virtuality}), with the $\pi$ mass replaced by the $K$ mass. Figure~\ref{fig:fk_part} shows the numerically computed effective momentum distribution of $K$ in $K\Lambda$ configurations, with and without restriction to large $b$, \textit{cf.}\ Eq.~(\ref{fy_bint}), which should be compared to the corresponding distributions for the $\pi$ in Fig.~\ref{fig:fy}. One sees that the overall magnitude of $f_{K\Lambda}$ is substantially smaller than that of $f_{\pi N}$, because of the smaller coupling constant (no isospin degeneracy, \textit{cf}.\ Appendix~\ref{app:su3}) and the larger meson and intermediate baryon mass. More importantly, one notes that the restriction to large $b$ suppresses the $K$ distribution much more strongly than the $\pi$ distribution; only about $1/5$ of all kaons in the meson cloud model are located at transverse distances $b > 0.55 \, \text{fm}$, and less than $1\%$ are found at $b > 1.1 \, \text{fm}$. While hardly surprising, these numbers show clearly that the $K$ (and $\eta$) contribution to the partonic structure above the nucleon's core radius, $R_{\rm core} = 0.55 \, \text{fm}$ is extremely small. \section{Large--distance component of the nucleon sea} \label{sec:decomposition} \subsection{Isovector sea $\bar d - \bar u$} \label{subsec:isovector} \begin{figure} \includegraphics[width=.48\textwidth]{asym.eps} \caption[]{\textit{Solid line:} Large--distance contribution to the antiquark flavor asymmetry, $[\bar d - \bar u](x)$, obtained from $\pi N$ and $\pi \Delta$ configurations restricted to impact parameters $b > R_{\rm core} \ = 0.55 \, \textrm{fm}$, \textit{cf.}\ Eq.~(\ref{fy_bint}). \textit{Dotted lines:} Same with $b > 0.8 \; R_{\rm core}$ (upper line) and $1.2 \; R_{\rm core}$ (lower line). \textit{Data:} Result of analysis of final FNAL E866 Drell--Yan data \cite{Towell:2001nh}; statistical and systematic errors were added in quadrature. All curves and data points refer to the scale $Q^2 = 54\, \text{GeV}^2$.} \label{fig:conv} \end{figure} We now apply the formalism developed in Secs.~\ref{sec:chiral} and \ref{sec:model} to study the chiral large--distance contributions to the sea quark distributions in the nucleon. To this end, we evaluate the convolution formulas, Eqs.~(\ref{conv_isoscalar})--(\ref{conv_isovector}), with the $b$--integrated pion distribution, Eq.~(\ref{fy_bint}), where the lower limit, $b_0$, is taken sufficiently large to exclude the model--dependent small--distance region, \textit{cf.}\ Fig.~\ref{fig:fb}. Our standard value for this parameter is the phenomenological ``core'' radius, Eq.~(\ref{b_core}); variation of this value will allow us to estimate the sensitivity of the results to unknown short--distance dynamics. While not permitting a complete description of the sea quark distributions, our results allow us to quantify how much comes from the ``universal'' large--distance region, providing guidance for future comprehensive models of the partonic structure. We first consider the isovector antiquark distribution in the proton, $[\bar d - \bar u](x)$, which experiences only non--singlet QCD evolution and is largely independent of the normalization scale. The convolution formula Eq.~(\ref{conv_isovector}) involves the valence quark distribution of the pion; the normalization of this distribution is fixed by Eq.~(\ref{valence_normalization}), and its shape has been determined accurately by fits to the $\pi N$ Drell--Yan data; see Ref.~\cite{Gluck:1999xe} and references therein. We use the leading--order parametrization of the valence distribution provided in Ref.~\cite{Gluck:1999xe}; the differences to the next--to--leading order parametrization are minor in this case. Figure~\ref{fig:conv} shows the chiral long--distance contribution obtained when $b_0$ is taken to be the phenomenological ``core'' radius, $R_{\rm core} = 0.55 \, \textrm{fm}$ (solid line), as well as the band covered when $b_0$ is changed from this value by $\pm 20\%$ (dotted lines). Also shown in the figure are the results of an analysis of the final data from the FNAL E866 Drell--Yan experiment, presented at a common scale $Q^2 = 54 \, \text{GeV}^2$ \cite{Towell:2001nh}. One sees that the large--distance contribution to the asymmetry is practically zero for $x > 0.3$, as expected from the general considerations of Sec.~\ref{sec:chiral}. At $x\sim 0.1$ the large--distance contribution accounts for $\sim 30 \%$ of the measured asymmetry, indicating that most of it results from the nucleon's core at small transverse distances. This conclusion is robust and, as demonstrated in Sec.~\ref{sec:model}, does not depend on the form factors employed in the calculation within the pion cloud model (the specific results shown here were obtained with an exponential virtuality cutoff with $\Lambda_{\pi N} = 1.0 \, \textrm{GeV}$ and $\Lambda_{\pi\Delta} = 0.8 \, \textrm{GeV}$ \cite{Koepf:1995yh}). At small $x$ ($\sim 0.01$) the large--distance contribution obtained in our approach comes closer to the data; however, the quality of the present data is rather poor, and it is difficult to infer the magnitude of the required ``core'' contribution by comparing the present estimate of the large--distance contribution to the data in this region of $x$. One sees from Eq.~(\ref{conv_isovector}) that the isovector antiquark distribution involves strong cancellations between the contributions from $\pi N$ and $\pi \Delta$ intermediate states. This is not accidental --- the cancellation between the two becomes exact in the large--$N_c$ limit of QCD, and is in fact necessary to restore the proper $N_c$ scaling of the isovector distribution; see Sec.~\ref{sec:largenc}. \subsection{Isoscalar sea $\bar u + \bar d$} The isoscalar light antiquark distribution, $[\bar u + \bar d](x)$, is subject to singlet QCD evolution and thus exhibits stronger scale dependence than the isovector distribution. The convolution formula for this distribution, Eq.~(\ref{conv_isoscalar}), involves the total (singlet) antiquark distribution in the pion, which we may write in the form \begin{equation} q_\pi^{\text{tot}}(z) \;\; = \;\; q_\pi^{\text{val}}(z) + 2 q_\pi^{\text{sea}}(z) , \label{q_pi_split} \end{equation} where $q_\pi^{\text{val}}$ is the valence distribution, Eq.~(\ref{q_pi_val}), and $q_\pi^{\text{sea}}$ the ``sea'' distribution \footnote{The relation of our conventions for the pion parton densities to those of Ref.~\cite{Gluck:1999xe} (GRS) is $q_\pi^{\text{val}} = \frac{1}{2} v_\pi (\text{GRS}), \; q_\pi^{\text{sea}} = \bar q_\pi (\text{GRS})$.} \begin{equation} q_\pi^{\text{sea}} \; = \; \bar u_{\pi +} \; = \; d_{\pi +} \; = \; u_{\pi -} \; = \; \bar d_{\pi -} . \label{q_pi_sea} \end{equation} The pion sea was determined within a radiative parton model analysis, supplemented by a constituent quark picture which relates the pion to nucleon parton densities, and found to be relatively small \cite{Gluck:1999xe}. Again, we use the leading--order parametrization for the parton densities in the pion. \begin{figure}[b] \includegraphics[width=.48\textwidth]{udbar.eps} \caption[]{\textit{Solid/dashed/dashed--dotted line:} Large--distance contribution to the isoscalar antiquark density, $x [\bar u + \bar d]$, resulting from $\pi N$ and $\pi \Delta$ configurations restricted to $b > R_{\rm core} \ = 0.55 \, \textrm{fm}$. The plot shows separately the contributions arising from the valence, sea, and total antiquark density in the pion, \textit{cf.}\ Eq.~(\ref{q_pi_split}). \textit{Dotted line:} MSTW2008LO leading--order parametrization \cite{Martin:2009iq}.} \label{fig:udbar} \end{figure} Figure~\ref{fig:udbar} shows our result for the chiral large--distance contribution to the isoscalar antiquark distribution, separately for the valence and sea distributions in the pion as well as the total, at the scale $Q^2 = 2 \, \text{GeV}^2$. One sees that the sea in the pion becomes important only at $x \ll M_\pi/M_N$, where the antiquark momentum fraction in the pion can become small, $z \ll 1$. Altogether, the large--distance contribution accounts for only $\sim 1/5$ of the total $\bar u + \bar d$ in the nucleon at $x \sim 0.1$. The antiquark distribution obtained from $\pi B \, (B = N, \Delta)$ configurations cannot be larger than the total antiquark distribution in the nucleon, which includes the radiatively generated sea. The large--distance contribution calculated in our approach easily satisfies this theoretical constraint, as can be seen from the comparison with the parametrization obtained in the recent leading--order global fit of Ref.~\cite{Martin:2009iq} (MSTW20008LO), see Fig.~\ref{fig:udbar}. We note that the traditional pion cloud model without restriction to large $b$, which generates pions with transverse momenta of the order $\sim 1 \, \text{GeV}$ and virtualities $\sim 1 \, \text{GeV}^2$, produces an isoscalar sea which comes close to saturating the empirical $\bar u + \bar d$ at large $x$ with the usual range of parameters parameters, and can even overshoot it for certain choices \cite{Koepf:1995yh,Melnitchouk:1998rv}. The restriction of $\pi B$ configurations to large $b$ in our approach solves this problem in a most natural way. \subsection{Strange sea $s, \bar s$} \label{subsec:sbar} The strange sea ($s, \bar s$) in the nucleon at large distances has two distinct components. One is the chiral component, arising from $s$ and $\bar s$ in the pion in $\pi N$ and $\pi \Delta$ configurations. It is given by a similar convolution formula as the isoscalar sea, $\bar u + \bar d$, Eq.~(\ref{conv_isoscalar}), \begin{equation} \bar s (x, b)_{\rm chiral} \;\; = \;\; \int_x^1 \frac{dy}{y} \; \left[ f_{\pi N} + f_{\pi \Delta}\right](y, b) \; \bar s_\pi (z) , \end{equation} and similarly for $s$, where $\bar s_\pi (z)$ and $s_\pi (z)$ are the strange (anti--) quark distributions in the pion. Assuming that the sea in the pion is mostly generated radiatively \cite{Gluck:1999xe}, we take them to be equal and proportional to the non--strange sea in the pion, Eq.~(\ref{q_pi_sea}), \begin{equation} \bar s_\pi (z) \; = \; s_\pi (z) \; = \; q_\pi^{\text{sea}}(z) . \label{pion_sea_su3} \end{equation} The other component comes from valence $\bar s$ quarks in $KY (Y = \Lambda, \Sigma, \Sigma^\ast)$ and $\eta N$ configurations in the nucleon. Because the masses of these mesons are numerically comparable to the typical hadronic mass scale (as given, say, by the vector meson mass), their contribution to the partonic structure of the nucleon cannot strictly be associated with chiral dynamics, even at large transverse distances. We include them in our numerical studies because (a) it is instructive to contrast their contribution to those of $\pi N$ and $\pi \Delta$; (b) they contribute to $\bar s$ only and could in principle generate different $x$--distributions for $s$ and $\bar s$, as suggested by the model of Ref.~\cite{Brodsky:1996hc} (we shall comment on this model below). The couplings of the octet mesons to the nucleon, as determined by $SU(3)$ symmetry and the quark model value of the $F/D$ ratio, as well as the definitions of the corresponding meson momentum distributions are summarized in Appendix~\ref{app:su3}. The contribution of $K$ and $\eta$ to $\bar s(x, b)$ in the proton is obtained as \begin{eqnarray} \bar s (x, b) &=& \int_x^1 \frac{dy}{y} \; \left\{ {\textstyle\frac{2}{3}} f_{\eta N} (y, b) \; \bar s_\eta (z) \right. \nonumber \\ &+& \left. \left[ f_{K\Lambda} + f_{K\Sigma} + f_{K\Sigma^\ast} \right] (y, b) \; \bar s_K (z) \right\} , \label{conv_sbar} \end{eqnarray} where the factor $2/3$ accounts for the probability of the $\eta$ to be in a configuration with a valence $\bar s$ quark (we assume a pure octet state of the $\eta$ and do not take into account singlet--octet mixing, as the $\eta$ contribution turns out to be negligibly small anyway). The functions $\bar s_\eta (z)$ and $\bar s_K (z)$ are the normalized momentum distributions of $\bar s$ in $\eta$ and $K$, \begin{equation} \int_0^1 dz \, \bar s_{\eta, K} (z) \;\; = \;\; 1 . \end{equation} Assuming $SU(3)$ symmetry, we will approximate these distributions by the valence quark distribution in the pion, \begin{equation} \bar s_{\eta, K} (z) \;\; \approx \;\; q_\pi^{\text{val}} (z) . \end{equation} We again use the leading--order parametrization of Ref.~\cite{Gluck:1999xe} for the valence quark density in the pion. Numerical evaluation of the meson distributions shows that the contributions from $\eta N$ and $K\Sigma$ in Eq.~(\ref{conv_sbar}) are negligible because of their relatively small coupling; we retain only the $K\Lambda$ and $K\Sigma^\ast$ terms in the following. \begin{figure} \includegraphics[width=.48\textwidth]{sbar.eps} \caption[]{\textit{Dashed line:} Large--distance contribution to the strange sea in the nucleon, $s = \bar s$, from $\pi N$ and $\pi \Delta$ configurations ($b > R_{\rm core} \ = 0.55 \, \textrm{fm}$). \textit{Solid line:} Large--distance contribution to $\bar s$ from $K\Lambda$ and $K\Sigma^\ast$ configurations, involving the valence strange quark distribution in the kaon; $K\Sigma$ and $\eta N$ are numerically negligible. \textit{Dotted line:} MSTW2008LO leading--order parametrization of the total $s = \bar s$ \cite{Martin:2009iq}, multiplied by $1/10$ for easier comparison.} \label{fig:sbar} \end{figure} Figure~\ref{fig:sbar} shows the different large--distance contributions to the strange sea, integrated over $b > R_{\text{core}} = 0.55 \, \text{fm}$. One sees that for $x > 0.1$ the large--distance strange sea is mostly $\bar s$ coming from the valence $\bar s$ in $K\Lambda$ and $K\Sigma^\ast$ configurations; the precise magnitude of this contribution is sensitive to the lower limit in $b$, \textit{cf.}\ Fig.~\ref{fig:fk_part}. For $x < 0.1$ the large--distance strange sea in the nucleon originates mostly from the strange sea in the pion in $\pi B \, (B = N, \Delta)$ configurations, which contributes equally to $s$ and $\bar s$. The different mechanisms result in $s(x) \neq \bar s(x)$ for the large--distance component of the strange sea. However, the overall magnitude of the large--distance component represents only $\sim 1/20$ of the empirically determined average strange sea, $\frac{1}{2} \left[ s + \bar s \right](x)$ \cite{Martin:2009iq}, so that one cannot draw any conclusions about the $x$--distributions of the total $s$ and $s$ in the nucleon from the large--distance component. Note that the large--distance component at $b > R_{\text{core}}$ represents a much smaller fraction of the total sea in the case of $s$ and $\bar s$ than for $\bar u + \bar d$, at least in the region $x \gtrsim 0.01$. There are significant differences between the leading--order parametrizations of $[s + \bar s](x)$ obtained in the global fits of Refs.~\cite{Martin:2009iq} and \cite{Gluck:2007ck}; up to a factor $\sim 2$ at $x = 0.1$. However, this does not change our basic conclusion, that the large--distance $s$ and $\bar s$ are only a small fraction of the total. Also, some of the next--to--leading order fits by several groups \cite{Martin:2009iq,Lai:2007dq} have begun to extract information on the shapes of $s(x)$ and $\bar s (x)$ individually, by incorporating neutrino scattering data which discriminate between the two. The difference $[s - \bar s](x)$ is very poorly determined by the existing data, and the fits serve mostly to limit the range of allowed values. We note that our approach to large--distance contributions and the convolution formulas of Eqs.~(\ref{conv_gluon})--(\ref{conv_isovector}) remain valid also for next--to--leading order parton densities, if the parton densities in the pion are taken to be the next--to--leading order ones. In the present study we restrict ourselves to the leading order, because at this order the parton densities are renormalization--scheme--independent and possess a simple probabilistic interpretation, and because the present comparison of our results with the data does not warrant high accuracy. We would like to comment on the approach of Ref.~\cite{Brodsky:1996hc}, where the shapes of $s(x)$ and $\bar s(x)$ were investigated in a light--front wave function model with $K\Lambda$ components, whose amplitude was adjusted to fit the observed total strange sea, $[s + \bar s](x)$. As just explained, our results show that only a very small fraction of the total $s$ and $\bar s$ sea arise from transverse distances $b > R_{\rm core} \approx 0.55 \, \text{fm}$ where the notion of meson--baryon components in the nucleon wave function is physically sensible. Even in the traditional meson cloud model without restriction to large $b$, $K\Lambda$ configurations with standard form factors \cite{Koepf:1995yh,Holtmann:1996be} would account only for $\sim 1/4$ of the present value of $s + \bar s$ \cite{Martin:2009iq}. This shows that the assumption of saturation of the strange sea by $K\Lambda$ configurations made in Ref.~\cite{Brodsky:1996hc} would require a $KN\Lambda$ coupling $\sim 2$ times larger than the $SU(3)$ value and is not realistic. While we see indications for $s(x) \neq \bar s(x)$ in the large--distance contribution, and certainly nothing requires the shapes to be equal, the magnitude of the effect cannot be reliably predicted on the basis of the model of Ref.~\cite{Brodsky:1996hc}. \subsection{Flavor asymmetry $\bar u + \bar d - 2\bar s$} \begin{figure}[b] \includegraphics[width=.48\textwidth]{qbar8.eps} \caption[]{\textit{Solid line:} Large--distance contribution to the antiquark $SU(3)$ flavor asymmetry asymmetry in the nucleon $\bar u + \bar d - 2 \bar s$ from valence $\bar u + \bar d$ in the pion in $\pi N$ and $\pi \Delta$ configurations, \textit{cf.}\ Fig.~\ref{fig:udbar} ($b > R_{\rm core} \ = 0.55 \, \textrm{fm}$). \textit{Dashed line:} Contribution from the valence $\bar s$ in the kaon in $K\Lambda$ and $K\Sigma^\ast$ configurations, \textit{cf.}\ Fig.~\ref{fig:sbar}. \textit{Dotted line:} Leading--order parametrization of Ref.~\cite{Martin:2009iq}.} \label{fig:qbar8} \end{figure} \label{subsec:qbar3} The antiquark $SU(3)$ flavor asymmetry $\bar u + \bar d - 2\bar s$ is a non--singlet combination of the isoscalar non--strange and strange sea, which exhibits only weak scale dependence. Since we assume $SU(3)$ flavor symmetry of the sea quarks in the pion, Eq.~(\ref{pion_sea_su3}), only the valence $\pi$ and $K$ components of $\bar u + \bar d$ and $\bar s$ enter in this combination (we neglect the $\eta N$ and $K\Sigma$ contributions): \begin{eqnarray} \left[ \bar u + \bar d - 2 \bar s \right] (x, b) &\approx& \int_x^1 \frac{dy}{y} \left[ f_{\pi N} + f_{\pi \Delta} - 2 f_{K\Lambda} \right. \nonumber \\ && \left. - 2 f_{K\Sigma^\ast} \right](y, b) \;\; \bar q_\pi^{\text{val}} (z) . \label{conv_qbar8} \end{eqnarray} The large--distance contribution to this asymmetry is shown in Fig.~\ref{fig:qbar8}. One sees that the asymmetry overwhelmingly results from the valence $\bar u$ and $\bar d$ content of the pion in $\pi N$ and $\pi\Delta$ configuration; the $\bar s$ in the kaon of $K\Lambda$ and $K\Sigma^\ast$ contributes only at the level of $< 10\%$ of the pion. Overall, the large--distance contribution accounts for $\sim 1/3$ of the observed $SU(3)$ flavor asymmetry at $x \sim 0.1$. \section{Transverse size of nucleon} \label{sec:size} \subsection{Transverse size and GPDs} An interesting characteristic of the nucleon's partonic structure is the average squared transverse radius of the partons with given longitudinal momentum fraction $x$. It is defined as \begin{equation} \langle b^2 \rangle_f (x) \;\; \equiv \;\; \frac{\displaystyle\int d^2 b \; b^2 \; f(x, b)}{\displaystyle f(x)} \hspace{3em} (f = q, \bar q, g), \label{b2_def} \end{equation} where $f(x, b)$ is the impact parameter--dependent distribution of partons, related to the total parton density by \begin{equation} \displaystyle\int d^2 b \; f(x, b) \;\; = \;\; \displaystyle f(x) . \end{equation} The average is meaningful thanks to the positivity of $f(x, b)$ \cite{Burkardt:2002hr,Pobylitsa:2002iu}. Physically, Eq.~(\ref{b2_def}) measures the average transverse size of configurations in the nucleon wave function contributing to the parton density at given $x$. The transverse size implicitly depends also on the scale, $Q^2$; this dependence arises from the DGLAP evolution of the impact--parameter dependent parton distribution and was studied in Ref.~\cite{Frankfurt:2003td}. The average transverse quark/antiquark/gluon size of the nucleon is directly related to the $t$--slope of the corresponding nucleon GPD at $t = 0$, \begin{equation} \langle b^2 \rangle_f (x) \;\; = \;\; 4 \, \frac{\partial}{\partial t} \left[ \frac{H_f (x, t)}{H_f (x, 0)} \right]_{t = 0} . \end{equation} Here $H_f (x, t) \equiv H_f (x, \xi = 0, t)$ denotes the ``diagonal'' GPD (zero skewness, $\xi = 0$), with $H_f (x, 0) = f(x)$, which is related to the impact parameter--dependent distribution as ($b \equiv |\bm{b}|$) \begin{equation} H_f (x, t = -\bm{\Delta}_\perp^2) \;\; = \;\; \int d^2 b \; e^{-i (\bm{\Delta}_\perp \bm{b})} \; f (x, b) . \label{H_f_fourier} \end{equation} For a general review of GPDs and their properties we refer to Refs.\cite{Goeke:2001tz}. \subsection{Transverse size from hard exclusive processes} \label{subsec:hard_exclusive} By virtue of its connection with the GPDs, the transverse size of the nucleon is in principle accessible experimentally, through the $t$--slope of hard exclusive processes, \begin{equation} \gamma^\ast(Q^2) + N \rightarrow M + N \hspace{2em} (M = \text{meson}, \gamma, \ldots), \nonumber \end{equation} at $Q^2 \gg 1\, \text{GeV}^2$ and $|t| \lesssim 1\, \text{GeV}^2$, whose amplitude can be calculated using QCD factorization and is proportional to the nucleon GPDs. In general, such processes require a longitudinal momentum transfer to the nucleon and probe the ``non--diagonal'' GPDs ($\xi \neq 0$), so that the connection between the observable $t$--slope and the transverse size can be established only with the help of a GPD parametrization which relates the distributions at $\xi \neq 0$ to those at $\xi = 0$. The connection becomes simple in the limit of high--energy scattering, $\xi \approx x_B / 2 \ll 1$, where the GPDs probed in the hard exclusive process can be related to the diagonal ones in a well--controlled approximation; see Refs.~\cite{Frankfurt:1997ha,Shuvaev:1999ce} for details. In this approximation the amplitudes for light vector meson electroproduction at small $x$ ($\phi, \rho$) and heavy vector meson photo/electroproduction ($J/\psi, \Upsilon )$ are proportional to the diagonal gluon GPD, and thus \begin{equation} (d\sigma/dt)^{\gamma^\ast N \rightarrow V + N} \;\; \propto \;\; H_g^2 (x = x_B, t) . \label{dsigma_gluon} \end{equation} The gluonic average transverse size can be directly inferred from the relative $t$--dependence of the differential cross section \begin{equation} \langle b^2 \rangle_g \;\; = \;\; 4 \, \frac{\partial}{\partial t} \left[ \frac{d\sigma/dt \, (t)}{d\sigma/dt \, (0)} \right]^{1/2}_{t = 0} . \end{equation} The universal $t$--dependence of exclusive $\rho^0$ and $\phi$ electroproduction at sufficiently large $Q^2$ and exclusive $J/\psi$ photo/electroproduction, implied by Eq.~(\ref{dsigma_gluon}), is indeed observed experimentally and represents an important test of the approach to the hard reaction mechanism; see Ref.~\cite{Levy:2009ek} for a recent compilation of results. Experimental information on the nucleon's gluonic size and its dependence on $x$ comes mainly from the extensive data on the $t$--dependence of exclusive $J/\psi$ photo/electroproduction, measured in the HERA H1 \cite{Aktas:2005xu} and ZEUS \cite{Chekanov:2004mw} experiments, as well as the FNAL E401/E458 \cite{Binkley:1981kv} and other fixed--target experiments; see Ref.~\cite{Jung:2009eq} for a recent summary. The $t$--dependence of the cross section measured in the HERA experiments is well described by an exponential, \begin{equation} (d\sigma/dt)^{\gamma N \rightarrow J/\psi + N} \;\; \propto \;\; \exp (B_{J/\psi} t) , \end{equation} and assuming that this form is valid near $t = 0$, the nucleon's average gluonic transverse size is obtained as \begin{equation} \langle b^2 \rangle_g \;\; = \;\; 2 B_{J/\psi} . \label{b2_gluon_slope} \end{equation} For a more accurate estimate, the measured $t$--slope is reduced by $\sim 0.3 \, \text{GeV}^{-2}$ to account for the finite size of the produced $J/\psi$. The exponential slope measured by H1 at $\langle W\rangle = 90 \, \text{GeV}$ is $B_{J/\psi} = 4.630 \pm 0.060 {}^{+0.043}_{-0.163} \, \text{GeV}^{-2}$ \cite{Aktas:2005xu}, and ZEUS quotes a value of $B_{J/\psi} = 4.15 \pm 0.05 {}^{+0.30}_{-0.18} \, \text{GeV}^{-2}$ \cite{Chekanov:2004mw}. The central values correspond to a transverse gluonic size at $x \sim 10^{-3}$ in the range $\langle b^2 \rangle_g = 0.31-0.35 \, \text{fm}^2$, substantially smaller than the transverse size of the nucleon in soft hadronic interactions. It is also found that the gluonic size increases with $\log (1/x)$ with a coefficient much smaller than the soft--interaction Regge slope, \textit{cf.}\ the discussion in Sec.~\ref{subsec:diffusion}. Comparatively little is known about the quark size of the nucleon at small $x$. As explained above, light vector meson production at small $x$ couples mainly to the gluon GPD. Interesting new information comes from the $t$--dependence of deeply--virtual Compton scattering (DVCS) recently measured at HERA. The H1 experiment \cite{Aaron:2007cz} obtained an exponential slope of $B_\gamma = 5.45 \pm 0.19 \pm 0.34 \, \text{GeV}^{-2}$ by measuring $t$ through the photon transverse momentum; larger by one unit than the $J/\psi$ slope measured by the same experiment. ZEUS \cite{Chekanov:2008vy} extracted a DVCS slope of $B_\gamma = 4.5 \pm 1.3 \pm 0.4 \, \text{GeV}^2$ by measuring the transverse momentum of the recoiling proton, again larger than the $J/\psi$ slope measured by that experiment; however, the exponential fit to the ZEUS data is rather poor and the extracted $B_\gamma$ has large errors. We note that in both experiments the $B_\gamma$ values were determined by an exponential fit over the entire measured region of $t$ and thus reflect the average $t$--dependence, not directly the slope at $t = 0$. Still, the data provide some indication that the $t$--slope of DVCS at $t = 0$ is larger than that of $J/\psi$ production (the $Q^2$ in the DVCS experiments here are comparable to the effective scale in $J/\psi$ photoproduction, $Q^2_{\rm eff} \approx 3 \, \text{GeV}^2$). In leading--order (LO) QCD factorization, the DVCS amplitude is proportional to the singlet quark GPDs, and the $t$--slope of this process is directly related to the nucleon's singlet quark size, \begin{equation} \langle b^2 \rangle_{q + \bar q} \;\; = \;\; 2 B_\gamma , \label{b2_dvcs_slope} \end{equation} \textit{cf.}\ Eq.~(\ref{b2_gluon_slope}). One would thus conclude that \begin{equation} \langle b^2 \rangle_{q + \bar q} \;\; > \;\; \langle b^2 \rangle_g . \label{b2_quark_vs_gluon} \end{equation} At next--to--leading order (NLO) the DVCS amplitude also involves the gluon GPD, and substantial cancellation is found between the gluon and singlet quark contributions to the amplitude. This cancellation amplifies the effect of a difference in $\langle b^2 \rangle_{q + \bar q}$ and $\langle b^2 \rangle_g$ on the DVCS $t$--slope. Because the gluon contribution is negative and cancels $\sim 1/2$ of the quark contribution \cite{Freund:2001hm}, the relative change in the slope should be $\sim 2$ times larger than the relative change in the average transverse sizes which caused it \cite{Lim:2006xu}. The quark transverse size of the nucleon in the valence quark region is measured in hard exclusive processes at Jefferson Lab, in particular with the 12 GeV Upgrade. In this kinematics the skewness of the GPDs needs to be taken into account ($\xi \neq 0$), and the analysis relies on GPD parametrizations. It is interesting that the $t$--slope of $\rho^0$ production measured in the recent CLAS experiment \cite{Morrow:2008ek} seems to be compatible with the Regge--based GPD parametrization of Ref.~\cite{Guidal:2004nd} (however, it is presently unclear how to describe the absolute cross section within this framework). A detailed phenomenological study of the transverse distribution of valence quarks, based on parton densities and form factor data, was performed in Ref.~\cite{Diehl:2004cx}. \subsection{Chiral contribution} We now want to study the contribution of the chiral large--distance region, $b \sim 1/M_\pi$, to the nucleon's average transverse size. Adopting a two--component description, we define \begin{eqnarray} \langle b^2 \rangle_f \! &=& \! \frac{\displaystyle \int \!\! d^2b \, b^2 \; \left[ f(x, b)_{\rm core} + \Theta (b > b_0) \, f(x, b)_{\rm chiral} \right]}{\displaystyle f(x)} \nonumber \\ &\equiv& \langle b^2 \rangle_{f, \, {\rm core}} \; + \; \langle b^2 \rangle_{f, \, {\rm chiral}} . \label{core_cloud} \end{eqnarray} Here $f(x, b)_{\rm core}$ denotes the parton density arising from average configurations in the nucleon, distributed over transverse distances $b \sim R_{\rm core}$. The function $f(x, b)_{\rm chiral}$ is the chiral component of the parton distribution, extending over distances $b \sim 1/M_\pi$. Following the same approach as above, we integrate it over $b$ with a lower cutoff, $b_0$, of the order of the core radius, Eq.~(\ref{b_core}); the sensitivity of the results to the precise value of $b_0$ will be investigated below. Note that in Eq.~(\ref{core_cloud}) the $b^2$--weighted integral in the numerator is computed in two separate pieces, while the denominator in both cases is the total parton density (core plus chiral) at the given value of $x$; the $\langle b^2 \rangle_{f, \, {\rm chiral}}$ thus defined represents the contribution of the chiral component to the overall transverse size of the nucleon, not the ``intrinsic'' size of the chiral component alone. The ``core'' contribution to $\langle b^2 \rangle$ was estimated in Sec.~\ref{subsec:parametric} and Ref.~\cite{Strikman:2003gz}, by relating it to the slope of the nucleon's axial form factor, which does not receive contributions from the pion cloud: \begin{equation} \langle b^2 \rangle_{\rm core} \;\; \approx \;\; {\textstyle\frac{2}{3}} \, \langle r^2 \rangle_{\rm axial} \;\; = \;\; 0.3 \, \text{fm}^2 . \label{bulk_axial} \end{equation} We have already used this result to fix the short--distance cutoff in the integral over the chiral contribution. A more quantitative determination of the ``non--chiral'' transverse sizes of the nucleon, including the differences between quarks, antiquarks and gluons and their $x$--dependence, requires a dynamical model of the nucleon which smoothly interpolates between small and large distances and will be the subject of a separate study. Here we focus on the chiral contribution, $\langle b^2 \rangle_{f, \, {\rm chiral}}$, which can be calculated in a model--independent manner; we compare it to the ``generic'' core size given by Eq.~(\ref{bulk_axial}), keeping in mind that the latter may have a richer structure than reflected by this simple estimate. The chiral contribution to the transverse size, Eq.~(\ref{core_cloud}), is obtained by calculating the $b^2$--weighted integral of the $b$--dependent pion momentum distribution in the nucleon studied in Sec.~\ref{sec:model}, \textit{cf.}\ Eq.~(\ref{fy_bint}), and substituting the result in the convolution formula for the nucleon parton density, Eq.~(\ref{conv_isoscalar}) \textit{et seq.} Useful formulas for the numerical evaluation of the $b^2$--weighted integrals are presented in Appendix~\ref{app:evaluation}. Because of the weighting factor $b^2$, the chiral contribution to the transverse size is much less sensitive to unknown short--distance dynamics (\textit{i.e.}, to the cutoff $b_0$) than the contribution to the parton density itself, and thus represents a much more interesting quantity for studying effects of chiral dynamics in the partonic structure. Furthermore, the $b^2$--weighted integral can reliably be computed using the asymptotic form of the distribution of pions at large $b$, Eq.~(\ref{exponential_largeb}), as was done in Ref.~\cite{Strikman:2003gz}. We can use this to estimate analytically the sensitivity of $\langle b^2 \rangle_{f, {\rm chiral}}$ to the lower limit, $b_0$. Evaluating the integral \begin{eqnarray} I &\equiv& \int d^2 b \; \Theta (b > b_0) \; b^2 \; f_{\pi N} (y, b) \end{eqnarray} with the asymptotic expression Eq.~(\ref{exponential_largeb}), and taking the logarithmic derivative with respect to $b_0$, we obtain \begin{eqnarray} - \frac{b_0}{I} \; \frac{\partial I}{\partial b_0} &\approx& \frac{1}{5} \; \ll \; 1 \hspace{2em} (y = M_\pi / M_N), \end{eqnarray} where we have used Eq.~(\ref{kappa_B_eta}) for $\kappa_N$ and $b_0 = R_{\rm core} = 0.55\, \text{fm}$. This shows that the sensitivity of $\langle b^2 \rangle_{\rm chiral}$ is indeed low --- a 20\% change in $b_0$ causes only a 4\% change in $\langle b^2 \rangle_{f, {\rm chiral}}$. \begin{figure} \includegraphics[width=.48\textwidth]{conv_b2.eps} \caption[]{The chiral large--distance contribution to the nucleon's average transverse size, $\langle b^2 \rangle_f$, as defined by Eq.~(\ref{core_cloud}), as a function of $x$ ($Q^2 = 3 \, \text{GeV}^2$). \textit{Solid line:} Gluonic size ($f = g$), \textit{cf.}\ Ref.~\cite{Strikman:2003gz}. \textit{Dashed line:} Singlet quark size ($f = u + \bar u + d + \bar d$) \textit{Dotted line:} Strange quark size ($f = s + \bar s$). In all cases, the curves show the sum of contributions from $\pi N$ and $\pi \Delta$ configurations. \label{fig:b2}} \end{figure} Our results for the chiral contribution to the nucleon's average transverse size and its dependence on $x$ are summarized in Fig.~\ref{fig:b2}, for the scale $Q^2 = 3 \, \text{GeV}^2$. The curves shown are the sum of contributions from $\pi N$ and $\pi \Delta$ configurations; heavier mesons make negligible contributions at large distances. For reasons of consistency the nucleon parton densities in the denominator of Eq.~(\ref{core_cloud}) were evaluated using the older parametrization of Ref.~\cite{Gluck:1998xa}, which served as input to the analysis of the pion parton distributions of Ref.~\cite{Gluck:1999xe}. The following features are worth mentioning: \begin{itemize} \item[(a)] The chiral contribution to the transverse size is practically zero above $x \sim M_\pi / M_N \sim 0.1$ and grows rapidly as $x$ drops below this value, in agreement with the basic picture described in Sec.~\ref{sec:chiral}. The rise of $\langle b^2 \rangle_{f, {\rm chiral}}$ with decreasing $x$ is more pronounced than that of the parton density itself because the former quantity emphasizes the contributions from large distances. \item[(b)] The singlet $u$-- and $d$--quark size grows more rapidly with decreasing $x$ than the gluonic radius. This has a simple explanation: the quark/antiquark density in the pion sits at relatively large momentum fractions $z \sim 0.5$, while the gluon density in the pion requires $z < 0.1$ to be sizable; because $z = x/y$ in the convolution integral, and the pion momentum fractions are of the order $y \sim M_\pi / M_N$, the relevant values of $z$ are reached much earlier for the quark than for the gluon as $x$ decreases below $M_\pi / M_N$. Thus, the chiral large--distance contribution suggests that the transverse quark size of the nucleon at $x \lesssim 0.01$ is larger than the transverse gluon size, \textit{cf.}\ Eq.~(\ref{b2_quark_vs_gluon}). The difference between the chiral contribution to the average sizes at $x = 0.01$ is \begin{equation} \langle b^2 \rangle_{q + \bar q, {\rm chiral}} - \langle b^2 \rangle_{g, {\rm chiral}} \;\; = \;\; 0.09 \, \text{fm}^2 . \end{equation} Assuming identical core sizes for the quark and gluon distribution, this would correspond to a difference of the leading--order DVCS and $J/\psi$ $t$--slopes, \textit{cf.}\ Eqs.~(\ref{b2_gluon_slope}) and (\ref{b2_dvcs_slope}), \begin{equation} B_\gamma - B_{J/\psi} \;\; = \;\; 1.1 \, \text{GeV}^2 , \label{b_diff} \end{equation} well consistent with the HERA results summarized in Sec.~\ref{subsec:hard_exclusive}. It should be remembered that the chiral prediction, Eq.~(\ref{b_diff}), is for the exact $t$--slope of the cross section at $t = 0$, while the HERA results represent effective slopes, obtained by fitting the empirical $t$--dependence over the measured range; the comparison may be affected by possible deviations of the true $t$--dependence from the exponential shape. More quantitative conclusions would require detailed modeling of the core contributions to the transverse size, which themselves can grow with decreasing $x$ due to diffusion, see Sec.~\ref{subsec:diffusion}. \item[(c)] The chiral contribution to the transverse strange quark size of the nucleon closely follows that to the gluonic size. This is natural, as $s + \bar s$ is mostly generated radiatively, by conversion of gluons into $s\bar s$, in both the pion and the nucleon. \end{itemize} \section{Pion cloud and large--$N_c$ QCD} \label{sec:largenc} The relation of the chiral component of the large--$b$ parton densities to the large--$N_c$ limit of QCD is a problem of both principal and practical significance. First, in the large--$N_c$ limit QCD is expected to become equivalent to an effective theory of mesons, in which baryons appear as solitonic excitations, establishing a connection with the phenomenological notion of meson exchange. Second, our calculations show that contributions from $\Delta$ intermediate states are numerically large, and the large--$N_c$ limit provides a conceptual framework which allows one to treat $N$ and $\Delta$ states on the same footing and relate their masses and coupling constants. We now want to verify that the large--distance component of the nucleon's partonic structure, calculated from phenomenological pion exchange, exhibits the correct $N_c$--scaling required of parton densities in QCD (\textit{cf.}\ also the discussion in Ref.~\cite{Strikman:2003gz}). The general $N_c$ scaling of the unpolarized quark densities in the nucleon in QCD is of the form \cite{Diakonov:1996sr} \begin{eqnarray} g(x) &\sim & N_c^2 \times \text{function} (N_c x) , \label{nc_gluon} \\[0ex] [u + d](x), \; [\bar u + \bar d](x) &\sim & N_c^2 \times \text{function} (N_c x) , \label{nc_isoscalar} \\[0ex] [u - d](x), \; [\bar u - \bar d](x) &\sim & N_c \times \text{function} (N_c x) , \label{nc_isovector} \end{eqnarray} where the scaling functions are stable in the large--$N_c$ limit but can be different between the various distributions. Equations~(\ref{nc_isoscalar}) and (\ref{nc_isovector}) were derived by assuming non--exceptional configurations ($x \sim N_c^{-1}$) and fixing the normalization of the scaling function from the lowest moments of the parton densities, \textit{i.e.}, from the conditions that the total number of quarks scale as $N_c$, and the nucleon isospin as $N_c^0$. The transverse coordinate--dependent parton distributions should generally scale in the same manner as the total densities, Eqs.~(\ref{nc_isoscalar}) and (\ref{nc_isovector}), as the nucleon radius is stable in the large--$N_c$ limit (this applies even to the nucleon's chiral radii, because $M_\pi \sim N_c^0$). Turning now to the pion cloud contribution to the parton densities at large $b$, it follows from the expressions of Eqs.~(\ref{H_pi_N_from_I})--(\ref{phi_10}) and their Fourier transform, Eq.~(\ref{f_pi_fourier}), that the $b$--dependent distributions of pions in the nucleon scale as \cite{Strikman:2003gz} \begin{equation} f_{\pi N}(y, b), f_{\pi\Delta} (y, b) \;\; \sim \;\; N_c^2 \times \text{function} (N_c x) . \label{f_pi_scaling} \end{equation} This behavior applies to values $y \sim M_\pi / M_N \sim N_c^{-1}$ and values $b \sim N_c^0$, corresponding to $|t| \sim N_c^0$ in the pion GPD. In arriving at Eq.~(\ref{f_pi_scaling}) we have used that $M_N, M_\Delta \sim N_c$; that $g_{\pi N N} \sim N_c^{3/2}$, as implied the Goldberger--Treiman relation; and that $g_{\pi N \Delta}$ scales in the same way as $g_{\pi NN}$. Equation~(\ref{f_pi_scaling}) states that the momentum distribution of pions in the nucleon at large $N_c$ scales like that of isoscalar quarks or gluons. At the same time, the parton densities in the pion scale as \begin{equation} g_\pi (z), \, q_\pi (z) \;\; \sim \;\; \text{function} (z) , \end{equation} where $z \sim N_c^0$ in typical configurations; that is, they have no explicit $N_c$ dependence at large $N_c$. One thus concludes that the $N_c$--scaling of the convolution integral for the pion cloud contribution to the nucleon's antiquark densities, for both $B = N$ and $\Delta$ intermediate states, is \begin{equation} \int_x^1 \frac{dy}{y} f_{\pi B}(y, b) \; q_\pi (z) \;\; \sim \;\; N_c^2 \times \text{function} (N_c x) . \end{equation} This correctly reproduces the general $N_c$--scaling of the isoscalar quark and gluon distribution, Eq.~(\ref{nc_isoscalar}), where the $N$ and $\Delta$ contributions are added, \textit{cf.}\ Eq.~(\ref{conv_isovector}). However, it may seem that the pion cloud contribution at large $b$ cannot reproduce the $N_c$ scaling of the isovector distribution, Eq.~(\ref{nc_isoscalar}), which is suppressed by one power. The paradox is resolved when one notes that in the large--$N_c$ limit the $N$ and $\Delta$ become degenerate, \begin{equation} M_\Delta - N_N \;\; \sim \;\; N_c^{-1} , \end{equation} and their couplings are related by \cite{Adkins:1983ya} \begin{equation} g_{\pi N \Delta} \;\; = \;\; {\textstyle\frac{3}{2}} \, g_{\pi N N} . \label{g_largenc} \end{equation} Using these relations one has \begin{equation} f_{\pi \Delta} (y, b) \;\; = \;\; 2 \, f_{\pi N} (y, b) \hspace{3em} (y \sim N_c^{-1}), \label{nc_f} \end{equation} as can be seen from Eqs.~(\ref{H_pi_N_from_I})--(\ref{phi_10}) and Eq.~(\ref{f_pi_fourier}), keeping in mind that $t, t_1, t_2 \sim N_c^0$ in the region of interest. By virtue of Eq.~(\ref{nc_f}) the $N$ and $\Delta$ contributions at large $N_c$ cancel exactly in the isovector convolution integral, Eq.~(\ref{nc_isovector}), ensuring that the result has the proper $N_c$--scaling behavior as Eq.~(\ref{nc_isovector}). In sum, our arguments show that the pion exchange contribution at large $b$ is a legitimate part of the nucleon's partonic structure in large--$N_c$ QCD, exhibiting the same scaling behavior as the corresponding ``average'' distributions. The inclusion of $\pi \Delta$ configurations at the same level as $\pi N$ is essential because they reproduce the proper $N_c$--scaling of the isovector distributions, and because they make numerically sizable contributions --- twice larger than $\pi N$ --- to the isoscalar distributions. In Ref.~\cite{Strikman:2003gz} we have shown that the isoscalar large--$b$ pion distribution in the nucleon $\left[ f_{\pi N} + f_{\pi \Delta} \right](y, b)$ obtained from phenomenological soft--pion exchange, can equivalently be computed in the chiral soliton picture of the nucleon at large $N_c$, as a certain longitudinal Fourier transform of the universal classical pion field of the soliton at large transverse distances. Extending this connection to the isovector pion distribution, $\left[ \frac{2}{3} f_{\pi N} - \frac{1}{3} f_{\pi \Delta} \right] (y, b)$, which is suppressed in the large--$N_c$ limit, remains an interesting problem for further study. In particular, this requires establishing the connection between soft--pion exchange and the collective rotations of the classical soliton. \section{Small $x$--regime and longitudinal distances} \label{sec:smallx} \subsection{Growth of core size through diffusion} \label{subsec:diffusion} In our studies so far we have focused on chiral contributions to the nucleon's partonic structure at moderately small momentum fractions, $x \gtrsim 10^{-2}$, which arise from individual $\pi B \, (B = N, \Delta)$ configurations in the nucleon wave function. When considering smaller values of $x$ several effects must be taken into account which potentially limit the validity of the present approximations. One of them is the growth of the transverse size of ``average'' partonic configurations in the nucleon due to diffusion. Generally, the partons at small $x$ are decay products of partons at larger $x$; the decay process has the character of a random walk in transverse space and leads to a logarithmic growth of the transverse area occupied by the partons: \begin{eqnarray} \langle b^2 \rangle_{\rm parton} &=& \langle b^2 \rangle_{\rm parton} (x_0) \; + \; 4 \, \alpha'_{\rm parton} \, \ln (x_0 / x) \nonumber \\ && (x < x_0 \sim 10^{-2}). \end{eqnarray} The rate of growth --- the effective Regge slope, $\alpha'_{\rm parton}$ --- depends on the type of parton and generally decreases with increasing scale $Q^2$, because higher $Q^2$ increase the effective transverse momenta in the decay process \cite{Frankfurt:2003td}. Measurements of the energy dependence of the $t$--slope of exclusive $J/\psi$ production at HERA H1 and ZEUS \cite{Aktas:2005xu,Chekanov:2004mw} indicate that the rate of growth for gluons at a scale $Q^2 \approx 3 \, \text{GeV}^2$ is approximately $\alpha'_g \approx 0.14 \, \text{GeV}^{-2}$ \footnote{The value quoted here corresponds to the arithmetic mean of the parametrizations of $\alpha'_{J/\psi}$ quoted by the HERA H1 \cite{Aktas:2005xu} and ZEUS \cite{Chekanov:2004mw} experiments; see Ref.~\cite{Jung:2009eq} for details.} significantly smaller than the rate of growth of the transverse nucleon size in soft hadronic interactions, $\alpha'_{\rm soft} \approx 0.25 \, \text{GeV}^{-2}$. Using the former value as a general measure of the rate of growth of the nucleon's transverse size due to diffusion, we estimate that at $Q^2 \approx 3 \, \text{GeV}^2$ the transverse size of the ``core'' increases from $R_{\rm core}^2 = 0.3\, \text{fm}^2$ at $x = 10^{-2}$ to $0.35 \, (0.4)\, \text{fm}^2$ at $x = 10^{-3} \, (10^{-4})$. In principle this effect pushes the region of $\pi B$ configurations governed by chiral dynamics out to larger $b$ as $x$ decreases. However, the rate of growth at this scale is still rather small, leaving ample room for such configurations in the region $x > 10^{-3}$. Note that at lower scales the rate of growth is larger; studies based on DGLAP evolution show that $\alpha'_g$ approaches the soft value at $Q^2 \sim 0.4\, \text{GeV}^2$ \cite{Frankfurt:2003td}. \subsection{Chiral corrections to pion structure} \label{subsec:chiral_pion} Another effect which needs to be taken into account at small $x$ are modifications of the parton density in the pion itself due to chiral dynamics. The same mechanism as discussed above for the nucleon in principle works also in the pion itself --- the pion can fluctuate into configurations containing a ``slow'' pion and a two--pion spectator system. When evaluated in chiral perturbation theory, the momentum fraction of the slow pion relative to its parent in such configurations is of the order $y(\text{$\pi$ in $\pi$}) \sim M_\pi / (4\pi F_\pi)$, where $F_\pi$ is the pion decay constant, and $4\pi F_\pi$ represents the generic short--distance scale appearing in the context of the renormalization of the loop integrals. Such contributions to the parton density and the GPD in the pion were recently computed in an all--order resummation of the leading logarithmic approximation to chiral perturbation theory \cite{Kivel:2008ry}, which does not require knowledge of the higher--order terms in the chiral Lagrangian. For the nucleon parton densities this mechanism could become important for $x \ll 10^{-2}$, where the effective parton momentum fractions in the pion can reach small values $z \lesssim 0.1$. In the present study we restrict ourselves to nucleon parton densities at $x \gtrsim 10^{-2}$, for which the convolution integrals are dominated by ``non--chiral'' values of $z$. The incorporation of such corrections to the partonic structure of the pion and extension of the present nucleon structure calculation toward smaller $x$ remains an interesting problem for future study. In particular, it should be investigated how the expressions derived in the leading--log approximation of chiral perturbation theory compare to a ``single--step'' calculation of pion structure including finite mass and size (form factors), along the lines done here for the nucleon. \subsection{Chiral dynamics at large longitudinal distances} \label{subsec:longitudinal} In our studies in Secs.~\ref{sec:chiral}--\ref{sec:size} we considered chiral contributions to the nucleon's partonic structure at large transverse distances, which arise from $\pi B$ configurations at large transverse separations, $b \sim 1/M_\pi$. As already indicated in Sec.~\ref{subsec:parametric}, there is in principle another class of $\pi B$ configurations governed by chiral dynamics, namely those corresponding to large longitudinal separations in the nucleon rest frame, \begin{equation} l \;\; \sim \;\; 1/M_\pi , \label{r_longitudinal} \end{equation} and arbitrary values of the transverse separation, down to $b = 0$. We now want to discuss in which region of $x$ such configurations can produce distinct contributions to the partonic structure. The main limitation in admitting $\pi B$ configurations of the type Eq.~(\ref{r_longitudinal}) as part of the partonic structure arises from the possible longitudinal overlap of the relevant partonic configurations in the pion and the ``core.'' To determine the region where this effect plays a role, it is useful to consider instead of the parton densities the structure function for $\gamma^\ast N$ scattering and appeal to the notion of the coherence length of the virtual photon. Contributions to the partonic structure of the type of the convolution integrals of Eqs.~(\ref{conv_gluon})--(\ref{conv_isovector}) correspond to the impulse approximation of $\gamma^\ast N$ scattering, which requires that the coherence length of the process be smaller than the longitudinal distance between the constituents, so that interference effects can be neglected; see \textit{e.g.}\ Ref.~\cite{Frankfurt:1981mk}. Generally, the coherence length for $\gamma^\ast N$ scattering in the nucleon rest frame is given by \begin{equation} l_{\rm coh} \;\; = \;\; (2 M_N x)^{-1} , \end{equation} where $M_N$ is the nucleon mass and $x \approx Q^2/W^2 \ll 1$ the Bjorken variable; $W$ is the center--of--mass energy of the scattering process. Thus, one would naively think that in scattering from a $\pi N$ system with longitudinal separation $\sim (2 M_\pi)^{-1}$ coherence effects set in if $x < M_\pi / M_N \sim 0.1$. However, this argument neglects the fact that in the fast--moving nucleon the pion carries only a fraction of the order $y \sim M_\pi / M_N \sim 0.1$ of the nucleon's momentum, so that the effective center--of--mass energy for $\gamma^\ast\pi$ scattering is actually lower by this factor, and the coherence length smaller by this factor, than in the $\gamma^\ast N$ process. Interference effectively takes place only when the coherence length for both scattering on the pion and on the baryon in the $\pi B$ configuration is $\sim (2 M_\pi)^{-1}$, which requires \begin{equation} x \;\; \lesssim \;\; 0.01 . \end{equation} For larger values of $x$ coherence effects are small, and there is in principle room for a chiral component of the partonic structure at small $b$ and longitudinal distances $\sim 1/M_\pi$. In order to calculate this component one would need to model the finite--size effects limiting the longitudinal extension of the pion and the spectator system, which is related to the ``small--$x$ behavior'' of the parton densities of the respective systems. We leave this problem to a future study. Interestingly, this could result in partial ``readmission'' of the small--$b$ component of the pion cloud model which was excluded in the present study, potentially affecting \textit{e.g.}\ the comparison with the measured flavor asymmetry $\bar d - \bar u$ in Sec.~\ref{subsec:isovector}. We note that the interference effects in scattering from $\pi B$ configurations described here are large in the region in which chiral corrections to the structure of the pion would become important, \textit{cf.}\ the discussion in Sec.~\ref{subsec:chiral_pion}. An interesting question is whether in the chiral perturbation theory approach these effects come into play already at the level of the leading logarithmic approximation \cite{Kivel:2008ry}, or only at the level of subleading or finite terms. \section{Summary and outlook} \label{sec:summary} The transverse coordinate representation based on GPDs represents a most useful framework for studying the role of chiral dynamics in the nucleon's partonic structure. It allows one to identify the parametric region of the chiral component ($x \lesssim M_\pi / M_N, b \sim 1/M_\pi$) and provides a practical scheme for calculating it in a model--independent way. Let us briefly summarize the main results of our investigation. \begin{itemize} \item[(a)] The contributions from $\pi B \, (B = N, \Delta)$ configurations to the parton distributions become independent of the $\pi NB$ form factors at transverse distances $b \gtrsim 0.5 \, \textrm{fm}$, and thus can be associated with universal chiral dynamics. The lower limit in $b$ approximately coincides with the nucleon's core radius, $R_{\rm core} = 0.55 \, \textrm{fm}$, inferred previously from other phenomenological considerations. \item[(b)] Only $\sim 1/3$ of the measured antiquark flavor asymmetry $[\bar d - \bar u](x)$ at $x > 10^{-2}$ comes from the large--distance region $b > R_{\rm core}$, showing that most of it resides in the nucleon's core at small transverse distances. The traditional pion cloud model, which attempts to explain the entire asymmetry from pionic contributions, gets most of the effect from small $b$ where the concept of $\pi B$ configurations is not applicable. \item[(c)] The isoscalar antiquark distribution $[\bar u + \bar d](x)$ obtained from pions at large $b$ remains safely below the total antiquark distribution determined by QCD fits to deep--inelastic scattering data, leaving room for the (non--perturbatively and perturbatively generated) antiquarks in the core. This naturally solves a problem of the traditional pion cloud model, where the pionic contribution can saturate or even exceed the total antiquark density for certain non-exceptional parameter values. \item[(d)] The strange sea quark distributions, $s(x)$ and $\bar s(x)$, overwhelmingly sit at small transverse distances, $b < R_{\rm core}$. Neither chiral ($\pi N, \pi\Delta$) nor $K\Lambda$ configurations at large $b$ account for more than a few percent of the empirical $s + \bar s$. The predictions of Ref.~\cite{Brodsky:1996hc} for the $x$--dependence of $s(x)$ and $\bar s(x)$ from $K\Lambda$ fluctuations rely on the region where the concept of distinct meson--baryon configurations is not applicable and require a probability of $K\Lambda$ fluctuations several times larger than what is obtained from the standard $SU(3)$ couplings. \item[(e)] The pionic contributions to the nucleon's transverse size, $\langle b^2 \rangle$, are much less sensitive to short--distance dynamics than those to the parton distributions themselves, and thus furnish a new set of clean chiral observables. The large--distance contributions to the nucleon's singlet quark size at $x < 0.1$ are larger than those to the gluonic size, suggesting that $\langle b^2 \rangle_{q + \bar q} > \langle b^2 \rangle_g$, in agreement with the pattern of $t$--slopes of deeply--virtual Compton scattering and exclusive $J/\psi$ production measured at HERA and FNAL. \end{itemize} In the present study we have limited ourselves to the universal large--distance contributions to the partonic structure, which are governed by soft pion exchange and can be calculated in a model--independent way. A complete description should include also a model of the short--distance part, which actually carries most of the parton densities. One way of combining the two would be a two--component picture, in which the constituents in the ``core'' act as a source of the chiral pion fields which propagate out to distances $\sim 1/M_\pi$. Such an approach would be very effective if the characteristic transverse sizes of the ``cloud'' and the ``core'' were numerically very different. However, this is not the case --- the characteristic range of two--pion exchange $1/(2 M_\pi) = 0.71 \, \text{fm}$ is numerically not much larger than our estimate of the ``core'' size, $R_{\rm core} = 0.55 \, \text{fm}$. Another approach, which appears more promising, is based on the idea of a smooth ``interpolation'' between the chiral large--distance dynamics and the short--distance regime. In particular, the effective theory of Ref.~\cite{Diakonov:1984tw}, which is based on the large--$N_c$ limit of QCD, uses constituent quarks as interpolating degrees of freedom; it is valid in a wide region, from distances of the order $\sim 1/M_\pi$ down to distances of the order $\rho \approx 0.3 \, \text{fm}$ --- the range of the non-perturbative chiral--symmetry breaking forces in the QCD vacuum. It leads to a picture of the nucleon as quarks bound by a self--consistent pion field (chiral quark--soliton model) \cite{Diakonov:1987ty}, which is fully field--theoretical and relativistic and provides a very good description of the nucleon's quark and antiquark densities, including subtle effects such as the sea quark flavor asymmetry and its polarization \cite{Diakonov:1996sr}. The results of Ref.~\cite{Strikman:2003gz} and the present work (see in particular Sec.~\ref{sec:largenc}) show that this large--$N_c$ description of nucleon structure is equivalent to phenomenological soft--pion exchange at large transverse distances, thanks to the universality of chiral dynamics; it thus, in a sense, contains the result of the present work as a limiting case. Using this large--$N_c$ picture as a script to model the impact parameter--dependent parton densities at all $b$, would certainly be an interesting problem for further study. Direct experimental study of the chiral component of the nucleon's partonic structure through hard exclusive processes at $x < 0.1$ would be possible with a future electron--ion collider (EIC). The simplest observables are the $t$--dependences of the differential cross sections for various channels ($J/\psi, \phi , \rho, \pi$) at $|t| \ll 0.1\, \text{GeV}^2$, and their change with $x$; at sufficiently large $Q^2$ such measurements can be related directly to the $t$--dependence of the gluon and quark GPDs at small $t$. In particular, such measurements should be able to resolve variations of the $t$--slope with $t$ and possible deviations from exponential $t$--dependence. Measurements of exclusive processes require high luminosity and the capability to detect the recoiling baryon at small angles, which is possible with appropriate forward detectors. Another interesting option are pion knockout processes, corresponding to exclusive scattering from a pion at transverse distances $b \sim 1/M_\pi$, where both the recoiling pion and the nucleon are identified in the final state; see Ref.~\cite{Strikman:2003gz} for a detailed discussion. The partonic content of the nucleon's pion cloud can in principle also be probed in high--energy $pp$ collisions with hard processes, such as dijet and Drell--Yan pair production. Such processes, including accompanying spectator interactions, are most naturally described in the transverse coordinate (impact parameter) representation employed in our investigation here. Interesting new effects appear in collisions at multi--TeV energies (LHC), where the cross sections for hard processes can approach the geometric limit (black--disk regime) and the probability for multiple hard interactions becomes significant. In this situation it is important to realize that the $\pi B$ configurations participate in the high--energy scattering process with a fixed transverse orientation, which is frozen during the collision; depending on this orientation one may either have a violent collision of the pion with the other proton or no interaction at all. The averaging over the orientations of the $\pi B$ configuration must be performed in the colliding $pp$ system with given transverse geometry, not in the partonic wave functions of the individual protons. This circumstance affects \textit{e.g.} the rate of multijet events in peripheral collisions \cite{Rogers:2008ua}. More generally, the pion cloud represents an example of transverse correlations in the nucleon's partonic wave function, which are neglected in the usual mean--field approximation for high--energy $pp$ collisions. In particular, such correlations play a role in central inclusive diffraction, where they reduce the rapidity gap survival probability relative to the mean--field result \cite{Frankfurt:2006jp}. \acknowledgments The authors are indebted to A.~Freund, J.~Goity, V.~Guzey, N.~Kivel, P.~Nadolsky, M.~V.~Polyakov, and A.~W.~Thomas for enlightening discussions and useful hints. Notice: Authored by Jefferson Science Associates, LLC under U.S.\ DOE Contract No.~DE-AC05-06OR23177. The U.S.\ Government retains a non--exclusive, paid--up, irrevocable, world--wide license to publish or reproduce this manuscript for U.S.\ Government purposes. Supported by other DOE contracts.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The successful launch in 2008 and subsequent smooth operation of the \textit{Fermi}\ Gamma-ray Space Telescope (\textit{Fermi}-GST) has brought the community a powerful instrument which is monitoring the entire $\gamma$-ray sky about every 3 hours. Thus, as an ``all-sky-monitor", \textit{Fermi}/LAT\ delivers highly time-resolved $\gamma$-ray spectra and detailed variability curves for a steadily increasing number of AGN. For the first time, detailed studies of AGN properties at $\gamma$-ray energies become possible and already many interesting results have been obtained (e.g., Abdo et al. 2009c, Abdo et al. 2010a, Abdo et al. 2010c). However, only when combined with, and accompanied by, dedicated ground- and space-based multi-frequency observations, can the \textit{Fermi}/LAT\ unfold its full capability of providing a tremendous opportunity for systematic and detailed studies of the physical processes at work in AGN. Consequently, a large suite of different multi-wavelength (MW) monitoring data and projects (``single-dish", VLBI, polarization, spectra) across the whole electromagnetic spectrum (cm/mm/sub-mm, IR/optical/UV, X-ray, TeV) are essential to complement the \textit{Fermi}\ $\gamma$-ray observations. Together, important fundamental questions about e.g., $\gamma$-ray production, overall emission and variability processes as well as the location of the $\gamma$-ray emission region can be effectively addressed. In this framework, the \textit{Fermi}\ AGN group has realized a detailed plan for ad-hoc as well as intensive long-term campaigns. Many of these have been triggered, often for sources detected in flaring states. Here, the 15\,GHz OVRO 40-m and F-GAMMA cm to sub-mm (Effelsberg 100-m, IRAM 30-m, APEX 12-m) monitoring programs, the GASP (radio/IR/optical) collaboration including many IR/optical telescopes (radio: UMRAO, Mets\"ahovi, SMA, Medicina, Noto), RATAN-600, ATCA, Kanata, ATOM, SMARTS, Stewart observatory, MDM, WIRO, KVA, INAOEP, VLT/VISIR as well as VLBI: MOJAVE, TANAMI, the Boston 43\,GHz program, a VLBA multi-frequency ToO program and the EVN/LBA have been participating in one or more of the various campaigns. In addition the X-ray bands have often been covered by the space-based X-ray observatories \textit{Swift}, \textit{Suzaku}, and RXTE. In particular, \textit{Swift} has proven to be extremely valuable in quickly providing detailed and simultaneous observations at optical/UV and X-ray bands for many sources. Furthermore, \textit{Spitzer} has participated in the case of 3C\,454.3 with important near-IR data. Finally, first combined \textit{Fermi}/LAT\ and TeV campaigns led to joint studies with the Cherenkov telescopes HESS and VERITAS as e.g., in the case of PKS\,2155-304 (Aharonian et al. 2009) and 3C\,66A (Abdo et al. in prep.). Since the launch of \textit{Fermi}-GST in 2008, many sources have been target of detailed MW campaigns triggered by the \textit{Fermi}\ AGN group. Table 1 provides a short summary of those which have been published so far. Many other MW campaign publications are accepted, have been submitted or are in progress, e.g., for the galactic plane source J\,0109+6134 (Abdo et al. 2010d), PKS\,1510-089, Mrk\,501, Mrk\,421 and 3C\,454.3. As examples, we review here a few interesting results from three selected MW campaigns recently conducted (2008--2009) by the \textit{Fermi}\ AGN group together with many MW collaborators. \begin{table} \caption[]{Publication summary of sources studied by the \textit{Fermi}~AGN group including (quasi-) simultaneous MW data. Joint GeV/TeV projects are also included.} \label{table1} \centering \[ \resizebox{\columnwidth}{!}{% \begin{tabular}[2]{@{}ll@{}} \hline \hline \noalign{\smallskip} Source & Reference \\ \noalign{\smallskip} \hline \noalign{\smallskip} RGB\,J0710+591 & Acciari, V.~A., et~al.\ 2010, \apjl, 715, L49 \\ 5 FSRQs & Abdo, A.~A., et~al.\ 2010, \apj, 716, 835 \\ PKS 1424+240 & Acciari, V.~A., et~al.\ 2010, \apjl, 708, L100\\ 3C\,279 & Abdo, A.~A., et~al.\ 2010, \nat, 463, 919\\ PKS 1502+106 & Abdo, A.~A., et~al.\ 2010, \apj, 710, 810\\ NGC\,1275 & Acciari, V.~A., et~al.\ 2009, \apjl, 706, L275,\\ & Abdo, A.~A., et~al.\ 2009, \apj, 699, 31\\ 3C\,454.3 & Abdo, A.~A., et~al.\ 2009, \apj, 699, 817\\ PKS 2155-304 & Aharonian, F., et~al.\ 2009, \apjl, 696, L150\\ PMN J0948+0022 & Abdo, A.~A., et~al.\ 2009, \apj, 707, 727,\\ & Abdo, A.~A., et~al.\ 2009, \apj, 699, 976\\ PKS 1454-354 & Abdo, A.~A., et~al.\ 2009, \apj, 697, 934 \\ \noalign{\smallskip} \hline \end{tabular} } \] \end{table} \section{The $\gamma$-ray/optical polarization angle event in 3C\,279} After about 100 days of \textit{Fermi}/LAT\ routine operations, the quiescent phase of flat-spectrum radio quasar (FSRQ) 3C\,279 turned into a phase of strong $\gamma$-ray activity and a MW campaign was triggered including a large number of instruments (see Fig.~\ref{3c279}, Abdo et al. 2010b). As seen from Fig.~\ref{3c279}, 3C\,279 went into a high $\gamma$-ray state at around MJD 54780 lasting for about 120 days and characterized by a double-peak structure with overall variations of the flux by a factor $\sim$\,10. The observed $\gamma$-ray luminosity of $\sim\,10^{48}$\,erg\,s$^{-1}$ dominates the power emitted across the whole electromagnetic spectrum (see Fig.~\ref{3c279_SED}). \begin{figure}[thbp] \includegraphics[clip,width=\columnwidth]{lfu_fig1.eps} \caption{ Multi-frequency light curves of 3C\,279 obtained during the large MW campaign between July 2008 and June 2009 (see Abdo et al. 2010b for details). Many MW facilities participated such as \textit{Swift}-XRT and RXTE at X-ray bands; many telescopes of the GASP collaboration, Kanata, \textit{Swift} UVOT and KVA at IR/optical/UV bands, as well as SMA, UMRAO, OVRO, Mets\"ahovi, Medicina, Noto and Effelsberg at radio bands. Note the smooth optical polarization angle swing during the period of the second, rapid $\gamma$-ray flare (dotted lines).} \label{3c279} \end{figure} \begin{figure*}[t] \includegraphics[clip,width=0.80\linewidth]{lfu_fig2.eps} \hfill\parbox[b]{0.17\textwidth}{\caption[]{ The red data points denote the period of the $\gamma$-ray/optical event. The blue data points have been taken during the period of the isolated X-ray flare (see Abdo et al. 2010b for details).} } \label{3c279_SED} \end{figure*} The most striking event occured during the rapid, second $\gamma$-ray flare (around MJD 54880, doubling time scale of about one day). Here, a highly correlated behavior of the $\gamma$-ray and optical bands is evident between MJD 54800 and 54830, with the $\gamma$-ray flare coincident with a significant drop of the level of optical polarization, from about 30\,\% down to a few percent lasting for about 20 days. In particular, this event is associated with a dramatic change of the electric vector position angle (EVPA) by 208$^{\circ}$ (12$^{\circ}$/day), in contrast to being relatively constant earlier, at about 50$^{\circ}$, which corresponds to the jet direction of 3C\,279 as observed by VLBI. The close association of the $\gamma$-ray flare with the optical polarization angle event clearly suggests that the $\gamma$-ray emission was produced in a single, coherent event and happened co-spatial with the optical. It furthermore suggests highly ordered magnetic fields in the $\gamma$-ray emission region. Compared to the higher energy emission, the radio cm/mm bands showed less strong variability and no obvious ``correlated event" is evident from the light curves shown in Fig.~\ref{3c279}. Still, the source appears to be at higher radio flux levels (factor $\sim$\,2) during the period of the overall $\gamma$-ray high state as seen from the 230\,GHz SMA data. However, assuming the source was still optically thick at these bands, synchrotron self-absorption arguments constrain the transverse size of the emission region to $<\,5\,\times\,10^{16}$\,cm, in good agreement with the values obtained from the shortest $\gamma$-ray variability. The gradual rotation of the optical polarization angle requires a non-axisymmetric trajectory of the emission pattern, since in a uniform, axially-symmetric case, any e.g., compression due to a shock moving along the jet would result in a change of polarization degree, but not in a gradual change of the EVPA. Consequently, two models have been discussed to explain the observed behavior in a non-axisymmetric/curved geometry: the emission region/knot propagating outwards along (i) helical magnetic field lines (similar to the optical polarization event observed in BL\,Lacertae, Marscher et al. 2008) and (ii) along the curved trajectory of a bent jet. In both scenarios the distance of the dissipation region from the central engine can be constrained from the $\sim$\,20\,day time-scale of the event. The distance obtained is about 5 orders of magnitude larger than the gravitational radius of the black hole in 3C\,279 and implies a jet opening angle of $<$\,0.2$^{\circ}$, smaller than typically observed with VLBI. Although less likely, models resulting in a much smaller distance (sub-parsec) can not be completely ruled out. At the large distances implied by the two models (parsecs), the seed photons for the IC emission should then mostly be provided by the torus IR and jet synchrotron emission rather than BLR or accretion disk emission. Another interesting feature is the isolated X-ray flare at MJD 54950, about 60\,days after the second $\gamma$-ray flare. The hard X-ray spectrum during this period and the similarity of its shape and time-scale to the $\gamma$-ray flare argue in favor of an isolated event which is difficult to reconcile with simple one-zone models. \section{PMN\,J0948+0022 and Narrow-line Seyfert 1 galaxies} Before the launch of \textit{Fermi}-GST the known types of $\gamma$-ray emitting AGN were blazars and radio galaxies. Indeed, the early \textit{Fermi}/LAT\ three month results (Abdo et al. 2009c) confirmed that the extragalactic $\gamma$-ray sky is dominated by radio-loud AGN, being mostly blazars and a few radio galaxies. However, an important and impressive early discovery of {\it Fermi}-GST is the detection of $\gamma$-rays from a different class of AGN: Narrow Line Seyfert\,1 galaxies (NLS1). These types of objects are believed to be active nuclei similar to those of Seyferts with optical spectra showing permitted lines from the broad-line region, although much narrower than typically seen in Seyfert 1s or blazars (FWHM(H$\beta)\,<$\,2000\,km\,s$^{-1}$). This and other characteristics make them a unique class of AGN, whereas a large fraction is radio-quiet, and only less than 7\,\% (Komossa et al. 2006) are found to be radio-loud. The first \textit{Fermi}/LAT\ detection of $\gamma$-rays from a NLS1, namely in PMN\,J0948+0022 (Abdo et al. 2009b), certainly once more raised the question whether relativistic jets exist in this type of object, as indicated by previous studies in particular for the most radio-loud NLS1 (e.g., Foschini et al. 2009a). \begin{figure*}[t] \centering \includegraphics[clip,width=0.80\linewidth]{lfu_fig3.eps} \hfill\parbox[b]{0.17\textwidth}{\caption[]{ The SED of PMN\,J0948+0022 as obtained during the MW campaign, here shown in comparison to other $\gamma$-ray emitting FSRQs, BL\,Lacs and radio galaxies (from Foschini et al. 2009b, see also Abdo et al. 2009b, Abdo et al. 2009d).} } \label{0948_SED} \end{figure*} The answer came promptly. MW follow-up observations of PMN\,J0948+0022 performed right after its $\gamma$-ray detection (Abdo et al. 2009b) as well as through a triggered MW campaign during March--July 2009 (Abdo et al. 2009d) demonstrated the efficiency of MW observations/campaigns in conjunction with \textit{Fermi}/LAT: these MW studies have demonstrated that PMN\,J0948+0022 hosts a relativistic jet. Here, early SED studies using non-simultaneous plus simultaneously acquired MW data (Effelsberg 100-m, OVRO 40-m, \textit{Swift} satellite) revealed an SED similar to that of powerful FSRQs with the typical double-humped appearance peaking in the far-IR and in the 40--400\,MeV range (see Fig.~\ref{0948_SED}). Signs of the accretion disk peaking in the \textit{Swift} UV frequency range are clearly seen, which yields a lower limit to the black hole mass of $1.5\,\times\,10^{8}$\,M$_{\odot}$. The time-resolved SEDs have been fitted using the one-zone synchrotron/IC model of Ghisellini \& Tavecchio (2009) resulting in synchrotron/SSC components (dominating the radio to X-ray frequencies) and an EC component producing the $\gamma$-ray emission. The physical parameters are similar to those of blazars, however, with lower power compared to FSRQs but higher values than typically seen for BL\,Lacs (see Fig.~\ref{0948_SED}). From the radio perspective alone, the presence of a relativistic jet in PMN\,J0948+0022 appears obvious due to several findings, such as (i) flux density as well as spectral variability/flare over the duration of the campaign with flat ($\alpha_{5-15\,\mathrm{GHz}}\sim 0$) to highly inverted (max.:\,$\alpha_{5-15\,GHz}=0.98\pm0.05$) Effelsberg/RATAN radio spectra, (ii) equipartition Doppler factors of up to $\sim$\,7, (iii) a highly compact, unresolved core on pc-scales (MOJAVE VLBA and EVN/LBA) with a 15\,GHz core size of $<$\,60\,$\mu$as and corresponding core brightness temperature of $1.0\times 10^{12}$\,K and finally, (iv) VLBI core fractional linear polarization of 0.7\%. This is the signature of a relativistic radio jet similar to those seen in powerful blazar type objects. The radio flare seen in the OVRO/Mets\"ahovi light curves as well as Effelsberg/RATAN radio spectra appears to be delayed with respect to the $\gamma$-ray peak by 1.5--2 months. In summary, the \textit{Fermi}/LAT\ and MW observations of PMN\,J0948+0022 clearly demonstrate for the first time the existence of a $\gamma$-ray emitting NLS1 hosting a relativistic jet similar to blazars even though the environment in the vicinity of the central engine is most likely pretty different. However, this strongly challenges our view that jets can only develop in elliptical galaxies. Follow-up \textit{Fermi}/LAT\ and MW observations of PMN\,J0948+0022 and the three other NLS1 detected by \textit{Fermi}/LAT\ (Abdo et al. 2009e) will certainly shed further light on this interesting new type of $\gamma$-ray emitting AGN. \section{The early $\gamma$-ray flare of 3C\,454.3 during 2008} During the early check-out phase of \textit{Fermi}/LAT\ and the subsequent early operation in survey mode (July-October 2008), strong and highly variable $\gamma$-ray emission from the quasar 3C\,454.3 was detected (Abdo et al. 2009a) showing a large outburst in July 2008 and subsequently, distinct symmetrically shaped sub-flares on time scales of days (see Fig.~\ref{3c454_lc1}). Such rapid $\gamma$-ray variability indicates a highly compact emission region and relativistic beaming with a Doppler factor of $\delta\,>\,8$ in order to be optically thin to pair production. The observed $\gamma$-ray spectrum obtained from the early \textit{Fermi}/LAT\ data has demonstrated for the first time the existence of a spectral break for a high-luminosity blazar above 100\,MeV, which may be regarded as evidence for an intrinsic break in the energy distribution of the radiating particles. Alternatively, the spectral softening above 2\,GeV could be due to $\gamma$-ray absorption via photon-photon pair production on the soft X-ray photon field of the host active galactic nucleus or due to the superposition of two different spectral components at high energies (Abdo et al. 2009a, see also Finke \& Dermer 2010 for a more detailed study of the spectral break). \begin{figure} \centering \includegraphics[clip,width=\columnwidth,angle=0]{lfu_fig4.eps} \caption{Selection of multi-band light curves for 3C\,454.3 (Abdo et al. in prep.) obtained at $\gamma$-rays, optical R band (GASP and ATOM telescopes) and mm/sub-mm bands (SMA, IRAM 30-m). Note the similar variability pattern.} \label{3c454_lc1} \end{figure} The large multi-wavelength campaign (Abdo et al. in prep.) triggered by the Fermi AGN team shortly after the detection of the high $\gamma$-ray state of the source in July/August 2008, resulted in a so far unprecedented frequency coverage from cm/mm/sub-mm bands, IR/optical/UV, X-ray to the GeV range. These data sets demonstrate an active phase of 3C\,454.3 across the whole electromagnetic spectrum. Figure~\ref{3c454_lc1} shows a selection of MW light curves including the $\gamma$-ray, optical (R) and short-mm bands. Interestingly, a similar variability pattern is seen at all bands, even down to the short-mm bands - up to frequencies higher than or close to the radio synchrotron turn-over at around 100\,GHz. A detailed time series and cross-band analysis of the best sampled light curves reveals (i) strong correlations between $\gamma$-ray/optical and optical/mm-bands (ii) a quasi-periodic modulation of the variability with a fast (about 20\,days) and slow (about 60\,days) component seen at all bands, with similar start and stop times. The optical polarization angle data, however, shows no obvious strong pattern, in contrast to the case of 3C\,279. \begin{figure*} \centering \includegraphics[trim=50 10 65 5, clip, width=0.80\linewidth]{lfu_fig5.ps} \hfill\parbox[b]{0.17\textwidth}{\caption[]{ The full collection of multi-frequency radio cm/mm/sub-mm light curves of 3C\,454.3 obtained during the MW campaign (Abdo et al. in prep.). Many radio telescopes were involved: UMRAO, Medicina, Noto, Effelsberg, OVRO, Mets\"ahovi, IRAM 30-m, SMA. The great frequency coverage has been used to study detailed spectral evolution.} } \label{3c454_lc2} \end{figure*} Doppler factors in the range of 3--9 derived from the radio variability (Fig.~\ref{3c454_lc2}) are in good agreement with those obtained from synchrotron self-absorption and $\gamma$-pair production arguments. Three epochs of multi-frequency VLBA ToO observations clearly show that the total single-dish variability originates from the core region while the core spectrum nicely resembles the (inverted) total single-dish spectrum. The evolution of the synchrotron turnover frequency as obtained from the detailed radio light curves shown in Fig.~\ref{3c454_lc2} is in good agreement with the shock-in-jet model of Marscher \& Gear (1985) (as are the increasing time lags towards longer radio wavelengths), at least in the synchrotron and adiabatic phases. However, departures from the Compton phase indicate additional processes at work as indicated already by the cross-band analysis. Detailed modeling with geometrical (e.g., helical jet) as well as SSC/EC models are in progress in order to explain the complex behavior of the source in a consistent manner. \section{Conclusions} As a powerful ``all-sky-monitor" the \textit{Fermi}/LAT\ instrument provides a unique opportunity to explore the high energy $\gamma$-ray sky and the $\gamma$-ray characteristics of the AGN population. In particular, when combined with ground- and space-based multi-frequency observations, \textit{Fermi}/LAT\ unfolds its full capability in addressing fundamental questions about energy production in AGN. This becomes possible due to the large efforts of the MW community in providing detailed, (quasi-) simultaneous broad-band data for a large number of \textit{Fermi}-detected AGN. Since 2008 the \textit{Fermi}\ team has triggered a large number of MW campaigns. The success of such campaigns, although challenging for both observers and theoreticians---as demonstrated by the examples presented here---increasingly provides deeper insight into the physical processes involved. \begin{acknowledgements} The Fermi/LAT Collaboration acknowledges support from a number of agencies and institutes for both development and the operation of the LAT as well as scientific data analysis. These include NASA and DOE in the United States, CEA/Irfu and IN2P3/CNRS in France, ASI and INFN in Italy, MEXT, KEK, and JAXA in Japan, and the K. A. Wallenberg Foundation, the Swedish Research Council and the National Space Board in Sweden. Additional support from INAF in Italy and CNES in France for science analysis during the operations phase is also gratefully acknowledged. \end{acknowledgements}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{intro} Clustering is the unsupervised task of assigning a categorical value $y_i \in \{1,\ldots,k\}$ to each data point $x_i \in \mathbf{X}$, where no such example categories are given in the training data; i.e., we should map $\mathbf X= \{x_1,\ldots,x_n\}\mapsto \mathbf Y = \{y_1,\ldots,y_n\}$ with $\bf X$ the input matrix of n data points, each of dimension d; where $y_i = \kappa$ implies that data point $x_i$ is assigned to the $\kappa$-th cluster. Clustering methods complete this task by measuring similarity (the distance) between training pairs, using a similarity function $s(x_i,x_j) \in \mathbb{R}_+$. This similarity function should typically reflect subjective criteria fixed by the user. Basically, this means that the user decides what makes a good clustering. As mentioned in \cite{learn}, ``since classes are a high-level abstraction, discovering them automatically is challenging, and perhaps impossible since there are many criteria that could be used to cluster data (e.g., we may equally well cluster objects by colour, size, or shape). Knowledge about some classes is not only a realistic assumption, but also indispensable to narrow down the meaning of clustering". Taking the example of MNIST \cite{MNIST_digits}, one usually groups the same numbers together because these numbers share the highest amount of features (e.g., mutual information based models do that). However one may want to group numbers given their roundness. In this case, we may obtain two clusters, namely straight shaped numbers (i.e., 1, 4,7) and round shaped numbers (i.e., all the others). Both clustering solutions are relevant, since each clustering addresses a different yet possible user subjective criteria (i.e., clustering semantics). Finding an automated way to derive and incorporate user criteria in a clustering task based on intended semantics can be very hard. Nowadays, the wide availability of shared annotated datasets is a valuable asset and provides examples of possible user criteria. Hence, we argue that, given ``similar'' annotated data, classification logic can be used to derive a user criteria that one can apply to clustering similar non-annotated data. For example, we consider the situation where a human is placed in front of two datasets, each one consisting of letters of a certain alphabet she does not understand. The first dataset is annotated, grouping the same letters together. Only by seeing the first dataset, the person can understand the grouping logic used (grouping same geometrical shapes together) and replicate that logic to the second non annotated dataset and cluster correctly its letters. In this paper, we are interested in tackling the problem of clustering data when the logic (i.e., user clustering criteria) is encoded into some available labelled datasets. This raises two main challenges, namely (1) find a solution that works well on the classification task but (2) ensure transferability in its decision mechanism so it is applicable to clustering data from a different domain. We believe that addressing these challenges calls for the design of a scoring function that should be as general as possible to ensure transferability but is specific enough not to miss the user criteria. More specifically, the scoring function should be a comparing the logic used to produce a certain clustering to the one used to produce clusterings of the already seen training datasets. Using the concept of logic is useful as a logic is general enough to be used on any dataset and specific enough as is it is the main common property shared by all training dataset. Our goal is then to find a suitable metric that retrieves and encapsulate the seen concept for scoring a clustering outcome. Moreover, modern applications require solutions that are effective when data is of high dimension (i.e., large $d$). While distance-based approaches are broadly used for clustering (e.g., Euclidean distance), we argue that they are not suitable for our problem since they would yield in data specific models in addition to their poor performance in high dimensional spaces due to the curse of dimensionality. To lower dimensionality, a solution is to perform instance-wise embeddings $x_i \mapsto z_i$, e.g., with an autoencoder. However this mechanism is still domain specific. To achieve training on more general patterns, we think it is necessary to take the dataset in its entirety. Therefore, instead of learning a metric that compares pairs of data points in a dataset instance (like a similarity measure), a learned metric is applied to sets of data points so comparison is done between sets. The metric can be intuitively understood as a distance between the logic underlying a given clustering and the general logic that was used to produce clusterings in training datasets. For this, we propose a solution where we use a graph autoencoder \cite{GAE} to embed a set of data points into a vector of chosen dimension. Then, we use the critic part of a Wasserstein GAN (WGAN) \cite{WGAN} to produce a continuous score of the embedded clustering outcome. This critic represents the metric we seek. Thus, our main contributions are: \vspace{-2mm} \begin{itemize} \item We provide a framework for joint metric learning and clustering tasks. \vspace{-2mm} \item We show that our proposed solution yields a learned metric that is transferable to datasets of different sizes and dimensions, and across different domains (either vision or tabular) and tasks. \vspace{-2mm} \item We obtain results competitive to the state-of-the-art with only a small number of training datasets, relatively simple networks, and no prior knowledge (only an upper bound of the cluster number that can be set to a high value). \vspace{-6mm} \item Our method is scalable to large datasets both in terms of number of points or dimensions (e.g the SVHN dataset used in section \ref{sec:experiments}) as it does not have to compute pairwise distances and therefore does not heavily suffer when the number of points or dimensions increase. \vspace{-2mm} \item We test the metric on datasets of varying complexity and perform on par with the state-of-the-art while maintaining all the advantages cited above. \end{itemize} \section{Related Work}\label{related} Using auto-encoders before applying classic clustering algorithms resulted in a significant increase of clustering performance, while still being limited by these algorithms capacity. Deep Embedding Clustering (DEC) \cite{DEC} gets rid of this limitation at the cost of more complex objective functions. It uses an auto-encoder along with a cluster assignment loss as a regularisation. The obtained clusters are refined by minimising the KL-divergence between the distribution of soft labels and an auxiliary target distribution. DEC became a baseline for deep clustering algorithms. Most deep clustering algorithms are based on classical center-based, divergence-based or hierarchical clustering formulations and hence bear limitations like the need for an \textit{a priori} number of clusters. MPCKMeans \cite{mpckmeans} is more related to metric learning as they use constraints for both metric learning and the clustering objective. However, their learned metrics remain dataset specific and are not transferable. Constrained Clustering Network (CCN) \cite{transfer_clustering}, learns a metric that is transferable across domains and tasks. Categorical information is reduced to pairwise constraints using a similarity network. Along with the learned similarity function, the authors designed a loss function to regularise the clustering classification. But, using similarity networks only captures local properties instance-wise rather than global geometric properties of dataset clustering. Hence, the learned metric remains non fully transferable, and requires to adapt the loss to the domain to which the metric is transferred to. In Deep Transfer Clustering (DTC) \cite{learn} and Autonovel \cite{autonovel}, the authors tackle the problem of discovering novel classes in an image collection given labelled examples of other classes. They extended DEC to a transfer learning setting while estimating the number of classes in the unlabelled data. Autonovel uses self-supervised learning to train the representation from scratch on the union of labelled and unlabelled datasets then trains the data representation by optimizing a joint objective function on the labelled and unlabelled subsets of data. We consider these two approaches as our state of the art baselines. \section{Our Framework} To restate our objective, we seek an evaluation metric \begin{equation}\label{eq:map2} \begin{split} r : \mathbb R^{\bf n\times d} \times \mathbb {N}^{\bf n}\rightarrow \mathbb{R}\\ (\bf X,\bf y)\mapsto r(\bf X,\bf y) \end{split} \end{equation} where $\bf X \in \mathbb R^{n\times d}$ is a dataset of $n$ points in $d$ dimensions and $\bf y \in \mathbb N^n$ a partition of $\bf X$ (i.e. a clustering of $\bf X$). Metric $r$ should provide a score for \emph{any} labelled dataset of any dimensionality; and in particular this score should be such that $r(\bf{X},\bf y)$ is high when the hamming distance between the ground truth labels $\bf y^*$ and $\bf y$ is small (taking cluster label permutations into account). This would mean that we could perform clustering on any given dataset, simply by solving an optimisation problem even if such a dataset had not been seen before. Formally stated, our goal is: (1) to produce a metric $r$ that grades the quality of a clustering such that $\bf{y}^*=\argmax_{\bf y} r(\bf X, \bf y)$; (2) Implement an optimisation algorithm that finds $\bf y^*$; (3) use (1) and (2) to perform a clustering on a new unrelated and unlabelled dataset. We use a collection $\mathcal{D} = \{\mathbf{X}_l,\mathbf{y}_l^*\}_{l=1}^\ell$ of labelled datasets as examples of correctly `clustered' datasets, and learn $r$ such that $\mathbb{E}[r(\mathbf{X},\mathbf{y})]$ is high. In order to make $r$ transferable between datasets, we embed each dataset with its corresponding clustering ($\mathbf X_l,\mathbf y_l)$ into a vector $\mathbf z_l \in \mathbb R^{\bf e}$. More formally, the embedding function is of the form: \begin{equation} \begin{split} g: \, \, & \mathbb R^{\bf n\times d}\times \mathbf Y \rightarrow \mathbb R^\mathbf e \\ & (\bf X,\bf y)\mapsto \bf z \end{split} \end{equation} Therefore, the metric $r$ is actually the composition of two functions $g$ and $c_\theta$ (the scoring function from $\mathbb R^{\bf e}$ to $\mathbb R$). Our training procedure is structured around 3 blocs A, B and C detailed in next sections and depicted in figure \ref{framework} and is summarised in the following main steps: \vspace{15mm} \begin{enumerate}[{Bloc A}. step 1] \item Select a labelled dataset $(\bf{X},\bf{y}^*) \sim\mathcal{D}$ \vspace{-2mm} \item Given a metric function $r$ (output from bloc B step 2, or initialised randomly), we perform a clustering of dataset $\bf X$: $\mathbf{\hat y} =\argmax_\mathbf{y} r(\mathbf{X},\mathbf{y})$ \end{enumerate} \vspace{-2mm} \begin{enumerate}[{Bloc B}. step 1] \item $\bf y^*$ and $\bf{\hat{y}}$ are represented as graphs where each clique represents a cluster.\vspace{-2mm} \item Graph convolutional autoencoders perform feature extraction from $\bf \hat{y}$ and $\bf y^*$ and output embeddings $\bf \hat{z}$ and $\bf z^*$ \end{enumerate} \vspace{-2mm} \begin{enumerate}[{Bloc C}. step 1] \item The metric $r$ is modelled by a WGAN critic that outputs evaluations of the clusterings: $r(\bf X,\bf y^*) = c_\theta(\bf z^*)$ and $r(\bf X,\bf \hat{y}) = c_\theta(\bf \hat z)$\vspace{-2mm} \item Train the model using the error between $r(\bf X,\bf y^*)$ and $r(\bf X,\bf \hat{y})$. \end{enumerate} \vspace{-3mm} \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{Figure.png} \caption{Our framework's 3 components: the clustering mechanism (A), the GAE (B) and the WGAN (C). (A) takes an unlabelled dataset $\mathbf {X}$ as input and outputs a clustering $\mathbf{\hat{y}}$ that maximises a metric $r$. $\mathbf{\hat{y}}$ is then turned into a graph $\mathcal{G}(\mathbf{X},\mathbf{\hat{y}})$ then into an embedding vector $\mathbf{\hat{z}}$ using (B). Same goes for the correctly labelled dataset, which is embedded as $\mathbf{\hat{z}^*}$. Then, (C), which is the metric itself, evaluates $\mathbf{\hat{z}}$ and $\mathbf{z}^*$ using $c_\theta$ and is trained to produce a new metric $r$ which is then used for (A) in the next iteration.} \label{framework} \end{figure} \begin{comment} \begin{table}[!ht] \centering \caption{Summary of notations} \begin{tabular}{lp{6.3cm}} \hline $\mathbf X$ & A dataset of $n$ points, $x_i \in \mathbb{R}^{\bf d}$\\ $\mathbf y^*$ & True clustering (cluster labels) of $\mathbf X$, $\in \{1,\dots,k\}^n$ \\ $\mathbf y$ & A possible clustering of $\mathbf X$ \\ $\hat{\bf y}$ & Clustering retained after optimisation for a fixed function $r$\\ $\mathcal{M}_{n,m}$ & Set of matrices with $n$ lines and $m$ columns\\ $\mathcal{G}(\mathbf X,\mathbf y)$ & Graph representing a clustered version of $\mathbf X$\\ $A$ & An adjacency matrix \\ $X$ & Feature matrix of $\mathbf X$. $X \in \mathcal{M}_{n,d} $\\ $\bf z^*,\bf\hat{z}$ & Embedding, $\in \mathbb{R}^{\bf e}$, of $(\bf{X},\bf{y}^*)$, and $(\bf{X},\bf{y})$, respectively \\ $r$ & Metric $\mathbb{R}^{\bf n\times d}\times \mathbb N^{\mathbf n} \mapsto \mathbb{R}$, scoring of the clustering \\ CEM & Cross-entropy method\\ $\mathcal{S}$ & Set of all intermediate clustering solutions found through CEM \\[1ex] \hline \end{tabular} \end{table} \end{comment} \vspace{-8mm} \subsection{Clustering mechanism}\label{clustering} We seek the most suitable optimisation algorithm for clustering given $r$. Considering a neural network that performs the clustering, we need to find its weights $w$ such that the metric is maximised (see equation \eqref{w}). The type of algorithm to use depends on the nature of the metric $r$ to optimise on. \begin{equation}\label{w} \text{CEM}_r(\mathbf X)\xrightarrow{\text{finds}} w^* = \argmax_w r(\mathbf{X},\mathbf{y}^w) \end{equation} Where $\mathbf y^w$ is a clustering obtained with the weights $w$. The metric is assumed to hold certain properties, discussed in \ref{critic}: \begin{itemize} \item \textbf{Unique Maximum:} A unique optimal clustering. $r$ has a unique maximum. \vspace{-6mm} \item \textbf{Continuity\footnote{As a reminder, Let $T$ and $U$ be two topological spaces. A function $f:T\mapsto U$ is continuous in the open set definition if for every $t\in T$ and every open set $u$ containing $f(t)$, there exists a neighbourhood $v$ of $t$ such that $f(v)\subset u$.}}: Any two clusterings $\mathbf y$ and $\mathbf y'$ should be similar if $r(\mathbf y)$ and $r(\mathbf y')$ are close in $\mathbb{R}$ space. Hence, $r$ has to satisfy a continuity constraint. \end{itemize} There is no guarantee that the best metric for the clustering task is differentiable. Given the above assumptions, conditions are favourable for evolutionary strategies (ES) to iteratively converge towards the optimal solution. Indeed, if $r$ is continuous and the series $((\mathbf{X},\mathbf{y}_1),\dots,(\mathbf{X},\mathbf{y}_p))$ converges towards $(\mathbf{X},\mathbf{y}^*)$ then $(r(\mathbf{X},\mathbf{y}_1),\dots,r(\mathbf{X},\mathbf{y}_p))$ converges towards $r(\mathbf{X},\mathbf{y}^*)$. We choose the Cross-Entropy Method (CEM) \cite{CEM}, a popular ES algorithm for its simplicity, to optimise the clustering neural network weights by solving Eq.\eqref{w} (algorithm \ref{Alg:CEM_algo}). \begin{algorithm}[tb] \caption{CEM Algorithm} \label{Alg:CEM_algo} \begin{algorithmic} \STATE \textbf{Input:} Dataset $X\in\mathbb R^{\bf{n} \times \bf{d}}$; score function $r$; $\mu \in \mathbb{R}^{\bf{d}}$ and $\sigma \in \mathbb{R}^{\bf{d}}$; elite percentage to retain $p$; $n$ samples of $w_i \sim \mathcal{N}(\mu,\text{diag}(\sigma))$; $T$ number of iterations \FOR{$\textnormal{iteration}=1$ {\bfseries to} $T$} \STATE Produce $n$ samples of neural network weights $w_i \sim \mathcal{N(\mu,\text{diag}(\sigma))}$ \STATE Produce clusterings $y_i$ of $X$ using each $w_i$ \STATE Evaluate $r_i = r(X,y_i)$ \STATE Constitute the elite set of $p\%$ best $w_i$ \STATE Fit a Gaussian distribution with diagonal covariance to the elite set and get a new $\mu_t$ and $\sigma_t$ \ENDFOR \STATE {\bfseries return:} $\mu$, $w^*$ \end{algorithmic} \end{algorithm} \subsection{Graph based dataset embedding} To capture global properties and be transferable across different datasets, we argue that it is necessary to input all the points of a dataset at once. Hence, instead of pairwise similarities between random pairs of points, we propose to get a representation of the relation between a bunch of neighbouring points. Thus, we represent each dataset by a graph structure $\mathcal{G}(\mathbf{X},\mathbf y)$ where each node corresponds to a point in $\mathbf{X}$ and where cliques represent clusters as shown in figure \ref{framework}. This representation takes the form of a feature matrix $X$ and an adjacency matrix $A$. Using $X$, and $A$, we embed the whole dataset into a vector $\bf z \in \mathbb{R}^{\mathbf e}$. To do so, we use graph autoencoders (GAE). Our implementation is based on \cite{GAE}. \begin{comment} Specifically, we have $\{X,A\} \mapsto \bf z$, under the following mechanism: \begin{equation} \begin{aligned} GCN(X,A)= Relu(\Tilde{A}XW_0) = \Bar{X}\\ \end{aligned} \end{equation} With $\Tilde{A}$ the symmetrically normalized adjacency matrix and $(W_0,W_1)$ the GCN weight matrices. \begin{equation} \begin{aligned} z=\Tilde{A}\Bar{X}W_1 \end{aligned} \end{equation} Finally, the decoder outputs a new adjacency matrix using the sigmoid function $\sigma$: \begin{equation} \begin{aligned} \hat{A}=\sigma(zz^T) \end{aligned} \end{equation} \end{comment} We obtain $z \in \mathcal{M}_{n,m}$ which is dependent of the shape of the dataset (where $m$ is a user specified hyper-parameter). In order to make it independent from the number of points in $\mathcal{X}$, we turn the matrix $z$ into a square symmetrical one $z \xleftarrow{} z^Tz \in \mathcal{M}_{m,m}$. The final embedding corresponds to a flattened version of the principal triangular bloc of $z^Tz$, which shape is $\mathbf e=(\frac{m+1}{2},1)$. However, the scale of the output still depends on the number of points in the dataset. This could cause an issue when transferring to datasets with a vastly different number of data points. It should therefore require some regularisation; in order to simplify, we decided to use datasets with approximately the same number of points. \subsection{A critic as a metric}\label{critic} With embedded vectors of the same shape, we compare the clusterings proposed $\mathbf{\hat{z}}$ and the ground truth ones $\bf z$ using the metric $r$. $r$ is a function mapping an embedding vector $\mathbf z\in \mathbb R^{\mathbf e}$ to $\mathbb{R}$, we therefore parameterise it as: \begin{equation}\label{large_state_reward} r_\alpha(\mathbf X,\mathbf y)=r_\alpha(\mathbf z)=\alpha_1\phi_1(\mathbf z)+\alpha_2\phi_2(\mathbf z)+...+\alpha_h\phi_h(\mathbf z) \end{equation} Where $\phi_j(\mathbf z)\in \mathbb R$. As per \cite{Russell}, learning a viable metric is possible provided both the following constraints: (1) maximising the difference between the quality of the optimal decision and the quality of the second best; (2) minimising the amplitude of the metric function as using small values encourages the metric function to be simpler, similar to regularisation in supervised learning. When maximising the metric difference between the two clusterings that have the highest scores, we get a similarity score as in traditional metric learning problems. The problem is formulated by equation \eqref{general_optimization} where $\mathcal{S}$ is a set of solutions (i.e., clustering proposals) found using $r_\alpha$ and $\mathbf{y}^*$ is the true clustering, $\mathbf{y}^{\text{max}}$ is the best solution found in $\mathcal{S}$: $\mathbf{y}^{\text{max}} = \argmax_{\mathbf{y}\in\mathcal S}r_\alpha(\mathbf X, \mathbf{y})$. \begin{equation}\label{general_optimization} \begin{aligned} \min_\alpha r_\alpha(\mathbf X, \mathbf y^*) & -\max_\alpha \min_{\mathbf y'\in \mathcal S\setminus \mathbf y^{\text{max}}} r_\alpha(\mathbf X,\mathbf y^{\text{max}})-r_\alpha(\mathbf X,\mathbf y')\\ & \quad \text{s.t} \quad \mathbf{y}^*=\argmax_{\mathbf{y}\in \mathbf{Y}}r(\mathbf{y}) \end{aligned} \end{equation} \begin{algorithm}[h!] \footnotesize \caption{Critic2Metric (C2M)}\label{Complete_algo} \SetAlgoLined \KwInput{$b$: batch size, $epoch$: number of epochs; $p$: percentage of elite weights to keep; $iteration$: number of CEM iterations; $population$: number of weights to generate; $\mu \in \mathbb{R}^d$: CEM mean; $\sigma \in \mathbb{R}^d$: CEM standard deviation, $\theta$: critic's weights} \For{$n=1$ {\bfseries to} epoch}{ \For{$k=1$ {\bfseries to} b}{ Sample $(\mathbf X_{k},\mathbf y_k^*) \sim \mathcal D $ a correctly labelled dataset\\ Generate ground truth embeddings $\mathbf z_{(\mathbf X_{k},\mathbf y_k^*)}=GAE(\mathcal{G}(\mathbf X_k,\mathbf y_k^*))$ \\ Initialise clustering neural network weights $\{w_j\}_{j=1}^{population}$ \\ \For{$i=1$ {\bfseries to} iteration}{ \For{$j=1$ {\bfseries to} population} {Generate clusterings $\mathbf{\hat{y}}_k^{w_j}$ \\ Convert $\mathbf{\hat{y}}_k^{w_j}$ into a graph\\ $\mathbf z_{(\mathbf X_{k},\mathbf {\hat{y}}_k^{w_j})}= GAE(\mathcal{G}(\mathbf X_k,\hat{\mathbf y}_k^{w_j}))$ \\ Evaluate: $r(\mathbf X_k,\hat{\mathbf y}_k^{w_j}) = c_\theta(\mathbf z_{(\mathbf X_{k},\mathbf {\hat{y}}_k^{w_j})})$} Keep proportion $p$ of best weights $w_p$ \\ $w^* \xleftarrow{} \text{CEM}(w_p, \mu, \sigma)$} Generate clustering $\mathbf{y}_k^{w^*}$\\ $\mathbf z_{(\mathbf X_{k},\mathbf {\hat{y}}_k^{w^*})} = GAE(\mathcal{G}(\mathbf X_k,\hat{\mathbf y}_k^{w^*}))$ \\ Train critic as in \cite{WGAN} using $\mathbf z_{(\mathbf X_{k},\mathbf {\hat{y}}_k^{w^*})}$ and $\mathbf z_{(\mathbf X_{k},\mathbf y_k^*)}$ \; }} \end{algorithm} \vspace{-5mm} To solve equation \eqref{general_optimization}, we use a GAN approach where the clustering mechanism (i.e., CEM) plays the role of the generator while a critic (i.e., metric learning model) plays the role of the discriminator. In a classic GAN, the discriminator only has to discriminate between real and false samples, making it use a cross entropy loss. With this kind of loss, and in our case, the discriminator quickly becomes too strong. Indeed, the score output by the discriminator becomes quickly polarised around 0 and 1. \vspace{-1mm} For this reason, we represent $r$ as the critic of a WGAN \cite{WGAN}. This critic scores the realness or fakeness of a given sample while respecting a smoothing constraint. The critic measures the distance between data distribution of the training dataset and the distribution observed in the generated samples. Since WGAN assumes that the optimal clustering provided is unique, the metric solution found by the critic satisfies equation \eqref{general_optimization} constraints. $r$ reaching a unique maximum while being continuous, the assumptions made in section \ref{clustering} are correctly addressed. To train the WGAN, we use the loss $\mathcal{L}$ in equation \eqref{WGAN_loss} where $\bf \hat{z}$ is the embedding vector of a proposed clustering and $\bf z$ is the embedding vector of the desired clustering. Our framework is detailed in algorithm \ref{Complete_algo}. \vspace{-1mm} \begin{equation}\label{WGAN_loss} \mathcal{L}(\mathbf z^*,\mathbf {\hat{z}})=\max_{\theta}\mathbb{E}_{\mathbf z^*\sim p}[f_\theta(\mathbf z^*)] - \mathbb{E}_{\mathbf {\hat{z}}\sim p(\mathbf {\hat{z})}}[f_\theta(\mathbf {\hat{z}})] \end{equation} \section{Experiments}\label{sec:experiments} \vspace*{-\baselineskip} \begin{table*}[h!] \centering \begin{adjustbox}{width=\columnwidth, center} \begin{tabular}{||c || c || c || c || c || c || c || c || c || c ||} \hline \multicolumn{1}{||c|}{\textbf{Dataset family}} & \multicolumn{4}{||c|}{Synthetic data} & \multicolumn{3}{||c|}{MNIST} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}Street view\\ house numbers\end{tabular}} & \multicolumn{1}{c||}{Omniglot} \\ \hline \multicolumn{1}{||c|}{\textbf{Dataset}} & \multicolumn{1}{||c|}{Blob} & \multicolumn{1}{||c|}{Moon} & \multicolumn{1}{||c|}{Circles} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}Aniso-\\ tropic\end{tabular} } & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}MNIST-digits\\ \cite{MNIST_digits}\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}letters MNIST\\ \cite{MNIST_letters}\end{tabular} } & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}fashion MNIST\\ \cite{fashion_MNIST}\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}SVHN\\ \cite{SVHN}\end{tabular}} & \multicolumn{1}{c||}{\begin{tabular}{@{}c@{}}Omniglot\\ \cite{omniglot}\end{tabular} } \\ \hline \multicolumn{1}{||c|}{\textbf{Snapshot}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{Blobs.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{Aniso.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{Circles.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{Moons.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{MNIST_example.jpg}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{MNIST_letter.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{MNIST_fashion.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{SVHN.png}}} & \multicolumn{1}{c||}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{Omniglot.PNG}}} \\ \hline \multicolumn{1}{||c|}{\textbf{\begin{tabular}{@{}c@{}}Feature\\ dimension\end{tabular} }} & \multicolumn{1}{||c|}{2} & \multicolumn{1}{||c|}{2} & \multicolumn{1}{||c|}{2} & \multicolumn{1}{||c|}{2} & \multicolumn{1}{||c|}{$28\times 28$} & \multicolumn{1}{||c|}{$28\times 28$} & \multicolumn{1}{||c|}{$28\times 28$} & \multicolumn{1}{||c|}{$32 \times 32$} & \multicolumn{1}{c||}{$105 \times 105$} \\ \hline \multicolumn{1}{||c|}{\textbf{\begin{tabular}{@{}c@{}}Maximum number\\ of clusters\end{tabular}}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}9\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}9\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}9\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}9\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{10} & \multicolumn{1}{||c|}{26} & \multicolumn{1}{||c|}{10} & \multicolumn{1}{||c|}{10} & \multicolumn{1}{c||}{47} \\ \hline \multicolumn{1}{||c|}{\textbf{Size}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}200\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}200\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}200\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}200\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{60000} & \multicolumn{1}{||c|}{145600} & \multicolumn{1}{||c|}{60000} & \multicolumn{1}{||c|}{73257} & \multicolumn{1}{c||}{32460} \\ \hline \end{tabular} \end{adjustbox} \caption{Datasets description} \vspace{-8mm} \label{tab:dataset} \end{table*} For empirical evaluation, we parameterise our framework as follows: The critic (block C in Fig~\ref{framework}) is a 5 layer network of sizes 256, 256, 512, 512, and 1 (output) neurons. All activation functions are LeakyRelu ($\alpha=0.2$) except last layer (no activation). RMSprop optimizer with $0.01$ initial learning rate and a decay rate of $0.95$. The CEM-trained neural network (bloc A in Fig~\ref{framework}) has 1 hidden layer of size 16 with Relu activation, and a final layer of size $k=50$ (the maximum number of clusters). The GAE (bloc B in Fig~\ref{framework}) has 2 hidden layers; sized 32 and 16 for synthetic datasets, and 100 and 50 for real datasets. We choose datasets based on 3 main criteria: having a similar compatible format; datasets should be large enough to allow diversity in subsampling configurations to guarantee against overfitting; datasets should be similar to the ones used in our identified baseline literature. All used datasets are found in table \ref{tab:dataset}. For training, we construct $n$ sample datasets and their ground truth clustering, each containing 200 points drawn randomly from a set of 1500 points belonging to the training dataset. Each one of these datasets, along with their clustering is an input to our model. To test the learned metric, we construct 50 new sample datasets from datasets that are different from the training one (e.g., if we train the model on MNIST numbers, we will use datasets from MNIST letters or fashion to test the metric). The test sample datasets contain 200 points each for synthetic datasets and 1000 points each otherwise. The accuracies are then averaged accross the 50 test sample datasets. To test the ability of the model to learn using only a few samples, we train it using 5 (few shots) and 20 datasets (standard), each containing a random number of clusters. For few shots trainings, we train the critic for 1 epoch and 10 epochs for standard trainings. To evaluate the clustering, we use Normalised-Mutual Information (NMI) \cite{NMI} and clustering accuracy (ACC) \cite{ACC}. NMI provides a normalised measure that is invariant to label permutations while ACC measures the one-to-one matching of labels. For clustering, we only need that the samples belonging to the same cluster are attributed the same label, independently from the label itself. However, since we want to analyse the behaviour of the metric learned through our framework, we are interested in seeing whether it is permutation invariant or not. Hence, we need the two measures. \subsection{Results on 2D synthetic datasets} Analysis on synthetic datasets (see table \ref{tab:dataset}) proves that our model behaves as expected. We do not compare our results to any baseline since existing unsupervised methods are well studied on them. We train our model using exclusively samples from blobs datasets. We then test the learned metric on the 4 different types of synthetic datasets (blobs, anisotropic, moons and circles). Results are displayed in table \ref{sci-kit_results}. We observe that the model obtains the best score on blobs since it is trained using this dataset. We can also notice that our model achieves high scores for the other types of datasets not included in training. \begin{table} [h!] \centering \begin{tabular}{LCCCC} \toprule \multicolumn{1}{l}{Types of datasets} & \multicolumn{2}{c}{Standard training} & \multicolumn{2}{c}{Few shots training} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} \\ \midrule \text{Blobs} & 98.4\% & 0.980 & 97.3\% & 0.965\\ \text{Anisotropic} & 97.9\% & 0.967 & 97.2\% & 0.945\\ \text{Circles} & 91.7\% & 0.902 & 92.7\% & 0.900\\ \text{Moons} & 92.1\% & 0.929 & 92.8\% & 0.938\\ \bottomrule \end{tabular} \caption{Average ACC and NMI on synthetic test datasets.} \vspace{-5mm} \label{sci-kit_results} \end{table} Our model succeeds in clustering datasets presenting non linear boundaries like circles while blobs datasets used in training are all linearly separable. Hence, the model learns intrinsic properties of training dataset that are not portrayed in the initial dataset structure, and thus that the metric appears to be transferable. \textbf{Critic's ablation study}. To test if the critic behaves as expected, i.e., grades the clustering proposals proportionally to their quality, we test it on wrongly labelled datasets to see if the score decreases with the number of mislabelled points. We consider 50 datasets from each type of synthetic datasets, create 50 different copies and mislabel a random number of points in each copy. A typical result is displayed in figure \ref{ablation} and shows that the critic effectively outputs an ordering metric as the score increases when the number of mislabelled points decreases, reaching its maximum when there is no mislabelled point. This shows that the metric satisfies the constraints stated in equation \ref{general_optimization}. \vspace{-1mm} \begin{figure}[h!] \centering \includegraphics[width=0.6\columnwidth]{capture.png} \caption{Metric values (i.e., scores given by the critic) for several clusterings of a dataset. Plots are from an anisotropic dataset (left) and a moons dataset (right). In a 2 cluster case (right), the formula used to compute mislabelled points has been made sensitive to label permutation to verify if permuted labels can fool the critic. The critic assigns a high score either when all the labels match the given ground truth or when all the labels are permuted (which again does not affect the correctness of the clustering)} \vspace{-2mm} \label{ablation} \end{figure} \vspace{-6mm} An interesting behaviour is shown in figure \ref{ablation}. Recall that since we are in the context of a clustering problem, we only need for the samples belonging to the same cluster to get the same label, independently from the cluster label itself. Thus, the formula used to compute mislabelled points has been made sensitive to label permutation to verify if permuted labels can fool the critic. For instance, in a 2 clusters case, one can switch the labels of all points in each cluster and still get the maximum score. Switching all labels makes all the points wrongly labelled compared to the given ground truth but nonetheless the clustering itself remains true. This explains the rounded shape in figure \ref{ablation} where the used datasets in the right panel only consisted of 2 clusters. The critic assigns a high score either when all the labels match the given ground truth or when all the labels are permuted (which does not affect the correctness of the clustering). \vspace{-3mm} \subsection{Results on MNIST datasets}\label{MNIST_section} \vspace{-1mm} MNIST datasets give similar results both in terms of ACC and NMI on all test datasets regardless of the used training dataset (see table \ref{MNIST_result}). Hence, the model effectively capture implicit features that are dataset independent. While standard training shows better results, the few shots training has close performance. \begin{table}[h!] \centering \begin{tabular}{LCCCCCC} \toprule \multicolumn{1}{l}{Training Dataset} & \multicolumn{6}{c}{Testing Dataset} \\ \cmidrule(lr){2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Numbers} & \multicolumn{2}{c}{Letters} & \multicolumn{2}{c}{Fashion} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \multicolumn{1}{c}{} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} \\ \midrule \text{Numbers (standard)} & 72.3\% & 0.733 & 81.3\% & 0.861 & 65.2\% & 0.792 \\ \text{Numbers (few shots)} & 68.5\% & 0.801 & 79.0\% & 0.821 & 61.8\% & 0.672 \\ \text{Letters (standard)} & 75.9\% & 0.772 & 83.7\% & 0.854 & 67.5\% & 0.800 \\ \text{Letters (few shots)} & 69.8\% & 0.812 & 78.7\% & 0.806 & 60.9\% & 0.641 \\ \text{Fashion (standard)} & 70.6\% & 0.706 & 83.4\% & 0.858 & 72.5\% & 0.762 \\ \text{Fashion (few shots)} & 70.1\% & 0.690 & 82.1\% & 0.834 & 70.7\% & 0.697 \\ \bottomrule \end{tabular} \caption{Mean clustering performance on MNIST dataset.} \label{MNIST_result} \end{table} \vspace{-12mm} \begin{table}[h!] \centering \begin{tabular}{LCCCCCC} \toprule \multicolumn{1}{l}{Training Dataset} & \multicolumn{6}{c}{Testing Dataset} \\ \cmidrule(lr){2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Numbers} & \multicolumn{2}{c}{Letters} & \multicolumn{2}{c}{Fashion} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \multicolumn{1}{c}{} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} \\ \midrule \text{Numbers (standard)} & 78.3\% & 92.5\% & 86.0\% & 97.5\% & 69.2\% & 87.2\%\\ \text{Numbers (few shots)} & 75.8\% & 82.1\% & 83.3\% & 92.0\% & 65.1\% & 83.9\% \\ \text{Letters (standard)} & 77.4\% & 89.2\% & 88.8\% & 96.4\% & 70.2\% & 86.7\%\\ \text{Letters (few shots)} & 73.1\% & 80.6\% & 85.1\% & 91.5\% & 61.0\% & 76.3\% \\ \text{Fashion (standard} & 70.1\% & 83.1\% & 85.0\% & 98.6\% & 76.9\% & 94.7\%\\ \text{Fashion (few shots)} & 67.9\% & 77.4\% & 83.5\% & 95.3\% & 70.2\% & 88.0\%\\ \bottomrule \end{tabular} \caption{Critic based performance assessment: Best corresponds to the percentage of times the critic gives the best score to the desired solution. Top 3 is when this solution is among the 3 highest scores.} \label{MNIST_theoretic} \vspace{-4mm} \end{table} \begin{comment} \vspace*{-\baselineskip} \begin{table}[h!] \begin{adjustwidth}{-3cm}{-3cm} \begin{subtable}[t]{0.5\linewidth} \begin{tabular*}{\columnwidth}{LCCCCCC} \toprule \multicolumn{1}{l}{Training Dataset} & \multicolumn{6}{c}{Testing Dataset} \\ \cmidrule(lr){2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Numbers} & \multicolumn{2}{c}{Letters} & \multicolumn{2}{c}{Fashion} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \multicolumn{1}{c}{} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} \\ \midrule \text{Numbers (standard)} & 72.3\% & 0.733 & 81.3\% & 0.861 & 65.2\% & 0.792 \\ \text{Numbers (few shots)} & 68.5\% & 0.801 & 79.0\% & 0.821 & 61.8\% & 0.672 \\ \text{Letters (standard)} & 75.9\% & 0.772 & 83.7\% & 0.854 & 67.5\% & 0.800 \\ \text{Letters (few shots)} & 69.8\% & 0.812 & 78.7\% & 0.806 & 60.9\% & 0.641 \\ \text{Fashion (standard)} & 70.6\% & 0.706 & 83.4\% & 0.858 & 72.5\% & 0.762 \\ \text{Fashion (few shots)} & 70.1\% & 0.690 & 82.1\% & 0.834 & 70.7\% & 0.697 \\ \bottomrule \end{tabular*} \caption{Mean clustering performance on MNIST dataset.} \label{MNIST_result} \end{subtable} \begin{subtable}[t]{0.5\linewidth} \begin{tabular}{LCCCCCC} \toprule \multicolumn{1}{l}{Training Dataset} & \multicolumn{6}{c}{Testing Dataset} \\ \cmidrule(lr){2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Numbers} & \multicolumn{2}{c}{Letters} & \multicolumn{2}{c}{Fashion} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \multicolumn{1}{c}{} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} \\ \midrule \text{Numbers (standard)} & 78.3\% & 92.5\% & 86.0\% & 97.5\% & 69.2\% & 87.2\%\\ \text{Numbers (few shots)} & 75.8\% & 82.1\% & 83.3\% & 92.0\% & 65.1\% & 83.9\% \\ \text{Letters (standard)} & 77.4\% & 89.2\% & 88.8\% & 96.4\% & 70.2\% & 86.7\%\\ \text{Letters (few shots)} & 73.1\% & 80.6\% & 85.1\% & 91.5\% & 61.0\% & 76.3\% \\ \text{Fashion (standard} & 70.1\% & 83.1\% & 85.0\% & 98.6\% & 76.9\% & 94.7\%\\ \text{Fashion (few shots)} & 67.9\% & 77.4\% & 83.5\% & 95.3\% & 70.2\% & 88.0\%\\ \bottomrule \end{tabular} \caption{Critic based performance assessment: Best corresponds to the percentage of times the critic gives the best score to the desired solution. Top 3 is when this solution is among the 3 highest scores.} \label{MNIST_theoretic} \end{subtable} \caption{Results on MNIST datasets} \end{adjustwidth} \vspace{-4mm} \end{table} \end{comment} \vspace{-2mm} Table \ref{MNIST_theoretic} shows the percentage of times the critic attributes the best score to the desired solution. It shows that ES algorithm choice has a significant impact on the overall performance. Even with a metric that attributes the best score to the desired clustering, the CEM may be stuck in a local optimum and fails to reconstruct back the desired clustering. Hence, a better optimisation can enhance the performance shown in table \ref{MNIST_result} closer to the one presented in table \ref{MNIST_theoretic}. \subsection{Comparative study} We compare our approach with baseline methods from the literature (table \ref{comparative_results}). For some methods, we followed the procedure in \cite{transfer_clustering} and used their backbone neural network as a pairwise similarity metric. Table \ref{Results_SVHN} reports results when training on SVHN and testing on MNIST numbers. We obtain close ACC values to CCN and ATDA \cite{ATDA}. These methods uses Omniglot as an auxiliary dataset to learn a pairwise similarity function, which is not required for our model. Our model only uses a small fraction of SVHN, has shallow networks and does not require any adaptation to its loss function to achieve comparable results. Finally, other cited methods require the number of clusters as an a priori indication. We achieve comparable results without needing this information. When the loss adaptation through Omniglot is discarded (denoted source-only in table \ref{Results_SVHN}), or if the number of clusters is not given, their accuracy falls and our model surpasses them by a margin. \vspace{-6mm} \begin{table}[h!] \begin{subtable}[c]{0.5\textwidth} \centering \begin{tabular}{LCC} \toprule \text{Method} & \multicolumn{2}{c}{\text{ACC}} \\ \midrule & \text{Loss Adaptation} & \text{Source Only}\\ \midrule \text{DANN \cite{DANN}} & 73.9\% & 54.9\%\\ \text{LTR \cite{LTR}} & 78.8\% & 54.9\%\\ \text{ATDA \cite{ATDA}} & 86.2\% & 70.1\%\\ \text{CCN \cite{transfer_clustering}} & 89.1\% & 52\%\\ \text{Ours (standard)} & - & 84.3\% \\ \text{Ours (few shots)} & - & 81.4\% \\ \bottomrule \end{tabular} \subcaption{Unsupervised cross-task transfer from SVHN to MNIST digits.} \vspace{-2mm} \label{Results_SVHN} \end{subtable} \begin{subtable}[c]{0.5\textwidth} \centering \begin{tabular}{LCC} \toprule \text{Method} & \text{ACC} & \text{NMI} \\ \midrule \text{k-means} & 18.9\% & 0.464 \\ \text{CSP \cite{CSP}} & 65.4\% & 0.812 \\ \text{MPCK-means \cite{mpckmeans}} & 53.9\% & 0.816 \\ \text{CCN \cite{transfer_clustering}} & 78.18\% & 0.874 \\ \text{DTC \cite{learn}} & 87.0\% & 0.945 \\ \text{Autonovel \cite{autonovel}} & 85.4\% & - \\ \text{Ours (standard)} & 83.4\% & 0.891 \\ \bottomrule \end{tabular} \subcaption{Unsupervised cross-task transfer from $\text{Omniglot}_\text{train}$ to $\text{Omniglot}_\text{test}$ ($k=100$ for all).} \vspace{-2mm} \label{Omniglot_results} \end{subtable} \caption{Comparative clustering performance} \vspace{-8mm} \label{comparative_results} \end{table} Table \ref{Omniglot_results} reports results when training on $\text{Omniglot}_\text{train}$ and testing on $\text{Omniglot}_\text{test}$. Values are averaged across $20$ alphabets which have $20$ to $47$ letters. We set the maximum number of clusters $k=100$. When the number of clusters is unknown, we get an ACC score relatively close to DTC and Autonovel. Compared to these two approaches, our method bears several significant advantages: \begin{itemize} \vspace{-2mm} \item \textbf{Deep Networks}: DTC and Autonovel used Resnets as a backbone which are very deep networks while we only used shallow networks (2 layers maximum) \vspace{-6mm} \item \textbf{Pairwise similarity}: in Autonovel the authors used a pairwise similarity statistic between datasets instances which we aimed to avoid due to its significant computational bottleneck. Moreover, this metric is recalculated after each training epoch, which adds more complexity. \vspace{-2mm} \item \textbf{Vision tasks:} While DTC can only handle vision tasks, we present a more general framework which includes vision but also tabular datasets. \vspace{-2mm} \item \textbf{Number of classes}: DTC and Autonovel used the labelled dataset as a probe dataset, and estimates the number of classes iteratively, and when the labelled clusters are correctly recovered, they used the ACC metric to keep the best clustering. This approach is effective, but requires access to the labelled dataset at inference time to estimate the number of classes. This is a shortcoming (memory or privacy limitations). Our approach does not require the labelled dataset once the metric is learned. Our metric automatically estimates the number of clusters required to any new unlabelled dataset. \end{itemize} \vspace{-2mm} \section{Conclusion}\label{sec:discussion} We presented a framework for cross domain/task clustering by learning a transferable metric. This framework consisted of ES methods, and GAE alongside a critic. Our model extracts dataset-independent features from labelled datasets that characterise a given clustering, performs the clustering and grades its quality. We showed successful results using only small datasets and relatively shallow architectures. Moreover, there is more room for improvement. Indeed, since our framework is composed of 3 different blocs (CEM, GAE, critic), overall efficiency can be enhanced by independently improving each bloc (i.e replacing CEM). In future work, we will study the criteria that determine why some auxiliary datasets are more resourceful than others given a target dataset. In our case, this means to study for instance why using the MNIST letters dataset as training allowed a better performance on Fashion MNIST than when using MNIST numbers. This would allow to deliver a minimum performance guarantee at inference time by creating a transferability measure between datasets. \textbf{Acknowledgements:} We gratefully acknowledge Orianne Debeaupuis for making the figure. We also acknowledge computing support from NVIDIA. This work was supported by funds from the French Program "Investissements d'Avenir". \vspace{-4mm} \bibliographystyle{splncs04} \section{Introduction}\label{intro} Clustering is the unsupervised task of assigning a categorical value $y_i \in \{1,\ldots,k\}$ to each data point $x_i \in \mathbf{X}$, where no such example categories are given in the training data; i.e., we should map $\mathbf X= \{x_1,\ldots,x_n\}\mapsto \mathbf Y = \{y_1,\ldots,y_n\}$ with $\bf X$ the input matrix of n data points, each of dimension d; where $y_i = \kappa$ implies that data point $x_i$ is assigned to the $\kappa$-th cluster. Clustering methods complete this task by measuring similarity (the distance) between training pairs, using a similarity function $s(x_i,x_j) \in \mathbb{R}_+$. This similarity function should typically reflect subjective criteria fixed by the user. Basically, this means that the user decides what makes a good clustering. As mentioned in \cite{learn}, ``since classes are a high-level abstraction, discovering them automatically is challenging, and perhaps impossible since there are many criteria that could be used to cluster data (e.g., we may equally well cluster objects by colour, size, or shape). Knowledge about some classes is not only a realistic assumption, but also indispensable to narrow down the meaning of clustering". Taking the example of MNIST \cite{MNIST_digits}, one usually groups the same numbers together because these numbers share the highest amount of features (e.g., mutual information based models do that). However one may want to group numbers given their roundness. In this case, we may obtain two clusters, namely straight shaped numbers (i.e., 1, 4,7) and round shaped numbers (i.e., all the others). Both clustering solutions are relevant, since each clustering addresses a different yet possible user subjective criteria (i.e., clustering semantics). Finding an automated way to derive and incorporate user criteria in a clustering task based on intended semantics can be very hard. Nowadays, the wide availability of shared annotated datasets is a valuable asset and provides examples of possible user criteria. Hence, we argue that, given ``similar'' annotated data, classification logic can be used to derive a user criteria that one can apply to clustering similar non-annotated data. For example, we consider the situation where a human is placed in front of two datasets, each one consisting of letters of a certain alphabet she does not understand. The first dataset is annotated, grouping the same letters together. Only by seeing the first dataset, the person can understand the grouping logic used (grouping same geometrical shapes together) and replicate that logic to the second non annotated dataset and cluster correctly its letters. In this paper, we are interested in tackling the problem of clustering data when the logic (i.e., user clustering criteria) is encoded into some available labelled datasets. This raises two main challenges, namely (1) find a solution that works well on the classification task but (2) ensure transferability in its decision mechanism so it is applicable to clustering data from a different domain. We believe that addressing these challenges calls for the design of a scoring function that should be as general as possible to ensure transferability but is specific enough not to miss the user criteria. More specifically, the scoring function should be a comparing the logic used to produce a certain clustering to the one used to produce clusterings of the already seen training datasets. Using the concept of logic is useful as a logic is general enough to be used on any dataset and specific enough as is it is the main common property shared by all training dataset. Our goal is then to find a suitable metric that retrieves and encapsulate the seen concept for scoring a clustering outcome. Moreover, modern applications require solutions that are effective when data is of high dimension (i.e., large $d$). While distance-based approaches are broadly used for clustering (e.g., Euclidean distance), we argue that they are not suitable for our problem since they would yield in data specific models in addition to their poor performance in high dimensional spaces due to the curse of dimensionality. To lower dimensionality, a solution is to perform instance-wise embeddings $x_i \mapsto z_i$, e.g., with an autoencoder. However this mechanism is still domain specific. To achieve training on more general patterns, we think it is necessary to take the dataset in its entirety. Therefore, instead of learning a metric that compares pairs of data points in a dataset instance (like a similarity measure), a learned metric is applied to sets of data points so comparison is done between sets. The metric can be intuitively understood as a distance between the logic underlying a given clustering and the general logic that was used to produce clusterings in training datasets. For this, we propose a solution where we use a graph autoencoder \cite{GAE} to embed a set of data points into a vector of chosen dimension. Then, we use the critic part of a Wasserstein GAN (WGAN) \cite{WGAN} to produce a continuous score of the embedded clustering outcome. This critic represents the metric we seek. Thus, our main contributions are: \vspace{-2mm} \begin{itemize} \item We provide a framework for joint metric learning and clustering tasks. \vspace{-2mm} \item We show that our proposed solution yields a learned metric that is transferable to datasets of different sizes and dimensions, and across different domains (either vision or tabular) and tasks. \vspace{-2mm} \item We obtain results competitive to the state-of-the-art with only a small number of training datasets, relatively simple networks, and no prior knowledge (only an upper bound of the cluster number that can be set to a high value). \vspace{-6mm} \item Our method is scalable to large datasets both in terms of number of points or dimensions (e.g the SVHN dataset used in section \ref{sec:experiments}) as it does not have to compute pairwise distances and therefore does not heavily suffer when the number of points or dimensions increase. \vspace{-2mm} \item We test the metric on datasets of varying complexity and perform on par with the state-of-the-art while maintaining all the advantages cited above. \end{itemize} \section{Related Work}\label{related} Using auto-encoders before applying classic clustering algorithms resulted in a significant increase of clustering performance, while still being limited by these algorithms capacity. Deep Embedding Clustering (DEC) \cite{DEC} gets rid of this limitation at the cost of more complex objective functions. It uses an auto-encoder along with a cluster assignment loss as a regularisation. The obtained clusters are refined by minimising the KL-divergence between the distribution of soft labels and an auxiliary target distribution. DEC became a baseline for deep clustering algorithms. Most deep clustering algorithms are based on classical center-based, divergence-based or hierarchical clustering formulations and hence bear limitations like the need for an \textit{a priori} number of clusters. MPCKMeans \cite{mpckmeans} is more related to metric learning as they use constraints for both metric learning and the clustering objective. However, their learned metrics remain dataset specific and are not transferable. Constrained Clustering Network (CCN) \cite{transfer_clustering}, learns a metric that is transferable across domains and tasks. Categorical information is reduced to pairwise constraints using a similarity network. Along with the learned similarity function, the authors designed a loss function to regularise the clustering classification. But, using similarity networks only captures local properties instance-wise rather than global geometric properties of dataset clustering. Hence, the learned metric remains non fully transferable, and requires to adapt the loss to the domain to which the metric is transferred to. In Deep Transfer Clustering (DTC) \cite{learn} and Autonovel \cite{autonovel}, the authors tackle the problem of discovering novel classes in an image collection given labelled examples of other classes. They extended DEC to a transfer learning setting while estimating the number of classes in the unlabelled data. Autonovel uses self-supervised learning to train the representation from scratch on the union of labelled and unlabelled datasets then trains the data representation by optimizing a joint objective function on the labelled and unlabelled subsets of data. We consider these two approaches as our state of the art baselines. \section{Our Framework} To restate our objective, we seek an evaluation metric \begin{equation}\label{eq:map2} \begin{split} r : \mathbb R^{\bf n\times d} \times \mathbb {N}^{\bf n}\rightarrow \mathbb{R}\\ (\bf X,\bf y)\mapsto r(\bf X,\bf y) \end{split} \end{equation} where $\bf X \in \mathbb R^{n\times d}$ is a dataset of $n$ points in $d$ dimensions and $\bf y \in \mathbb N^n$ a partition of $\bf X$ (i.e. a clustering of $\bf X$). Metric $r$ should provide a score for \emph{any} labelled dataset of any dimensionality; and in particular this score should be such that $r(\bf{X},\bf y)$ is high when the hamming distance between the ground truth labels $\bf y^*$ and $\bf y$ is small (taking cluster label permutations into account). This would mean that we could perform clustering on any given dataset, simply by solving an optimisation problem even if such a dataset had not been seen before. Formally stated, our goal is: (1) to produce a metric $r$ that grades the quality of a clustering such that $\bf{y}^*=\argmax_{\bf y} r(\bf X, \bf y)$; (2) Implement an optimisation algorithm that finds $\bf y^*$; (3) use (1) and (2) to perform a clustering on a new unrelated and unlabelled dataset. We use a collection $\mathcal{D} = \{\mathbf{X}_l,\mathbf{y}_l^*\}_{l=1}^\ell$ of labelled datasets as examples of correctly `clustered' datasets, and learn $r$ such that $\mathbb{E}[r(\mathbf{X},\mathbf{y})]$ is high. In order to make $r$ transferable between datasets, we embed each dataset with its corresponding clustering ($\mathbf X_l,\mathbf y_l)$ into a vector $\mathbf z_l \in \mathbb R^{\bf e}$. More formally, the embedding function is of the form: \begin{equation} \begin{split} g: \, \, & \mathbb R^{\bf n\times d}\times \mathbf Y \rightarrow \mathbb R^\mathbf e \\ & (\bf X,\bf y)\mapsto \bf z \end{split} \end{equation} Therefore, the metric $r$ is actually the composition of two functions $g$ and $c_\theta$ (the scoring function from $\mathbb R^{\bf e}$ to $\mathbb R$). Our training procedure is structured around 3 blocs A, B and C detailed in next sections and depicted in figure \ref{framework} and is summarised in the following main steps: \vspace{15mm} \begin{enumerate}[{Bloc A}. step 1] \item Select a labelled dataset $(\bf{X},\bf{y}^*) \sim\mathcal{D}$ \vspace{-2mm} \item Given a metric function $r$ (output from bloc B step 2, or initialised randomly), we perform a clustering of dataset $\bf X$: $\mathbf{\hat y} =\argmax_\mathbf{y} r(\mathbf{X},\mathbf{y})$ \end{enumerate} \vspace{-2mm} \begin{enumerate}[{Bloc B}. step 1] \item $\bf y^*$ and $\bf{\hat{y}}$ are represented as graphs where each clique represents a cluster.\vspace{-2mm} \item Graph convolutional autoencoders perform feature extraction from $\bf \hat{y}$ and $\bf y^*$ and output embeddings $\bf \hat{z}$ and $\bf z^*$ \end{enumerate} \vspace{-2mm} \begin{enumerate}[{Bloc C}. step 1] \item The metric $r$ is modelled by a WGAN critic that outputs evaluations of the clusterings: $r(\bf X,\bf y^*) = c_\theta(\bf z^*)$ and $r(\bf X,\bf \hat{y}) = c_\theta(\bf \hat z)$\vspace{-2mm} \item Train the model using the error between $r(\bf X,\bf y^*)$ and $r(\bf X,\bf \hat{y})$. \end{enumerate} \vspace{-3mm} \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{Figure.png} \caption{Our framework's 3 components: the clustering mechanism (A), the GAE (B) and the WGAN (C). (A) takes an unlabelled dataset $\mathbf {X}$ as input and outputs a clustering $\mathbf{\hat{y}}$ that maximises a metric $r$. $\mathbf{\hat{y}}$ is then turned into a graph $\mathcal{G}(\mathbf{X},\mathbf{\hat{y}})$ then into an embedding vector $\mathbf{\hat{z}}$ using (B). Same goes for the correctly labelled dataset, which is embedded as $\mathbf{\hat{z}^*}$. Then, (C), which is the metric itself, evaluates $\mathbf{\hat{z}}$ and $\mathbf{z}^*$ using $c_\theta$ and is trained to produce a new metric $r$ which is then used for (A) in the next iteration.} \label{framework} \end{figure} \begin{comment} \begin{table}[!ht] \centering \caption{Summary of notations} \begin{tabular}{lp{6.3cm}} \hline $\mathbf X$ & A dataset of $n$ points, $x_i \in \mathbb{R}^{\bf d}$\\ $\mathbf y^*$ & True clustering (cluster labels) of $\mathbf X$, $\in \{1,\dots,k\}^n$ \\ $\mathbf y$ & A possible clustering of $\mathbf X$ \\ $\hat{\bf y}$ & Clustering retained after optimisation for a fixed function $r$\\ $\mathcal{M}_{n,m}$ & Set of matrices with $n$ lines and $m$ columns\\ $\mathcal{G}(\mathbf X,\mathbf y)$ & Graph representing a clustered version of $\mathbf X$\\ $A$ & An adjacency matrix \\ $X$ & Feature matrix of $\mathbf X$. $X \in \mathcal{M}_{n,d} $\\ $\bf z^*,\bf\hat{z}$ & Embedding, $\in \mathbb{R}^{\bf e}$, of $(\bf{X},\bf{y}^*)$, and $(\bf{X},\bf{y})$, respectively \\ $r$ & Metric $\mathbb{R}^{\bf n\times d}\times \mathbb N^{\mathbf n} \mapsto \mathbb{R}$, scoring of the clustering \\ CEM & Cross-entropy method\\ $\mathcal{S}$ & Set of all intermediate clustering solutions found through CEM \\[1ex] \hline \end{tabular} \end{table} \end{comment} \vspace{-8mm} \subsection{Clustering mechanism}\label{clustering} We seek the most suitable optimisation algorithm for clustering given $r$. Considering a neural network that performs the clustering, we need to find its weights $w$ such that the metric is maximised (see equation \eqref{w}). The type of algorithm to use depends on the nature of the metric $r$ to optimise on. \begin{equation}\label{w} \text{CEM}_r(\mathbf X)\xrightarrow{\text{finds}} w^* = \argmax_w r(\mathbf{X},\mathbf{y}^w) \end{equation} Where $\mathbf y^w$ is a clustering obtained with the weights $w$. The metric is assumed to hold certain properties, discussed in \ref{critic}: \begin{itemize} \item \textbf{Unique Maximum:} A unique optimal clustering. $r$ has a unique maximum. \vspace{-6mm} \item \textbf{Continuity\footnote{As a reminder, Let $T$ and $U$ be two topological spaces. A function $f:T\mapsto U$ is continuous in the open set definition if for every $t\in T$ and every open set $u$ containing $f(t)$, there exists a neighbourhood $v$ of $t$ such that $f(v)\subset u$.}}: Any two clusterings $\mathbf y$ and $\mathbf y'$ should be similar if $r(\mathbf y)$ and $r(\mathbf y')$ are close in $\mathbb{R}$ space. Hence, $r$ has to satisfy a continuity constraint. \end{itemize} There is no guarantee that the best metric for the clustering task is differentiable. Given the above assumptions, conditions are favourable for evolutionary strategies (ES) to iteratively converge towards the optimal solution. Indeed, if $r$ is continuous and the series $((\mathbf{X},\mathbf{y}_1),\dots,(\mathbf{X},\mathbf{y}_p))$ converges towards $(\mathbf{X},\mathbf{y}^*)$ then $(r(\mathbf{X},\mathbf{y}_1),\dots,r(\mathbf{X},\mathbf{y}_p))$ converges towards $r(\mathbf{X},\mathbf{y}^*)$. We choose the Cross-Entropy Method (CEM) \cite{CEM}, a popular ES algorithm for its simplicity, to optimise the clustering neural network weights by solving Eq.\eqref{w} (algorithm \ref{Alg:CEM_algo}). \begin{algorithm}[tb] \caption{CEM Algorithm} \label{Alg:CEM_algo} \begin{algorithmic} \STATE \textbf{Input:} Dataset $X\in\mathbb R^{\bf{n} \times \bf{d}}$; score function $r$; $\mu \in \mathbb{R}^{\bf{d}}$ and $\sigma \in \mathbb{R}^{\bf{d}}$; elite percentage to retain $p$; $n$ samples of $w_i \sim \mathcal{N}(\mu,\text{diag}(\sigma))$; $T$ number of iterations \FOR{$\textnormal{iteration}=1$ {\bfseries to} $T$} \STATE Produce $n$ samples of neural network weights $w_i \sim \mathcal{N(\mu,\text{diag}(\sigma))}$ \STATE Produce clusterings $y_i$ of $X$ using each $w_i$ \STATE Evaluate $r_i = r(X,y_i)$ \STATE Constitute the elite set of $p\%$ best $w_i$ \STATE Fit a Gaussian distribution with diagonal covariance to the elite set and get a new $\mu_t$ and $\sigma_t$ \ENDFOR \STATE {\bfseries return:} $\mu$, $w^*$ \end{algorithmic} \end{algorithm} \subsection{Graph based dataset embedding} To capture global properties and be transferable across different datasets, we argue that it is necessary to input all the points of a dataset at once. Hence, instead of pairwise similarities between random pairs of points, we propose to get a representation of the relation between a bunch of neighbouring points. Thus, we represent each dataset by a graph structure $\mathcal{G}(\mathbf{X},\mathbf y)$ where each node corresponds to a point in $\mathbf{X}$ and where cliques represent clusters as shown in figure \ref{framework}. This representation takes the form of a feature matrix $X$ and an adjacency matrix $A$. Using $X$, and $A$, we embed the whole dataset into a vector $\bf z \in \mathbb{R}^{\mathbf e}$. To do so, we use graph autoencoders (GAE). Our implementation is based on \cite{GAE}. \begin{comment} Specifically, we have $\{X,A\} \mapsto \bf z$, under the following mechanism: \begin{equation} \begin{aligned} GCN(X,A)= Relu(\Tilde{A}XW_0) = \Bar{X}\\ \end{aligned} \end{equation} With $\Tilde{A}$ the symmetrically normalized adjacency matrix and $(W_0,W_1)$ the GCN weight matrices. \begin{equation} \begin{aligned} z=\Tilde{A}\Bar{X}W_1 \end{aligned} \end{equation} Finally, the decoder outputs a new adjacency matrix using the sigmoid function $\sigma$: \begin{equation} \begin{aligned} \hat{A}=\sigma(zz^T) \end{aligned} \end{equation} \end{comment} We obtain $z \in \mathcal{M}_{n,m}$ which is dependent of the shape of the dataset (where $m$ is a user specified hyper-parameter). In order to make it independent from the number of points in $\mathcal{X}$, we turn the matrix $z$ into a square symmetrical one $z \xleftarrow{} z^Tz \in \mathcal{M}_{m,m}$. The final embedding corresponds to a flattened version of the principal triangular bloc of $z^Tz$, which shape is $\mathbf e=(\frac{m+1}{2},1)$. However, the scale of the output still depends on the number of points in the dataset. This could cause an issue when transferring to datasets with a vastly different number of data points. It should therefore require some regularisation; in order to simplify, we decided to use datasets with approximately the same number of points. \subsection{A critic as a metric}\label{critic} With embedded vectors of the same shape, we compare the clusterings proposed $\mathbf{\hat{z}}$ and the ground truth ones $\bf z$ using the metric $r$. $r$ is a function mapping an embedding vector $\mathbf z\in \mathbb R^{\mathbf e}$ to $\mathbb{R}$, we therefore parameterise it as: \begin{equation}\label{large_state_reward} r_\alpha(\mathbf X,\mathbf y)=r_\alpha(\mathbf z)=\alpha_1\phi_1(\mathbf z)+\alpha_2\phi_2(\mathbf z)+...+\alpha_h\phi_h(\mathbf z) \end{equation} Where $\phi_j(\mathbf z)\in \mathbb R$. As per \cite{Russell}, learning a viable metric is possible provided both the following constraints: (1) maximising the difference between the quality of the optimal decision and the quality of the second best; (2) minimising the amplitude of the metric function as using small values encourages the metric function to be simpler, similar to regularisation in supervised learning. When maximising the metric difference between the two clusterings that have the highest scores, we get a similarity score as in traditional metric learning problems. The problem is formulated by equation \eqref{general_optimization} where $\mathcal{S}$ is a set of solutions (i.e., clustering proposals) found using $r_\alpha$ and $\mathbf{y}^*$ is the true clustering, $\mathbf{y}^{\text{max}}$ is the best solution found in $\mathcal{S}$: $\mathbf{y}^{\text{max}} = \argmax_{\mathbf{y}\in\mathcal S}r_\alpha(\mathbf X, \mathbf{y})$. \begin{equation}\label{general_optimization} \begin{aligned} \min_\alpha r_\alpha(\mathbf X, \mathbf y^*) & -\max_\alpha \min_{\mathbf y'\in \mathcal S\setminus \mathbf y^{\text{max}}} r_\alpha(\mathbf X,\mathbf y^{\text{max}})-r_\alpha(\mathbf X,\mathbf y')\\ & \quad \text{s.t} \quad \mathbf{y}^*=\argmax_{\mathbf{y}\in \mathbf{Y}}r(\mathbf{y}) \end{aligned} \end{equation} \begin{algorithm}[h!] \footnotesize \caption{Critic2Metric (C2M)}\label{Complete_algo} \SetAlgoLined \KwInput{$b$: batch size, $epoch$: number of epochs; $p$: percentage of elite weights to keep; $iteration$: number of CEM iterations; $population$: number of weights to generate; $\mu \in \mathbb{R}^d$: CEM mean; $\sigma \in \mathbb{R}^d$: CEM standard deviation, $\theta$: critic's weights} \For{$n=1$ {\bfseries to} epoch}{ \For{$k=1$ {\bfseries to} b}{ Sample $(\mathbf X_{k},\mathbf y_k^*) \sim \mathcal D $ a correctly labelled dataset\\ Generate ground truth embeddings $\mathbf z_{(\mathbf X_{k},\mathbf y_k^*)}=GAE(\mathcal{G}(\mathbf X_k,\mathbf y_k^*))$ \\ Initialise clustering neural network weights $\{w_j\}_{j=1}^{population}$ \\ \For{$i=1$ {\bfseries to} iteration}{ \For{$j=1$ {\bfseries to} population} {Generate clusterings $\mathbf{\hat{y}}_k^{w_j}$ \\ Convert $\mathbf{\hat{y}}_k^{w_j}$ into a graph\\ $\mathbf z_{(\mathbf X_{k},\mathbf {\hat{y}}_k^{w_j})}= GAE(\mathcal{G}(\mathbf X_k,\hat{\mathbf y}_k^{w_j}))$ \\ Evaluate: $r(\mathbf X_k,\hat{\mathbf y}_k^{w_j}) = c_\theta(\mathbf z_{(\mathbf X_{k},\mathbf {\hat{y}}_k^{w_j})})$} Keep proportion $p$ of best weights $w_p$ \\ $w^* \xleftarrow{} \text{CEM}(w_p, \mu, \sigma)$} Generate clustering $\mathbf{y}_k^{w^*}$\\ $\mathbf z_{(\mathbf X_{k},\mathbf {\hat{y}}_k^{w^*})} = GAE(\mathcal{G}(\mathbf X_k,\hat{\mathbf y}_k^{w^*}))$ \\ Train critic as in \cite{WGAN} using $\mathbf z_{(\mathbf X_{k},\mathbf {\hat{y}}_k^{w^*})}$ and $\mathbf z_{(\mathbf X_{k},\mathbf y_k^*)}$ \; }} \end{algorithm} \vspace{-5mm} To solve equation \eqref{general_optimization}, we use a GAN approach where the clustering mechanism (i.e., CEM) plays the role of the generator while a critic (i.e., metric learning model) plays the role of the discriminator. In a classic GAN, the discriminator only has to discriminate between real and false samples, making it use a cross entropy loss. With this kind of loss, and in our case, the discriminator quickly becomes too strong. Indeed, the score output by the discriminator becomes quickly polarised around 0 and 1. \vspace{-1mm} For this reason, we represent $r$ as the critic of a WGAN \cite{WGAN}. This critic scores the realness or fakeness of a given sample while respecting a smoothing constraint. The critic measures the distance between data distribution of the training dataset and the distribution observed in the generated samples. Since WGAN assumes that the optimal clustering provided is unique, the metric solution found by the critic satisfies equation \eqref{general_optimization} constraints. $r$ reaching a unique maximum while being continuous, the assumptions made in section \ref{clustering} are correctly addressed. To train the WGAN, we use the loss $\mathcal{L}$ in equation \eqref{WGAN_loss} where $\bf \hat{z}$ is the embedding vector of a proposed clustering and $\bf z$ is the embedding vector of the desired clustering. Our framework is detailed in algorithm \ref{Complete_algo}. \vspace{-1mm} \begin{equation}\label{WGAN_loss} \mathcal{L}(\mathbf z^*,\mathbf {\hat{z}})=\max_{\theta}\mathbb{E}_{\mathbf z^*\sim p}[f_\theta(\mathbf z^*)] - \mathbb{E}_{\mathbf {\hat{z}}\sim p(\mathbf {\hat{z})}}[f_\theta(\mathbf {\hat{z}})] \end{equation} \section{Experiments}\label{sec:experiments} \vspace*{-\baselineskip} \begin{table*}[h!] \centering \begin{adjustbox}{width=\columnwidth, center} \begin{tabular}{||c || c || c || c || c || c || c || c || c || c ||} \hline \multicolumn{1}{||c|}{\textbf{Dataset family}} & \multicolumn{4}{||c|}{Synthetic data} & \multicolumn{3}{||c|}{MNIST} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}Street view\\ house numbers\end{tabular}} & \multicolumn{1}{c||}{Omniglot} \\ \hline \multicolumn{1}{||c|}{\textbf{Dataset}} & \multicolumn{1}{||c|}{Blob} & \multicolumn{1}{||c|}{Moon} & \multicolumn{1}{||c|}{Circles} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}Aniso-\\ tropic\end{tabular} } & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}MNIST-digits\\ \cite{MNIST_digits}\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}letters MNIST\\ \cite{MNIST_letters}\end{tabular} } & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}fashion MNIST\\ \cite{fashion_MNIST}\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}SVHN\\ \cite{SVHN}\end{tabular}} & \multicolumn{1}{c||}{\begin{tabular}{@{}c@{}}Omniglot\\ \cite{omniglot}\end{tabular} } \\ \hline \multicolumn{1}{||c|}{\textbf{Snapshot}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{Blobs.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{Aniso.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{Circles.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{Moons.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{MNIST_example.jpg}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{MNIST_letter.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{MNIST_fashion.PNG}}} & \multicolumn{1}{||c|}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{SVHN.png}}} & \multicolumn{1}{c||}{\raisebox{-\totalheight}{\includegraphics[width=20mm, height=20mm]{Omniglot.PNG}}} \\ \hline \multicolumn{1}{||c|}{\textbf{\begin{tabular}{@{}c@{}}Feature\\ dimension\end{tabular} }} & \multicolumn{1}{||c|}{2} & \multicolumn{1}{||c|}{2} & \multicolumn{1}{||c|}{2} & \multicolumn{1}{||c|}{2} & \multicolumn{1}{||c|}{$28\times 28$} & \multicolumn{1}{||c|}{$28\times 28$} & \multicolumn{1}{||c|}{$28\times 28$} & \multicolumn{1}{||c|}{$32 \times 32$} & \multicolumn{1}{c||}{$105 \times 105$} \\ \hline \multicolumn{1}{||c|}{\textbf{\begin{tabular}{@{}c@{}}Maximum number\\ of clusters\end{tabular}}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}9\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}9\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}9\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}9\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{10} & \multicolumn{1}{||c|}{26} & \multicolumn{1}{||c|}{10} & \multicolumn{1}{||c|}{10} & \multicolumn{1}{c||}{47} \\ \hline \multicolumn{1}{||c|}{\textbf{Size}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}200\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}200\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}200\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{\begin{tabular}{@{}c@{}}200\\ (custom)\end{tabular}} & \multicolumn{1}{||c|}{60000} & \multicolumn{1}{||c|}{145600} & \multicolumn{1}{||c|}{60000} & \multicolumn{1}{||c|}{73257} & \multicolumn{1}{c||}{32460} \\ \hline \end{tabular} \end{adjustbox} \caption{Datasets description} \vspace{-8mm} \label{tab:dataset} \end{table*} For empirical evaluation, we parameterise our framework as follows: The critic (block C in Fig~\ref{framework}) is a 5 layer network of sizes 256, 256, 512, 512, and 1 (output) neurons. All activation functions are LeakyRelu ($\alpha=0.2$) except last layer (no activation). RMSprop optimizer with $0.01$ initial learning rate and a decay rate of $0.95$. The CEM-trained neural network (bloc A in Fig~\ref{framework}) has 1 hidden layer of size 16 with Relu activation, and a final layer of size $k=50$ (the maximum number of clusters). The GAE (bloc B in Fig~\ref{framework}) has 2 hidden layers; sized 32 and 16 for synthetic datasets, and 100 and 50 for real datasets. We choose datasets based on 3 main criteria: having a similar compatible format; datasets should be large enough to allow diversity in subsampling configurations to guarantee against overfitting; datasets should be similar to the ones used in our identified baseline literature. All used datasets are found in table \ref{tab:dataset}. For training, we construct $n$ sample datasets and their ground truth clustering, each containing 200 points drawn randomly from a set of 1500 points belonging to the training dataset. Each one of these datasets, along with their clustering is an input to our model. To test the learned metric, we construct 50 new sample datasets from datasets that are different from the training one (e.g., if we train the model on MNIST numbers, we will use datasets from MNIST letters or fashion to test the metric). The test sample datasets contain 200 points each for synthetic datasets and 1000 points each otherwise. The accuracies are then averaged accross the 50 test sample datasets. To test the ability of the model to learn using only a few samples, we train it using 5 (few shots) and 20 datasets (standard), each containing a random number of clusters. For few shots trainings, we train the critic for 1 epoch and 10 epochs for standard trainings. To evaluate the clustering, we use Normalised-Mutual Information (NMI) \cite{NMI} and clustering accuracy (ACC) \cite{ACC}. NMI provides a normalised measure that is invariant to label permutations while ACC measures the one-to-one matching of labels. For clustering, we only need that the samples belonging to the same cluster are attributed the same label, independently from the label itself. However, since we want to analyse the behaviour of the metric learned through our framework, we are interested in seeing whether it is permutation invariant or not. Hence, we need the two measures. \subsection{Results on 2D synthetic datasets} Analysis on synthetic datasets (see table \ref{tab:dataset}) proves that our model behaves as expected. We do not compare our results to any baseline since existing unsupervised methods are well studied on them. We train our model using exclusively samples from blobs datasets. We then test the learned metric on the 4 different types of synthetic datasets (blobs, anisotropic, moons and circles). Results are displayed in table \ref{sci-kit_results}. We observe that the model obtains the best score on blobs since it is trained using this dataset. We can also notice that our model achieves high scores for the other types of datasets not included in training. \begin{table} [h!] \centering \begin{tabular}{LCCCC} \toprule \multicolumn{1}{l}{Types of datasets} & \multicolumn{2}{c}{Standard training} & \multicolumn{2}{c}{Few shots training} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} \\ \midrule \text{Blobs} & 98.4\% & 0.980 & 97.3\% & 0.965\\ \text{Anisotropic} & 97.9\% & 0.967 & 97.2\% & 0.945\\ \text{Circles} & 91.7\% & 0.902 & 92.7\% & 0.900\\ \text{Moons} & 92.1\% & 0.929 & 92.8\% & 0.938\\ \bottomrule \end{tabular} \caption{Average ACC and NMI on synthetic test datasets.} \vspace{-5mm} \label{sci-kit_results} \end{table} Our model succeeds in clustering datasets presenting non linear boundaries like circles while blobs datasets used in training are all linearly separable. Hence, the model learns intrinsic properties of training dataset that are not portrayed in the initial dataset structure, and thus that the metric appears to be transferable. \textbf{Critic's ablation study}. To test if the critic behaves as expected, i.e., grades the clustering proposals proportionally to their quality, we test it on wrongly labelled datasets to see if the score decreases with the number of mislabelled points. We consider 50 datasets from each type of synthetic datasets, create 50 different copies and mislabel a random number of points in each copy. A typical result is displayed in figure \ref{ablation} and shows that the critic effectively outputs an ordering metric as the score increases when the number of mislabelled points decreases, reaching its maximum when there is no mislabelled point. This shows that the metric satisfies the constraints stated in equation \ref{general_optimization}. \vspace{-1mm} \begin{figure}[h!] \centering \includegraphics[width=0.6\columnwidth]{capture.png} \caption{Metric values (i.e., scores given by the critic) for several clusterings of a dataset. Plots are from an anisotropic dataset (left) and a moons dataset (right). In a 2 cluster case (right), the formula used to compute mislabelled points has been made sensitive to label permutation to verify if permuted labels can fool the critic. The critic assigns a high score either when all the labels match the given ground truth or when all the labels are permuted (which again does not affect the correctness of the clustering)} \vspace{-2mm} \label{ablation} \end{figure} \vspace{-6mm} An interesting behaviour is shown in figure \ref{ablation}. Recall that since we are in the context of a clustering problem, we only need for the samples belonging to the same cluster to get the same label, independently from the cluster label itself. Thus, the formula used to compute mislabelled points has been made sensitive to label permutation to verify if permuted labels can fool the critic. For instance, in a 2 clusters case, one can switch the labels of all points in each cluster and still get the maximum score. Switching all labels makes all the points wrongly labelled compared to the given ground truth but nonetheless the clustering itself remains true. This explains the rounded shape in figure \ref{ablation} where the used datasets in the right panel only consisted of 2 clusters. The critic assigns a high score either when all the labels match the given ground truth or when all the labels are permuted (which does not affect the correctness of the clustering). \vspace{-3mm} \subsection{Results on MNIST datasets}\label{MNIST_section} \vspace{-1mm} MNIST datasets give similar results both in terms of ACC and NMI on all test datasets regardless of the used training dataset (see table \ref{MNIST_result}). Hence, the model effectively capture implicit features that are dataset independent. While standard training shows better results, the few shots training has close performance. \begin{table}[h!] \centering \begin{tabular}{LCCCCCC} \toprule \multicolumn{1}{l}{Training Dataset} & \multicolumn{6}{c}{Testing Dataset} \\ \cmidrule(lr){2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Numbers} & \multicolumn{2}{c}{Letters} & \multicolumn{2}{c}{Fashion} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \multicolumn{1}{c}{} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} \\ \midrule \text{Numbers (standard)} & 72.3\% & 0.733 & 81.3\% & 0.861 & 65.2\% & 0.792 \\ \text{Numbers (few shots)} & 68.5\% & 0.801 & 79.0\% & 0.821 & 61.8\% & 0.672 \\ \text{Letters (standard)} & 75.9\% & 0.772 & 83.7\% & 0.854 & 67.5\% & 0.800 \\ \text{Letters (few shots)} & 69.8\% & 0.812 & 78.7\% & 0.806 & 60.9\% & 0.641 \\ \text{Fashion (standard)} & 70.6\% & 0.706 & 83.4\% & 0.858 & 72.5\% & 0.762 \\ \text{Fashion (few shots)} & 70.1\% & 0.690 & 82.1\% & 0.834 & 70.7\% & 0.697 \\ \bottomrule \end{tabular} \caption{Mean clustering performance on MNIST dataset.} \label{MNIST_result} \end{table} \vspace{-12mm} \begin{table}[h!] \centering \begin{tabular}{LCCCCCC} \toprule \multicolumn{1}{l}{Training Dataset} & \multicolumn{6}{c}{Testing Dataset} \\ \cmidrule(lr){2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Numbers} & \multicolumn{2}{c}{Letters} & \multicolumn{2}{c}{Fashion} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \multicolumn{1}{c}{} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} \\ \midrule \text{Numbers (standard)} & 78.3\% & 92.5\% & 86.0\% & 97.5\% & 69.2\% & 87.2\%\\ \text{Numbers (few shots)} & 75.8\% & 82.1\% & 83.3\% & 92.0\% & 65.1\% & 83.9\% \\ \text{Letters (standard)} & 77.4\% & 89.2\% & 88.8\% & 96.4\% & 70.2\% & 86.7\%\\ \text{Letters (few shots)} & 73.1\% & 80.6\% & 85.1\% & 91.5\% & 61.0\% & 76.3\% \\ \text{Fashion (standard} & 70.1\% & 83.1\% & 85.0\% & 98.6\% & 76.9\% & 94.7\%\\ \text{Fashion (few shots)} & 67.9\% & 77.4\% & 83.5\% & 95.3\% & 70.2\% & 88.0\%\\ \bottomrule \end{tabular} \caption{Critic based performance assessment: Best corresponds to the percentage of times the critic gives the best score to the desired solution. Top 3 is when this solution is among the 3 highest scores.} \label{MNIST_theoretic} \vspace{-4mm} \end{table} \begin{comment} \vspace*{-\baselineskip} \begin{table}[h!] \begin{adjustwidth}{-3cm}{-3cm} \begin{subtable}[t]{0.5\linewidth} \begin{tabular*}{\columnwidth}{LCCCCCC} \toprule \multicolumn{1}{l}{Training Dataset} & \multicolumn{6}{c}{Testing Dataset} \\ \cmidrule(lr){2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Numbers} & \multicolumn{2}{c}{Letters} & \multicolumn{2}{c}{Fashion} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \multicolumn{1}{c}{} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} & \multicolumn{1}{c}{ACC} & \multicolumn{1}{c}{NMI} \\ \midrule \text{Numbers (standard)} & 72.3\% & 0.733 & 81.3\% & 0.861 & 65.2\% & 0.792 \\ \text{Numbers (few shots)} & 68.5\% & 0.801 & 79.0\% & 0.821 & 61.8\% & 0.672 \\ \text{Letters (standard)} & 75.9\% & 0.772 & 83.7\% & 0.854 & 67.5\% & 0.800 \\ \text{Letters (few shots)} & 69.8\% & 0.812 & 78.7\% & 0.806 & 60.9\% & 0.641 \\ \text{Fashion (standard)} & 70.6\% & 0.706 & 83.4\% & 0.858 & 72.5\% & 0.762 \\ \text{Fashion (few shots)} & 70.1\% & 0.690 & 82.1\% & 0.834 & 70.7\% & 0.697 \\ \bottomrule \end{tabular*} \caption{Mean clustering performance on MNIST dataset.} \label{MNIST_result} \end{subtable} \begin{subtable}[t]{0.5\linewidth} \begin{tabular}{LCCCCCC} \toprule \multicolumn{1}{l}{Training Dataset} & \multicolumn{6}{c}{Testing Dataset} \\ \cmidrule(lr){2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Numbers} & \multicolumn{2}{c}{Letters} & \multicolumn{2}{c}{Fashion} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \multicolumn{1}{c}{} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} & \multicolumn{1}{c}{Best} & \multicolumn{1}{c}{Top 3} \\ \midrule \text{Numbers (standard)} & 78.3\% & 92.5\% & 86.0\% & 97.5\% & 69.2\% & 87.2\%\\ \text{Numbers (few shots)} & 75.8\% & 82.1\% & 83.3\% & 92.0\% & 65.1\% & 83.9\% \\ \text{Letters (standard)} & 77.4\% & 89.2\% & 88.8\% & 96.4\% & 70.2\% & 86.7\%\\ \text{Letters (few shots)} & 73.1\% & 80.6\% & 85.1\% & 91.5\% & 61.0\% & 76.3\% \\ \text{Fashion (standard} & 70.1\% & 83.1\% & 85.0\% & 98.6\% & 76.9\% & 94.7\%\\ \text{Fashion (few shots)} & 67.9\% & 77.4\% & 83.5\% & 95.3\% & 70.2\% & 88.0\%\\ \bottomrule \end{tabular} \caption{Critic based performance assessment: Best corresponds to the percentage of times the critic gives the best score to the desired solution. Top 3 is when this solution is among the 3 highest scores.} \label{MNIST_theoretic} \end{subtable} \caption{Results on MNIST datasets} \end{adjustwidth} \vspace{-4mm} \end{table} \end{comment} \vspace{-2mm} Table \ref{MNIST_theoretic} shows the percentage of times the critic attributes the best score to the desired solution. It shows that ES algorithm choice has a significant impact on the overall performance. Even with a metric that attributes the best score to the desired clustering, the CEM may be stuck in a local optimum and fails to reconstruct back the desired clustering. Hence, a better optimisation can enhance the performance shown in table \ref{MNIST_result} closer to the one presented in table \ref{MNIST_theoretic}. \subsection{Comparative study} We compare our approach with baseline methods from the literature (table \ref{comparative_results}). For some methods, we followed the procedure in \cite{transfer_clustering} and used their backbone neural network as a pairwise similarity metric. Table \ref{Results_SVHN} reports results when training on SVHN and testing on MNIST numbers. We obtain close ACC values to CCN and ATDA \cite{ATDA}. These methods uses Omniglot as an auxiliary dataset to learn a pairwise similarity function, which is not required for our model. Our model only uses a small fraction of SVHN, has shallow networks and does not require any adaptation to its loss function to achieve comparable results. Finally, other cited methods require the number of clusters as an a priori indication. We achieve comparable results without needing this information. When the loss adaptation through Omniglot is discarded (denoted source-only in table \ref{Results_SVHN}), or if the number of clusters is not given, their accuracy falls and our model surpasses them by a margin. \vspace{-6mm} \begin{table}[h!] \begin{subtable}[c]{0.5\textwidth} \centering \begin{tabular}{LCC} \toprule \text{Method} & \multicolumn{2}{c}{\text{ACC}} \\ \midrule & \text{Loss Adaptation} & \text{Source Only}\\ \midrule \text{DANN \cite{DANN}} & 73.9\% & 54.9\%\\ \text{LTR \cite{LTR}} & 78.8\% & 54.9\%\\ \text{ATDA \cite{ATDA}} & 86.2\% & 70.1\%\\ \text{CCN \cite{transfer_clustering}} & 89.1\% & 52\%\\ \text{Ours (standard)} & - & 84.3\% \\ \text{Ours (few shots)} & - & 81.4\% \\ \bottomrule \end{tabular} \subcaption{Unsupervised cross-task transfer from SVHN to MNIST digits.} \vspace{-2mm} \label{Results_SVHN} \end{subtable} \begin{subtable}[c]{0.5\textwidth} \centering \begin{tabular}{LCC} \toprule \text{Method} & \text{ACC} & \text{NMI} \\ \midrule \text{k-means} & 18.9\% & 0.464 \\ \text{CSP \cite{CSP}} & 65.4\% & 0.812 \\ \text{MPCK-means \cite{mpckmeans}} & 53.9\% & 0.816 \\ \text{CCN \cite{transfer_clustering}} & 78.18\% & 0.874 \\ \text{DTC \cite{learn}} & 87.0\% & 0.945 \\ \text{Autonovel \cite{autonovel}} & 85.4\% & - \\ \text{Ours (standard)} & 83.4\% & 0.891 \\ \bottomrule \end{tabular} \subcaption{Unsupervised cross-task transfer from $\text{Omniglot}_\text{train}$ to $\text{Omniglot}_\text{test}$ ($k=100$ for all).} \vspace{-2mm} \label{Omniglot_results} \end{subtable} \caption{Comparative clustering performance} \vspace{-8mm} \label{comparative_results} \end{table} Table \ref{Omniglot_results} reports results when training on $\text{Omniglot}_\text{train}$ and testing on $\text{Omniglot}_\text{test}$. Values are averaged across $20$ alphabets which have $20$ to $47$ letters. We set the maximum number of clusters $k=100$. When the number of clusters is unknown, we get an ACC score relatively close to DTC and Autonovel. Compared to these two approaches, our method bears several significant advantages: \begin{itemize} \vspace{-2mm} \item \textbf{Deep Networks}: DTC and Autonovel used Resnets as a backbone which are very deep networks while we only used shallow networks (2 layers maximum) \vspace{-6mm} \item \textbf{Pairwise similarity}: in Autonovel the authors used a pairwise similarity statistic between datasets instances which we aimed to avoid due to its significant computational bottleneck. Moreover, this metric is recalculated after each training epoch, which adds more complexity. \vspace{-2mm} \item \textbf{Vision tasks:} While DTC can only handle vision tasks, we present a more general framework which includes vision but also tabular datasets. \vspace{-2mm} \item \textbf{Number of classes}: DTC and Autonovel used the labelled dataset as a probe dataset, and estimates the number of classes iteratively, and when the labelled clusters are correctly recovered, they used the ACC metric to keep the best clustering. This approach is effective, but requires access to the labelled dataset at inference time to estimate the number of classes. This is a shortcoming (memory or privacy limitations). Our approach does not require the labelled dataset once the metric is learned. Our metric automatically estimates the number of clusters required to any new unlabelled dataset. \end{itemize} \vspace{-2mm} \section{Conclusion}\label{sec:discussion} We presented a framework for cross domain/task clustering by learning a transferable metric. This framework consisted of ES methods, and GAE alongside a critic. Our model extracts dataset-independent features from labelled datasets that characterise a given clustering, performs the clustering and grades its quality. We showed successful results using only small datasets and relatively shallow architectures. Moreover, there is more room for improvement. Indeed, since our framework is composed of 3 different blocs (CEM, GAE, critic), overall efficiency can be enhanced by independently improving each bloc (i.e replacing CEM). In future work, we will study the criteria that determine why some auxiliary datasets are more resourceful than others given a target dataset. In our case, this means to study for instance why using the MNIST letters dataset as training allowed a better performance on Fashion MNIST than when using MNIST numbers. This would allow to deliver a minimum performance guarantee at inference time by creating a transferability measure between datasets. \textbf{Acknowledgements:} We gratefully acknowledge Orianne Debeaupuis for making the figure. We also acknowledge computing support from NVIDIA. This work was supported by funds from the French Program "Investissements d'Avenir". \vspace{-4mm} \bibliographystyle{splncs04}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In recent years, the strongly interacting fermion physics becomes the focus of theoretical and experimental attention\cite{Giorgini2008}. This is much attributed to the rapid progress of the atomic Fermi gas experiments. By tuning the external magnetic field, one can control the $S$-wave scattering length $a$ or interaction strength between two atomic fermions. The crossover from Bardeen-Cooper-Schrieffer (BCS) to Bose-Einstein condensation (BEC) can be realized by the so-called Feshbach resonance\cite{Science2004}. At the resonance point, the scattering length can be singular with the existence of a zero-energy bound sate. Although the scattering length is singular, the scattering cross-section is saturated as $\sigma \sim 4\pi/k^2$ (with $k$ being the relative momentum between two atomic fermions) due to the unitary property limit. The divergent scattering fermion thermodynamics is referred to as the unitary Fermi gas thermodynamics in the literature\cite{Ho2004}. Dealing with the strongly interacting matter is related with a variety of realistic many-body topics. Usually, the thermodynamics of dilute fermion system is determined by the two-body scattering length $a$, particle number density $n$ and temperature $T$. In the unitary limit with $a=\pm \infty$, the dynamical scattering limit should drop out in the thermodynamic quantities. At unitarity, the dynamical detail should not affect the thermodynamics; i.e., the unitary fermion system can manifest the universal properties\cite{Ho2004}. Due to lack of any small expansion parameter, the unitary Fermi gas provides an intractable problem in statistical physics. The fundamental issue is on the zero-temperature ground state energy. Based on the dimensional analysis, the ground state energy should be proportional to that of the ideal Fermi gas with a universal constant $\xi =1+\be$, which excites many theoretical and experimental efforts. The world average value of $\xi$ is $0.42-0.46$\cite{constant1,constant2,constant3,constant4,constant5}. Recently, we have attempted a quasi-linear approximation method to explore the strongly interacting limit fermion thermodynamics\cite{Chenjs}. The obtained ground state energy or the universal constant $\xi=\049$ is reasonably consistent with some theoretical or experimental investigations. Generally, the finite-temperature thermodynamics is as intriguing as the zero-temperature ground state energy. There have been several Monte Carlo finite-temperature calculations of a unitary Fermi gas\cite{MC1,MC2}. In the strongly correlation unitary fermions, the nonlinear quantum fluctuations/correlations compete with dynamical high order effects. In the weak degenerate Boltzmann regime, the nonlinear correlations make the second order virial coefficient $a_2$ vanish. To a great extent, the vanishing leading order quantum correction reflects the \textit{intermediate} crossover characteristics of a unitary Fermi gas\cite{Chenjs}. Can the intermediate characteristics be described in another way? In \cite{Haldane1991,Wu1994}, the generalized exclusion statistics was developed to describe the anyon behavior in the low-dimensional strongly correlation quantum system. Physically, the behavior of a unitary Fermi gas is between Bose gas and Fermi gas\cite{MC1}. Similarly, the behavior of anyons is also between bosons and fermions. Can one use the anyons statistics to describe the intermediate unitary Fermi gas? Recently, the generalized exclusion statistics has been generalized to describe the unitary Fermi gas thermodynamics\cite{Bhaduri1,Bhaduri2}. As a hypothesis, the priority is that the thermodynamics at finite-temperature can be investigated quantitatively. From the general viewpoint of statistical mechanics, calculating entropy is not a simple task. Either in classical or quantum theory, the entropy describes how the microscopic states are counted properly. From the quantum degenerate viewpoint, the low-temperature behavior of the entropy is a characteristic quantity. For example, according to the Landau theory for the strong correlation Fermi-Liquid, the slope of entropy per particle versus temperature is related with the effective fermion mass $m^*/m$. In physics, the dynamical parameter $m^*/m$ is very important for the phase separation discussion for the asymmetric fermion system with unequal populations\cite{Lobo2006,Pilati2008,Combescot2007}. Like the universal constant $\xi =1+\be$, the effective fermion mass $m^*/m$ is an another universal constant for the BCS-BEC crossover thermodynamics. Obviously, the physics beyond the mean-field theory should be reasonably well understood. Unlike the ground state energy or the universal constant $\xi$ with the world average value $\xi\approx 0.44$, the effective fermion mass is an unknown parameter up to now. For example, the effective fermion mass is estimated to be $m^*/m\approx 1.04$ with a quantum Monte Carlo calculation\cite{Lobo2006}. A quantitative study of the phase diagram at zero temperature along the BCS-BEC crossover using fixed-node diffusion Monte Carlo simulations shows $m^*/m\approx 1.09$\cite{Pilati2008}. A many-body variational wave function with a T-matrix approximation leads to a larger value $m^*/m \approx 1.17$\cite{Combescot2007}. What is the exact value of $m^*/m$? In a quantitative way, we make a comparative study for the finite-temperature thermodynamic properties of the unitary fermion gas with the two formulations. The behavior of entropy per particle based on the quasi-linear approximation and the generalized exclusion statistics is discussed in detail. Indirectly, the effective fermion mass is determined from the entropy. The results are further compared with the Monte Carlo calculations. The paper is organized in the following way. In Sec.\ref{section2}, the relevant thermodynamic expressions are given by the quasi-linear approximation. Correspondingly, the thermodynamics given by the generalized exclusion statistics is presented in Sec.\ref{section3}. The numerical calculations and concrete comparisons between the two methods are given in Sec.\ref{section4}. In this section, the entropy per particle and corresponding effective fermion mass $m^*/m$ are discussed. In Sec.\ref{section5}, we present the conclusion remarks. \section{Thermodynamical quantities given by statistical dynamics with quasi-linear approximation}\label{section2} Strongly correlated matter under extreme conditions often requires the use of effective field theories in the description for the thermodynamic properties, independently of the energy scale under consideration. In the strongly interacting system, the central task is how to deal with the non-perturbative fluctuation and correlation effects. In Ref.\cite{Chenjs}, a quasi-linear approximation is taken to account for the non-local correlation effects on the unitary Fermi gas thermodynamics. With the quasi-linear approximation method, the obtained grand thermodynamic potential $\Omega (T,\mu) $ or pressure $P=-\Omega/V$ can be described by the two coupled-parametric equations through the intermediate variable-effective chemical potential $\mu^*$ \arraycolsep .1em \begin{eqnarray}\label{1} P=\frac{2T}{\lambda^{3}}f_{{5}/{2}}(z^{'})+\frac{\pi a_{eff}}{m}n^{2}+n\mu_{r},\end{eqnarray} \arraycolsep .1em \begin{eqnarray}\label{2} \mu=\mu^{*}+\frac{2\pi a_{eff}}{m}n+\mu_{r}. \end{eqnarray} In the above equations, $\lambda=\sqrt{{2\pi}/({mT})}$ is the thermal de Broglie wavelength and $m$ is the bare fermion mass(with natural units $k_{B}=\hbar=1$ throughout the paper). The effective chemical potential $\mu^{*}$ is introduced by the single-particle self-consistent equation. $\mu^*$ makes the thermodynamic expressions appear as the standard Fermi integral formalism\arraycolsep .1em \begin{eqnarray} f_{\upsilon}(z^{'})=\frac{1}{\Gamma(\upsilon)}\int_{0}^{\infty}{\frac{x^{\upsilon-1}dx}{z^{'-1}e^{x}+1}}, \end{eqnarray} where $\Gamma(\upsilon)$ is the gamma function, and $z^{'}=e^{{\mu^{*}}/{T}}$ is the effective fugacity. For example, the quasi-particle Fermi-Dirac distribution function gives the particle number density according to \arraycolsep .1em \begin{eqnarray} n=\frac{2}{\lambda^{3}}f_{{3}/{2}}(z^{'}).\end{eqnarray} In the coupled equations \eq{1} and \eq{2}, the shorthand notations are defined as\arraycolsep .1em \begin{eqnarray} a_{eff}=-\frac{m}{2\pi m_{D}^{2}},~~~~~~ m_{D}^{2}=\(\frac{\partial n}{\partial \mu^{*}}\)_{T}.\end{eqnarray} The shift term $\propto\mu_r$ characterizes the high order nonlinear contributions, which strictly ensures the energy-momentum conservation law. In the nonlinear approximation, this significant high order correction term can be fixed in a thermodynamic way. It is worthy noting that the term $\propto\mu_r$ can be exactly canceled by each other in the Helmholtz free energy density \arraycolsep .1em \begin{eqnarray} \frac{F}{V}=f=-P+n\mu, \end{eqnarray} where $V$ is the system volume. However, the high order correlation term $\propto\mu_r$ can be obtained in terms of the thermodynamic relations\cite{Chenjs} \arraycolsep .1em \begin{eqnarray}\label{temp1} P=-\(\frac{\partial F}{\partial V}\)_{T,N}=-\(\frac{\partial\frac{F}{N}}{\partial\frac{V}{N}}\)_{T}=n^{2}\(\frac{\partial\frac{f}{n}}{\partial n}\)_{T}, \end{eqnarray} and \arraycolsep .1em \begin{eqnarray}\label{temp2} \mu=\(\frac{\partial F}{\partial N}\)_{T,V}=\(\frac{\partial\frac{F}{V}}{\partial\frac{N}{V}}\)_{T}=\(\frac{\partial f}{\partial n}\)_{T}. \end{eqnarray} Comparing those obtained from \eq{temp1} and \eq{temp2} with \eq{1} and \eq{2}, the explicit expression of $\mu_r$ is\arraycolsep .1em \begin{eqnarray} \mu_{r}=\frac{1}{2}\(\frac{\partial m_{D}^{2}}{\partial n}\)_{T}\(\frac{2\pi a_{eff}}{m}\)^{2}n^{2}, \end{eqnarray} The integrated expressions of the pressure and chemical potential for the unitary Fermi gas are \arraycolsep .1em \begin{eqnarray}\label{pressure} P&&=\frac{2T}{\lambda^{3}}\(f_{{5}/{2}}(z^{'})- \frac{f_{{3}/{2}}^{2}(z^{'})}{2f_{{1}/{2}}(z^{'})}+ \frac{f_{{3}/{2}}^{3}(z^{'})f_{-{1}/{2}}(z^{'})}{2f_{{1}/{2}}^{3}(z^{'})}\),\no\\ \\ \label{3} \mu&&=\mu^{*}-T\frac{f_{{3}/{2}}(z^{'})}{f_{{1}/{2}}(z^{'})} +\frac{T}{2}\frac{f_{{3}/{2}}^{2}(z^{'})f_{-{1}/{2}}(z^{'})}{f_{{1}/{2}}^{3}(z^{'})}. \end{eqnarray} In the quasi-linear approximation, the auxiliary implicit variable $\mu^*$ is introduced to characterize the non-linear fluctuation/correlation effects. As indicated by \eq{pressure} and \eq{3}, the $\mu^*$ or $z'$ makes the realistic grand thermodynamic potential $\Omega (T,\mu)$ appear as the set of highly non-linear parametric equations, which can be represented by the standard Fermi integral. By eliminating the auxiliary variable $\mu^*$, the equation of state will uniquely be determined. From the underlying grand thermodynamic potential-partition function, one can derive the analytical expressions for the entropy density $s=S/V$ and internal energy density $\epsilon=E/V$. The following partial derivative formulae will be used \arraycolsep .1em \begin{eqnarray} &&\(\frac{\partial\mu^{*}}{\partial T}\)_{\mu}\(\frac{\partial T}{\partial \mu}\)_{\mu^{*}}\(\frac{\partial\mu}{\partial\mu^{*}}\)_{T}=-1, \no\\ &&\(\frac{\partial m_{D}^{2}}{\partial T}\)_{n}=\(\frac{\partial m_{D}^{2}}{\partial T}\)_{\mu^{*}}+\(\frac{\partial m_{D}^{2}}{\partial\mu^{*}}\)_{T}\(\frac{\partial \mu^{*}}{\partial T}\)_{n}.\end{eqnarray} The entropy is derived according to \arraycolsep .1em \begin{eqnarray}\label{4} &&\frac{s}{n}=\frac{1}{n}\(\frac{\partial P}{\partial T}\)_{\mu}\no\\ &&=\frac{5}{2}\frac{f_{{5}/{2}}(z^{'})}{f_{{3}/{2}}(z^{'})}-\ln{z^{'}}+ \frac{3f_{-{1}/{2}}(z^{'})f_{{3}/{2}}^{2}(z^{'})}{4f_{{1}/{2}}^{3}(z^{'})} -\frac{f_{{3}/{2}}(z^{'})}{4f_{{1}/{2}}(z^{'})}.\no\\ \end{eqnarray} Correspondingly, the explicit energy density expression is calibrated to be \arraycolsep .1em \begin{eqnarray}\label{5} \epsilon=\frac{3T}{\lambda^{3}}\(f_{{5}/{2}}(z^{'})- \frac{f_{{3}/{2}}^{2}(z^{'})}{2f_{{1}/{2}}(z^{'})}+ \frac{f_{{3}/{2}}^{3}(z^{'})f_{-{1}/{2}}(z^{'})}{2f_{{1}/{2}}^{3}(z^{'})}\).\no\\ \end{eqnarray} Essentially, the entropy density includes the high order nonlinear contribution. What we want to emphasize is that the third law of thermodynamics is exactly ensured as expected. The analytical analysis indicates that the energy density at zero-temperature gives the dimensionless universal coefficient according to $\xi={\mu}/{E_{F}}=\049$ or ${E}/{(\035 NE_{F})}=\xi $, where the Fermi energy is $E_{F}={(3\pi^{2}n)^{{2}/{3}}}/{(2m)}$ and $T_{F}$ is the Fermi characteristic temperature in the unit Boltzmann constant. The universal coefficient $\xi=\049 $ has attracted much attention in the literature and is reasonably consistent with some Monte Carlo calculations\cite{constant1,MC1}. \section{Thermodynamics given by the generalized exclusion statistics}\label{section3} \subsection{Generalized exclusion statistics} The generalized exclusion statistics is proposed in \cite{Haldane1991} and \cite{Wu1994}. If the dimensional of the Hilbert space is $d$ and the particle number is $N$, then $d$ and $N$ are connected by $\bigtriangleup d=-g\bigtriangleup N$, where the shift of the single-particle states number is $\bigtriangleup d$. The shift of the particle number for identical particle system is $\bigtriangleup N$ and $g$ is a statistical parameter, which denotes the ability of one particle to exclude other particles in occupying single-particle state. When $g=0$ the intermediate statistics returns to the Bose-Einstein statistics and $g=1$ to the Fermi-Dirac statistics. For anyons, the number of quantum states $W$ of $N$ identical particles occupying a group of $G$ states are determined by the interpolated statistical weights of the Bose-Einstein and Fermi-Dirac statistics. A simple formula with the generalized exclusion statistics is used to describe the microscopic quantum states\cite{Wu1994} \arraycolsep .1em \begin{eqnarray}\label{W1} W=\frac{[G+(N-1)(1-g)]!}{N![G-gN-(1-g)]!}. \end{eqnarray} One can divide the one-particle states into a large number of cells with $G\gg1$ states in each cell, and calculate the number with $N_i$ particles in the $i$-th cell. The total energy and the total number of particles are fixed and given as \arraycolsep .1em \begin{eqnarray} E=\sum_{i}N_{i}\epsilon_{i}, N=\sum_{i}N_{i}, \end{eqnarray} with $\epsilon_{i}$ defined as the energy of particle of species $i$. By generalizing \eq{W1}, we have \arraycolsep .1em \begin{eqnarray} W=\prod_{i}\frac{[G_{i}+(N_{i}-1)(1-g)]!}{N_{i}![G_{i}-gN_{i}-(1-g)]!}. \end{eqnarray} We consider a grand canonical ensemble at temperature $T$. For very large $G_i\gg1$ and $N_i\gg1$, using the Stirling formula $\ln{N!}=N(\ln{N}-1)$, and introducing the average occupation number defined by $\bar{N_i}\equiv N_i/G_i$, one has \arraycolsep .1em \begin{eqnarray}\label{W2} \ln{W}&&=\sum_{i}\ln{\[\frac{[G_{i}+(N_{i}-1)(1-g)]!}{N_{i}![G_{i}-gN_{i}-(1-g)]!}\]}\no\\ &&\simeq\sum_{i}\[G_i\(1+(1-g)\bar{N_i}\)\ln{G_i\(1+(1-g)\bar{N_i}\)} \right. \no\\ && \left. ~~-G_i(1-g\bar{N_i})\ln{G_i(1-g\bar{N_i})}-G_i\bar{N_i}\ln{G_i\bar{N_i}}\].\no\\ \end{eqnarray} Through the Lagrange multiplier method, the most probable distribution of $\bar{N_i}$ is determined by \arraycolsep .1em \begin{eqnarray} \frac{\partial}{\partial\bar{N_i}}[\ln{W}-\sum_{i}G_i\bar{N_i}(\epsilon_i-\mu)/T]=0, \end{eqnarray} with chemical potential $\mu$. It follows that \arraycolsep .1em \begin{eqnarray} \bar{N_i}e^{(\epsilon_i-\mu)/T}=[1+(1-g)\bar{N_i}]^{(1-g)}(1-g\bar{N_i})^g. \end{eqnarray} Setting $\omega_i=1/\bar{N_i}-g$, we have the anyon statistical distribution \arraycolsep .1em \begin{eqnarray}\label{N1} \bar{N_i}=\frac{1}{\omega_i+g}, \end{eqnarray} where $\omega$ obeys the relation \arraycolsep .1em \begin{eqnarray}\label{10} \omega^{g}(1+\omega)^{1-g}=e^{(\epsilon-\mu)/T}. \end{eqnarray} One can define $\omega_{0}$ of $\omega$ at $\epsilon=0$ with \eq{10} \arraycolsep .1em \begin{eqnarray}\label{11} \mu=-T\ln{[\omega_{0}^{g}(1+\omega_{0})^{1-g}]}. \end{eqnarray} The relation between $\mu$ and $T$ has been established indirectly through $\omega_0$ and $g$. From \eq{10}, the $\om$ and $\omega_0$ are related with each other through single-particle energy $\epsilon$\arraycolsep .1em \begin{eqnarray}\label{12} \epsilon=T\ln{\[\(\frac{\omega}{\omega_{0}}\)^{g}\(\frac{1+\omega}{1+\omega_{0}}\)^{1-g}\]}, \end{eqnarray} which gives \arraycolsep .1em \begin{eqnarray}\label{13} d\epsilon=\frac{T(g+\omega)}{\omega(1+\omega)}d\omega. \end{eqnarray} For $T=0$, the average occupation number can be explicitly indicated as \arraycolsep .1em \begin{eqnarray}\label{14} \bar{N}=\left\{ \begin{array}{ll} 0, & \hbox{ if $\varepsilon>\mu$;} \\ \frac{1}{g}, & \hbox{ if $\varepsilon<\mu$,} \end{array} \right. \end{eqnarray} which is quite similar to the Fermi-Dirac statistics. \subsection{Particle number and energy densities} In the anyon statistics, the density of states is also given by \arraycolsep .1em \begin{eqnarray} D(\epsilon)=\alpha{(2m)^{3/2}}V\epsilon^{1/2}/({4\pi^{2}}), \end{eqnarray} where $\alpha$ is the degree of the spin degeneracy and $m$ is the bare fermion mass. At $T=0$, the particle number is explicitly given by \arraycolsep .1em \begin{eqnarray}\label{a} N=\frac{1}{g}\int_{0}^{\widetilde{E}_{F}}{D(\epsilon)d\epsilon}=\frac{\alpha(2m)^{{3}/{2}}}{6\pi^{2}}VE_{F}^{{3}/{2}}, \end{eqnarray} where $\widetilde{E}_{F}$ is related with the Fermi energy $E_{F}$ through $\widetilde{E}_{F}=g^{{2}/{3}}E_{F}$. With the $\widetilde{E}_{F}$ symbol, the system energy can be represented as\arraycolsep .1em \begin{eqnarray} E=\frac{1}{g}\int_{0}^{\widetilde{E}_{F}}{\epsilon D(\epsilon)d\epsilon}=\frac{3}{5}g^{{2}/{3}}NE_{F}. \end{eqnarray} As we will see, once $g$ is fixed, one can discuss the general finite-temperature thermodynamic properties. Therefore, the essential task in the generalized exclusion statistics is fixing the statistical factor $g$. This can be determined by the zero-temperature ground state energy or the universal constant $\xi$ according to $\xi=g^{{2}/{3}}$. Various theoretical or experimental attempts have been made in the literature for determining the ground state energy. With the universal coefficient $\xi=\049$\cite{Chenjs}, the expected statistical factor can be identified to be $g=\frac{8}{27}$. For the general finite-temperature scenario, the particle number and energy can be rewritten as \arraycolsep .1em \begin{eqnarray}\label{18} N=\int_{0}^{\infty}{\frac{D(\epsilon)d\epsilon}{\omega+g}},\\ \label{19} E=\int_{0}^{\infty}{\frac{\epsilon D(\epsilon)d\epsilon}{\omega+g}}. \end{eqnarray} By replacing \eq{12}-\eq{13} and \eq{a} into \eq{18} and \eq{19}, one can have \arraycolsep .1em \begin{eqnarray}\label{20} \frac{3}{2}&&\(\frac{T}{T_{F}}\)^{3/2}a(\omega_{0})=1,\\ \label{21} \frac{E}{NE_{F}}&&=\frac{3}{2}\(\frac{T}{T_{F}}\)^{5/2}b(\omega_{0}),\\ a(\omega_{0})&&=\int_{\omega_{0}}^{\infty}{\frac{d\omega}{\omega(1+\omega)} \[\ln{\(\frac{\omega}{\omega_{0}}\)^{g}\(\frac{1+\omega}{1+\omega_{0}}\)^{1-g}}\]^{1/2}},\no\\ b(\omega_{0})&&=\int_{\omega_{0}}^{\infty}{\frac{d\omega}{\omega(1+\omega)} \[\ln{\(\frac{\omega}{\omega_{0}}\)^{g}\(\frac{1+\omega}{1+\omega_{0}}\)^{1-g}}\]^{3/2}}.\no\end{eqnarray} \eq{20} determines $\omega_{0}$ for a given temperature $T$. ${E}/{(NE_{F})}$ can be obtained by a given $\omega_{0}$ through \eq{21}. For giving the explicit entropy density expression with the generalized exclusion statistics in the next subsection, let us make further discussion for the energy density. By eliminating $N$ with \eq{a} and \eq{21}, the energy can be alternatively expressed as \arraycolsep .1em \begin{eqnarray} E=\frac{\alpha(2m)^{{3}/{2}}}{4\pi^{2}}VT^{{5}/{2}}b(\omega_{0}). \end{eqnarray} The partial derivative of the internal energy $E$ to $T$ for fixed $\mu$ is given by \arraycolsep .1em \begin{eqnarray}\label{e} \(\frac{\partial E}{\partial T}\)_{\mu}=\frac{\alpha V(2m)^{{3}/{2}}}{4\pi^{2}}T^{{3}/{2}}\[\frac{5}{2}b(\omega_{0})+ T\(\frac{\partial b(\omega_{0})}{\partial T}\)_{\mu}\].\no\\ \end{eqnarray} Furthermore, the variable $\omega_{0}$ of the integral function $b(\omega_{0})$ can be converted into $\mu$ and $T$ through \eq{11} \arraycolsep .1em \begin{eqnarray}\label{above} b(\omega_{0},\mu,T)=\int_{\omega_{0}}^{\infty}{\frac{d\omega}{\omega(1+\omega)} \[\ln(\omega^{g}(1+\omega)^{1-g})+\frac{\mu}{T}\]^{{3}/{2}}}.\no\\ \end{eqnarray} Therefore, one can have \arraycolsep .1em \begin{eqnarray}\label{f} \(\frac{\partial b}{\partial T}\)_{\mu}=\frac{3}{2T}\ln[\omega_{0}^{g}(1+\omega_{0})^{1-g}]a(\omega_{0}). \end{eqnarray} \subsection{Entropy per particle} Due to the scaling properties, the thermodynamics of a unitary Fermi gas also satisfies the ideal gas virial theorem \cite{Ho2004,Chenjs,Thomas2005} \arraycolsep .1em \begin{eqnarray}\label{b} P=\frac{2}{3}\frac{E}{V}. \end{eqnarray} According to the thermodynamic relation for the entropy $S$ and pressure $P$, one can have\arraycolsep .1em \begin{eqnarray}\label{d} S=\frac{2}{3}\(\frac{\partial E}{\partial T}\)_{\mu}. \end{eqnarray} By substituting \eq{e} and \eq{f} into \eq{d}, the explicit expression for the entropy per particle is derived to be\arraycolsep .1em \begin{eqnarray}\label{g} \frac{S}{N}=\frac{5}{2}\(\frac{T}{T_{F}}\)^{{3}/{2}}b(\omega_{0})+\ln[\omega_{0}^{g}(1+\omega_{0})^{1-g}], \end{eqnarray} where $\omega_{0}$ is given by \eq{20} for a given $T$. \section{Numerical results and comparisons}\label{section4} Based on the above analytical expressions, we will give the numerical results. \subsection{Internal energy and chemical potential} \begin{figure}[ht] \centering \psfig{file=fig1.eps,width=7cm,angle=-0} \caption{The internal energy per particle versus the rescaled temperature. The solid curve denotes that for the ideal Fermi gas, and the short-dashed one is that given by the quasi-linear approximation. The long-dashed curve represents the result in terms of the generalized exclusion statistics model. The dots and solid squares are the Monte Carlo calculations \cite{MC1} and \cite{MC2}, respectively. \small }\label{fig1} \end{figure} From \eq{20} and \eq{21}, the energy per particle versus the rescaled temperature can be solved. As indicated by Fig.\ref{fig1}, the internal energies for the unitary Fermi gas based on the quasi-linear approximation and the generalized exclusion statistics have similar analytical properties; i.e., the internal energy increases with the increase of temperature. The two approaches both show that the energy density of a unitary Fermi gas is lower than that of the ideal Fermi gas. However, the shift of the internal energy given by the quasi-linear approximation is more quicker than that determined by the generalized exclusion statistics model. \begin{figure}[ht] \centering \psfig{file=fig2.eps,width=7cm,angle=-0} \caption{Physical chemical potential versus the recaled temperature. The line-styles are similar to Figure.1. \small }\label{fig2} \end{figure} With \eq{20} and \eq{11}, we have also shown the chemical potential versus the rescaled temperature in Fig.\ref{fig2}. The chemical potential given by the two formalisms decreases with the increase of temperature. The departure of them is getting bigger with the increasing temperature. The results for the energy per particle shown in Fig.\ref{fig1} and Fig.\ref{fig2} in terms of the two different analytical approaches are reasonably consistent with the Monte Carlo calculations\refr{MC1,MC2}, while the chemical potential differs explicitly from the Monte Carlo result \refr{MC2} for $T/T_{F}>0.8$. \subsection{Entropy} \begin{figure}[ht] \centering \psfig{file=fig3.eps,width=7cm,angle=-0} \caption{Entropy per particle versus the recaled temperature. The line-styles are similar to Figure.1. The Monte Carlo simulation result is extracted from Ref.\cite{MC1}. \small }\label{fig3} \end{figure} With \eq{20} and \eq{g}, the entropy per particle curve versus the rescaled temperature is presented in Fig.\ref{fig3}. The quasi-linear approximation predicts that the curve is higher than that of the ideal Fermi gas, while the generalized exclusion statistics model gives lower values compared with that of the ideal Fermi gas. With the increase of temperature, the entropy per particle given by the generalized exclusion statistics is getting closer to and almost overlaps with that of the ideal Fermi gas. In terms of the quasi-linear approximation, the ratio of entropy to that of the ideal Fermi gas approaches a constant in the Boltzmann regime. Especially, in the low-temperature strong degenerate regime, the slope of the entropy per particle versus the scaled temperature given by these two approaches is different. The low-temperature behavior is determined by the effective fermion mass according to the Landau theory of strongly correlated Fermi-liquid. In turn, from the entropy curve, one can derive the effective fermion mass indirectly. The careful study shows that the quasi-linear approximation indicates $m^*/m\approx 1.11>1$, while the latter predicts $m^*/m\approx 0.70<1$. Compared with the latter, the quasi-linear approximation result is more consistent with the Monte Carlo calculations $m^*/m\sim 1.04-1.09$\refr{Lobo2006,Pilati2008}. \section{Conclusion}\label{section5} In terms of the quasi-linear approximation method and generalized exclusion statistics model, the internal energies, chemical potentials and entropies of a unitary Fermi gas have been analyzed in detail. The two different approximations give similar behavior for the internal energies and chemical potentials of a unitary Fermi gas. The entropy is an important characteristic quantity in statistical mechanics. The entropy by the quasi-linear approximation is higher than that of the ideal non-interacting fermion gas. In the Boltzmann regime, the entropy curve given by the generalized exclusion statistics gets closer towards and almost overlaps with that of the ideal Fermi gas. The entropy given by the quasi-linear approximation is getting far away from that of the ideal Fermi gas and the ratio of entropy to that of the ideal Fermi gas approaches a constant. According to the quasi-particle viewpoint of the Landau Fermi-Liquid theory, the slope of entropy per particle determines the effective fermion mass in the low-temperature strong degenerate region. The numerical analysis demonstrates that the generalized exclusion statistics model gives $m^*/m\approx 0.70<1$. The developed quasi-linear approximation predicts $m^*/m\approx 1.11>1$, which is closer to the updating Monte Carlo investigations. \acknowledgments{ The authors are grateful to J.-r Li, X.-w Hou and X.-j Xia for stimulating discussions. Supported in part by the National Natural Science Foundation of China under Grant No. 10675052 and 10875050 and MOE of China under projects No.IRT0624. }
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Let $M$ be a compact Riemannian manifold, and $\Delta$ its Laplace-Beltrami operator. We consider a basis of $L^2(M)$ consisting of $\Delta$-eigenfunctions; if there are multiplicities in the spectrum there may be many such. The property of quantum ergodicity is a spectral analog of the classical ergodic property, stating that $$\frac{1}{N(\lambda)} \sum_{\lambda_j\leq \lambda} \left| \langle \psi_j, a\psi_j\rangle - \int_M a\, d\text{Vol} \right|^2 \to 0$$ where $\{\psi_j\}$ runs over our basis of eigenfunctions, and $a$ is any fixed test function on $M$. Here $N(\lambda):=|\{\lambda_j\leq \lambda\}|$ is the eigenvalue counting function (with multiplicity). More generally, one replaces multiplication by $a$ in the inner product by a zero-order pseudodifferential operator $A$, and the $a$ in the integral by the principal symbol $\sigma_A$ of $A$. The quantum ergodicity property implies that there exists a density $1$ subsequence $\{\psi_{j_k}\}$ of the $\{\psi_j\}$, such that the measures $|\psi_{j_k}|^2dVol$ converge weak-* to the uniform measure. This property was first shown to hold --- for any choice of orthonormal basis--- on manifolds $M$ for which the geodesic flow is ergodic, by \v{S}nirlman, Zelditch, and Colin de Verdi\`ere \cite{Shn,Z2,CdV}. It is important to note that quantum ergodicity is a property of the {\em basis}, rather than of the manifold $M$; if there are large degeneracies in the spectrum of $\Delta$, it may hold for some bases but not for others. Such large degeneracy occur for the spectrum of the Laplacian on the sphere $\mathbb S ^2$ --- indeed, for the sphere we can write \begin{equation*} L ^2 (\mathbb S ^2) = \bigoplus_ {s = 0} ^ \infty \mathcal H _ s \end{equation*} where for each $s \in \mathbb{N}$ we take $\mathcal H _ s$ to be the space of spherical harmonics of degree $s$, an eigenspace for the Laplacian with eigenvalue $s(s+1)$ and $\dim (\mathcal H _ s) = 2 s +1$; in particular $\mathcal H_0$ denotes the 1-dimensional space of constant functions. Here it is easy to see that the standard basis of spherical harmonics fails to be quantum ergodic \cite{CdV}, but Zelditch \cite{ZelSphere} has shown that a random orthonormal basis of eigenfunctions will be quantum ergodic with probablity $1$ (in a natural sense). Strenghening this, VanderKam \cite{Vanderkam} has shown that quantum {\em unique} ergodicity holds for a random orthonormal basis, i.e. that if for any $s$ we chose $\{\psi^{(s)}_j\}_{j=1}^{2s+1}$ to be a random orthonormal basis of $\mathcal H_s$ then a.s. \begin{equation}\label{e:que} \max_{1\leq j\leq 2s+1} \left| \langle \psi^{(s)}_j, a\psi^{(s)}_j\rangle - \int_{{\mathbb{S}^2}} a\, d\sigma \right| \to 0 \qquad\text{as $s\to\infty$} \end{equation} with $\sigma$ denoting the normalized volume measure on ${\mathbb{S}^2}$. \smallskip Here, we consider bases of $L^2(\mathbb{S}^2)$, that consist of joint eigenfunctions of $\Delta$ and an averaging operator over rotations of the sphere. Precisely, for $N \geq 2$, let $g_1,\ldots,g_N$ be a finite set of rotations in $\operatorname{SO}(3)$, and define an operator $T_q$ acting on $L^2({\mathbb{S}^2})$, by \begin{equation}\label{Tq} T_q f(x) = \frac{1}{\sqrt q}\sum_{j=1}^N (f(g_j x) + f(g_j^{-1} x)) \end{equation} with $q = 2N -1$; up to the choice of normalization constant this is an averaging operator. Since each space $\mathcal H_s$ is invariant under rotations, the self-adjoint operators $T _ q$ preserve $\mathcal H _ s$, and so we can find for every $s$ an orthonormal basis $\{\psi^{(s)}_j\}_{j=1}^{2s+1}$ of $T_q$-functions for $\mathcal H _ s$; thus $\{\psi^{(s)}_j\}_{j,s}$ gives a complete orthonormal sequence of joint eigenfunctions of~$\Delta$ and~$T _ q$. We note that for very special choice of rotations --- rotations that correspond to norm $n$ elements in an order in a quaternion division algebra --- it has been conjectured by B{\"o}cherer, Sarnak, and Schulze-Pillot \cite{B-S-SP} that such joint eigenfunctions satisfy the much stronger quantum unique ergodicity condition \eqref{e:que}. This conjecture is still open, but would follow from GRH--- or even from subconvex estimates on values of appropriate $L$-functions at the critical point (cf.~\cite{B-S-SP} for more details). \smallskip Our purpose in this paper is to prove quantum ergodicity for joint eigenfunctions of the Laplacian and a fairly general averaging operator on~${\mathbb{S}^2}$. We give two flavours of such a result for these joint eigenfunctions: one in which the assumption on the rotations is milder --- namely that they generate a free subgroup of $\operatorname{SO}(3)$--- but allows only continuous test functions $a$; and one where $a$ can be rather wild --- an arbitrary element of $L^\infty(\mathbb S^2)$--- but the rotations are in addition assumed to be given by matrices whose entries are algebraic numbers. In this latter case we give \emph{quantitative} estimates. \begin{Theorem}\label{t:spheremain} Let $g _ 1, \dots, g _ N$ be a finite set of rotations in $\operatorname{SO}(3)$ that generate a free subgroup. Let $\{ \psi^{(s)}_j \}_{j=1}^{2s+1}$ be an orthonormal basis of~$\mathcal{H}_s$ consisting of $T_q$-eigenfunctions, with~$T_q$ the operator defined in \eqref{Tq}. Then for any continuous function~$a$ on~$ {\mathbb{S}^2}$, we have \[ \frac 1{2s+1} \sum_{j=1}^{2s+1} \left| \langle \psi_j^{(s)}, a \psi_j^{(s)} \rangle - \int_{{\mathbb{S}^2}} a \, d \sigma \right|^2 \to 0 \] when $s \to \infty$. \end{Theorem} It is not hard to show in this case that as $s\to\infty$, the distribution of the $T_q$-eigenvalues of $\{\psi ^ {( s )} _ j\}_{j=1}^{2s+1}$ tends to a specific measure, the Kesten-McKay distribution, whose support is the interval $[-2, 2]$ (see \S\ref{s:Kesten-McKay} for details), which implies the following version of Theorem~\ref{t:spheremain}. \begin{Corollary}\label{c:spheremain} Let $g _ 1, \dots, g _ N$ and $T _ q$ be as in Theorem~\ref{t:spheremain}, and let $I$ be an arbitrary (fixed) subinterval of $[-2,2]$. Let as before $\{ \psi^{(s)}_j \}_{j=1}^{2s+1}$ be an orthonormal basis of $\mathcal{H}_s$ consisting of $T_q$-eigenfunctions, and define $\lambda(s,j)$ by $T_q \psi^{(s)}_j = \lambda(s,j) \psi^{(s)}_j$, and $$N(I,s)= \# \{ j : \lambda(s,j) \in I \}.$$ Then for any continuous function~$a$ on~$ {\mathbb{S}^2}$, we have \[ \frac 1{N(I,s)} \sum_{\{ j : \lambda(s,j) \in I \} } \left| \langle \psi_j^{(s)}, a \psi_j^{(s)} \rangle - \int_{{\mathbb{S}^2}} a \, d \sigma \right|^2 \to 0 \] when $s \to +\infty$. \end{Corollary} Finally, we give the following more quantitative result under more stringent assumptions on the rotations: \begin{Theorem}\label{t:spherealgebraic} Let $g _ 1, \dots, g _ N$ be a finite set of rotations in $\operatorname{SO}(3)$, that generate a free subgroup, and moreover so that all entries appearing in the matrices representing the $g _ i$ are given by algebraic numbers. Let $\{ \psi^{(s)}_j \}_{j=1}^{2s+1}$ be an orthonormal basis of~$\mathcal{H}_s$ consisting of $T_q$-eigenfunctions with~$T_q$ defined from the $g _ i$ as above. Then for any function $a \in L ^ \infty ({\mathbb{S}^2})$, we have \[ \frac 1{2s+1} \sum_{j=1}^{2s+1} \left| \langle \psi_j^{(s)}, a \psi_j^{(s)} \rangle - \int_{{\mathbb{S}^2}} a \, d \sigma \right|^2 \lesssim \frac {\norm {a} _ \infty }{ \log s}, \] with the implicit constant depending only on $q=2N-1$ and the choice of rotations $g _ 1, \dots, g _ N$. \end{Theorem} A key ingredient in this sharper variant is a result of Bourgain and Gamburd \cite{BG} which shows that for such rotations the action of the operator $T_q$ on $L^2( \mathbb S^2)$ has a spectral gap. These results are closely related to the results of the second-named author with N. Anantharaman \cite{ALM}, proving quantum ergodicity for expander sequences of large regular graphs that satisfy a non-degeneracy condition, that the radius of injectivity grow at almost all points (see condition (BST) below). In Theorem~\ref{t:spherealgebraic}, the spectral gap result of Bourgain and Gamburd is a close analog to the expander condition on the graphs. The algebraicity of the rotations is also used to give a lower bound on how close a nontrivial word of length $n$ in $g _ 1$,\dots,$g_N$ can be, which can be viewed as an analogue of the injectivity radius condition. In Theorem~\ref{t:spheremain}, we reduce to the case of $a$ being a sum of finitely many spherical harmonics, where we get a spectral gap for $T_q$ simply because $a$ belongs to a finite dimensional $T_q$-invariant subspace of $L^2({\mathbb{S}^2})$, and the assumption that $g _ 1, \dots, g _ N$ generate a free group gives a weaker, unquantified, substitute for the injectivity radius condition. We will use a new and alternative method to \cite{ALM}, more straightforward in the sense that it does not use pseudo-differential calculus on the graphs (\cite{LM}), but instead a discrete wave propagation. Although it gives a less general result in the present form, it may be of independent interest, and gives a quantitative improvement in some aspects. We thus bring in Sections~\ref{qe on graphs} and \ref{main estimate graphs} a new proof of quantum ergodicity for large regular graphs (i.e., the main result of \cite{ALM}), in order to introduce the ideas to be used in the sequel. Then in Section~\ref{sphere}, we return to the setting of $\mathbb{S}^2$, and show the modifications necessary to work on the sphere in $\mathcal{H}_s$, and prove Theorems~\ref{t:spheremain} and \ref{t:spherealgebraic}. \section{Quantum ergodicity on regular graphs}\label{qe on graphs} Let $(G_k)$ be a sequence of connected $(q+1)$-regular graphs, $G_k=(V_k, E_k)$ with $V_k =\{1,\ldots, k\}$. We are interested in the eigenfunctions of the averaging operator $$T_q f(x) = \frac1{\sqrt{q}} \sum_{d(x,y)=1} f(y) $$ acting on functions of the vertices of the graph. We parametrize the spectrum of $T_q$ by $\lambda = 2\cos{\theta_\lambda}$. The spectrum of this operator is contained in $$\left[-2\cosh\left(\frac{\log q}{2}\right),2\cosh\left(\frac{\log q}{2}\right)\right].$$ It can be divided into two parts: the \emph{tempered spectrum} is the part contained in the interval $[-2,2]$, and the \emph{untempered spectrum} is the part lying outside this interval. We assume the following conditions: (EXP) The sequence of graphs is a family of expanders. That is, there exists a fixed $\beta>0$ such that the spectrum of $T_q$ on $L^2(G_k)$ is contained in $\left\{2\cosh\left(\frac{\log q}{2}\right)\right\}\cup \left[-2\cosh\left(\frac{\log q}{2} - \beta \right), 2\cosh\left(\frac{\log q}{2} - \beta \right)\right]$ for all $k$. (BST) The sequence of graphs converges to a tree in the sense of Benjamini-Schramm. More precisely, for all $R$, $\frac{|\{x\in V_k, \rho(x)<R\}|}{k}\to 0$ where $\rho(x)$ is the injectivity radius at $x$ (meaning the largest $\rho$ such that the ball $B(x, \rho)$ in $G_k$ is a tree). Or equivalently, there exists $R_k\to +\infty$ and $\alpha_k\to 0$ such that $$\frac{|\{x\in V_k, \rho(x)<R_k\}|}{k}\leq \alpha_k.$$ The last condition is satisfied in particular if the injectivity radius goes to infinity (with $R_k$ taken to be the minimal injectivity radius and $\alpha_k=0$). \begin{Theorem} \label{t:main} Assume that $(G_k)$ satisfies (BST) and (EXP). Denote by $\{\psi^{(k)}_1,\ldots, \psi^{(k)}_k\}$ an orthonormal basis of eigenfunctions of $T_q$ on $G_k$. Let $a_k : V_k \to \mathbb{C}$ be a sequence of functions such that $$\sum_{x\in V_k} a_k(x)=0 \quad \text{and} \quad \norm{a_k}_\infty \leq 1.$$ Then $$\frac1{k} \sum_{j=1}^k \left| \langle \psi^{(k)}_j, a_k \psi^{(k)}_j\rangle\right|^2\to 0,$$ when $k\to +\infty$. More precisely, \begin{align}\label{e:qegraph} \frac1{k} \sum_{j=1}^k \left| \langle \psi^{(k)}_j, a_k \psi^{(k)}_j\rangle\right|^2 &\lesssim \beta^{-2} \min\left\{ R_k, \log(1/ \alpha_k) \right\}^{-1} \|a\|_2^2 + \alpha_k^{1/2} \|a\|_\infty^2, \end{align} where the implicit constant depends only on the degree $q+1$. \end{Theorem} \begin{remark} The spectral average in \eqref{e:qegraph} can be restricted to eigenvalues in any fixed subinterval of $\left[-2\cosh\left(\frac{\log q}{2}\right),2\cosh\left(\frac{\log q}{2}\right)\right]$, using the Kesten-McKay law (see for example Section 5 of \cite{ALM}). \end{remark} We now introduce the Chebyshev polynomials $P_n$ of the first kind for every $n \in \mathbb{N}$ defined by the relation $$ P_n(\cos\theta) = \cos(n\theta). $$ As each $P_n$ is a polynomial, the definition of the operator $P_{n}(T_q/2)$ is clear. We denote by $\norm{A}_{HS}$ the Hilbert-Schmidt norm of the operator $A$ acting on $L^2(G_k)$, and write $L^2_0(G_k)$ for the subspace of $L^2(G_k)$ orthogonal to the constants. We will prove the following proposition in Section~\ref{main estimate graphs}. \begin{proposition}\label{p:egorov_graphs} Let $a$ be a function in $L^2_0(G_k)$. For any integer $T \geq 1$, \begin{multline*} \left\| \frac1T \sum_{n=1}^T P_{2n}(T_q/2) a P_{2n}(T_q/2) \right\|_{HS}^2 \lesssim \frac{\|a\|_2^2}{\beta^2 T} \\ + q^{8T} |\{x\in G_k, \rho_k(x) \leq 4T\}| \norm{a}_\infty^2, \end{multline*} where $\rho_k(x)$ is the radius of injectivity at the vertex $x \in G_k$, and the implicit constant in the inequality depends only on $q$. \end{proposition} To prove the theorem, we first need the following lemma, which will be used as well in the case of the sphere in Section~\ref{sphere}. \begin{lemma}\label{l:unitarity} For every $\theta \in [0,\pi]$, $T \geq 10$, $$ \left| \frac1T \sum_{n=1}^T \cos(n\theta)^2 \right| \geq 0.3$$ \end{lemma} \begin{proof}[Proof of Lemma~\ref{l:unitarity}] We write $$ S_T := \sum_{n=1}^T \cos(n\theta)^2. $$ Then \begin{align*} \frac1T S_T &= \frac1T \sum_{n=1}^T \left( \frac12 + \frac14 e^{2in\theta} + \frac14 e^{-2in\theta} \right) \\ &= \frac{2T - 1}{4T} + \frac1{4T} \sum_{n=-T}^T e^{2in\theta} \\ &= \frac{2T - 1}{4T} + \frac{\sin[(2T+1)\theta]}{4T \sin\theta}. \end{align*} Now $$ \min_{\theta \in [0,\pi]} \frac{\sin[(2T+1)\theta]}{4T \sin\theta} = \min_{\theta \in [0,\pi/2]} \frac{\sin[(2T+1)\theta]}{4T \sin\theta} \geq -\frac1{4T\sin(\pi/(2T+1))}.$$ Indeed: $|\sin\theta|$ is monotone increasing in $[0,\pi/2]$ and $\sin(2T+1)\theta > 0$ for $0 < \theta < \pi/(2T+1).$ Hence $$ \frac1T S_T \geq \frac1{4T} \left( 2T-1 - \frac1{\sin(\pi/(2T+1))} \right).$$ A routine calculation shows this is $\geq 0.3$ for $T\geq 10$. \end{proof} We now show how Theorem~\ref{t:main} follows from the central estimate Proposition~\ref{p:egorov_graphs}; the proof of Proposition~\ref{p:egorov_graphs} will be the subject of Section~\ref{main estimate graphs}. \begin{proof}[Proof of Theorem \ref{t:main} from Proposition~\ref{p:egorov_graphs}] Recall that $$P_{2n}(T_q/2)\psi_j^{(k)} = \cos(2n\theta_{j,k})\psi_j^{(k)}$$ for $\lambda_j^{(k)}=2\cos\theta_{j,k}$. Thus we use Lemma~\ref{l:unitarity} to estimate \begin{align*} \frac1{k} &\sum_{j=1}^k \left| \langle \psi^{(k)}_j, a_k\psi^{(k)}_j\rangle\right|^2 \\ & \lesssim \frac1{k} \sum_{j=1}^k \left|\frac{1}{T} \sum_{n=1}^T \cos^2 (2n\theta_{j,k})\right|^2\left| \langle \psi^{(k)}_j, a_k\psi^{(k)}_j\rangle\right|^2 \\ &\lesssim \frac1{k} \sum_{j=1}^k \left| \left\langle \psi^{(k)}_j, \frac1T \sum_{n=1}^T P_{2n}(T_q/2) a_k P_{2n}(T_q/2)\psi^{(k)}_j\right\rangle\right|^2 \\ &\lesssim \frac1{k} \left\| \frac1T \sum_{n=1}^T P_{2n}(T_q/2) a_k P_{2n}(T_q/2) \right\|_{HS}^2 \\ &\lesssim \frac{\|a\|_{2}^2}{k T} + q^{8T} \frac{|\{x\in G_k, \rho(x) \leq 4T\}|}{k} \|a\|_\infty^2, \end{align*} with the implicit constant depending only on $q$, as in Proposition~\ref{p:egorov_graphs}. We then take $T = T_k = \min\left\{ \frac{R_k}{4}, -\frac{\log \alpha_k}{16 \log q} \right\}$. \end{proof} \section{Proof of the main estimate for graphs}\label{main estimate graphs} Let $\mathfrak{X}$ be the $q+1$ regular tree, and $G = \mathfrak{X}/\Gamma$ be any finite $q+1$-regular graph (where $\Gamma$ is a discrete subroup of the group of automorphisms of the tree). We will denote by $d(x,y)$ the geodesic distance on the tree. We will also assume that when the symbol $\lesssim$ is used without further indication, the implicit constant depends only on $q$. Proposition~\ref{p:egorov_graphs} (and later Proposition~\ref{p:egorov}) is based on the following propagation lemma; see \cite{BL} or \cite{jointQmodes} for a proof. \begin{lemma}\label{estimate} Fix a point $0 \in \mathfrak{X}$ and write $|x| = d(0,x)$ for any $x \in \mathfrak{X}$. Let $\delta_0$ be the $\delta$-function supported at $0$, and $n$ a positive even integer. Then \begin{eqnarray*} P_n(T_q/2)\delta_0(x) & = & \left\{ \begin{array}{ccc} 0 & \quad & |x| \text{ odd } \quad \text{or} \quad |x|>n\\ \frac{1-q}{2q^{n/2}} & \quad & |x|<n \quad \text{and} \quad |x| \text{ even } \\ \frac{1}{2q^{n/2}} & \quad & |x| = n \end{array}\right. \end{eqnarray*} \end{lemma} In what follows, we will denote simply by $P_n$ the operator $P_n(T_q/2)$. The set $D$ will be a fundamental domain of $G$ on the tree $\mathfrak{X}$. We see any function $a$ on the graph $G$ as a $\Gamma$-invariant function on the tree. We denote by $K_n(x,y)$ the kernel of the operator $P_n a P_n$ on the tree. This kernel satisfies the invariance relation $$ \forall \gamma \in \Gamma \quad K_n(\gamma \cdot x, \gamma \cdot y) = K_n(x,y) $$ and it therefore defines an operator $T_n$ on the graph $G$, whose kernel is given by $$ \tilde{K}_n(x,y) = \sum_{\gamma \in \Gamma} K_n(x,\gamma \cdot y) $$ We denote by $K_T(x,y)$ the kernel of $A_T = \frac1T \sum_{n=0}^T P_{2n} a P_{2n}$, and by $\tilde{K}_T$ the corresponding kernel on the graph. Let us first give an explicit expression of the kernel $K_T$. For this purpose we define the sets \[E_{j,k} = E_{j,k}(x,y) = \{ z: d(x,z) = j, d(y,z) = k \}.\] We find from Lemma~\ref{estimate} that \begin{align}\label{e:K2n} K_{2n}(x,y) &= \frac1{4q^{2n}} \sum_{z\in E_{2n,2n}} a(z) \\ &\quad + \frac{(1-q)}{4q^{2n}} \sum_{j=0}^{n-1}\left( \sum_{z\in E_{2j,2n}} a(z) + \sum_{z\in E_{2n,2j}} a(z) \right) \nonumber \\ &\quad + \frac{(1-q)^2}{4q^{2n}} \sum_{j,k=0}^{n -1} \sum_{z\in E_{2j,2k}} a(z). \nonumber \end{align} Note that this kernel is equal to $0$ whenever $d(x,y)$ is odd or $d(x,y) > 4n$. We then have $K_T = \frac1T \sum_{n=0}^T K_{2n}$. We want to evaluate the Hilbert-Schmidt norm of this operator on the graph, that is $$ \norm{A_T}_{HS}^2 = \sum_{x,y \in D} |\tilde{K}_T(x,y)|^2. $$ We first separate the points with small and large radius of injectivity. Define \begin{equation*} A_T' f(x) = \left\{ \begin{array}{ll} A_T f(x) & \text{if } \rho(x) > 4T \\ 0 & \text{otherwise.} \end{array} \right. \end{equation*} We then have \begin{lemma} $$ \|A_T\|_{HS}^2 \leq \| A_T' \|_{HS}^2 + q^{8T} |\{x\in D, \rho(x) \leq 4T\}| \| a \|_\infty^2 $$ \end{lemma} \begin{proof} We have $$ \| A_T - A'_T\|_{HS}^2 = \frac1{T^2} \sum_{\substack{x,y\in D\\ \rho(x) \leq 4T}} \left| \sum_{\gamma\in \Gamma} \sum_{n=0}^T K_{2n}(x,\gamma\cdot y) \right|^2 $$ and there are at most $q^{4T}$ terms in the sum over $\Gamma$. So we have by Cauchy-Schwarz inequality, and using also the fact that $K_{2n}(x,y) = 0$ when $d(x,y) > 4T \geq 4n$, \begin{align*} \| A_T - A'_T\|_{HS}^2 &\leq \frac1{T^2} q^{4T} \sum_{\substack{x,y\in D\\ \rho(x) \leq 4T}}\sum_{\gamma \in \Gamma} \left| \sum_{n=0}^T K_{2n}(x,y) \right|^2\\ &\leq \frac1{T^2} q^{4T} \sum_{\substack{x\in D\\ \rho(x) \leq 4T}}\sum_{\substack{y \in \mathfrak{X} \\ d(x,y) \leq 4T}} \left| \sum_{n=0}^T K_{2n}(x,y) \right|^2. \end{align*} We then use the fact that $\sup_{x,y} K_{2n}(x,y) \leq \| a \|_\infty$ as is clear from \eqref{e:K2n} to obtain $$ \| A_T - A'_T\|_{HS}^2 \leq q^{8T} |\{x\in D, \rho(x) \leq 4T\}| \| a \|_\infty^2$$ \end{proof} We can now restrict our attention to the operator $A_T'$. We write $$ A_T' = \frac1T \sum_{n=0}^T \frac1{q^{2n}} \sum_{l=0}^{2n}\sum_{j,k=0}^n c(j,k) \tilde{S}_{2j,2k,2l}, $$ or, interchanging the sums in $l$ and $n$, which will be useful later: \begin{equation}\label{e:AT} A_T' = \frac1T \sum_{l=0}^{2T} \sum_{n=\lceil l/2 \rceil}^{T} \frac1{q^{2n}} \sum_{j,k=0}^n c(j,k) \tilde{S}_{2j,2k,2l}, \end{equation} where $\tilde{S}_{2j,2k,2l}$ is defined as follows. We first define the operator $S_{2j,2k,2l}$, acting on functions of the tree, by its kernel $$ [S_{2j,2k,2l}](x,y) = \mathbf{1}_{\{d(x,y)=2l \}} \sum_{z\in E_{2j,2k}(x,y)} a(z). $$ It gives an operator on the graph, that we restrict to the points $x$ such that $\rho(x) > 4T$ in order to obtain $\tilde{S}_{2j,2k,2l}$: the kernel on the graph is, for $x,y \in D$ $$ [\tilde S_{2j,2k,2l}](x,y) = \mathbf{1}_{\{\rho(x) > 4T\}} \sum_{\gamma \in \Gamma} [S_{2j,2k,2l}](x,\gamma \cdot y). $$ Note that the constants $c(j,k)$ depend only on $q$. The proof of Proposition \ref{p:egorov_graphs} then follows from the following estimate: \begin{lemma}\label{l:Sop} $$ \| \tilde{S}_{2j,2k,2l} \|_{HS} \lesssim q^{(k+j)} e^{-\frac{\beta}2 (k+j-l)} \|a \|_2 $$ \end{lemma} Indeed, let us first notice that for $l\neq l'$, the operators $\tilde{S}_{2j,2k,2l}$ and $\tilde{S}_{2j',2k',2l'}$ are orthogonal with respect to the Hilbert-Schmidt norm. We deduce from this fact and the expression \eqref{e:AT} that \begin{align*} \|A_T' \|^2_{HS} &= \frac1{T^2} \sum_{l=0}^{2T} \left\| \sum_{n=\lceil l/2 \rceil}^T \frac1{q^{2n}} \sum_{j,k=0}^n c(j,k) \tilde S_{2j,2k,2l} \right\|_{HS}^2 \\ &\lesssim \frac1{T^2} \sum_{l=0}^{2T} \left( \sum_{n=\lceil l/2 \rceil}^T \frac1{q^{2n}} \sum_{j,k=0}^n \| \tilde{S}_{2j,2k,2l} \|_{HS} \right)^2 \\ &\lesssim \frac1{T^2} \sum_{l=0}^{2T} \left( \sum_{n=\lceil l/2 \rceil}^T \frac1{q^{2n}} \sum_{j,k=0}^n q^{(k+j)} e^{-\frac{\beta}2 (k+j-l)} \|a\|_2 \right)^2 \\ &\lesssim \frac1{T^2} \sum_{l=0}^{2T} \left( \sum_{n=\lceil l/2 \rceil}^T e^{-\beta(n-l/2)} \|a\|_2 \right)^2 \\ &\lesssim \frac{ \|a\|^2_2}{\beta^2 T}. \end{align*} Let us now prove Lemma \ref{l:Sop}. Recall that the kernel of $S_{2j,2k,2l}$ on the tree is given by $$ \mathbf{1}_{\{d(x,y)=2l \}} \sum_{z\in E_{2j,2k}(x,y)} a(z). $$ In order to be able to work with this expression, we will consider the \emph{arc graph}, defined as follows. For every directed edge $a$ of $G$ we denote by $a^+$ the target of $a$, by $a^-$ the source of $a$ and by $\bar a$ the reversal of $a$. The arc graph $G'$ is then the directed graph whose vertices are the directed edges, or arcs of $G$, and such that there is an edge from the arc $a$ to the arc $b$ when $a^+ = b^-$ and $a \neq \bar b$. We will also see the arc graph $G'$ as a quotient of the arc tree $\mathfrak{X}'$ by the group $\Gamma$ acting on edges. We then define the averaging operator (or normalized adjacency matrix) $T_q'$ on $G'$ for any function $F$ on $G'$ by $$T_q' F(a) = \frac1q \sum_{\substack{b^- = a^+ \\ b \neq \bar a}} F(b).$$ We also define the maps $B,E : L^2(G) \to L^2(G')$ by \begin{equation}\label{e:beginend} Bf(a) = f(a^-), Ef(a) = f(a^+) \end{equation} \begin{lemma}\label{l:edgegap} For every $k \geq 1$, we have $$ \| (T_q')^k \| \lesssim e^{-\beta k}$$ with an implicit constant depending only on $q$. \end{lemma} \begin{proof} The idea is to find an orthonormal basis in which the powers of $T_q'$ are simple to study. The space $L^2(G')$ can be divided into the direct sum of $\text{Im}(B)\oplus\text{Im}(E)$ and its orthogonal complement, the latter being the space of functions $F$ such that for any $v \in G$ $$ \sum_{a^- = v} F(a) = \sum_{a^+=v} F(a) = 0.$$ On this space the action of $T_q'$ is given by $$ T_q'F(a) = - \frac{1}{q} F(\bar a). $$ We can then decompose $\text{Im}(B)\oplus\text{Im}(E)$ using an orthonormal basis of $T_q$. For every eigenfunction $w \in L^2(G)$ with $T_q$-eigenvalue $\lambda$, the space $\mathbb{C}(Bw) + \mathbb{C}(Ew)$ is stable under $T_q'$ and the action of $T_q'$ is given by the matrix \begin{equation*} \begin{pmatrix} 0 & -\frac{1}{q} \\ 1 & \frac{\lambda}{\sqrt{q}} \end{pmatrix} \end{equation*} Notice that we have the conjugation \begin{equation*} \begin{pmatrix} q^{1/4} & 0 \\ 0 & q^{-1/4} \end{pmatrix} \begin{pmatrix} 0 & -\frac{1}{q} \\ 1 & \frac{\lambda}{\sqrt{q}} \end{pmatrix} \begin{pmatrix} q^{-1/4} & 0 \\ 0 & q^{1/4} \end{pmatrix} = \frac1{\sqrt{q}} \begin{pmatrix} 0 & -1 \\ 1 & \lambda \end{pmatrix}, \end{equation*} so, up to a constant depending only on $q$ it is enough to study the matrix on the right-hand side of the previous equality. We have \begin{equation}\label{e:chebyshevmatrix} q^{-n/2} \begin{pmatrix} 0 & -1 \\ 1 & \lambda \end{pmatrix}^n = q^{-n/2} \begin{pmatrix} -U_{n-2}(\lambda/2) & -U_{n-1}(\lambda/2) \\ U_{n-1}(\lambda/2) & U_n(\lambda/2) \end{pmatrix} \end{equation} where the $U_n$ are the Chebyshev polynomials of the second kind, $$U_n(\cosh\theta) = \sinh[(n+1)\theta]/\sinh\theta.$$ Because $|\lambda| \leq 2\cosh\left(\frac{\log q}{2} -\beta \right)$ the coefficients of \eqref{e:chebyshevmatrix} are bounded in absolute value by $e^{-\beta n}$ up to a constant depending only on $q$. \end{proof} We can now rewrite the kernel of $S_{2j,2k,2l}$ using the operator $T'_q$ with the help of the following lemma. \begin{lemma}\label{l:Tq sum} Let $ 0 \leq j,k \leq n$. Let $x \in D$ and $y \in \mathfrak{X}$ such that $l = d(x,y)/2$ satisfies $|k-j| \leq l \leq k+j$. Denote by $w$ the vertex of the segment $[x,y]$ such that $d(x,w) = l - (k-j)$ and $d(y,w) = l + (k-j)$ in the case $k\geq j$, or $d(x,w) = l + (k-j)$ and $d(y,w) = l - (k-j)$ in the case $j \geq k$. Then \begin{equation}\label{e:vertextoedge} \sum_{z\in E_{2j,2k}} a(z) = \sideset{}{'}\sum_{e^- = w} q^{k+j-l} (T'_q)^{k+j-l} B a (e) \end{equation} where the sum on the right-hand side runs over the edges which contain only one vertex on the segment $[x,y]$. \end{lemma} We will also need the following lemma whose proof is clear. \begin{lemma}\label{l:edgenorm} Let $k \leq 2l$ be fixed, and for any $x \in D$ and $y \in \mathfrak X$ such that $d(x,y) = 2l$, let $w$ be the vertex of the segment $[x,y]$ such that $d(x,w) = k$. $$ \sum_{\substack{x\in D\\ \rho(x) > 4T}} \sum_{\substack{y\in \mathfrak{X}\\ d(x,y)=2l}} \sum_{\substack{e\in\mathfrak{X}' \\ e^- = w}} |f (e)| \lesssim q^{2l} \sum_{e \in D'} | f (e)| $$ where $D'$ is a fundamental domain for the action of $\Gamma$ on $\mathfrak{X}'$. \end{lemma} \begin{proof}[Proof of Lemma \ref{l:Sop}] We use Lemma \ref{l:Tq sum} and Lemma \ref{l:edgenorm} to write \begin{align*} \| \tilde S_{2j,2k,2l} \|_{HS}^2 & = \sum_{\substack{x\in D\\ \rho(x) > 4T}} \sum_{\substack{y\in \mathfrak X\\ d(x,y)=2l}} \left| \sideset{}{'}\sum_{e^- = w} q^{k+j-l} (T'_q)^{k+j-l} B a (e) \right|^2 \\ &\lesssim q^{2(k+j-l)} q^{2l} \sum_{e \in D'} \left| (T'_q)^{k+j-l} B a (e) \right|^2. \end{align*} Then Lemma \ref{l:edgegap} gives $$ \| \tilde S_{2j,2k,2l} \|_{HS}^2 \lesssim q^{2(k+j)} e^{-\beta(k+j-l)} \|a\|_2^2.$$ \end{proof} \section{The argument on the sphere}\label{sphere} We bring here the additional ingredients needed to adapt the methods of the previous sections to the setting of ${\mathbb{S}^2}$, in order to prove Theorems~\ref{t:spheremain} and \ref{t:spherealgebraic}. We begin by reviewing some harmonic analysis on ${\mathbb{S}^2}$ that we will need; further details can be found in eg. \cite{Hobson, Szego}. \subsection{Some Harmonic Analysis on ${\mathbb{S}^2}$}\label{s:harmonic analysis} Eigenfunctions of the Laplacian on ${\mathbb{S}^2}$ can be realized as restrictions to ${\mathbb{S}^2}$ of harmonic polynomials in $\mathbb{R}^3$. The Laplace eigenvalues are given by $s(s+1)$ for $s\in\mathbb{N}$, and the corresponding eigenspace $\mathcal{H}_s$ of spherical harmonics has dimension $2s+1$. There is a unique $L^2$-normalized function in each $\mathcal{H}_s$ that is radial with respect to the North pole $z=0$. We call it $Y_s$ and it is given in $(\theta,\phi)$ coordinates by $$Y_s(\theta, \phi)= Y_s(\theta) = \sqrt{\frac{2s+1}{4\pi}} L_s(\cos\theta)$$ where $L_s$ is the {\bf Legendre polynomial} of degree $s$. In fact, an orthonormal basis of the eigenspace $\mathcal{H}_s$ is given by $$Y_s^m(\theta, \phi)=(-1)^m\sqrt{\frac{2s+1}{4\pi}\frac{(s-m)!}{(s+m)!}}L_s^m(\cos\theta)e^{im\phi}$$ as $m$ runs over $-s\leq m\leq s$, where $L_s^m$ is the {\bf associated Legendre polynomial} of degree $s$ and order $m$. We also define the {\bf zonal spherical harmonics} $Z^{(s)}_z$ to be the unique element of $\mathcal{H}_s$ that is radial with respect to $z\in{\mathbb{S}^2}$, and is normalized by $$Z_0^{(s)} (\theta, \phi) = \sqrt{\frac{2s+1}{4\pi}}Y_s (\theta, \phi) = \frac{2s+1}{4\pi} L_s(\cos\theta)$$ With this normalization, $Z_z^{(s)}$ has the reproducing property on $\mathcal{H}_s$ \begin{equation}\label{reproducing} \langle Z_z^{(s)}, \psi\rangle = \psi(z) \end{equation} for any $\psi\in\mathcal{H}_s$. The zonal spherical harmonics can be expressed in terms of the $Y_s^m$--- indeed, in terms of any orthonormal basis of $\mathcal{H}_s$--- by $$Z_z^{(s)}(y) = \sum_{m=-s}^s Y_s^m(z)\overline{Y_s^m(y)}$$ We will also make use of the estimate \cite[Theorem 7.3.3]{Szego} \begin{equation}\label{legendre decay} |L_s(\cos\theta)| < \frac{1}{\sqrt{s \sin\theta}} < \frac{2}{\sqrt{s \theta}} \end{equation} for $0\leq \theta\leq \frac{\pi}{2}$, which implies \begin{equation}\label{e:Zest} |Z_z^{(s)}(y)| \lesssim\sqrt{\frac{s}{d_{\mathbb{S}^2} (z,y)}} \end{equation} for points $z,y$ in the same hemisphere. We can use this to estimate the inner product between two zonal spherical harmonics: \begin{lemma}\label{inner product z z'} Let $z,z'\in{\mathbb{S}^2}$. Then $$\left| \langle Z_z^{(s)}, Z_{z'}^{(s)}\rangle\right| < 2 \sqrt{\frac{s}{{d_{\sph}^{\pm}(z,z')}}}$$ where $d_{\sph}^{\pm}(z,z') := \min\{d_{\sph}(z,z'), d_{\sph}(z,-z') \}$ is the distance from $z$ to either $z'$ or it's antipodal point $-z'$, whichever is closer (i.e. whichever is in the same hemisphere as $z$). \end{lemma} \begin{proof} By symmetry, we may apply a rotation and assume that $z'=0$ is the north pole, whereby our inner product becomes $$\left|\langle Z_z^{(s)}, Z_0^{(s)}\rangle\right| $$ Since Legendre polynomials are either even or odd, the zonal spherical harmonics are either even or odd with respect to antipodal reflection, and so since we are taking absolute value of the inner product we may assume that $z$ is in the upper hemisphere; i.e. that $$0\leq d_{\sph}(z,0) =d_{\sph}^{\pm}(z,0) \leq \frac{\pi}{2}$$ Since $Z_0^{(s)}\in\mathcal{H}_s$, the reproducing property (\ref{reproducing}) implies, together with \eqref{e:Zest}, that $$\langle Z_z^{(s)}, Z_0^{(s)}\rangle = Z_0^{(s)}(z) \lesssim \sqrt{\frac{s}{d_{\mathbb{S}^2} (z,0)}}.$$ \end{proof} \subsection{Quantum Ergodicity in $\mathcal{H}_s$}\label{s:QE Hs} We now turn to Theorems~\ref{t:spheremain} and \ref{t:spherealgebraic}. For each $x \in \mathbb{S}^2$, our set of rotations $g_1,g_2,\ldots g_N$ generates a regular combinatorial tree, embedded (non isometrically) in ${\mathbb{S}^2}$, that we will denote by $$\mathfrak{X}(x) = \{g\cdot x : \, \, g\in \,\, \langle g_1, g_2, \ldots, g_N \rangle \}$$ If $x$ and $y$ are two vertices of the same tree (that is, if $\mathfrak{X}(x) = \mathfrak{X}(y)$), we will denote by $d(x,y)$ the geodesic distance on the tree between $x$ and $y$, meaning the length of the shortest path in $\mathfrak{X}(x)$ between the two vertices. As above, the geodesic distance on the sphere between two points $z_1$ and $z_2$ will be denoted by $d_{\sph}(z_1,z_2)$. The Laplacian commutes with isometries, and thus with the action of the operator $T_q$ defined in (\ref{Tq}) averaging over this generating set of rotations; thus $T_q$ acts on each Laplace eigenspace $\mathcal{H}_s$. Similarly to Section \ref{qe on graphs} we will say that $T_q$ acting on a subspace of $L^2({\mathbb{S}^2})$ has a spectral gap $\beta$ if the spectrum of ${T_q}$ for the action on this subspace is included in \begin{equation}\label{e:gapsphere} \left[ - 2\cosh\left(\frac{\log q}2 -\beta \right), 2\cosh\left(\frac{\log q}2 -\beta \right) \right] \cup \left\{ 2\cosh\left(\frac{\log q}2\right) \right\}. \end{equation} We fix once and for all an orthonormal basis $\{\psi_j^{(s)}\}_{j=1}^{2s+1}$ of each $\mathcal{H}_s$, consisting of $T_q$-eigenfunctions $T_q\psi_j^{(s)} = \lambda(s,j)\psi_j^{(s)}$. For any $r \in \mathbb{N}$, we denote by $S_r(x)$ the sphere of radius $r$ in the tree $\mathfrak{X}(x)$, and by $B_r(x)$ the ball of radius $r$. For $n\in\mathbb{N}$, recall that $P_n$ is the sequence of Chebyshev polynomials of the first kind, defined by the relation $$ P_n(\cos\theta) = \cos(n\theta). $$ We now fix a test function $a \in L^2({\mathbb{S}^2})$. For the proof of Theorem~\ref{t:spheremain} we will take $a$ to be a linear combination of spherical harmonics. The corresponding subspace being of finite dimension, the existence of a spectral gap $\beta_a$ on this subspace is automatic and will depend on $a$. We then extend the result to any continuous test function $a$, by using the fact that $a$ can be uniformly approximated by finite linear combinations of spherical harmonics. For the proof of Theorem~\ref{t:spherealgebraic} we take $a \in L^\infty({\mathbb{S}^2})\subset L^2({\mathbb{S}^2})$. The additional condition that the rotations have algebraic entries guarantees that $T_q$ will have a spectral gap on all of $L^2({\mathbb{S}^2})$ by \cite{BG}. For simplicity we will finally assume in both cases, without loss of generality, that $a \in L^2_0({\mathbb{S}^2})$, that is $\int_{\mathbb{S}^2} a \, d\sigma = 0$. We identify the test function $a$ with the operator of multiplication by $a$, and we are interested in the following operator acting on functions of the sphere: \begin{equation}\label{e:timeaverage} \frac1{T} \sum_{n=1}^T P_{2n}(T_q/2) a P_{2n}(T_q/2). \end{equation} In order to work inside $\mathcal{H}_s$, we use the operator of convolution with $Z_0^{(s)}$. To wit, we write the kernel of $P_{2n}(T_q/2) a P_{2n}(T_q/2)$ as the function $K_{2n}(x,y)$ for $x \in {\mathbb{S}^2}$ and $y \in \mathfrak{X}(x)$, such that any $u \in L^2({\mathbb{S}^2})$ satisfies, $$P_{2n}(T_q/2) a P_{2n}(T_q/2)u(x) = \sum_{y \in \mathfrak{X}(x)} K_{2n}(x,y)u(y).$$ We then define a kernel $[K_{2n}\ast Z^{(s)}](x,y)$ for every $x,y \in {\mathbb{S}^2}$ by $$ [K_{2n}\ast Z^{(s)}](x,y) = \sum_{z\in \mathfrak{X}(x)} Z^{(s)}_z(y) K_{2n}(x,z), $$ where $Z^{(s)}_z$ is the zonal spherical harmonic of degree $s$ centered at $z\in\mathbb{S}^2$ defined in Section \ref{s:harmonic analysis}. Note that $K_{2n}\ast Z^{(s)}$ is the kernel of the operator $$P_{2n}(T_q/2)aP_{2n}(T_q/2)\tilde{Z}^{(s)}$$ on ${\mathbb{S}^2}$, where $\tilde{Z}^{(s)}$ is the operator of convolution with the zonal spherical harmonic $Z_0^{(s)}$ of degree $s$, in the sense that $$[P_{2n}(T_q/2)aP_{2n}(T_q/2)\tilde{Z}^{(s)}u](x) = \int_{\mathbb{S}^2} [K_{2n}\ast Z^{(s)}](x,y)u(y)d\sigma(y) $$ Since $\tilde{Z}^{(s)}$ acts trivially on $\mathcal{H}_s$ by the reproducing property (\ref{reproducing}), we have equality of the restrictions \begin{equation}\label{equality on H_s} P_{2n}(T_q/2)aP_{2n} (T_q/2)\Big|_{\mathcal{H}_s} = P_{2n}(T_q/2)aP_{2n} (T_q/2)\tilde{Z}^{(s)}\Big|_{\mathcal{H}_s} \end{equation} Indeed, $\tilde{Z}^{(s)}$ is nothing more than the orthogonal projection to $\mathcal{H}_s$. \bigskip Now, each rotation of ${\mathbb{S}^2}$ fixes two antipodal points, and we consider the set of points $\mathcal{F}_{16T}\subset {\mathbb{S}^2}$ fixed by a word of length $\leq 16T$ in the generators, where $T$ corresponds to the parameter of the same name in \eqref{e:timeaverage}. For every $x\in \mathcal{F}_{16T}$, we take the neighborhood (in the sphere metric) of radius $s^{-1/4}$, and define the union of these balls to be $$\mathcal{E}_s(16T) := \bigcup_{x\in \mathcal{F}_{16T}} B_{\mathbb{S}^2} (x, s^{-1/4})$$ These are the points $x$ that are ``very close" (we think of $s$ as being large) to one of their images in $B_{16T}(x)$. Note that $x\notin \mathcal{E}_s(16T)$ implies that $B_{4T}(x)\cap\mathcal{E}_s(8T)=\emptyset$; i.e. any point $g\cdot x\in B_{4T}(x)$ cannot be $s^{-1/4}$-close to a fixed point of a word $h$ of length $\leq {8T}$, since then $x$ would have to be $s^{-1/4}$- close to a fixed point of $g^{-1}hg$. We will require the following on the parameters $T$ and $s$: For any $x\notin\mathcal{E}_s(16T)$, and $z\in B_{4T}(x)$, we have \begin{equation}\label{e:Tcondition} \left\{ \begin{array}{ll} B_{\mathbb{S}^2}(z, s^{-1/2})\cap B_{4T}(x) & = \{z\} \\ B_{\mathbb{S}^2}(-z, s^{-1/2})\cap B_{4T}(x) & = \emptyset \end{array} \right. \end{equation} The free group assumption guarantees that for all $T$ we can find $s_0$ large enough so that the condition is satisfied for any pair $T, s$ with $s \geq s_0$. Condition \eqref{e:Tcondition} allows us to apply the methods of Sections~\ref{qe on graphs} and \ref{main estimate graphs} to the kernel $K\ast Z^{(s)}$ on the sphere (see Lemma~\ref{l:HSnorm}). In the case of algebraic rotations, it is sufficient to take $T < c\log{s}$ for some small constant $c>0$ depending on our generating set of rotations, as is shown in the following lemma. \begin{lemma}\label{l:algebraicity and discreteness} There exists a constant $c>0$ depending only on the generating set $\{g_1,\ldots, g_N\}$, such that for all $T < c\log s$, condition \eqref{e:Tcondition} is satisfied for all $x\notin\mathcal{E}_s(16T)$ and $z\in B_{4T}(x)$. \end{lemma} \begin{proof} Let $K$ be a finite extension of $\mathbb{Q}$ containing all entries of the matrices $g _ 1, \dots, g _ N$. Recall the notion of (logarithmic) height of an algebraic number $\alpha \in K$ from e.g.~\cite[Ch.~1]{Bombieri-Gubler}. What is important for us is that it is a nonnegative real number measuring the complexity of nonzero $\alpha \in K$ (e.g. for a rational number given in lowest terms by $p/q$ we have that $h(p/q)= \log \max (p,q)$) with the following properties: \begin{enumerate}[label=(\alph*)] \item $h(\alpha \beta) \leq h (\alpha) + h (\beta)$ \item $h (\alpha + \beta) \leq h (\alpha) + h (\beta) + \log 2$ (cf. \cite[\S1.5.16]{Bombieri-Gubler}) \item for any embedding $\iota: K \hookrightarrow \mathbb{C}$ and any algebraic number $\alpha \neq 0$ we have that $\abs {\iota (\alpha)} \geq e^{-[K:\mathbb{Q}]h(\alpha)}$ (cf. \cite[\S1.5.19]{Bombieri-Gubler}). \item $h (\alpha ^{-1}) = h (\alpha)$ \end{enumerate} It follows that if we set for $g \in \SO (3, \overline{\mathbb{Q}})$ the height $h (g)$ of $g$ to be the maximum of the heights of its coordinates then for any $g _ 1, g _ 2 \in \SO (3, \overline {\mathbb{Q}})$ \begin{align*} h (g _ 1 g _ 2) &\leq h (g _ 1) + h (g _ 2) + O (1) \\ h (g _ 1 ^{-1}) &\leq O (h (g _ 1) + 1) .\end{align*} Thus if $w (g _ 1, \dots, g _ N)$ is any word of length $\ell$ in the given generating set \begin{equation*} h (w (g _ 1, \dots, g _ N)) \ll \ell (1+\max (h (g_1) ,\dots, h (g _ N )) .\end{equation*} Since $g _ 1, \dots, g _ N$ generate a free group, it follows from the above and the basic property (c) of heights that for any reduced word $w$ of length $\ell = 8T$ if $g=w(g_1, \dots g _ N)$ then $\norm {g-1} \geq M^{-8T}$ for some $M$ depending only on the generating set, in other words $g$ is a rotation of angle $\theta$ around its axis with $\abs {\theta} \geq M ^ {-8 T}$. Choosing $c$ small enough, we can guarantee that $$M^{-8T} > M^{-8c\log{s}} = s^{8c\log{M}}> s^{-1/4}$$ and thus each word of length $\leq 8T$ in the generators is a rotation through an angle $\geq s^{-1/4}$. This means that for any $x\notin \mathcal{E}_s(16T)$, and any pair of distinct points $z,z' \in B_{4T}(x)$ --- which implies $z,z' \notin \mathcal{E}_s(8T)$--- there exists a non-trivial word $g$ of length at most $8T$ in the generators such that $g.z=z'$. By the above argument, this means that $g$ must be a rotation through an angle of at least $\geq s^{-1/4}$, which for points $z\notin \mathcal{E}_s(8T)$ means that $d_{\mathbb{S}^2}(z,g.z) \gtrsim s^{-1/2}$. Since this is true for all $z\neq z'\in B_{4T}$, the first part of (\ref{e:Tcondition}) follows after adjusting $c$ to absorb the implied constant. For the second part of (\ref{e:Tcondition}), simply observe that if there exists a rotation $g$ such that $d_{\mathbb{S}^2}(-z, g.z)$ is small, then $d_{\mathbb{S}^2}(z, g^2.z) = 2d_{\mathbb{S}^2}(-z,g.z)$; and so by further reducing the constant $c$, we may incorporate the second part of (\ref{e:Tcondition}) as well. \end{proof} \medskip We can now state the following central estimate. It will be proved in Section~\ref{main estimate sphere}. \begin{proposition}\label{p:egorov} Let $T,s > 0$ satisfying condition \eqref{e:Tcondition}. Let $a \in H \subset L^2({\mathbb{S}^2})$, where $H$ is a subspace closed under the action of $T_q$, and such that ${T_q}{\restriction_{H}}$ has a spectral gap $\beta_H$, as defined in \eqref{e:gapsphere}. Then $$\left\| \frac1T \sum_{n=1}^T P_{2n}(T_q/2) a P_{2n}(T_q/2)\tilde{Z}^{(s)} \right\|_{HS}^2 \lesssim \frac{s}{\beta_H^2 T} \|a\|_2^2 + s^{1/2} q^{16T} \|a\|_\infty^2,$$ with an implied constant depending only on $q$. \end{proposition} \vspace{.1in} We will now prove Theorems \ref{t:spheremain} and \ref{t:spherealgebraic} using this estimate. We have the general lemma \begin{lemma}\label{l:HSQE} For any function $a \in L^2({\mathbb{S}^2})$, $$ \sum_{j=1}^{2s+1} |\langle \psi^{(s)}_j, a \, \psi^{(s)}_j \rangle|^2 \lesssim \left\| \frac1T \sum_{n=0}^T P_{2n} a P_{2n}\tilde{Z}^{(s)} \right\|_{HS}^2, $$ where the implied constant is absolute. \end{lemma} \begin{proof} By Lemma~\ref{l:unitarity} and the equality of operators (\ref{equality on H_s}) on $\mathcal{H}_s$, we have as in the end of Section~\ref{qe on graphs} \begin{align*} \sum_{j=1}^{2s+1} |\langle \psi^{(s)}_j, a \psi^{(s)}_j \rangle|^2 &\lesssim \sum_{j=1}^{2s+1} \left|\frac{1}{T} \sum_{k=1}^T \cos^2 (2n\theta)\right|^2|\langle \psi^{(s)}_j, a \psi^{(s)}_j \rangle|^2 \\ &\lesssim \sum_{j=1}^{2s+1} \left|\left\langle \psi^{(s)}_j, \frac1T \sum_{n=0}^T P_{2n} a P_{2n} \psi^{(s)}_j \right\rangle\right|^2 \\ &\lesssim \sum_{j=1}^{2s+1} \left|\left\langle \psi^{(s)}_j, \frac1T \sum_{n=0}^T P_{2n} a P_{2n} \tilde{Z}^{(s)}\psi^{(s)}_j \right\rangle\right|^2 \\ &\lesssim \left\| \frac1T \sum_{n=0}^T P_{2n} a P_{2n}\tilde{Z}^{(s)} \right\|_{HS}^2 \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{t:spheremain}] Let $k \in \mathbb{N}$, $s_1,\ldots,s_k \in \mathbb{N} \setminus \{0\}$, and $$ H = \bigoplus_{i=1}^k \mathcal H_{s_i}. $$ Notice that $H$ is closed under the action of $T_q$ and that ${T_q}{\restriction_{H}}$ has no eigenvalues equal to $\pm 2\cosh(\log q / 2)$. The space $H$ being finite dimensional, ${T_q}{\restriction_{H}}$ has a finite number of eigenvalues and thus has trivially a spectral gap $\beta_H$ as defined in Section \ref{s:QE Hs}. We first prove the theorem for a test function $a \in H \subset L^\infty({\mathbb{S}^2})$ (as $H$ is orthogonal to $\mathcal H_{0}$ we have $\int_{\mathbb{S}^2} a \, d\sigma = 0$). Combining Lemma~\ref{l:HSQE} and Proposition~\ref{p:egorov} we get \begin{align*} \frac1{2s+1}\sum_{j=1}^{2s+1} |\langle \psi^{(s)}_j, a \psi^{(s)}_j \rangle|^2 &\lesssim \frac{1}{\beta_H^2 T} \|a\|_2^2 + s^{-1/2} q^{16T} \|a\|_\infty^2. \end{align*} We fix $T$ and use the free group assumption to find $s_0$ such that \eqref{e:Tcondition} is satisfied for the pair $T, s$ with $s \geq s_0$. We have $$ \lim_{s \to +\infty} \frac1{2s+1} \sum_{j=1}^{2s+1} |\langle \psi^{(s)}_j, a \psi^{(s)}_j \rangle|^2 \lesssim \frac{1}{\beta_H^2 T} \|a\|_2^2. $$ We then use the fact that we can find a sequence of test functions $a_n$ such that $$ \|a_n - a \|_\infty \to 0,$$ by density of finite linear combinations of spherical harmonics in the space of continuous function for the uniform convergence, to extend the conclusion to the case of a general continuous test function. \end{proof} \begin{proof}[Proof of Theorem \ref{t:spherealgebraic}] In this case we assume that the rotations are algebraic. We can thus take a general test function $a \in L^2({\mathbb{S}^2})$, then \cite{BG} tells us $T_q$ has a spectral gap as an operator acting on this space. We assume in addition that $a\in L^\infty({\mathbb{S}^2})$ and $\int_{\mathbb{S}^2} a \, d\sigma = 0$. Combining Lemma~\ref{l:HSQE} and Proposition~\ref{p:egorov} we get \begin{align*} \frac1{2s+1} \sum_{j=1}^{2s+1} |\langle \psi^{(s)}_j, a \psi^{(s)}_j \rangle|^2 &\lesssim \frac{1}{\beta^2 T} \|a\|_2^2 + s^{-1/2} q^{16T} \|a\|_\infty^2. \end{align*} To finish the proof, we take $T = c_0 \log s$ with $c_0 < c $ (where the constant $c$ is given by Lemma \ref{l:algebraicity and discreteness}), so that condition \eqref{e:Tcondition} is satisfied, and $s^{-1/2} q^{16T} < s^{-\epsilon}$ for some $\epsilon > 0$. \end{proof} \section{Proof of the main estimate on the sphere}\label{main estimate sphere} Here we prove Proposition~\ref{p:egorov}, assuming the rotations are algebraic. Recall that this assumption is used to get both a spectral gap for $T_q$ on very general spaces of test functions, and a quantitative relationship between the parameters $s$ and $T$. We use it in this section only for the quantitative aspect. The same ideas apply for general rotations generating a free group--- under the assumption assumption that $a$ is a finite linear combination of spherical harmonics--- except that we would need to replace all quantitative relationships between $s$ and $T$ with a more qualitative ``$s$ large enough with respect to $T$''. The next Lemma enables us to apply the techniques of Section~\ref{main estimate graphs} to our kernels on ${\mathbb{S}^2}$, outside of the small set $\mathcal{E}_s(16T)$ of ``bad" points: \begin{lemma}\label{l:HSnorm} For any $T,s$ satisfying condition \eqref{e:Tcondition}, and any function $K(x,y)$ defined on $\{ (x,y) \in {\mathbb{S}^2} \times {\mathbb{S}^2} \, | \, y \in \mathfrak X(x) \}$ such that $K(x,y)=0$ whenever $d(x,y)>{4T}$, we have \begin{align*} \int_{{\mathbb{S}^2}} |[K\ast Z^{(s)}](x,y)|^2 \, d\sigma(y) &\lesssim s \sum_{z\in\mathfrak{X}(x)} |K(x,z)|^2\cdot \left\{ \begin{array}{ll} 1 & x\notin \mathcal{E}_s(16T)\\ q^{4T} & x\in\mathcal{E}_s(16T) \end{array} \right. \end{align*} \end{lemma} \begin{proof} \begin{align*} |[K \ast Z^{(s)}](x,y)|^2 & = \sum_{z\in \mathfrak{X}(x)}Z_z^{(s)}(y)^2 | K(x,z) |^2 \\ & \quad + \sum_{z,z' \in \mathfrak{X}(x), z \neq z'} Z_z^{(s)}(y)Z_{z'}^{(s)}(y) \overline{K(x,z)} K(x,z') \end{align*} The first term in the sum is easily handled; the integral over $y$ is applied only to $Z_z^{(s)}(y)^2$, and we know that the $L^2$-norm of $Z_z^{(s)}$ is given by $$||Z_z^{(s)}||^2_2 = \frac{2s+1}{4\pi} \lesssim s$$ so that $$\int_{{\mathbb{S}^2}} \sum_{z\in \mathfrak{X}(x)}Z_z^{(s)}(y)^2 | K(x,z) |^2 d\sigma(y) \lesssim s\cdot \sum_{z\in\mathfrak{X}(x)} |K(x,z)|^2$$ We turn our attention to the second sum, and consider first the case where $x\notin\mathcal{E}_s(16T)$. By Condition \eqref{e:Tcondition}, for any point $z\in\mathfrak{X}(x)$ such that $d(x,z) \leq {4T}$ on the tree, the balls $B_{\mathbb{S}^2}(\pm z,s^{-1/2})$ do not contain any other point $z'\in\mathfrak{X}(x)$ such that $d(x,z') \leq {4T}$. Then by Lemma~\ref{inner product z z'}, the integral over $y$ gives $$\langle Z_z^{(s)}, Z_{z'}^{(s)}\rangle \lesssim s^{3/4}$$ for all terms in the second sum, and so the contribution to the integral (over $y$) of the second sum is estimated by \begin{eqnarray*} \sum_{z,z'\in \mathfrak{X}(x), z\neq z'} \langle Z_z^{(s)}, Z_{z'}^{(s)}\rangle\overline{K(x,z)}K(x,z') & \lesssim & s^{3/4} \sum_{z,z'\in \mathfrak{X}(x)} \overline{K(x,z)}K(x,z')\\ & \lesssim & s^{3/4} \left|\sum_{z\in\mathfrak{X}(x), d(x,z)\leq {4T}} K(x,z)\right|^2\\ & \lesssim & s^{3/4}q^{4T} \sum_{z\in\mathfrak{X}(x)} |K(x,z)|^2\\ & \lesssim & s\cdot \sum_{z\in\mathfrak{X}(x)} |K(x,z)|^2 \end{eqnarray*} if $c$ is sufficiently small so that $q^{4T} < s^{1/4}$. If $x\in\mathcal{E}_s(16T)$, then we estimate the inner product trivially by $$\langle Z_z^{(s)}, Z_{z'}^{(s)}\rangle \leq ||Z_z^{(s)}||_2||Z_{z'}^{(s)}||_2 \lesssim s$$ and apply the same calculation to get $$\int_{{\mathbb{S}^2}} |[K\ast Z^{(s)}](x,y)|^2 \, d\sigma(y) \lesssim sq^{4T} \sum_{y\in\mathfrak{X}(x)} |K(x,y)|^2$$ \end{proof} We now proceed as in the proof of Proposition~\ref{p:egorov_graphs} in Section~\ref{main estimate graphs}. In what follows, we will denote simply by $P_n$ the operator $P_n(T_q/2)$. The rest of the argument closely follows the proof on graphs from Section~\ref{main estimate graphs}. We denote by $K_T(x,y)$ the kernel of the operator $$A_T = \frac1T \sum_{n=0}^T P_{2n} a P_{2n} \tilde Z^{(s)}.$$ Let us first give an explicit expression for the kernel $K_{2n}$ of the operator $P_{2n}aP_{2n}$. For this purpose we define the sets \[E_{j,k} = E_{j,k}(x,y) = \{ z \in \mathfrak{X}(x): d(x,z) = j, d(y,z) = k \}.\] The kernel $K_{2n}(x,y)$ is then given by \begin{align}\label{e:sphere_kernel} K_{2n}(x,y) &= \frac1{4q^{2n}} \sum_{z\in E_{2n,2n}} a(z) \\ &\quad + \frac{(1-q)}{4q^{2n}} \sum_{j=0}^{n-1}\left( \sum_{z\in E_{2j,2n}} a(z) + \sum_{z\in E_{2n,2j}} a(z) \right) \nonumber \\ &\quad + \frac{(1-q)^2}{4q^{2n}} \sum_{j,k=0}^{n -1} \sum_{z\in E_{2j,2k}} a(z). \nonumber \end{align} This kernel is equal to $0$ whenever $d(x,y)$ is odd or $d(x,y) > 4n$. We then have $K_T = \frac1T \sum_{n=0}^T [K_{2n}\ast Z^{(s)}](x,y)$. We now separate the good and bad points. Define \begin{equation*} A_T' f(x) = \left\{ \begin{array}{ll} A_T f(x) & \text{if } x \notin \mathcal E_s(16T) \\ 0 & \text{otherwise.} \end{array} \right. \end{equation*} We then have \begin{lemma} $$ \|A_T\|_{HS}^2 \leq \| A_T' \|_{HS}^2 + O\left(s^{1/2} q^{16T} \|a\|_\infty^2\right) $$ \end{lemma} \begin{proof} We have $$ \| A_T - A'_T\|_{HS}^2 = \frac{1}{T^2} \int_{x\in\mathcal{E}_s(16T)}\int_{y\in{\mathbb{S}^2}} \left|\sum_{n=0}^T[K_{2n}\ast Z^{(s)}](x,y)\right| ^2 d\sigma(y)d\sigma(x) $$ We then use Lemma~\ref{l:HSnorm} to write \begin{align*} \| A_T - A'_T\|_{HS}^2 &\lesssim \frac{s q^{4T}}{T^2} \int_{x\in\mathcal{E}_s(16T)} \sum_{\substack{y \in \mathfrak{X}(x)\\ d(x,y) \leq 4T}} \left| \sum_{n=0}^T K_{2n}(x,y)\right|^2 \, d\sigma(x) \\ &\lesssim s q^{8T} \sigma(\mathcal{E}_s(16T)) \|a\|_\infty^2, \end{align*} where we used the fact that $\sup_{x,y} K_{2n}(x,y) \leq \| a \|_\infty$. We then notice that $\mathcal{E}_s(16T)$ is a union of $O(q^{16T})$ balls of radius $s^{-1/4}$ and obtain $$ \| A_T - A'_T\|_{HS}^2 \lesssim s^{1/2} q^{16T} \|a\|_\infty^2.$$ \end{proof} We can now restrict our attention to the operator $A_T'$. We write $$ A_T' = \frac1T \sum_{n=0}^T \frac1{q^{2n}} \sum_{l=0}^{2n}\sum_{j,k=0}^n c(j,k) S_{2j,2k,2l}, $$ where the operator $S_{2j,2k,2l}$ is defined by its kernel on points $x,y \in {\mathbb{S}^2}$ \begin{equation}\label{e:S kernel sphere} [S_{2j,2k,2l}](x,y) = \sum_{z\in\mathfrak X(x)} Z_z^{(s)}(y) K(x,z), \end{equation} and $K(x,y)$ is given by $$K(x,y) = \mathbf{1}_{\{x\notin \mathcal E_s(16T), \; y\in \mathfrak X(x), \; d(x,y)=2l \}} \sum_{z\in E_{2j,2k}(x,y)} a(z). $$ Interchanging the sums in $l$ and $n$, which will be useful later, we obtain \begin{equation}\label{e:AT sphere} A_T' = \frac1T \sum_{l=0}^{2T} \sum_{n=\lceil l/2 \rceil}^{T} \frac1{q^{2n}} \sum_{j,k=0}^n c(j,k) S_{2j,2k,2l}, \end{equation} The proof of Proposition \ref{p:egorov} will follow from the following estimate: \begin{lemma}\label{l:Sop sphere} $$ \| S_{2j,2k,2l} \|_{HS} \lesssim s^{1/2} \, q^{(k+j)} e^{-\frac{\beta}2 (k+j-l)} \|a \|_2 $$ \end{lemma} Indeed, the estimation of the Hilbert-Schmidt norm of $A'_T$ using this lemma is here exactly the same as in the case of graphs up to a factor $s$. We reproduce it for the convenience of the reader. Let us first notice that for $l\neq l'$, the operators $S_{2j,2k,2l}$ and $S_{2j',2k',2l'}$ are orthogonal with respect to the Hilbert-Schmidt norm. We deduce from this fact and the expression \eqref{e:AT sphere} that \begin{align*} \|A_T' \|^2_{HS} &= \frac1{T^2} \sum_{l=0}^{2T} \left\| \sum_{n=\lceil l/2 \rceil}^T \frac1{q^{2n}} \sum_{j,k=0}^n c(j,k) S_{2j,2k,2l} \right\|_{HS}^2 \\ &\lesssim \frac1{T^2} \sum_{l=0}^{2T} \left( \sum_{n=\lceil l/2 \rceil}^T \frac1{q^{2n}} \sum_{j,k=0}^n \| S_{2j,2k,2l} \|_{HS} \right)^2 \\ &\lesssim \frac{s}{T^2} \sum_{l=0}^{2T} \left( \sum_{n=\lceil l/2 \rceil}^T \frac1{q^{2n}} \sum_{j,k=0}^n q^{(k+j)} e^{-\frac{\beta}2 (k+j-l)} \|a\|_2 \right)^2 \\ &\lesssim \frac{s}{T^2} \sum_{l=0}^{2T} \left( \sum_{n=\lceil l/2 \rceil}^T e^{-\beta(n-l/2)} \|a\|_2 \right)^2 \\ &\lesssim \frac{ s \|a\|^2_2}{\beta^2 T}. \end{align*} We now want to prove Lemma~\ref{l:Sop sphere}. We will first write the kernel of the operators $S_{2j,2k,2l}$ in a more convenient way. We consider the space $L^2({\mathbb{S}^2}')$, where ${\mathbb{S}^2}' = \{(x,y) \in {\mathbb{S}^2} \times {\mathbb{S}^2}, \exists i \in \{1,\ldots,N\}, g_i\cdot x = y \}$ and the measure is given by $\sum_{i=1}^N d\sigma(x) \otimes \delta_{g_i \cdot x}$. We define the operator $$ T_q'F(x,y) = \frac1{q} \sum_{g_i \cdot y \neq x} F(y,g_i \cdot y). $$ We then have the following analog of Lemma \ref{l:edgegap}. \begin{lemma}\label{l:edgegapsphere} For every $k \geq 1$, we have $$ \| (T_q')^k \| \lesssim e^{-\beta k}$$ with an implicit constant depending only on $q$. \end{lemma} \begin{proof} The maps $B,E : L^2({\mathbb{S}^2}) \to L^2({\mathbb{S}^2}')$ are here given by $$ Bf(x,y) = f(x), Ef(x,y) = f(y) $$ The space $L^2({\mathbb{S}^2}')$ can be divided into $\text{Im}(B)\oplus\text{Im}(E)$ and its orthogonal complement given by the functions $F$ such that for any $x \in {\mathbb{S}^2}$ $$ \sum_{i=1}^N F(x,g_i x) = \sum_{i=1}^N F(g_i x,x) = 0$$ On this orthogonal complement the action of $T_q'$ is given by $$ T_q'F(x,y) = - \frac{1}{q} F(y,x), $$ and we can decompose $\text{Im}(B)\oplus\text{Im}(E)$ using an orthonormal basis of $T_q$. For every eigenfunction $w \in L^2({\mathbb{S}^2})$ with $T_q$-eigenvalue $\lambda$, the space $\mathbb{C}(Bw) + \mathbb{C}(Ew)$ is stable under $T_q'$ and the action of $T_q'$ is given by the matrix \begin{equation*} \begin{pmatrix} 0 & -\frac{1}{q} \\ 1 & \frac{\lambda}{\sqrt{q}} \end{pmatrix} \end{equation*} The rest of the proof is completely analogous to the proof of Lemma \ref{l:edgegap}. \end{proof} The following analog of Lemma~\ref{l:Tq sum} allows us to rewrite the kernel of $S_{2j,2k,2l}$ using the operator $T'_q$. \begin{lemma}\label{l:Tq sum sphere} Let $0 \leq j,k \leq n$. Let $x \in {\mathbb{S}^2}$ and $y \in \mathfrak{X}(x)$ such that $l = d(x,y)/2$ satisfies $|k-j| \leq l \leq k+j$. Denote by $w$ the vertex of the segment $[x,y]$ such that $d(x,w) = l - (k-j)$ and $d(y,w) = l + (k-j)$ in the case $k\geq j$, and $d(x,w) = l + (k-j)$ and $d(y,w) = l - (k-j)$ in the case $j \geq k$. Then \begin{equation}\label{e:vertextoedge sphere} \sum_{z\in E_{2j,2k}} a(z) = \sideset{}{'}\sum_{g_i} q^{k+j-l} (T'_q)^{k+j-l} B a (w,g_i w) \end{equation} where the sum on the right-hand side runs only over the rotations $g_i$ such that $g_i w$ is not on the segment $[x,y]$. \end{lemma} We will also use the following lemma, a straightforward analog of Lemma~\ref{l:edgenorm}. \begin{lemma}\label{l:L2edge} Let $k,l$ be fixed integers such that $k \leq 2l \leq 4T$. For any $x \in \mathbb{S}^2$ and $y \in \mathfrak X(x)$ such that $d(x,y) = 2l$, let $w$ be a vertex of the segment $[x,y]$ such that $d(x,w) = k$. Then $$ \int_{\mathbb{S}^2} \sum_{y\in S_{2l}(x)} \sum_{g} |F (w,gw)| d\sigma(x) \leq q^{2l} \int_{\mathbb{S}^2} \sum_{g} | F (x,gx)| d\sigma(x) $$ \end{lemma} \begin{proof}[Proof of Lemma \ref{l:Sop sphere}] We first use Lemma~\ref{l:HSnorm} together with \eqref{e:S kernel sphere} to write $$ \| S_{2j,2k,2l} \|_{HS}^2 \lesssim s \int_{x\notin\mathcal E_s(16T)} \sum_{\substack{y\in \mathfrak X(x)\\ d(x,y)=2l}} \left| \sum_{z\in E_{2j,2k}(x,y)} a(z) \right|^2 d\sigma(x)$$ We then use Lemma \ref{l:Tq sum sphere} and Lemma \ref{l:L2edge} to obtain \begin{align*} \| S_{2j,2k,2l} \|_{HS}^2 & \lesssim s \int_{x\notin\mathcal E_s(16T)} \sum_{\substack{y\in \mathfrak X\\ d(x,y)=2l}} \left| \sideset{}{'}\sum_{g_i} q^{k+j-l} (T'_q)^{k+j-l} B a (w,g_i w) \right|^2 d\sigma(x) \\ &\lesssim s \, q^{2(k+j-l)} q^{2l} \int_{\mathbb{S}^2} \sum_g \left| (T'_q)^{k+j-l} B a (x,gx) \right|^2 d\sigma(x). \end{align*} Finally, Lemma \ref{l:edgegapsphere} gives $$ \| S_{2j,2k,2l} \|_{HS}^2 \lesssim s \, q^{2(k+j)} e^{-\beta(k+j-l)} \|a\|_2^2.$$ \end{proof} \section{Kesten-McKay Law for Spherical Harmonics} \label{s:Kesten-McKay} In this section, we prove the following Kesten-McKay law for the space $\mathcal{H}_s$ of spherical harmonics: \begin{lemma}\label{sphere kesten mckay} Let $\{\psi_j^{(s)}\}_{j=1}^{2s+1}$ be an orthonormal basis of $\mathcal{H}_s$ consisting of $T_q$-eigenfunctions with $T_q\psi_j^{(s)} = \lambda(s,j) \psi_j^{(s)}$, and let $I\subset [-2,2]$ be an arbitrary interval in the tempered spectrum of $T_q$. Set $$N(I,s) := \#\{j : \lambda(s,j) \in I\}$$ to be the dimension of the subspace of $\mathcal{H}_s$ spanned by those $T_q$-eigenfunctions with $T_q$-eigenvalue in $I$. Then $$N(I,s)\sim C_q(I) \cdot s$$ as $s\to\infty$, where $C_q(I)>0$ depends only on the interval $I$, and the size of the generating set of rotations. \end{lemma} \begin{proof} This is a standard argument that we bring here for completeness. The idea is to estimate the moments $$M_n(s) = \sum_{j=1}^{2s+1} \lambda(s,j)^n = Tr\left(T_q^n\restriction_{\mathcal{H}_s}\right)$$ and show that they agree asymptotically with the moments of the Plancherel measure for the $q+1$-regular tree; solving the inverse moment problem then implies that the eigenvalues of $T_q$ in $\mathcal{H}_s$ are asymptotically distributed according to a continuous, positive distribution on $(-2,2)$; namely, the $q$-adic Plancherel measure. In particular, this implies that any fixed interval receives a positive proportion of the $T_q$-eigenvalues as $s\to\infty$. We begin again with the zonal spherical harmonic $Z_z^{(s)}\in\mathcal{H}_s$, which can be written in terms of the $\{\psi_j^{(s)}\}$--- indeed, in terms of any orthonormal basis of $\mathcal{H}_s$--- as $$Z_z^{(s)}(y) = \sum_{j=1}^{2s+1} \psi_j^{(s)}(z)\overline{\psi_j^{(s)}(y)} $$ Since the $\{\psi_j^{(s)}\}$ are eigenfunctions of $T_q$, we can write $$T_q^n Z_z^{(s)}(y) = \sum_{j=1}^{2s+1} \lambda(s,j)^n \psi_j^{(s)}(z) \overline{\psi_j^{(s)}(y)}$$ and so the trace of $T_q^n$ on $\mathcal{H}_s$ is obtained by integrating on the diagonal \begin{equation}\label{Tq trace} \int_{z\in{\mathbb{S}^2}} T_q^n Z_z^{(s)}(z) d\sigma(z)= \sum_{j=1}^{2s+1} \lambda(s,j)^n \end{equation} We wish to compute the asymptotics of these moments as $s\to\infty$; we will do this by estimating the integrand on the left-hand side point wise. It is easy to see that $$T_q^n Z_z^{(s)}(z) = p^{-n/2}N_0(n)Z_z^{(s)}(z) + \sum_{0<d_{\mathfrak{X}}(z,y)\leq n} N_y(n)Z_z^{(s)}(y)$$ where $N_0(n)$ is the number of paths of length $n$ in the $q+1$-regular tree starting and ending at $0$, and similarly $N_y(n)$ is the number of paths of length $n$ from $z$ to $y$. The number of terms in the second sum, as well as the values of $N_y(n)$, depend only on $n$ and are independent of $s$. Now, since the rotations generate a free group, there exists a small exceptional set $z\in\mathcal{E}_s(3n)$ as in Section~\ref{main estimate sphere}, outside of which we know that all points $y$ in the sum are separated from each other and from $z$; i.e. that for $s$ large enough, the distance $d_{\mathbb{S}^2}(z,y)>s^{-1/4}$ for all terms in the second sum. The estimate (\ref{legendre decay}) then implies that for all $z\notin\mathcal{E}_s(3n)$ we have $$Z_z^{(s)}(y)\lesssim q^n \max\{N_y(n)\} s^{3/4} \lesssim_n s^{3/4}$$ since the points $y$ and corresponding values of $N_y(n)$ depend only on $n$. On the other hand $$Z_z^{(s)}(z) = Z_0^{(s)}(0) \gtrsim s$$ so that as $s\to\infty$ we have $$T_q^nZ_z^{(s)}(z) \sim p^{-n/2}N_0(n)Z_0^{(s)}(0)$$ uniformly in $z\in{\mathbb{S}^2}\backslash\mathcal{E}_s(3n)$, and since the right-hand side is independent of $z$, we have \begin{eqnarray*} \int_{{\mathbb{S}^2}\backslash\mathcal{E}_s(3n)} T^n_q Z_z^{(s)}(z)d\sigma(z) & \sim & p^{-n/2}N_0(n)Vol({\mathbb{S}^2}\backslash\mathcal{E}_s(3n))\cdot s\\ & \sim & p^{-n/2}N_0(n)Vol({\mathbb{S}^2})\cdot s \end{eqnarray*} As in Section~\ref{main estimate sphere}, the integral of $T^n_qZ_z^{(s)}(z)$ over the set $z\in \mathcal{E}_s(3n)$ is negligible once $s$ is large enough (and thus the set $\mathcal{E}_s(3n)$ small enough). Therefore the trace satisfies $$\sum_{j=1}^{2s+1} \lambda(s,j)^n \sim p^{-n/2}N_0(n)Vol({\mathbb{S}^2}) \cdot s$$ Dividing by $s$ and solving the inverse moment problem (just as in, eg. \cite{SarnakHeckeStatistics}) shows that the asymptotic distribution of the $T_q$-eigenvalues in $\mathcal{H}_s$ is asymptotically proportional to $s$ times the $q$-adic Plancherel measure $$d\mu_q(x) = \left\{ \begin{array}{ccc} \frac{ (q+1)\sqrt{4-x^2}}{2\pi (q+q^{-1}+2)-x^2} & \quad\quad\quad & |x|\leq 2\\ 0 & \quad\quad\quad & |x|\geq 2 \end{array} \right.$$ which is a continuous and non-negative measure supported in $[-2,2]$, and strictly positive in the interior $(-2,2)$ (note that it vanishes only at the endpoints). This shows that any interval $I\subset [-2,2]$ contains a number of $T_q$-eigenvalues \begin{eqnarray*} N(I,s) & \sim & s\cdot \int_{x\in I} d\mu_q(x) \end{eqnarray*} as required. \end{proof} \bibliographystyle{plain} \def$'${$'$}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The Wess-Zumino \cite{Wess71} effective action, with topological content clarified by Witten \cite{Witten83}, describes all effects of QCD anomalies in low-energy processes with photons and Goldstone bosons, without reference to massive vector mesons. The extension to the case with spin-1 mesons is not unique, and has been addressed in different frameworks \cite{Schechter84},\cite{Fujiwara85},\cite{Kaiser90}. Important issues arise when one includes the spin-1 states. Here we address the concept of VMD and the pseudoscalar -- axial-vector mixing ($\pi a_1$-mixing) of meson states. In particular, it has been shown in \cite{Fujiwara85} that the {\it complete} VMD is not valid in either $\pi^0\to \gamma\gamma$ or $\gamma\to 3\pi$ processes, and that mixing affects hadronic amplitudes in \cite{Gasiorovicz69,Osipov85}. Therefore one should demonstrate how the departure from VMD occurs and how $\pi a_1$-mixing is treated in order to comply with the predictions of the Wess-Zumino action. This is not a trivial task, in \cite{Wakamatsu89} it has been reported that in a number of well-known models \cite{Schwinger67}-\cite{Bando85c} the $\pi a_1$-mixing breaks low-energy theorems (LET) for some anomalous processes, e.g., $\gamma\to 3\pi$, $K^+K^-\to 3\pi$. In \cite{okh:2020}, based on the gauge covariant treatment of $\pi a_1$-mixing, only recently addressed \cite{Osipov18a}-\cite{Osipov20}, we show precisely how the deviation of the complete VMD occurs in the framework of the NJL Lagrangian, fulfilling the LET \begin{equation} \label{LET} F^{\pi}=e f_\pi^2 F^{3\pi}. \end{equation} The procedure is sufficiently general to be applied in other processes. To be more definite, recall that the $\pi a_1$ diagonalization is generally performed by a linearized transformation of the axial vector field \begin{equation} \label{ngcr} a_\mu \to a_\mu + \frac{\partial_\mu \pi}{ag_\rho f_\pi}, \end{equation} where $\pi =\tau_i \pi^i$, $a_\mu =\tau_i a^i_\mu$ and $\tau_i$ are the $SU(2)$ Pauli matrices; $g_\rho\simeq \sqrt{12\pi}$ is the coupling of the $\rho$ meson to two pions and $f_\pi \simeq 93$MeV the pion weak decay constant. In extensions of the model that couple to the electroweak sector this replacement violates gauge invariance \cite{Osipov18a}-\cite{Osipov20} in anomalous processes, leaving however the real part of the action invariant \cite{Osipov18c,Osipov19}. For example the anomalous low energy amplitude describing the $a_1\to\gamma\pi^+\pi^-$ decay is not transverse \cite{Osipov18a,Osipov18b} To restore gauge invariance the gauge covariant derivative $\mathcal D_\mu \pi$ must be used instead of $\partial_\mu \pi$ \cite{Osipov18a}-\cite{Osipov20} \begin{equation} \label{cov} a_\mu \to a_\mu + \frac{\mathcal D_\mu \pi}{ag_\rho f_\pi}, \quad \mathcal D_\mu \pi =\partial_\mu \pi -ieA_\mu [Q,\pi ], \quad Q=\frac{1}{2}(\tau_3+\frac{1}{3}), \end{equation} In the context of the LET $F^{\pi}=e f_\pi^2 F^{3\pi}$ mixing occurs related to both anomalous form factors, but it has been proven in \cite{okh:2020} that the radiative decay ${\pi^0\to\gamma\gamma}$ is not affected by the mixing, and coincides with the low energy result of current algebra given by the Lagrangian density \cite{Wess71,Witten83} \begin{equation} \label{pigg} \mathcal{L}_{\pi\gamma\gamma}=-\frac{1}{8}F^\pi\pi^0 e^{\mu\nu\alpha\beta} F_{\mu\nu}F_{\alpha\beta}, \quad F^{\pi}=\frac{N_c e^2}{12\pi^2 f_\pi}, \end{equation} where $e$ is the electric charge, $F_{\mu\nu}=\partial_\mu A_\nu -\partial_\nu A_\mu$ stands for the strength of the electromagnetic field, $N_c$ is the number of quark colors. The absence of mixing is seen as follows. In the NJL model one can switch to spin-1 variables without direct photon-quark coupling, as described in the VMD picture. Then $\mathcal{L}_{\pi\gamma\gamma}$ is related to the $\pi^0\omega\rho$ quark triangle shown in Fig. 1(a) left . At leading order of a derivative expansion the current-algebra result $\Gamma (\pi^0\to\gamma\gamma)=7.1 \, \mbox{eV}$ is obtained. Diagram 1 (b) left , due to mixing, is described by an axialvector vector vector (AVV) Adler-Bell-Jackiw anomaly \cite{Adler71}-\cite{Jackiw00}. The related surface term (ST) which results from the difference of two linearly divergent amplitudes is apriori arbitrary. Here this arbitrary parameter is fixed on gauge invariant grounds of $a_1\rightarrow \gamma \gamma$, upon which graph 1 (b) left vanishes at leading order of a derivative expansion. This complies with the Landau-Yang theorem \cite{Landau48},\cite{Yang56} which states that a massive unit spin particle cannot decay into two on shell massless photons. Effects of $\pi a_1$-mixing in $\gamma\to 3\pi$ amplitudes (due to G-parity it is sufficient to consider the isoscalar component of the photon, related to $\omega\to 3\pi$) have been studied by Wakamatsu \cite{Wakamatsu89} in detail, using the prescription (\ref{ngcr}). He found that the amplitude of the $\omega\to 3\pi$ decay contains uncompensated contributions generated by $\pi a_1$-mixing, breaking the LET at order of $1/a^2$, where $a=\frac{m_\rho^2}{g_\rho^2 f_\pi^2}=1.84$ and $m_\rho$ is the empirical mass of the $\rho$-meson. This conclusion is based on the assumption that VMD is valid. \vspace{0.5cm} \includegraphics[height=3.5cm,angle=0]{fig1.pdf} \hspace{0.3cm} {\includegraphics[height=3.5cm,angle=0]{fig2.pdf} \hspace{0.3cm} {\includegraphics[height=2.cm,angle=0]{fig3.pdf} {\small Fig. 1. Left (a) and (b): the two graphs describing the $\pi^0\to\gamma\gamma$ decay in the NJL model, (b) for $\pi a_1$-mixing effects on the pion line; Middle: quark loop contributions to $\omega\to 3\pi$ decay, (a) full set of possible diagrams without and with 1, 2, and 3 $\pi a_1$-mixing effects on the pion line (not drawn); (b) $\rho$ exchange diagrams without and with $\pi a_1$ transitions; Right: contribution to $\gamma\to 3\pi$ decay due to covariant $\pi a_1$ diagonalization, see (\ref{cov}), with pion lines subject to $\pi a_1$-mixing.} \vspace{0.5cm} Let us recall and complement the calculations made in \cite{Wakamatsu89}. The diagrams contributing to the $\omega\to 3\pi$ decay are shown in Fig. 1, middle, where we have additionally taken into account the box diagram with three $\pi a_1$-transitions in (a) as well as the contribution of the $\omega\rho (a_1\to\pi )$ vertex in the $\rho$-exchange graph (b), both neglected in \cite{Wakamatsu89}. The corresponding amplitude is given by \begin{equation} \label{om3pi} A_{\omega\to 3\pi}=-\frac{N_c g_\rho}{4\pi^2 f_\pi^3} e_{\mu\nu\alpha\beta} \epsilon^\mu (q) p_0^\nu p_+^\alpha p_-^\beta F_{\omega\to 3\pi}, \end{equation} where $p_0, p_+, p_-$ are the momenta of the pions, $\epsilon^\mu (q)$ the polarization of the $\omega$-meson with momentum $q$, and the form factor $F_{\omega\to 3\pi}$ is found to be \begin{eqnarray} \label{ff} F_{\omega\to 3\pi}&=&\left(1-\frac{3}{a}+\frac{3}{2a^2}+\frac{1}{8a^3}\right) + \left(1-\frac{c}{2a}\right)\sum_{k=0,+,-} \frac{g_\rho^2 f_\pi^2}{m_\rho^2-(q-p_k)^2}. \end{eqnarray} In the first parentheses, the box diagrams without, with one, two, and three $\pi a_1$-transitions are given correspondingly. The last term represents the contribution of $\rho$-exchange graphs, where $c$ controls the magnitude of an arbitrary local part of the anomalous AVV-quark-triangle. In the low-energy limit, the sum yields $3/a$, as one neglects the dependence on momenta in (\ref{ff}), leading to full cancellation among the terms of order $1/a$, as well-known \cite{Wakamatsu89}. The ST $c$ contributes at order $1/a^2$. For $c=0$ we reproduce the $\pi a_1$-mixing effect found in \cite{Wakamatsu89} to this order. Had $c$ been used instead to cancel the $\pi a_1$-mixing effect, as $c=1+1/(12a)$, a too low width $\Gamma (\omega\to\pi^+\pi^0\pi^-)=3.2\,\mbox{MeV}$ would be obtained as compared to experiment $\Gamma (\omega\to\pi^+\pi^0\pi^-)=7.57\pm 0.13 \,\mbox{MeV}$. Furthermore the value $c=0$ is also required following \cite{Cohen89}, where the chiral Ward identities (WI) for $\gamma\to 3\pi$ imply that both the chiral triangle and the box anomaly contribute as \begin{equation} A_{\gamma\to 3\pi}^{tot}=\frac{3}{2}A^{AVV}-\frac{1}{2}A^{VAAA}, \end{equation} where $A_{\gamma\to 3\pi}^{tot}$, $A^{AVV}$ and $A^{VAAA}$ are, respectively, the total $\gamma\pi\pi\pi$ amplitude, the $\gamma\to\omega\to\pi\rho\to\pi\pi\pi$ process and the point $\gamma\to\omega\to\pi\pi\pi$ amplitude. This result is consistent with both the chiral WI and with the KSFR relation \cite{Kaw66,Riaz66}, which arises in the NJL model at $a=2$. One sees from eq. (\ref{ff}) that, if one neglects the terms of order $1/a^2$ and higher in the box contribution and puts $c=0$ in the $\rho$-exchange term, the amplitude $A^{VAAA}$ has a factor $(1-3/a)=-1/2$, and the $A^{AVV}$ amplitude has a factor $(1-c/(2a)) 3/a=3/2$, as is required by the chiral WI. On the other hand, if $c$ is chosen to cancel $\pi a_1$-mixing effects, these amplitudes contribute with relative weights $-7/64$ and $71/64$, respectively. Therefore the ST $c$ cannot be used to resolve the $\pi a_1$-mixing puzzle, the chiral WI require $c=0$. This pattern has been considered in \cite{Schechter84,Kaiser90,Wakamatsu89}, and reproduces well the phenomenological value of the width. That allows us to conclude, following \cite{Wakamatsu89}, that if the VMD is a valid theoretical hypothesis, the $\gamma\to\omega\to 3\pi$ amplitude contains contributions due to $\pi a_1$-mixing that violate the LET (\ref{LET}) \begin{equation} \label{g3pi} A_{\gamma\to 3\pi}=-F^{3\pi} e_{\mu\nu\alpha\beta} \epsilon^\mu (q) p_0^\nu p_+^\alpha p_-^\beta , \end{equation} \begin{equation} \label{3pi} F^{3\pi}=\frac{N_c e}{12\pi^2 f_\pi^3}\left(1+\frac{3}{2a^2}+\frac{1}{8a^3}\right)\neq \frac{N_c e}{12\pi^2 f_\pi^3}. \end{equation} In the following we will show that it is possible to combine the phenomenologically successful value $c=0$ with a full cancellation of $\pi a_1$-mixing effects within the NJL approach by taking into account the anomalous AAA triangle shown in the right side of Fig.1, which occurs as result of (\ref{cov}) \begin{eqnarray} A&=&\frac{N_c e}{4a^3f_\pi^3} \left\{p_-^\sigma [J_{\mu\nu\sigma} (p_0,p_-)-J_{\mu\sigma\nu} (p_-,p_0)] \right. \nonumber \\ && \!\!\!\! + \left. p_+^\sigma [J_{\mu\nu\sigma} (p_0,p_+)-J_{\mu\sigma\nu} (p_+,p_0)] \right\} \epsilon^\mu (q)p_0^\nu . \end{eqnarray} The low energy expansion of the loop integral $J_{\mu\nu\sigma}$ starts from a linear term \begin{equation} \label{exp} J_{\mu\nu\sigma} (p_0,p_-)=\frac{1}{24\pi^2} e_{\mu\nu\sigma\rho}\left(p_0-p_- -3 \upsilon \right)^\rho +\ldots \end{equation} Owing to the shift ambiguity related to the formal linear divergence of this integral, the result depends on the undetermined 4-vector $\upsilon_\rho$, \begin{equation} \label{amp} A=-\frac{N_c e}{4\pi^2 f_\pi^3} e_{\mu\nu\sigma\rho}\epsilon^\mu (q)p_0^\nu (p_++p_-)^\sigma \left(\frac{\upsilon^\rho}{4a^3}\right) \end{equation} This is the complete result for this triangle diagram. The 4-vector $\upsilon_\rho$ is represented as linear combination of the independent momenta of the process, $\upsilon_\mu= b_1 q_\mu + b_2 (p_+ -p_-)_\mu + b_3 (p_+ + p_-)_\mu$, but only the second term survives in (\ref{amp}). Thus, the graph shown on Fig.1 right gives an additional contribution $\Delta F^{3\pi}$ to the form factor $F^{3\pi}$ \begin{equation} \label{g3pinew} \Delta F^{3\pi}=\frac{N_c e}{12\pi^2 f_\pi^3} \left(\frac{-3b_2}{2a^3}\right), \end{equation} where $b_2$ is dimensionless and as yet undetermined. This constitutes a further example in which an arbitrary regularization dependent parameter should be fixed by the physical requirements \cite{Jackiw00, Baeta01,Batista18}. The AAA amplitude would be zero had it been regularized in advance by any regularization that sets ST to zero. For a detailed discussion of this and further anomalous vertices appearing in the present calculation we refer to \cite{okh:2020}. To fix $b_2$ we use the LET (\ref{LET}); requiring that the unwanted terms in (\ref{3pi}) vanish we find that $b_2=a+\frac{1}{12}=1.92$. Thus, the solution of the $\pi a_1$-mixing problem in the $\gamma\to 3\pi$ amplitude can be associated with the ST of the anomalous non-VMD diagram shown on the right of Fig. 1.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Target tracking~\cite{LWH+02,SNP16} is a key enabling technology in many applications of multi-agent systems and wireless sensor networks, such as motion planning~\cite{CWC+04,DSH09} and surveillance~\cite{PJ05,PDS14}. In one of its basic forms, the tracking problem aims to maintain position estimates of a moving, signal-emitting target over time using noisy measurements of the emitted signal collected by stationary sensors. Such a sequential localization formulation has been extensively studied in the control and signal processing communities, and various approaches for tackling it have been proposed. When a model on the target dynamics and noise statistics is available, a classic approach is to employ Kalman filtering techniques to perform the tracking; see, e.g.,~\cite{LEH06,TFLC09,WLM11,YRLS14} and the references therein. In recent years, however, there have been increasing efforts in developing tracking techniques that require only minimal assumptions on the target trajectory and/or noise distribution. One approach is to view the sequential localization formulation through the lens of time-varying optimization~\cite{DSBM20,SDP+20}. Specifically, at each time step, the position estimate of the target is given by a minimizer of a loss function that depends on the noisy signal measurements collected at that time step. However, since the time interval between successive measurements is often very short and the sensors have limited computational power, it is impractical to solve the loss minimization problem at each time step exactly. This motivates the use of online optimization techniques to tackle the target tracking problem. To evaluate the performance of an online method, various metrics are available; see~\cite{SDP+20}. These metrics differ in how they measure the discrepancy between the solutions generated by the method at different time steps and the optimal solutions at the corresponding time steps. When the loss function is convex at every time step, it has been shown that many online methods enjoy strong performance guarantees under different metrics; see, e.g.,~\cite{Zink03,HW15,JRSS15,MSJ+16,SMK+16,SJ17,BSR18} and the references therein. Although the results just mentioned cover a wide variety of target tracking scenarios, they do not apply to those where the loss function of interest is \emph{non-convex}. One such scenario is \emph{time-of-arrival (TOA)-based tracking}, in which sensors collect TOA measurements of the target signal and the tracking is achieved by minimizing a non-convex least-squares loss function associated with the measurements collected at each time step~\cite{ZXY+10,XDD13,LTS20}. In this scenario, the tracking problem can be viewed as a sequential version of the well-studied TOA-based source localization problem; see, e.g.,~\cite{CMS04,SY07,BSL08,BTC08,XDD11a,JSZ+13,So19}. As far as we know, the TOA-based tracking problem has barely been investigated from the time-varying or online optimization perspective in the literature. Recently, there have been some works that study time-varying optimization problems with general non-convex loss functions. However, the results are not entirely satisfactory when specialized to the TOA-based tracking problem. For instance, the work~\cite{LTS20} proposes an online Newton's method (ONM) and establishes a bound on its \emph{dynamic regret} (i.e., the difference between the cumulative loss incurred by the sequence of solutions generated by the method and that incurred by the sequence of optimal solutions; see~\cite{SDP+20}) by assuming, among other things, that the Hessian of the loss function at each time step satisfies a non-degeneracy condition. It also demonstrates the numerical performance of ONM on the TOA-based tracking problem. Nevertheless, since ONM needs to compute the inverse of the Hessian of the loss function at each time step, it can be computationally expensive. In addition, the work does not shed any light on whether the TOA-based tracking problem satisfies the assumptions underlying the dynamic regret analysis of ONM. As such, the theoretical performance of ONM for TOA-based tracking remains unclear. On the other hand, the work~\cite{HMMR20} develops a dual averaging method and obtains a bound on its dynamic regret under relatively mild assumptions on the loss functions. However, the method is mainly of theoretical interest, as it needs to compute a distribution over the feasible solutions and sample a solution from this distribution at each time step, and neither of these is straightforward to implement for the TOA-based tracking problem. Motivated by the above discussion, we are interested in developing a low-complexity online method for TOA-based tracking and establishing theoretical guarantee on its performance. One method that naturally suggests itself is online gradient descent (OGD). The method only needs to perform a single gradient descent update at each time step, thus making it well-suited for the target tracking task. However, there has been no performance analysis of OGD for our problem setting so far. Not surprisingly, a major difficulty is that the least-squares loss function associated with the TOA measurements is non-convex. The main contribution of this work is the development of the first non-trivial performance bound for OGD when it is applied to the TOA-based tracking problem. The performance metric we adopt is the \emph{cumulative target tracking error} (CTTE), which is defined as the sum of the distances between the estimated target position and the \emph{true target position} at different time steps. Our bound makes explicit the dependence of the CTTE of OGD on the path length of the target trajectory and the noise power of the TOA measurements. It is important to note that there is a subtle yet fundamental difference in nature between the CTTE metric and most other metrics used in the time-varying or online optimization literature. The former measures the performance relative to the \emph{true values of the parameter} we wish to estimate (viz. the true positions of the target at different time steps), while the latter (such as the dynamic regret or the usual tracking error) measure the performance relative to the \emph{optimal solutions} to the loss minimization problems at different time steps. In the context of the TOA-based tracking problem, it is clear that the CTTE defined above is a more relevant performance metric, as ultimately we are interested in how well the online method tracks the true target positions rather than the optimal solutions to the time-varying loss minimization problem. Nevertheless, the use of the true target positions in the definition of CTTE makes it a more challenging metric to analyze. To establish the said CTTE bound, we proceed in two steps. First, we revisit the classic least-squares formulation of the (static) TOA-based source localization problem and elucidate its estimation and geometric properties. Specifically, under standard assumptions on the TOA measurement model, we establish a bound on the estimation error of any least-squares estimate of the true target position and use this bound to show that the loss function, albeit non-convex in general, is locally strongly convex at its global minima. Moreover, we give an explicit estimate of the size of the strong convexity region. We remark that similar results have previously been established for a \emph{time-difference-of-arrival} (TDOA)-based least-squares loss function~\cite{LPS17}. However, to the best of our knowledge, our results for the TOA-based least-squares loss function are new and can be of independent interest. In particular, it provides further theoretical justification for the good empirical performance of gradient-based schemes observed in~\cite{BTC08} when solving the TOA-based source localization problem. Second, we extend our local strong convexity result from the static localization setting to the dynamic target tracking setting. Specifically, we show that as long as the aforementioned assumptions on the TOA measurement model are satisfied and the distance between the true positions of the target at consecutive time steps is sufficiently small, the position estimate of the target at the current time step will lie in the strong convexity region of the loss function at the next time step. This allows us to utilize techniques from online strongly convex optimization to establish the advertised CTTE bound for OGD. The notation in this paper is mostly standard. We use $\|\cdot\|_1$ and $\|\cdot\|$ to denote the $\ell_1$-norm and Euclidean norm, respectively. Given a vector $\bar{\bm x}\in\mathbb R^n$ and a scalar $r>0$, we use $B(\bar{\bm x},r) := \{\bm{x}\in\mathbb R^n: \|\bm{x}-\bar{\bm x}\| \le r\}$ to denote the closed Euclidean ball with center $\bar{\bm x}$ and radius $r$. Given a symmetric matrix $\bm{A}$, we use $\lambda_{\min}(\bm{A})$ to denote its smallest eigenvalue and $\bm{A}\succ\bm{0}$ to indicate that it is positive definite. The rest of the paper is organized as follows. In Section~\ref{sec:formulation}, we present a time-varying optimization formulation of the TOA-based tracking problem and describe how it can be tackled by OGD. In Section~\ref{sec:str_cvx}, we study the estimation error and local strong convexity property of the static TOA-based source localization problem. Using these results, we establish our bound on the CTTE of OGD for the TOA-based tracking problem in Section~\ref{sec:toa-reg}. In Section~\ref{sec:sim}, we present numerical results to demonstrate the efficacy of OGD for TOA-based tracking and illustrate our theoretical findings. We then end with some closing remarks in Section~\ref{sec:concl}. \section{Problem Formulation and Preliminaries}\label{sec:formulation} We begin by describing the setup for TOA-based tracking. Let $\bm{x}_t^{\star}\in\mathbb{R}^n$ be the unknown true position of the moving target at time $t$, where $t=1,\ldots,T$ and $T$ is the time horizon of interest. Furthermore, let $\bm{a}_i\in\mathbb{R}^n$ ($i=1,\ldots,m$) be the known position of the $i$th sensor and suppose that the vectors $\{\bm{a}_i-\bm{a}_1\}_{i=2}^m$ span $\mathbb{R}^n$ (in particular, we have $m \ge n+1$). We consider the following model for TOA-based range measurements: \begin{equation} \label{eq:toa-model} r_i^t = \|\bm{x}_t^{\star} - \bm{a}_i\| + w_i^t, \quad i=1,\ldots,m; \, t=1,\ldots,T. \end{equation} Here, $w_i^t$ is the measurement noise and $r_i^t$ is the noisy TOA-based range measurement between the target and the $i$th sensor at time $t$. We assume that $w_i^t$ is a random variable with mean zero, variance bounded above by $\sigma_t^2$ and is independent of the noise at other sensors and at other time steps. We also assume that $|w_i^t| \ll \|\bm{x}_t^{\star} - \bm{a}_i\|$ for $i=1,\ldots,m$ and $t=1,\ldots,T$. It is worth noting that similar assumptions have appeared in the localization literature; see, e.g.,~\cite{WSL16}. To estimate the target position at time $t$, a natural approach is to consider the following non-convex least-squares formulation: \begin{equation} \label{eq:loss} \min_{\bm{x} \in \mathbb R^n} \ f_t(\bm{x}) := \sum_{i=1}^m(\|\bm{x} - \bm{a}_i\| - r_i^t)^2. \end{equation} Such a formulation is motivated by the fact that when the measurement noise vector $\bm{w}^t=(w_1^t,\ldots,w_m^t)$ is Gaussian, every optimal solution to Problem~\eqref{eq:loss} is a maximum-likelihood estimate of the true target position $\bm{x}_t^\star$; see, e.g.,~\cite{CMS04}. Henceforth, we shall use $\hat{\bm x}_t$ to denote an optimal solution to~\eqref{eq:loss} (i.e., $\hat{\bm x}_t \in \arg\min_{\bm{x}\in\mathbb R^n} f_t(\bm{x})$) and refer to it as a \emph{least-squares estimate} of the true target position $\bm{x}_t^\star$. In this paper, we propose to apply OGD to tackle the time-varying optimization formulation~\eqref{eq:loss}, as it may not be computationally feasible to find an (approximately) optimal solution to~\eqref{eq:loss} at every time step. Specifically, given an estimate $\bm{x}_{t-1}$ of the target position at time $t-1$ and the noisy range measurements $\{r_i^t\}_{i=1}^m$ at time $t$, we generate an estimate $\bm{x}_t$ of the target position at time $t$ via the one-step gradient descent update \begin{align}\label{eq:toa-ogd} \bm{x}_{t} = \bm{x}_{t-1} - \eta_t\nabla f_t(\bm{x}_{t-1}), \quad t=1,\ldots,T, \end{align} where $\eta_t>0$ is the step size. We remark that the update~\eqref{eq:toa-ogd} should be interpreted in a formal sense at this point, as the function $f_t$ is non-differentiable at $\bm{x} \in \{\bm{a}_1,\ldots,\bm{a}_m\}$. We shall justify the validity of~\eqref{eq:toa-ogd} in the following sections. Naturally, we are interested in evaluating the performance of the sequence of position estimates $\{\bm{x}_t\}_{t=1}^T$. For that purpose, we employ the notion of CTTE, which is defined as \[ {\rm CTTE}\left( \{\bm{x}_t\}_{t=1}^T \right) := \sum_{t=1}^T \| \bm{x}_t - \bm{x}_t^\star \|. \] Note that the definition of CTTE involves the sequence of \emph{true target positions} $\{\bm{x}_t^\star\}_{t=1}^T$, not the sequence of \emph{optimal solutions} $\{\hat{\bm{x}}_t\}_{t=1}^T$ to Problem~\eqref{eq:loss}, as it is the former that we are interested in tracking. Indeed, a small CTTE implies that the estimate $\bm{x}_t$ is close to the true target position $\bm{x}_t^\star$ at every time step $t$. Our goal is to bound the CTTE in terms of the variations in the target trajectory $\{ \|\bm{x}_{t+1}^\star - \bm{x}_t^\star\| \}_{t=1}^{T-1}$ and the noise power $\{\sigma_t^2\}_{t=1}^T$ and to derive conditions that can guarantee a sublinear CTTE bound (i.e., $\tfrac{1}{T}{\rm CTTE}\left( \{\bm{x}_t\}_{t=1}^T \right) \rightarrow 0$) on the tracking performance of OGD. We remark that a sublinear CTTE bound is a desirable property for a tracking algorithm to have, as it implies that the target tracking error of the algorithm---i.e., the distance between the target position estimate produced by the algorithm and the true target position---vanishes asymptotically. In the next section, we will develop two results that are key to achieving this goal. Specifically, under the assumption that the power of the measurement noise $\sigma_t^2$ is sufficiently small, we will first establish a bound on the estimation error $\|\hat{\bm x}_t - \bm{x}_t^\star\|$ and then use this bound to show that the loss function $f_t$ is locally strongly convex at the least-squares estimate $\hat{\bm x}_t$.\footnote{A function $g:\mathbb R^n\rightarrow\mathbb R$ is said to be \emph{locally strongly convex at $\bar{\bm x}$} if there exists an $r>0$ such that $g$ is strongly convex on the ball $B(\bar{\bm x},r)$~\cite{Vial82}.} \section{Local Strong Convexity of TOA-Based Source Localization} \label{sec:str_cvx} Consider a fixed time $t$. Then, Problem~\eqref{eq:loss} reduces to the classic TOA-based source localization problem (see, e.g.,~\cite{So19}), in which the target is considered static. For notational simplicity, we drop the index $t$ and write Problem~\eqref{eq:loss} as \begin{align}\label{eq:toa-ml} \min_{\bm{x}\in\mathbb R^n} \ f(\bm{x}) := \sum_{i=1}^m(\|\bm{x} - \bm{a}_i\| - r_i)^2 \end{align} with $r_i=\|\bm{x}^{\star} - \bm{a}_i\| + w_i$. As before, we assume that $w_i$ is a random variable with mean zero, variance bounded above by $\sigma^2$ and satisfies $|w_i| \ll \| \bm{x}^{\star} - \bm{a}_i \|$. Let $\hat{\bm{x}} \in \arg\min_{\bm{x}\in\mathbb R^n} f(\bm{x})$ denote a least-squares estimate of the true target position $\bm{x}^\star$. The following proposition, which plays a crucial role in our subsequent development, shows that $\hat{\bm x}$ and $\bm{x}^\star$ are close when the power of the measurement noise vector $\bm{w}=(w_1,\ldots,w_m)$ is small. \begin{proposition}[Estimation Error of Least-Squares Estimator] \label{thm:esterror} Suppose that $\|\bm{w}\| \le c_0 \sqrt{m}\sigma$ for some constant $c_0>0$. Then, there exist constants $K_1,~K_2 > 0$, which are determined by $\bm{a}_1,\ldots,\bm{a}_m$ and $\bm{x}^\star$, such that \[ \|\hat{\bm{x}}-\bm{x}^\star\| \leq K_1\sqrt{m}\sigma + K_2m\sigma^2. \] \end{proposition} \noindent The proof of Proposition~\ref{thm:esterror} can be found in Appendix~\ref{app:esterror}. The assumption on $\|\bm{w}\|$ in Proposition~\ref{thm:esterror} is rather mild, as it can be satisfied with high probability when, e.g., $w_1,\ldots,w_m$ are sub-Gaussian random variables~\cite[Chapter 3]{Ver18}. Now, using Proposition~\ref{thm:esterror}, we can prove the following theorem, which establishes the local strong convexity of $f$ at $\hat{\bm x}$ and provides an explicit estimate on the size of the strong convexity region around $\hat{\bm x}$. This constitutes our first main result in this paper. \begin{theorem}[Local Strong Convexity of TOA-Based Source Localization] \label{thm:str_cvx} Consider the setting of Proposition~\ref{thm:esterror}. Suppose that for some given $\delta>0$, the noise power $\sigma^2$ satisfies \begin{equation} \label{eq:dist-asp} \|\bm{x}^{\star} - \bm{a}_i\|>K_1\sqrt{m}\sigma + K_2m\sigma^2+\delta, \quad i=1,\ldots,m \end{equation} and \begin{align} \kappa &:= \frac{\delta}{10m} \cdot \Lambda - (K_1\sqrt{m}\sigma + K_2m\sigma^2) - \frac{4c_0\sigma}{5} > 0, \label{eq:eps} \end{align} where \[ \Lambda := \lambda_{\min}\left( \sum_{i=1}^m \left( \frac{\bm{x}^\star-\bm{a}_i}{\|\bm{x}^\star-\bm{a}_i\|} \right)\left( \frac{\bm{x}^\star-\bm{a}_i}{\|\bm{x}^\star-\bm{a}_i\|}\right)^T \right). \] Then, we have $\nabla^2f(\hat{\bm x}+\bm{\epsilon}) \succ \bm{0}$ for all $\bm{\epsilon} \in \mathbb R^n$ satisfying $\| \bm{\epsilon} \| \le \kappa$; i.e., $f$ is strongly convex over $B(\hat{\bm x},\kappa)$. \end{theorem} \noindent The proof of Theorem~\ref{thm:str_cvx} can be found in Appendix~\ref{app:str_cvx}. Here, let us elaborate on the assumptions of the theorem. \begin{enumerate} \item Condition~\eqref{eq:dist-asp} stipulates that the target should be sufficiently far from the sensors, which is not very restrictive in practice. Moreover, when combined with Proposition~\ref{thm:esterror}, the condition implies that $\| \hat{\bm x} - \bm{a}_i\| > \delta$ for $i=1,\ldots,m$, which shows that the loss function $f$ is smooth around the least-squares estimate $\hat{\bm x}$. This allows us to use the Hessian $\nabla^2f$ to characterize the local strong convexity of $f$ at $\hat{\bm x}$. \item Since the vectors $\{\bm{a}_i-\bm{a}_1\}_{i=2}^m$ span $\mathbb R^n$ by assumption, it can be shown that the vectors $\{\bm{x}^\star - \bm{a}_i\}_{i=1}^m$ also span $\mathbb R^n$. This implies that $\Lambda > 0$. Thus, condition~\eqref{eq:eps} can be satisfied when $\sigma>0$ is sufficiently small (incidentally, condition~\eqref{eq:dist-asp} also becomes easier to satisfy as $\sigma$ becomes smaller). An important insight drawn from~\eqref{eq:eps} is that the landscape of the loss function $f$ around the least-squares estimate $\hat{\bm x}$ depends on the noise power level and the geometric configuration of the target and sensors. \end{enumerate} We remark that although the TOA-based source localization problem has been extensively studied in the literature, Theorem~\ref{thm:str_cvx} is, to the best of our knowledge, the first result that elicits the local strong convexity property of the non-convex least-squares formulation~\eqref{eq:toa-ml}. Now, since the strong convexity region $B(\hat{\bm x},\kappa)$ around $\hat{\bm x}$ is compact and $\nabla^2f$ is continuous over $B(\hat{\bm x},\kappa)$, we see that $\nabla f$ is Lipschitz continuous over $B(\hat{\bm x},\kappa)$. Thus, Theorem~\ref{thm:str_cvx} implies that when applying the gradient descent method to tackle Problem~\eqref{eq:toa-ml}, the resulting sequence of iterates will converge to the optimal solution $\hat{\bm x}$ at a linear rate, provided that the initial point lies in the strong convexity region around $\hat{\bm x}$. This can be deduced using the following well-known result. \begin{fact}[Linear Convergence of Gradient Descent for Strongly Convex Minimization; cf.~{\cite[Theorem 2.1.15]{N04}}]\label{thm:conv_GD} Let $g:\mathbb{R}^n\rightarrow\mathbb{R}$ be a function that is smooth, $\mu$-strongly convex, and $L$-gradient Lipschitz continuous on an open convex set $\mathcal{X}\subseteq\mathbb{R}^n$. Suppose that $g$ has a global minimizer $\hat{\bm x}$ over $\mathcal{X}$. Then, the sequence $\{\bm{x}_k\}_{k\ge0}$ generated by the gradient descent method \[ \bm{x}_{k+1} = \bm{x}_k - \eta\nabla g(\bm{x}_k) \] with initial point $\bm{x}_0 \in \mathcal{X}$ and step size $\eta \in (0,2/(\mu + L)]$ satisfies \[ \|\bm{x}_{k+1} - \hat{\bm{x}}\|^2 \leq\left(1 - \frac{2\eta\mu L}{\mu+L}\right)\|\bm{x}_k - \hat{\bm{x}}\|^2. \] \end{fact} \noindent In particular, Theorem~\ref{thm:str_cvx} provides a means to justify the good empirical performance of gradient-based schemes observed in~\cite{BTC08} when solving the TOA-based source localization problem. \section{CTTE of OGD for TOA-Based Tracking}\label{sec:toa-reg} Let us now address the main goal of this paper---namely, to establish a bound on the CTTE of OGD for TOA-based tracking. The results in Section~\ref{sec:str_cvx} suggest that if the iterate generated by OGD at time $t$ lies in the strong convexity region of the loss function at time $t+1$ for $t=0,1,\ldots,T-1$, then the tracking problem is essentially reduced to that of minimizing a time-varying strongly convex function. This opens up the possibility of using techniques from online strongly convex optimization to bound the CTTE of OGD for TOA-based tracking. To realize the above idea, we need to first introduce some additional preliminaries and collect some consequences of the results in Section~\ref{sec:str_cvx}. Observe that the constants $K_1, K_2, \Lambda$ in Theorem~\ref{thm:str_cvx} involve the target position $\bm{x}^\star$. Since the target is moving in the tracking setting, it will simplify our subsequent analysis if we can find uniform bounds on these constants. Towards that end, we further assume that the target stays within a fixed compact region $\mathcal{T} \subseteq \mathbb R^n$ throughout the tracking task. Such an assumption is rather mild in practice. Moreover, since $K_1,K_2,\Lambda$ depend continuously on $\bm{x}^\star$, it implies the existence of finite upper bounds on $K_1, K_2$ and a positive lower bound on $\Lambda$ that hold for all $t\ge0$. As a slight abuse of notation, we shall use $K_1,K_2,\Lambda$ to denote these uniform bounds in the sequel. Following the setting of Theorem~\ref{thm:str_cvx}, let $\hat{\bm x}_t \in \arg\min_{\bm{x}\in\mathbb R^n} f_t(\bm{x})$ denote a least-squares estimate of the true target position $\bm{x}_t^\star$ at time $t$ and $c_0>0$ be a constant such that $\|\bm{w}^t\| \le c_0\sqrt{m}\sigma_t$ for $t=1,\ldots,T$. Furthermore, suppose that for some given $\delta>0$, the maximum noise power $\sigma^2 := \max_{t\in\{1,\ldots,T\}} \sigma_t^2$ satisfies \begin{align} \|\bm{x}_t^{\star} - \bm{a}_i\| &> K_1\sqrt{m}\sigma + K_2m\sigma^2+\delta, \nonumber \\ &\qquad\qquad i=1,\ldots,m; \, t=1,\ldots,T \label{eq:dist-asp-dyn} \end{align} and \begin{align} \kappa &:= \frac{\delta}{10m} \cdot \Lambda - (K_1\sqrt{m}\sigma + K_2m\sigma^2) - \frac{4c_0\sigma}{5} > 0 \label{eq:eps-dyn} \end{align} (recall from the discussion in the preceding paragraph that $K_1,K_2,\Lambda$ are now uniform in $t$ and hence $\kappa$ is also uniform in $t$). Then, using Theorem~\ref{thm:str_cvx}, the expressions for $\nabla f_t, \nabla^2 f_t$, and the assumption that the target stays within the compact region $\mathcal{T}$, we deduce the existence of constants $\mu,L>0$ such that for $t=1,\ldots,T$, \begin{enumerate} \item $f_t$ is $\mu$-strongly convex over $B(\hat{\bm{x}}_t,\kappa)$---i.e., for any $\bm{x},\bm{y}\in B(\hat{\bm{x}}_t,\kappa)$, \begin{align}\label{eq:toa-strcvx} f_t(\bm{x}) \geq f_t(\bm{y}) +\nabla f_t(\bm{y})^T(\bm{x}-\bm{y})+\frac{\mu}{2}\|\bm{x}-\bm{y}\|^2; \end{align} \item $\nabla f_t$ is $L$-Lipschitz continuous over $B(\hat{\bm{x}}_t,\kappa)$---i.e., for any $\bm{x},\bm{y}\in B(\hat{\bm{x}}_t,\kappa)$, \begin{align}\label{eq:toa-gradlip} \|\nabla f_t(\bm{y}) - \nabla f_t(\bm{x})\|\leq L\|\bm{x}-\bm{y}\|; \end{align} \end{enumerate} Now, let $\{\bm{x}_t\}_{t=1}^T$ be the sequence of iterates generated by the OGD update~\eqref{eq:toa-ogd} with initial point $\bm{x}_0$ and step size $\eta_t \equiv \eta \in (0,2/(\mu+L)]$ for $t=1,\ldots,T$. In addition, let $v_t := \|\bm{x}_{t+1}^\star - \bm{x}_t^\star\|$ ($t=1,\ldots,T-1$) denote the variation in the true target position between time $t$ and $t+1$ and $v := \max_{t\in\{1,\ldots,T-1\}} v_t$ denote the maximum variation in the true target position between successive time steps. The following proposition shows that under suitable conditions, OGD maintains the invariant that the iterate generated at the current time step lies in the strong convexity region of the loss function at the next time step. \begin{proposition}[Invariant of OGD] \label{prop:ogd-inv} Suppose that in addition to~\eqref{eq:dist-asp-dyn} and~\eqref{eq:eps-dyn}, the maximum noise power $\sigma^2$ and maximum variation $v$ satisfy \begin{equation} \label{eq:eps-rad} \kappa \ge \frac{2(K_1\sqrt{m}\sigma + K_2m\sigma^2) + v}{1-\rho}, \end{equation} where $\rho := \left( 1-\tfrac{2 \eta \mu L}{\mu+L} \right)^{1/2} \in (0,1)$ with $\mu,L$ given by~\eqref{eq:toa-strcvx},~\eqref{eq:toa-gradlip}, respectively, and $\kappa > 0$ is the radius of the strong convexity region of the loss function $f_t$ around the least-squares estimate $\hat{\bm x}_t$ for $t=1,\ldots,T$. Furthermore, suppose that the initial point $\bm{x}_0$ satisfies $\|\bm{x}_0 - \bm{x}_1^\star\| \le K_1\sqrt{m}\sigma + K_2m\sigma^2$. Then, for $t=0,1,\ldots,T-1$, the iterate $\bm{x}_t$ lies in the strong convexity region $B(\hat{\bm x}_{t+1},\kappa)$ of the loss function $f_{t+1}$. \end{proposition} \begin{proof} We proceed by induction on $t$. For $t=0$, we have \begin{align} \| \bm{x}_0 - \hat{\bm x}_1 \| &\le \| \bm{x}_0 - \bm{x}_1^\star \| + \| \bm{x}_1^\star - \hat{\bm x}_1 \| \nonumber \\ &\le 2(K_1\sqrt{m}\sigma + K_2m\sigma^2) \label{eq:init-bd} \\ &\le \kappa, \nonumber \end{align} where the second inequality follows from our assumption on $\bm{x}_0$ and Proposition~\ref{thm:esterror} and the last follows from our choice of $\kappa$ in~\eqref{eq:eps-rad}. This establishes the base case. Now, for $t\ge1$, we have \begin{align*} &~ \| \bm{x}_t - \hat{\bm x}_{t+1} \| \le \| \bm{x}_t - \hat{\bm x}_t \| + \| \hat{\bm x}_t - \hat{\bm x}_{t+1} \| \\ \le&~ \rho \| \bm{x}_{t-1} - \hat{\bm x}_t \| + \| \hat{\bm x}_t - \bm{x}_t^\star \| + \| \bm{x}_{t+1}^\star - \hat{\bm x}_{t+1} \| \\ &\quad~ + \| \bm{x}_t^\star - \bm{x}_{t+1}^\star \| \\ \le&~ \rho\kappa + 2(K_1\sqrt{m}\sigma + K_2m\sigma^2) + v_t \\ \le&~ \kappa, \end{align*} where the second inequality follows from the OGD update~\eqref{eq:toa-ogd}, the inductive hypothesis (i.e., $\bm{x}_{t-1}$ lies in the strong convexity region of $f_t$), and Fact~\ref{thm:conv_GD}; the third follows from the inductive hypothesis and Proposition~\ref{thm:esterror}; the last follows from our choice of $\kappa$ in~\eqref{eq:eps-rad}. This completes the inductive step and also the proof of Proposition~\ref{prop:ogd-inv}. \end{proof} We remark that since the loss functions $\{f_t\}_{t=1}^T$ are non-convex, some conditions on the maximum noise power, maximum variation, and quality of the initial point are to be expected in the CTTE analysis of OGD for tackling the TOA-based tracking problem~\eqref{eq:loss}. In fact, the performance analysis of ONM for general time-varying non-convex optimization in~\cite{LTS20}, though focusing on the dynamic regret metric, makes use of similar conditions on the maximum variation and quality of the initial point as those in Proposition~\ref{prop:ogd-inv}. Armed with Proposition~\ref{prop:ogd-inv}, we can prove the following theorem, which establishes a CTTE bound for OGD when it is applied to the TOA-based tracking problem. This constitutes our second main result in this paper. \begin{theorem}[CTTE of OGD for TOA-Based Tracking] \label{thm:ogd-ctte} Under the setting of Proposition~\ref{prop:ogd-inv}, the sequence of iterates $\{\bm{x}_t\}_{t=1}^T$ satisfies \[ {\rm CTTE}\left( \{\bm{x}_t\}_{t=1}^T \right) = \mathcal{O}(1 + V(T) + N_1(T) + N_2(T)), \] where $V(T) := \sum_{t=1}^{T-1} \| \bm{x}_{t+1}^\star - \bm{x}_t^\star \| = \sum_{t=1}^{T-1} v_t$ denotes the path length of the target trajectory, $N_1(T):=\sum_{t=1}^T \sigma_t$ denotes the cumulative noise standard deviation, and $N_2(T):=\sum_{t=1}^T \sigma_t^2$ denotes the cumulative noise variance. \end{theorem} \begin{proof} Using the definition of CTTE and the triangle inequality, we have \begin{align} {\rm CTTE}\left( \{\bm{x}_t\}_{t=1}^T \right) &= \sum_{t=1}^T \| \bm{x}_t - \bm{x}_t^\star \| \nonumber \\ &\le \sum_{t=1}^T \| \bm{x}_t - \hat{\bm x}_t \| + \sum_{t=1}^T \| \hat{\bm x}_t - \bm{x}_t^\star \|. \label{eq:pre-ctte} \end{align} Let us now bound the two terms in~\eqref{eq:pre-ctte} separately. For the first term, we begin by adapting the argument used in the proof of~\cite[Theorem 1]{MSJ+16} to our time-varying optimization setting and bound \begin{align*} &~ \sum_{t=1}^T \| \bm{x}_t - \hat{\bm x}_t \| \le \rho \sum_{t=1}^T \| \bm{x}_{t-1} - \hat{\bm x}_t \| \\ \le&~ \rho \| \bm{x}_0 - \hat{\bm x}_1 \| + \rho \sum_{t=2}^T \| \bm{x}_{t-1} - \hat{\bm x}_{t-1} \| + \rho \sum_{t=2}^T \| \hat{\bm x}_{t-1} - \hat{\bm x}_t \| \\ =&~ \rho \left( \| \bm{x}_0 - \hat{\bm x}_1 \| - \| \bm{x}_T - \hat{\bm x}_T \| \right) + \rho \sum_{t=1}^T \| \bm{x}_t - \hat{\bm x}_t \| \\ &~\, + \rho \sum_{t=1}^{T-1} \| \hat{\bm x}_t - \hat{\bm x}_{t+1} \|, \end{align*} where the first inequality follows from the OGD update~\eqref{eq:toa-ogd}, Proposition~\ref{prop:ogd-inv}, and Fact~\ref{thm:conv_GD}. It follows that \begin{align} \sum_{t=1}^T \| \bm{x}_t - \hat{\bm x}_t \| \le \frac{\rho}{1-\rho} \left( \| \bm{x}_0 - \hat{\bm x}_1 \| + \sum_{t=1}^{T-1} \| \hat{\bm x}_t - \hat{\bm x}_{t+1} \| \right). \label{eq:cum-err} \end{align} Now, using~\eqref{eq:init-bd}, we get \[ \| \bm{x}_0 - \hat{\bm x}_1 \| \le 2(K_1\sqrt{m}\sigma+K_2m\sigma^2). \] Furthermore, we have \begin{align*} &~ \sum_{t=1}^{T-1} \| \hat{\bm x}_t - \hat{\bm x}_{t+1} \| \\ \le&~ \sum_{t=1}^{T-1} \left( \| \hat{\bm x}_t - \bm{x}_t^\star \| + \| \bm{x}_t^\star - \bm{x}_{t+1}^\star \| + \| \bm{x}_{t+1}^\star - \hat{\bm x}_{t+1} \| \right) \\ \le&~ \sum_{t=1}^{T-1} \left( K_1\sqrt{m}(\sigma_t+\sigma_{t+1}) + K_2m(\sigma_t^2+\sigma_{t+1}^2) + v_t \right) \\ =&~ \sum_{t=1}^{T-1} v_t + 2K_1\sqrt{m} \sum_{t=1}^{T} \sigma_t + 2K_2m\sum_{t=1}^{T} \sigma_t^2, \end{align*} where the second inequality follows from Proposition~\ref{thm:esterror}. Substituting the above into~\eqref{eq:cum-err} yields \[ \sum_{t=1}^T \| \bm{x}_t - \hat{\bm x}_t \| = \mathcal{O}(1 + V(T) + N_1(T) + N_2(T)). \] For the second term, we simply invoke Proposition~\ref{thm:esterror} to get \begin{align*} \sum_{t=1}^T \| \hat{\bm x}_t - \bm{x}_t^\star \| &\le K_1\sqrt{m} \sum_{t=1}^T \sigma_t + K_2 m \sum_{t=1}^T \sigma_t^2 \\ &= \mathcal{O}(N_1(T) + N_2(T)). \end{align*} The desired result now follows by substituting the above into~\eqref{eq:pre-ctte}. \end{proof} Theorem~\ref{thm:ogd-ctte} reveals that OGD can achieve sublinear CTTE when both the path length $V(T)$ and the cumulative noise power $N_2(T)$ grow sublinearly (note that the latter, together with the fact that $N_1(T) \le \sqrt{T \cdot N_2(T)}$, implies the sublinear growth of the cumulative noise standard deviation $N_1(T)$). Roughly speaking, this means that if the target is not moving too fast and the noise power decays at a sufficiently fast rate over time, then the target tracking error of OGD will vanish asymptotically. It is important to note that our CTTE bound is expressed in terms of the path length of the \emph{target trajectory} (i.e., $V(T) = \sum_{t=1}^{T-1} \| \bm{x}_{t+1}^\star - \bm{x}_t^\star \|$), not the path length of the \emph{optimal solution trajectory} of the time-varying loss function (i.e., $V'(T) := \sum_{t=1}^{T-1} \| \hat{\bm x}_{t+1} - \hat{\bm x}_t \|$). Although the latter is commonly used in existing performance analyses of online methods (see, e.g.,~\cite{MSJ+16,BSR18,LTS20}), the former captures the actual variations in the target trajectory and is thus more relevant to the tracking problem considered in this paper. It is also worth noting that our CTTE bound shows explicitly how the TOA measurement noise affects the tracking performance of OGD through the terms $N_1(T)$ and $N_2(T)$. \section{Numerical Simulations} \label{sec:sim} In this section, we present numerical results to demonstrate the efficacy of OGD for the TOA-based tracking problem and illustrate our theoretical findings. Specifically, we apply both OGD and ONM---the latter has previously been used in~\cite{LTS20} to tackle the TOA-based tracking problem---to various test instances and compare their tracking performance. In all the considered instances, there are $m=3$ sensors, which are located at $\bm{a}_1 = \begin{bmatrix} 0.5 & 0.5 \end{bmatrix} ^T$, $\bm{a}_2 = \begin{bmatrix} 0 & 0.5 \end{bmatrix} ^T$, and $\bm{a}_3 = \begin{bmatrix} 0.5 & 0 \end{bmatrix} ^T$. Given the time horizon of interest $T$ and the target trajectory $\{\bm{x}_t^\star\}_{t=1}^T$, the measurement noise $w_i^t$ in~\eqref{eq:toa-model} is generated according to the Gaussian distribution with mean zero and variance $\sigma_t^2$ for $i=1,\ldots,m$; $t=1,\ldots,T$, and the TOA-based range measurements $\{ r_i^t : i=1,\ldots,m; \, t=1,\ldots,T \}$ are then obtained using~\eqref{eq:toa-model}. We consider two initialization strategies for OGD and ONM. One is \emph{exact initialization}, which assumes that the true initial target position $\bm{x}_1^\star$ is known and takes $\bm{x}_0=\bm{x}_1^\star$ as the initial point. The other is \emph{ordinary least-squares (OLS) initialization}, which takes \begin{equation} \label{eq:OLS-init} \bm{x}_0 = (\bm{A}^T\bm{A})^{-1}\bm{A}^T\bm{b}_1 \end{equation} with \begin{align} \bm{A} &:= \begin{bmatrix} (\bm{a}_2-\bm{a}_1)^T\\ \vdots\\ (\bm{a}_m-\bm{a}_{m-1})^T\\ \end{bmatrix}, \label{eq:LS-A} \\ \bm{b}_1 &:= \frac{1}{2}\begin{bmatrix} \|\bm{a}_2\|^2 - \|\bm{a}_1\|^2 + (r_1^1)^2 - (r_2^1)^2 \\ \vdots\\ \|\bm{a}_m\|^2 - \|\bm{a}_{m-1}\|^2 + (r_{m-1}^1)^2 - (r_m^1)^2 \end{bmatrix} \label{eq:LS-b} \end{align} as the initial point; see~\cite{STK05}. The OLS estimate in~\eqref{eq:OLS-init} can be obtained as follows: Observe that any $\bm{x}$ satisfying \[ \| \bm{x} - \bm{a}_i \|^2 \approx (r_i^1)^2, \quad i=1,\ldots,m \] can serve as an estimate of the true initial target position $\bm{x}_1^\star$. Upon subtracting the $i$th equation from the $(i+1)$st, where $i=1,\ldots,m-1$, we get \[ 2(\bm{a}_{i+1}-\bm{a}_i)^T\bm{x} \approx \|\bm{a}_{i+1}\|^2 - \|\bm{a}_i\|^2 + (r_i^1)^2 - (r_{i+1}^1)^2.\] In particular, we can obtain an estimate of $\bm{x}_1^\star$ by solving \begin{align} \label{eq:LS} \min_{\bm{x} \in \mathbb{R}^n} \|\bm{A}\bm{x} - \bm{b}\|^2, \end{align} where $\bm{A}$ and $\bm{b}$ are given by~\eqref{eq:LS-A} and~\eqref{eq:LS-b}, respectively. Since the vectors $\{\bm{a}_{i}-\bm{a}_1\}_{i=2}^{m}$ span $\mathbb{R}^n$ by assumption, the solution to~\eqref{eq:LS} is readily given by~\eqref{eq:OLS-init}. It is worth noting that the OLS estimate in~\eqref{eq:OLS-init} can be computed simply by using the sensor positions $\{\bm{a}_i\}_{i=1}^m$ and noisy range measurements $\{r_i^1\}_{i=1}^m$. Thus, it is an attractive choice for initializing OGD and ONM. We use the step size $\eta_t = 0.1$ for $t=1,\ldots,T$ in OGD. Then, OGD generates the position estimates of the target via~\eqref{eq:toa-ogd}, while ONM generates those via \[ \bm{x}_t = \bm{x}_{t-1} - \left(\nabla^2f_t(\bm{x}_{t-1})\right)^{-1}\nabla f_t(\bm{x}_{t-1}), \quad t=1,\ldots,T. \] All computations were carried out in MATLAB on an Intel(R) Core(TM) i5-8600 CPU 3.10GHz CPU machine. The CTTE shown in the figures are averaged over 1000 Monte Carlo runs. \subsection{Small Noise Level and Path Variation} To begin, we construct the following set of test instances (cf.~\cite[Section IV]{LTS20}): The time horizon of interest $T$ is set to $500$. The target's initial position is set to $\bm{x}_1^\star = \begin{bmatrix} 2 & 1 \end{bmatrix}^T $ and its positions at subsequent time steps are given by \begin{equation}\label{eq:source-update} \bm{x}_{t+1}^\star = \bm{x}_t^\star + \frac{0.005}{\sqrt{2(t+1)}} \bm{u}_t, \quad t=1,\ldots,T-1, \end{equation} where $\bm{u}_1,\ldots,\bm{u}_{T-1} \in \mathbb R^2$ are independently and uniformly distributed on the unit circle centered at the origin. We consider three scenarios, which correspond to three different noise levels: (i) $\sigma_t = 0.0001$ for $t=1,\ldots,T$; (ii) $\sigma_t = 0.01$ for $t=1,\ldots,T$; (iii) $\sigma_t = \tfrac{0.01}{\sqrt{t}}$ for $t=1,\ldots,T$. Figures~\ref{fig:error_sigma1e-4}--\ref{fig:error_sigma1e-2oversqrt} show the CTTE of OGD and ONM with exact and OLS initialization at these three noise levels. Figures~\ref{fig:traj_sigma1e-4}--\ref{fig:traj_sigma1e-2oversqrt} show the tracking trajectories generated by OGD and ONM for particular instances at those noise levels with OLS initialization. We also include the trajectories of the least-squares estimates $\{\hat{\bm x}_t\}_{t=1}^T$ in the figures for reference. These trajectories are generated using gradient descent (GD) at each time step. Specifically, at time $t$, we use the true target position $\bm{x}_t^\star$ as the initial point and perform the GD updates using the constant step size $1/m$ until either the norm of the gradient is smaller than $10^{-8}$ or the number of iterations reaches 5000. We then declare the last iterate to be $\hat{\bm x}_t$. \begin{figure*}[!t] \centering \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/new_error_sig1e-4} \caption{$\sigma_t=0.0001$} \label{fig:error_sigma1e-4} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/error_sig1e-2} \caption{$\sigma_t=0.01$} \label{fig:error_sigma1e-2} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/error_sig1e-2oversqrt} \caption{$\sigma_t=0.01/\sqrt{t}$} \label{fig:error_sigma1e-2oversqrt} \end{subfigure} \vfill \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/GD1_traj_sig1e-4.png} \caption{$\sigma_t=0.0001$} \label{fig:traj_sigma1e-4} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/GD1_traj_sig1e-2.png} \caption{$\sigma_t=0.01$} \label{fig:traj_sigma1e-2} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/GD1_traj_sig1e-2oversqrt.png} \caption{$\sigma_t=0.01/\sqrt{t}$} \label{fig:traj_sigma1e-2oversqrt} \end{subfigure} \caption{CTTE (top row) and tracking trajectories (bottom row) of OGD and ONM at different noise levels.} \label{fig:OGDvsONM} \end{figure*} In the first scenario, the noise level is small compared to the path variation (i.e., $\sigma_{t+1}=0.0001$ vs. $v_t = \tfrac{0.005}{\sqrt{2(t+1)}}$ for $t=1,\ldots,T-1$ with $T=500$). We see from Figure~\ref{fig:error_sigma1e-4} that ONM has a smaller CTTE than OGD. This can be explained as follows: First, Proposition~\ref{thm:esterror} implies that the least-squares estimate $\hat{\bm x}_t$ is close to the true target position $\bm{x}_t^\star$ for $t=1,\ldots,T$. Second, since ONM uses both first- and second-order information of the loss function $f_t$, the point it generates is closer to $\hat{\bm x}_t$ than that generated by OGD. This suggests that ONM is better at tracking the least-squares estimates than OGD. In fact, these two claims are corroborated by our numerical results; see Figure~\ref{fig:traj_sigma1e-4}. In the second scenario, the noise level increases relative to the path variation (i.e., $\sigma_{t+1}=0.01$ vs. $v_t = \tfrac{0.005}{\sqrt{2(t+1)}}$ for $t=1,\ldots,T-1$ with $T=500$). Here, the ability of ONM to track the least-squares estimates closely becomes a liability, because Proposition~\ref{thm:esterror} suggests that the true target position and the least-squares estimate will be further apart. Indeed, as shown in Figure~\ref{fig:error_sigma1e-2}, ONM has a larger CTTE than OGD, and the gap widens as time goes by. We see from Figure~\ref{fig:traj_sigma1e-2} that ONM is much better at tracking the least-squares estimates than OGD. However, the least-squares estimates are quite far from the true target positions, and OGD is better at tracking the latter. We note that in the above two scenarios, the noise level is constant, and the CTTE of OGD eventually grows linearly (see Figures~\ref{fig:error_sigma1e-4} and~\ref{fig:error_sigma1e-2}). This is consistent with the result in Theorem~\ref{thm:ogd-ctte}, as $N_1(T)=\Theta(T)$ and $N_2(T)=\Theta(T)$ and both terms dominate $V(T)=\Theta(\sqrt{T})$. In the third scenario, the noise level diminishes as time goes by, but the relative magnitude between noise level and path variation stays roughly constant (i.e., $\sigma_{t+1}=\tfrac{0.01}{\sqrt{t}}$ vs. $v_t = \tfrac{0.005}{\sqrt{2(t+1)}}$ for $t=1,\ldots,T-1$ with $T=500$). From Figure~\ref{fig:error_sigma1e-2oversqrt}, we see that with exact initialization, OGD has a smaller CTTE than ONM. This suggests that the high initial noise level, which causes the least-squares estimate to deviate from the true target position, throws off ONM and degrades its subsequent tracking performace even though the noise level is diminishing. Moreover, given the high initial noise level, the OLS initialization strategy tends to produce an inaccurate estimate of the true initial target position. Consequently, with OLS initialization, the CTTE of both OGD and ONM grow rapidly in the beginning, though the former is more affected by the quality of the OLS estimate than the latter. Nevertheless, we observe that the CTTE gap between OGD and ONM narrows as time goes by. This supports our earlier claim that OGD is better at tracking the true target positions than ONM; see also Figure~\ref{fig:traj_sigma1e-2oversqrt}. Lastly, we note that the CTTE of OGD grows sublinearly. This is consistent with the result in Theorem~\ref{thm:ogd-ctte}, as we have $V(T) = \Theta(\sqrt{T})$, $N_1(T) = \Theta(\sqrt{T})$, and $N_2(T)=\Theta(\log T)$. We also compare the per-iteration CPU time of OGD and ONM. As can be seen in Table~\ref{tab:CPUtime}, OGD is about 2-3 times faster than ONM. The higher runtime of the latter can be attributed to the computation of the inverse of the Hessian of the loss function. \begin{table}[htb] \fontsize{10}{12.5}\selectfont \centering \begin{tabular}{c|c|c} Noise Level & OGD & ONM\\ \hline $\sigma_t=0.0001$ & $5.23\times10^{-6}$s & $1.36\times10^{-5}$s\\ $\sigma_t=0.01$ & $5.11\times10^{-6}$s & $1.31\times10^{-5}$s\\ $\sigma_t=0.01/\sqrt{t}$ & $5.05\times10^{-6}$s & $1.29\times10^{-5}$s \end{tabular} \caption{Per-iteration CPU time of OGD and ONM.} \label{tab:CPUtime} \end{table} To better understand the effect of the relative magnitude between noise level and path variation on the tracking performance of OGD and ONM, let us plot Figure~\ref{fig:error_sigma1e-4} again but with the longer time horizon $T=10000$. The result is shown in Figure~\ref{fig:CTTE}. Although the CTTE of OGD is higher than that of ONM in the beginning, the latter eventually overtakes the former as $t$ increases. This is consistent with our earlier observation that ONM is better at tracking the least-squares estimates than OGD. Indeed, when $t$ is sufficiently large, the noise level $\sigma_{t+1}=0.0001$ is larger than the path variation $v_t = \tfrac{0.005}{\sqrt{2(t+1)}}$. Thus, as time goes by, the true target position and the least-squares estimate become further apart (see Proposition~\ref{thm:esterror}), and ONM starts to incur a higher target tracking error at each time step. This suggests that the performance of ONM is rather sensitive to the noise level, while that of OGD is quite stable. \begin{figure} \centering \includegraphics[scale=0.48]{fig/rand_sig1e-4T10000.png} \caption{CTTE of OGD and ONM at noise level $\sigma_t=0.0001$, $T=10000$.} \label{fig:CTTE} \end{figure} As a further illustration, we construct another set of test instances with $T=10000$, the same initial target position $\bm{x}_1^\star = \begin{bmatrix} 2 & 1 \end{bmatrix}^T $ and target trajectory~\eqref{eq:source-update} as before, and the following two different noise levels: (i) $\sigma_t = \frac{0.005}{\sqrt{2t}}$ for $t=1,\ldots,T$; (ii) $\sigma_t = \frac{0.008}{\sqrt{2t}}$ for $t=1,\ldots,T$. For $t=1,\ldots,T-1$, the ratios of noise level $\sigma_{t+1}$ to path variation $v_t$ in these two cases are 1 and 1.6, respectively. Figures~\ref{fig:rand_noise-same-variation}--\ref{fig:rand_noise-1pt6-variation} show the CTTE of OGD and ONM with exact and OLS initialization at these two noise levels. \begin{figure*}[htb] \centering \begin{subfigure}{.42\textwidth} \centering \includegraphics[width=0.92\textwidth]{fig/rand_noise-same-variation.png} \caption{$\bm{x}_{t+1}^\star = \bm{x}_t^\star + \frac{0.005}{\sqrt{2(t+1)}} \bm{u}_t,~\sigma_t = \frac{0.005}{\sqrt{2t}}$} \label{fig:rand_noise-same-variation} \end{subfigure} \begin{subfigure}{.42\textwidth} \centering \includegraphics[width=0.92\textwidth]{fig/rand_noise-1pt6-variation.png} \caption{$\bm{x}_{t+1}^\star = \bm{x}_t^\star + \frac{0.005}{\sqrt{2(t+1)}} \bm{u}_t,~\sigma_t = \frac{0.008}{\sqrt{2t}}$} \label{fig:rand_noise-1pt6-variation} \end{subfigure} \caption{CTTE of OGD and ONM when applied to different target trajectories and noise levels.} \label{fig:region_OGD} \end{figure*} When the noise level to path variation ratio is 1, Figure~\ref{fig:rand_noise-same-variation} shows that ONM performs better than OGD, regardless of whether exact or OLS initialization is used. However, when the ratio increases to $1.6$, Figure~\ref{fig:rand_noise-1pt6-variation} shows that OGD eventually performs better than ONM, regardless of whether exact or OLS initialization is used. These results corroborate our earlier account that OGD is better at tracking the true target positions, while ONM is better at tracking the least-squares estimates. \subsection{Large Noise Level and Path Variation} Next, we study the CTTE of OGD and ONM when the two methods are applied to test instances that violate one or more of the conditions~\eqref{eq:dist-asp-dyn},~\eqref{eq:eps-dyn}, and~\eqref{eq:eps-rad}. In particular, there is no guarantee that the iterate generated by OGD at the current time step lies in the strong convexity region of the loss function at the next time step. We first construct a test instance that has large noise level and path variation but the ratio between them is small. The time horizon of interest is set to $T=500$. The target's initial position is set to $\bm{x}_1^\star = \begin{bmatrix} 2 & 1 \end{bmatrix}^T $ and its subsequent positions are given by \[ \bm{x}_{t+1}^\star = \bm{x}_t^\star+\frac{0.1}{\sqrt{2(t+1)}}\bm{u}_t,\quad t = 1,\ldots,T-1. \] Here, as before, $\bm{u}_1,\ldots,\bm{u}_{T-1} \in \mathbb R^2$ are independently and uniformly distributed on the unit circle centered at the origin. The noise levels are given by $\sigma_t=\frac{0.1}{\sqrt{2t}}$ for $t = 1,\ldots,T$. Figure~\ref{fig:rand_noise-var_1e-1oversqrt} shows the CTTE of OGD and ONM. We observe that the CTTE of OGD is much lower than that of ONM with both exact and OLS initialization. One possible explanation is that the good performance of ONM relies heavily on the local strong convexity of the loss function, and the lack of such a property seriously affects its performance. Now, let us construct a test instance that has a small noise level but large path variation, so that the ratio between them is small. The time horizon of interest and the target's initial position are the same as before. The target trajectory is given by \[ \bm{x}_{t+1}^\star = \bm{x}_t^\star+\frac{0.5}{\sqrt{2(t+1)}}\bm{u}_t,\quad t = 1,\ldots,T-1, \] while the noise levels are given by $\sigma_t=\frac{0.001}{\sqrt{2t}}$ for $t = 1,\ldots,T$. Figure~\ref{fig:rand_noise_1e-3oversqrt-var_5e-1oversqrt} shows the CTTE of OGD and ONM. We see that the CTTE of OGD is much lower than that of ONM. In fact, when the iterates are no longer guaranteed to lie in the strong convexity regions of the loss functions, ONM becomes rather unstable regardless of the noise level to path variation ratio. This supports our earlier explanation that the local strong convexity of the loss function is crucial to the good performance of ONM. By contrast, OGD is much more robust and can better track the true target positions even when the conditions for local strong convexity are violated. \begin{figure*}[!t] \centering \begin{subfigure}{.42\textwidth} \centering \includegraphics[width=0.92\textwidth]{fig/rand_noise-var_1e-1oversqrt.png} \caption{$\bm{x}_{t+1}^\star = \bm{x}_t^\star+\frac{0.1}{\sqrt{2(t+1)}}\bm{u}_t,~\sigma_t=\frac{0.1}{\sqrt{2t}}$} \label{fig:rand_noise-var_1e-1oversqrt} \end{subfigure} \begin{subfigure}{.42\textwidth} \centering \includegraphics[width=0.92\textwidth]{fig/rand_noise_1e-3oversqrt-var_5e-1oversqrt.png} \caption{$\bm{x}_{t+1}^\star = \bm{x}_t^\star+\frac{0.5}{\sqrt{2(t+1)}}\bm{u}_t,~\sigma_t=\frac{0.001}{\sqrt{2t}}$} \label{fig:rand_noise_1e-3oversqrt-var_5e-1oversqrt} \end{subfigure} \caption{CTTE of OGD and ONM with large noise level and/or path variation.} \label{fig:large_var&noise} \end{figure*} \section{Conclusion} \label{sec:concl} In this paper, we established the first non-trivial performance bound for OGD when it is applied to a time-varying non-convex least-squares formulation of the TOA-based tracking problem. The performance metric we adopted is the CTTE, which measures the cumulative discrepancy between the trajectory of position estimates and that of the true target. To establish the said performance bound, we developed new results on the estimation and geometric properties of the classic static TOA-based source localization problem, which can be of independent interest. Our numerical results corroborate the theoretical findings and show that OGD can effectively track the target at different noise levels. A possible future direction is to design and analyze online methods for TDOA-based tracking, which corresponds to a sequential version of the TDOA-based source localization problem (see, e.g.,~\cite{LCS09} and the references therein). One possible approach is to combine the results in~\cite{LPS17} with the techniques developed in this paper. Another future direction is to study the performance of different online methods for solving the TOA-based tracking problem. \section{Introduction} \label{sec:intro} Target tracking~\cite{LWH+02,SNP16} is a key enabling technology in many applications of multi-agent systems and wireless sensor networks, such as motion planning~\cite{CWC+04,DSH09} and surveillance~\cite{PJ05,PDS14}. In one of its basic forms, the tracking problem aims to maintain position estimates of a moving, signal-emitting target over time using noisy measurements of the emitted signal collected by stationary sensors. Such a sequential localization formulation has been extensively studied in the control and signal processing communities, and various approaches for tackling it have been proposed. When a model on the target dynamics and noise statistics is available, a classic approach is to employ Kalman filtering techniques to perform the tracking; see, e.g.,~\cite{LEH06,TFLC09,WLM11,YRLS14} and the references therein. In recent years, however, there have been increasing efforts in developing tracking techniques that require only minimal assumptions on the target trajectory and/or noise distribution. One approach is to view the sequential localization formulation through the lens of time-varying optimization~\cite{DSBM20,SDP+20}. Specifically, at each time step, the position estimate of the target is given by a minimizer of a loss function that depends on the noisy signal measurements collected at that time step. However, since the time interval between successive measurements is often very short and the sensors have limited computational power, it is impractical to solve the loss minimization problem at each time step exactly. This motivates the use of online optimization techniques to tackle the target tracking problem. To evaluate the performance of an online method, various metrics are available; see~\cite{SDP+20}. These metrics differ in how they measure the discrepancy between the solutions generated by the method at different time steps and the optimal solutions at the corresponding time steps. When the loss function is convex at every time step, it has been shown that many online methods enjoy strong performance guarantees under different metrics; see, e.g.,~\cite{Zink03,HW15,JRSS15,MSJ+16,SMK+16,SJ17,BSR18} and the references therein. Although the results just mentioned cover a wide variety of target tracking scenarios, they do not apply to those where the loss function of interest is \emph{non-convex}. One such scenario is \emph{time-of-arrival (TOA)-based tracking}, in which sensors collect TOA measurements of the target signal and the tracking is achieved by minimizing a non-convex least-squares loss function associated with the measurements collected at each time step~\cite{ZXY+10,XDD13,LTS20}. In this scenario, the tracking problem can be viewed as a sequential version of the well-studied TOA-based source localization problem; see, e.g.,~\cite{CMS04,SY07,BSL08,BTC08,XDD11a,JSZ+13,So19}. As far as we know, the TOA-based tracking problem has barely been investigated from the time-varying or online optimization perspective in the literature. Recently, there have been some works that study time-varying optimization problems with general non-convex loss functions. However, the results are not entirely satisfactory when specialized to the TOA-based tracking problem. For instance, the work~\cite{LTS20} proposes an online Newton's method (ONM) and establishes a bound on its \emph{dynamic regret} (i.e., the difference between the cumulative loss incurred by the sequence of solutions generated by the method and that incurred by the sequence of optimal solutions; see~\cite{SDP+20}) by assuming, among other things, that the Hessian of the loss function at each time step satisfies a non-degeneracy condition. It also demonstrates the numerical performance of ONM on the TOA-based tracking problem. Nevertheless, since ONM needs to compute the inverse of the Hessian of the loss function at each time step, it can be computationally expensive. In addition, the work does not shed any light on whether the TOA-based tracking problem satisfies the assumptions underlying the dynamic regret analysis of ONM. As such, the theoretical performance of ONM for TOA-based tracking remains unclear. On the other hand, the work~\cite{HMMR20} develops a dual averaging method and obtains a bound on its dynamic regret under relatively mild assumptions on the loss functions. However, the method is mainly of theoretical interest, as it needs to compute a distribution over the feasible solutions and sample a solution from this distribution at each time step, and neither of these is straightforward to implement for the TOA-based tracking problem. Motivated by the above discussion, we are interested in developing a low-complexity online method for TOA-based tracking and establishing theoretical guarantee on its performance. One method that naturally suggests itself is online gradient descent (OGD). The method only needs to perform a single gradient descent update at each time step, thus making it well-suited for the target tracking task. However, there has been no performance analysis of OGD for our problem setting so far. Not surprisingly, a major difficulty is that the least-squares loss function associated with the TOA measurements is non-convex. The main contribution of this work is the development of the first non-trivial performance bound for OGD when it is applied to the TOA-based tracking problem. The performance metric we adopt is the \emph{cumulative target tracking error} (CTTE), which is defined as the sum of the distances between the estimated target position and the \emph{true target position} at different time steps. Our bound makes explicit the dependence of the CTTE of OGD on the path length of the target trajectory and the noise power of the TOA measurements. It is important to note that there is a subtle yet fundamental difference in nature between the CTTE metric and most other metrics used in the time-varying or online optimization literature. The former measures the performance relative to the \emph{true values of the parameter} we wish to estimate (viz. the true positions of the target at different time steps), while the latter (such as the dynamic regret or the usual tracking error) measure the performance relative to the \emph{optimal solutions} to the loss minimization problems at different time steps. In the context of the TOA-based tracking problem, it is clear that the CTTE defined above is a more relevant performance metric, as ultimately we are interested in how well the online method tracks the true target positions rather than the optimal solutions to the time-varying loss minimization problem. Nevertheless, the use of the true target positions in the definition of CTTE makes it a more challenging metric to analyze. To establish the said CTTE bound, we proceed in two steps. First, we revisit the classic least-squares formulation of the (static) TOA-based source localization problem and elucidate its estimation and geometric properties. Specifically, under standard assumptions on the TOA measurement model, we establish a bound on the estimation error of any least-squares estimate of the true target position and use this bound to show that the loss function, albeit non-convex in general, is locally strongly convex at its global minima. Moreover, we give an explicit estimate of the size of the strong convexity region. We remark that similar results have previously been established for a \emph{time-difference-of-arrival} (TDOA)-based least-squares loss function~\cite{LPS17}. However, to the best of our knowledge, our results for the TOA-based least-squares loss function are new and can be of independent interest. In particular, it provides further theoretical justification for the good empirical performance of gradient-based schemes observed in~\cite{BTC08} when solving the TOA-based source localization problem. Second, we extend our local strong convexity result from the static localization setting to the dynamic target tracking setting. Specifically, we show that as long as the aforementioned assumptions on the TOA measurement model are satisfied and the distance between the true positions of the target at consecutive time steps is sufficiently small, the position estimate of the target at the current time step will lie in the strong convexity region of the loss function at the next time step. This allows us to utilize techniques from online strongly convex optimization to establish the advertised CTTE bound for OGD. The notation in this paper is mostly standard. We use $\|\cdot\|_1$ and $\|\cdot\|$ to denote the $\ell_1$-norm and Euclidean norm, respectively. Given a vector $\bar{\bm x}\in\mathbb R^n$ and a scalar $r>0$, we use $B(\bar{\bm x},r) := \{\bm{x}\in\mathbb R^n: \|\bm{x}-\bar{\bm x}\| \le r\}$ to denote the closed Euclidean ball with center $\bar{\bm x}$ and radius $r$. Given a symmetric matrix $\bm{A}$, we use $\lambda_{\min}(\bm{A})$ to denote its smallest eigenvalue and $\bm{A}\succ\bm{0}$ to indicate that it is positive definite. The rest of the paper is organized as follows. In Section~\ref{sec:formulation}, we present a time-varying optimization formulation of the TOA-based tracking problem and describe how it can be tackled by OGD. In Section~\ref{sec:str_cvx}, we study the estimation error and local strong convexity property of the static TOA-based source localization problem. Using these results, we establish our bound on the CTTE of OGD for the TOA-based tracking problem in Section~\ref{sec:toa-reg}. In Section~\ref{sec:sim}, we present numerical results to demonstrate the efficacy of OGD for TOA-based tracking and illustrate our theoretical findings. We then end with some closing remarks in Section~\ref{sec:concl}. \section{Problem Formulation and Preliminaries}\label{sec:formulation} We begin by describing the setup for TOA-based tracking. Let $\bm{x}_t^{\star}\in\mathbb{R}^n$ be the unknown true position of the moving target at time $t$, where $t=1,\ldots,T$ and $T$ is the time horizon of interest. Furthermore, let $\bm{a}_i\in\mathbb{R}^n$ ($i=1,\ldots,m$) be the known position of the $i$th sensor and suppose that the vectors $\{\bm{a}_i-\bm{a}_1\}_{i=2}^m$ span $\mathbb{R}^n$ (in particular, we have $m \ge n+1$). We consider the following model for TOA-based range measurements: \begin{equation} \label{eq:toa-model} r_i^t = \|\bm{x}_t^{\star} - \bm{a}_i\| + w_i^t, \quad i=1,\ldots,m; \, t=1,\ldots,T. \end{equation} Here, $w_i^t$ is the measurement noise and $r_i^t$ is the noisy TOA-based range measurement between the target and the $i$th sensor at time $t$. We assume that $w_i^t$ is a random variable with mean zero, variance bounded above by $\sigma_t^2$ and is independent of the noise at other sensors and at other time steps. We also assume that $|w_i^t| \ll \|\bm{x}_t^{\star} - \bm{a}_i\|$ for $i=1,\ldots,m$ and $t=1,\ldots,T$. It is worth noting that similar assumptions have appeared in the localization literature; see, e.g.,~\cite{WSL16}. To estimate the target position at time $t$, a natural approach is to consider the following non-convex least-squares formulation: \begin{equation} \label{eq:loss} \min_{\bm{x} \in \mathbb R^n} \ f_t(\bm{x}) := \sum_{i=1}^m(\|\bm{x} - \bm{a}_i\| - r_i^t)^2. \end{equation} Such a formulation is motivated by the fact that when the measurement noise vector $\bm{w}^t=(w_1^t,\ldots,w_m^t)$ is Gaussian, every optimal solution to Problem~\eqref{eq:loss} is a maximum-likelihood estimate of the true target position $\bm{x}_t^\star$; see, e.g.,~\cite{CMS04}. Henceforth, we shall use $\hat{\bm x}_t$ to denote an optimal solution to~\eqref{eq:loss} (i.e., $\hat{\bm x}_t \in \arg\min_{\bm{x}\in\mathbb R^n} f_t(\bm{x})$) and refer to it as a \emph{least-squares estimate} of the true target position $\bm{x}_t^\star$. In this paper, we propose to apply OGD to tackle the time-varying optimization formulation~\eqref{eq:loss}, as it may not be computationally feasible to find an (approximately) optimal solution to~\eqref{eq:loss} at every time step. Specifically, given an estimate $\bm{x}_{t-1}$ of the target position at time $t-1$ and the noisy range measurements $\{r_i^t\}_{i=1}^m$ at time $t$, we generate an estimate $\bm{x}_t$ of the target position at time $t$ via the one-step gradient descent update \begin{align}\label{eq:toa-ogd} \bm{x}_{t} = \bm{x}_{t-1} - \eta_t\nabla f_t(\bm{x}_{t-1}), \quad t=1,\ldots,T, \end{align} where $\eta_t>0$ is the step size. We remark that the update~\eqref{eq:toa-ogd} should be interpreted in a formal sense at this point, as the function $f_t$ is non-differentiable at $\bm{x} \in \{\bm{a}_1,\ldots,\bm{a}_m\}$. We shall justify the validity of~\eqref{eq:toa-ogd} in the following sections. Naturally, we are interested in evaluating the performance of the sequence of position estimates $\{\bm{x}_t\}_{t=1}^T$. For that purpose, we employ the notion of CTTE, which is defined as \[ {\rm CTTE}\left( \{\bm{x}_t\}_{t=1}^T \right) := \sum_{t=1}^T \| \bm{x}_t - \bm{x}_t^\star \|. \] Note that the definition of CTTE involves the sequence of \emph{true target positions} $\{\bm{x}_t^\star\}_{t=1}^T$, not the sequence of \emph{optimal solutions} $\{\hat{\bm{x}}_t\}_{t=1}^T$ to Problem~\eqref{eq:loss}, as it is the former that we are interested in tracking. Indeed, a small CTTE implies that the estimate $\bm{x}_t$ is close to the true target position $\bm{x}_t^\star$ at every time step $t$. Our goal is to bound the CTTE in terms of the variations in the target trajectory $\{ \|\bm{x}_{t+1}^\star - \bm{x}_t^\star\| \}_{t=1}^{T-1}$ and the noise power $\{\sigma_t^2\}_{t=1}^T$ and to derive conditions that can guarantee a sublinear CTTE bound (i.e., $\tfrac{1}{T}{\rm CTTE}\left( \{\bm{x}_t\}_{t=1}^T \right) \rightarrow 0$) on the tracking performance of OGD. We remark that a sublinear CTTE bound is a desirable property for a tracking algorithm to have, as it implies that the target tracking error of the algorithm---i.e., the distance between the target position estimate produced by the algorithm and the true target position---vanishes asymptotically. In the next section, we will develop two results that are key to achieving this goal. Specifically, under the assumption that the power of the measurement noise $\sigma_t^2$ is sufficiently small, we will first establish a bound on the estimation error $\|\hat{\bm x}_t - \bm{x}_t^\star\|$ and then use this bound to show that the loss function $f_t$ is locally strongly convex at the least-squares estimate $\hat{\bm x}_t$.\footnote{A function $g:\mathbb R^n\rightarrow\mathbb R$ is said to be \emph{locally strongly convex at $\bar{\bm x}$} if there exists an $r>0$ such that $g$ is strongly convex on the ball $B(\bar{\bm x},r)$~\cite{Vial82}.} \section{Local Strong Convexity of TOA-Based Source Localization} \label{sec:str_cvx} Consider a fixed time $t$. Then, Problem~\eqref{eq:loss} reduces to the classic TOA-based source localization problem (see, e.g.,~\cite{So19}), in which the target is considered static. For notational simplicity, we drop the index $t$ and write Problem~\eqref{eq:loss} as \begin{align}\label{eq:toa-ml} \min_{\bm{x}\in\mathbb R^n} \ f(\bm{x}) := \sum_{i=1}^m(\|\bm{x} - \bm{a}_i\| - r_i)^2 \end{align} with $r_i=\|\bm{x}^{\star} - \bm{a}_i\| + w_i$. As before, we assume that $w_i$ is a random variable with mean zero, variance bounded above by $\sigma^2$ and satisfies $|w_i| \ll \| \bm{x}^{\star} - \bm{a}_i \|$. Let $\hat{\bm{x}} \in \arg\min_{\bm{x}\in\mathbb R^n} f(\bm{x})$ denote a least-squares estimate of the true target position $\bm{x}^\star$. The following proposition, which plays a crucial role in our subsequent development, shows that $\hat{\bm x}$ and $\bm{x}^\star$ are close when the power of the measurement noise vector $\bm{w}=(w_1,\ldots,w_m)$ is small. \begin{proposition}[Estimation Error of Least-Squares Estimator] \label{thm:esterror} Suppose that $\|\bm{w}\| \le c_0 \sqrt{m}\sigma$ for some constant $c_0>0$. Then, there exist constants $K_1,~K_2 > 0$, which are determined by $\bm{a}_1,\ldots,\bm{a}_m$ and $\bm{x}^\star$, such that \[ \|\hat{\bm{x}}-\bm{x}^\star\| \leq K_1\sqrt{m}\sigma + K_2m\sigma^2. \] \end{proposition} \noindent The proof of Proposition~\ref{thm:esterror} can be found in Appendix~\ref{app:esterror}. The assumption on $\|\bm{w}\|$ in Proposition~\ref{thm:esterror} is rather mild, as it can be satisfied with high probability when, e.g., $w_1,\ldots,w_m$ are sub-Gaussian random variables~\cite[Chapter 3]{Ver18}. Now, using Proposition~\ref{thm:esterror}, we can prove the following theorem, which establishes the local strong convexity of $f$ at $\hat{\bm x}$ and provides an explicit estimate on the size of the strong convexity region around $\hat{\bm x}$. This constitutes our first main result in this paper. \begin{theorem}[Local Strong Convexity of TOA-Based Source Localization] \label{thm:str_cvx} Consider the setting of Proposition~\ref{thm:esterror}. Suppose that for some given $\delta>0$, the noise power $\sigma^2$ satisfies \begin{equation} \label{eq:dist-asp} \|\bm{x}^{\star} - \bm{a}_i\|>K_1\sqrt{m}\sigma + K_2m\sigma^2+\delta, \quad i=1,\ldots,m \end{equation} and \begin{align} \kappa &:= \frac{\delta}{10m} \cdot \Lambda - (K_1\sqrt{m}\sigma + K_2m\sigma^2) - \frac{4c_0\sigma}{5} > 0, \label{eq:eps} \end{align} where \[ \Lambda := \lambda_{\min}\left( \sum_{i=1}^m \left( \frac{\bm{x}^\star-\bm{a}_i}{\|\bm{x}^\star-\bm{a}_i\|} \right)\left( \frac{\bm{x}^\star-\bm{a}_i}{\|\bm{x}^\star-\bm{a}_i\|}\right)^T \right). \] Then, we have $\nabla^2f(\hat{\bm x}+\bm{\epsilon}) \succ \bm{0}$ for all $\bm{\epsilon} \in \mathbb R^n$ satisfying $\| \bm{\epsilon} \| \le \kappa$; i.e., $f$ is strongly convex over $B(\hat{\bm x},\kappa)$. \end{theorem} \noindent The proof of Theorem~\ref{thm:str_cvx} can be found in Appendix~\ref{app:str_cvx}. Here, let us elaborate on the assumptions of the theorem. \begin{enumerate} \item Condition~\eqref{eq:dist-asp} stipulates that the target should be sufficiently far from the sensors, which is not very restrictive in practice. Moreover, when combined with Proposition~\ref{thm:esterror}, the condition implies that $\| \hat{\bm x} - \bm{a}_i\| > \delta$ for $i=1,\ldots,m$, which shows that the loss function $f$ is smooth around the least-squares estimate $\hat{\bm x}$. This allows us to use the Hessian $\nabla^2f$ to characterize the local strong convexity of $f$ at $\hat{\bm x}$. \item Since the vectors $\{\bm{a}_i-\bm{a}_1\}_{i=2}^m$ span $\mathbb R^n$ by assumption, it can be shown that the vectors $\{\bm{x}^\star - \bm{a}_i\}_{i=1}^m$ also span $\mathbb R^n$. This implies that $\Lambda > 0$. Thus, condition~\eqref{eq:eps} can be satisfied when $\sigma>0$ is sufficiently small (incidentally, condition~\eqref{eq:dist-asp} also becomes easier to satisfy as $\sigma$ becomes smaller). An important insight drawn from~\eqref{eq:eps} is that the landscape of the loss function $f$ around the least-squares estimate $\hat{\bm x}$ depends on the noise power level and the geometric configuration of the target and sensors. \end{enumerate} We remark that although the TOA-based source localization problem has been extensively studied in the literature, Theorem~\ref{thm:str_cvx} is, to the best of our knowledge, the first result that elicits the local strong convexity property of the non-convex least-squares formulation~\eqref{eq:toa-ml}. Now, since the strong convexity region $B(\hat{\bm x},\kappa)$ around $\hat{\bm x}$ is compact and $\nabla^2f$ is continuous over $B(\hat{\bm x},\kappa)$, we see that $\nabla f$ is Lipschitz continuous over $B(\hat{\bm x},\kappa)$. Thus, Theorem~\ref{thm:str_cvx} implies that when applying the gradient descent method to tackle Problem~\eqref{eq:toa-ml}, the resulting sequence of iterates will converge to the optimal solution $\hat{\bm x}$ at a linear rate, provided that the initial point lies in the strong convexity region around $\hat{\bm x}$. This can be deduced using the following well-known result. \begin{fact}[Linear Convergence of Gradient Descent for Strongly Convex Minimization; cf.~{\cite[Theorem 2.1.15]{N04}}]\label{thm:conv_GD} Let $g:\mathbb{R}^n\rightarrow\mathbb{R}$ be a function that is smooth, $\mu$-strongly convex, and $L$-gradient Lipschitz continuous on an open convex set $\mathcal{X}\subseteq\mathbb{R}^n$. Suppose that $g$ has a global minimizer $\hat{\bm x}$ over $\mathcal{X}$. Then, the sequence $\{\bm{x}_k\}_{k\ge0}$ generated by the gradient descent method \[ \bm{x}_{k+1} = \bm{x}_k - \eta\nabla g(\bm{x}_k) \] with initial point $\bm{x}_0 \in \mathcal{X}$ and step size $\eta \in (0,2/(\mu + L)]$ satisfies \[ \|\bm{x}_{k+1} - \hat{\bm{x}}\|^2 \leq\left(1 - \frac{2\eta\mu L}{\mu+L}\right)\|\bm{x}_k - \hat{\bm{x}}\|^2. \] \end{fact} \noindent In particular, Theorem~\ref{thm:str_cvx} provides a means to justify the good empirical performance of gradient-based schemes observed in~\cite{BTC08} when solving the TOA-based source localization problem. \section{CTTE of OGD for TOA-Based Tracking}\label{sec:toa-reg} Let us now address the main goal of this paper---namely, to establish a bound on the CTTE of OGD for TOA-based tracking. The results in Section~\ref{sec:str_cvx} suggest that if the iterate generated by OGD at time $t$ lies in the strong convexity region of the loss function at time $t+1$ for $t=0,1,\ldots,T-1$, then the tracking problem is essentially reduced to that of minimizing a time-varying strongly convex function. This opens up the possibility of using techniques from online strongly convex optimization to bound the CTTE of OGD for TOA-based tracking. To realize the above idea, we need to first introduce some additional preliminaries and collect some consequences of the results in Section~\ref{sec:str_cvx}. Observe that the constants $K_1, K_2, \Lambda$ in Theorem~\ref{thm:str_cvx} involve the target position $\bm{x}^\star$. Since the target is moving in the tracking setting, it will simplify our subsequent analysis if we can find uniform bounds on these constants. Towards that end, we further assume that the target stays within a fixed compact region $\mathcal{T} \subseteq \mathbb R^n$ throughout the tracking task. Such an assumption is rather mild in practice. Moreover, since $K_1,K_2,\Lambda$ depend continuously on $\bm{x}^\star$, it implies the existence of finite upper bounds on $K_1, K_2$ and a positive lower bound on $\Lambda$ that hold for all $t\ge0$. As a slight abuse of notation, we shall use $K_1,K_2,\Lambda$ to denote these uniform bounds in the sequel. Following the setting of Theorem~\ref{thm:str_cvx}, let $\hat{\bm x}_t \in \arg\min_{\bm{x}\in\mathbb R^n} f_t(\bm{x})$ denote a least-squares estimate of the true target position $\bm{x}_t^\star$ at time $t$ and $c_0>0$ be a constant such that $\|\bm{w}^t\| \le c_0\sqrt{m}\sigma_t$ for $t=1,\ldots,T$. Furthermore, suppose that for some given $\delta>0$, the maximum noise power $\sigma^2 := \max_{t\in\{1,\ldots,T\}} \sigma_t^2$ satisfies \begin{align} \|\bm{x}_t^{\star} - \bm{a}_i\| &> K_1\sqrt{m}\sigma + K_2m\sigma^2+\delta, \nonumber \\ &\qquad\qquad i=1,\ldots,m; \, t=1,\ldots,T \label{eq:dist-asp-dyn} \end{align} and \begin{align} \kappa &:= \frac{\delta}{10m} \cdot \Lambda - (K_1\sqrt{m}\sigma + K_2m\sigma^2) - \frac{4c_0\sigma}{5} > 0 \label{eq:eps-dyn} \end{align} (recall from the discussion in the preceding paragraph that $K_1,K_2,\Lambda$ are now uniform in $t$ and hence $\kappa$ is also uniform in $t$). Then, using Theorem~\ref{thm:str_cvx}, the expressions for $\nabla f_t, \nabla^2 f_t$, and the assumption that the target stays within the compact region $\mathcal{T}$, we deduce the existence of constants $\mu,L>0$ such that for $t=1,\ldots,T$, \begin{enumerate} \item $f_t$ is $\mu$-strongly convex over $B(\hat{\bm{x}}_t,\kappa)$---i.e., for any $\bm{x},\bm{y}\in B(\hat{\bm{x}}_t,\kappa)$, \begin{align}\label{eq:toa-strcvx} f_t(\bm{x}) \geq f_t(\bm{y}) +\nabla f_t(\bm{y})^T(\bm{x}-\bm{y})+\frac{\mu}{2}\|\bm{x}-\bm{y}\|^2; \end{align} \item $\nabla f_t$ is $L$-Lipschitz continuous over $B(\hat{\bm{x}}_t,\kappa)$---i.e., for any $\bm{x},\bm{y}\in B(\hat{\bm{x}}_t,\kappa)$, \begin{align}\label{eq:toa-gradlip} \|\nabla f_t(\bm{y}) - \nabla f_t(\bm{x})\|\leq L\|\bm{x}-\bm{y}\|; \end{align} \end{enumerate} Now, let $\{\bm{x}_t\}_{t=1}^T$ be the sequence of iterates generated by the OGD update~\eqref{eq:toa-ogd} with initial point $\bm{x}_0$ and step size $\eta_t \equiv \eta \in (0,2/(\mu+L)]$ for $t=1,\ldots,T$. In addition, let $v_t := \|\bm{x}_{t+1}^\star - \bm{x}_t^\star\|$ ($t=1,\ldots,T-1$) denote the variation in the true target position between time $t$ and $t+1$ and $v := \max_{t\in\{1,\ldots,T-1\}} v_t$ denote the maximum variation in the true target position between successive time steps. The following proposition shows that under suitable conditions, OGD maintains the invariant that the iterate generated at the current time step lies in the strong convexity region of the loss function at the next time step. \begin{proposition}[Invariant of OGD] \label{prop:ogd-inv} Suppose that in addition to~\eqref{eq:dist-asp-dyn} and~\eqref{eq:eps-dyn}, the maximum noise power $\sigma^2$ and maximum variation $v$ satisfy \begin{equation} \label{eq:eps-rad} \kappa \ge \frac{2(K_1\sqrt{m}\sigma + K_2m\sigma^2) + v}{1-\rho}, \end{equation} where $\rho := \left( 1-\tfrac{2 \eta \mu L}{\mu+L} \right)^{1/2} \in (0,1)$ with $\mu,L$ given by~\eqref{eq:toa-strcvx},~\eqref{eq:toa-gradlip}, respectively, and $\kappa > 0$ is the radius of the strong convexity region of the loss function $f_t$ around the least-squares estimate $\hat{\bm x}_t$ for $t=1,\ldots,T$. Furthermore, suppose that the initial point $\bm{x}_0$ satisfies $\|\bm{x}_0 - \bm{x}_1^\star\| \le K_1\sqrt{m}\sigma + K_2m\sigma^2$. Then, for $t=0,1,\ldots,T-1$, the iterate $\bm{x}_t$ lies in the strong convexity region $B(\hat{\bm x}_{t+1},\kappa)$ of the loss function $f_{t+1}$. \end{proposition} \begin{proof} We proceed by induction on $t$. For $t=0$, we have \begin{align} \| \bm{x}_0 - \hat{\bm x}_1 \| &\le \| \bm{x}_0 - \bm{x}_1^\star \| + \| \bm{x}_1^\star - \hat{\bm x}_1 \| \nonumber \\ &\le 2(K_1\sqrt{m}\sigma + K_2m\sigma^2) \label{eq:init-bd} \\ &\le \kappa, \nonumber \end{align} where the second inequality follows from our assumption on $\bm{x}_0$ and Proposition~\ref{thm:esterror} and the last follows from our choice of $\kappa$ in~\eqref{eq:eps-rad}. This establishes the base case. Now, for $t\ge1$, we have \begin{align*} &~ \| \bm{x}_t - \hat{\bm x}_{t+1} \| \le \| \bm{x}_t - \hat{\bm x}_t \| + \| \hat{\bm x}_t - \hat{\bm x}_{t+1} \| \\ \le&~ \rho \| \bm{x}_{t-1} - \hat{\bm x}_t \| + \| \hat{\bm x}_t - \bm{x}_t^\star \| + \| \bm{x}_{t+1}^\star - \hat{\bm x}_{t+1} \| \\ &\quad~ + \| \bm{x}_t^\star - \bm{x}_{t+1}^\star \| \\ \le&~ \rho\kappa + 2(K_1\sqrt{m}\sigma + K_2m\sigma^2) + v_t \\ \le&~ \kappa, \end{align*} where the second inequality follows from the OGD update~\eqref{eq:toa-ogd}, the inductive hypothesis (i.e., $\bm{x}_{t-1}$ lies in the strong convexity region of $f_t$), and Fact~\ref{thm:conv_GD}; the third follows from the inductive hypothesis and Proposition~\ref{thm:esterror}; the last follows from our choice of $\kappa$ in~\eqref{eq:eps-rad}. This completes the inductive step and also the proof of Proposition~\ref{prop:ogd-inv}. \end{proof} We remark that since the loss functions $\{f_t\}_{t=1}^T$ are non-convex, some conditions on the maximum noise power, maximum variation, and quality of the initial point are to be expected in the CTTE analysis of OGD for tackling the TOA-based tracking problem~\eqref{eq:loss}. In fact, the performance analysis of ONM for general time-varying non-convex optimization in~\cite{LTS20}, though focusing on the dynamic regret metric, makes use of similar conditions on the maximum variation and quality of the initial point as those in Proposition~\ref{prop:ogd-inv}. Armed with Proposition~\ref{prop:ogd-inv}, we can prove the following theorem, which establishes a CTTE bound for OGD when it is applied to the TOA-based tracking problem. This constitutes our second main result in this paper. \begin{theorem}[CTTE of OGD for TOA-Based Tracking] \label{thm:ogd-ctte} Under the setting of Proposition~\ref{prop:ogd-inv}, the sequence of iterates $\{\bm{x}_t\}_{t=1}^T$ satisfies \[ {\rm CTTE}\left( \{\bm{x}_t\}_{t=1}^T \right) = \mathcal{O}(1 + V(T) + N_1(T) + N_2(T)), \] where $V(T) := \sum_{t=1}^{T-1} \| \bm{x}_{t+1}^\star - \bm{x}_t^\star \| = \sum_{t=1}^{T-1} v_t$ denotes the path length of the target trajectory, $N_1(T):=\sum_{t=1}^T \sigma_t$ denotes the cumulative noise standard deviation, and $N_2(T):=\sum_{t=1}^T \sigma_t^2$ denotes the cumulative noise variance. \end{theorem} \begin{proof} Using the definition of CTTE and the triangle inequality, we have \begin{align} {\rm CTTE}\left( \{\bm{x}_t\}_{t=1}^T \right) &= \sum_{t=1}^T \| \bm{x}_t - \bm{x}_t^\star \| \nonumber \\ &\le \sum_{t=1}^T \| \bm{x}_t - \hat{\bm x}_t \| + \sum_{t=1}^T \| \hat{\bm x}_t - \bm{x}_t^\star \|. \label{eq:pre-ctte} \end{align} Let us now bound the two terms in~\eqref{eq:pre-ctte} separately. For the first term, we begin by adapting the argument used in the proof of~\cite[Theorem 1]{MSJ+16} to our time-varying optimization setting and bound \begin{align*} &~ \sum_{t=1}^T \| \bm{x}_t - \hat{\bm x}_t \| \le \rho \sum_{t=1}^T \| \bm{x}_{t-1} - \hat{\bm x}_t \| \\ \le&~ \rho \| \bm{x}_0 - \hat{\bm x}_1 \| + \rho \sum_{t=2}^T \| \bm{x}_{t-1} - \hat{\bm x}_{t-1} \| + \rho \sum_{t=2}^T \| \hat{\bm x}_{t-1} - \hat{\bm x}_t \| \\ =&~ \rho \left( \| \bm{x}_0 - \hat{\bm x}_1 \| - \| \bm{x}_T - \hat{\bm x}_T \| \right) + \rho \sum_{t=1}^T \| \bm{x}_t - \hat{\bm x}_t \| \\ &~\, + \rho \sum_{t=1}^{T-1} \| \hat{\bm x}_t - \hat{\bm x}_{t+1} \|, \end{align*} where the first inequality follows from the OGD update~\eqref{eq:toa-ogd}, Proposition~\ref{prop:ogd-inv}, and Fact~\ref{thm:conv_GD}. It follows that \begin{align} \sum_{t=1}^T \| \bm{x}_t - \hat{\bm x}_t \| \le \frac{\rho}{1-\rho} \left( \| \bm{x}_0 - \hat{\bm x}_1 \| + \sum_{t=1}^{T-1} \| \hat{\bm x}_t - \hat{\bm x}_{t+1} \| \right). \label{eq:cum-err} \end{align} Now, using~\eqref{eq:init-bd}, we get \[ \| \bm{x}_0 - \hat{\bm x}_1 \| \le 2(K_1\sqrt{m}\sigma+K_2m\sigma^2). \] Furthermore, we have \begin{align*} &~ \sum_{t=1}^{T-1} \| \hat{\bm x}_t - \hat{\bm x}_{t+1} \| \\ \le&~ \sum_{t=1}^{T-1} \left( \| \hat{\bm x}_t - \bm{x}_t^\star \| + \| \bm{x}_t^\star - \bm{x}_{t+1}^\star \| + \| \bm{x}_{t+1}^\star - \hat{\bm x}_{t+1} \| \right) \\ \le&~ \sum_{t=1}^{T-1} \left( K_1\sqrt{m}(\sigma_t+\sigma_{t+1}) + K_2m(\sigma_t^2+\sigma_{t+1}^2) + v_t \right) \\ =&~ \sum_{t=1}^{T-1} v_t + 2K_1\sqrt{m} \sum_{t=1}^{T} \sigma_t + 2K_2m\sum_{t=1}^{T} \sigma_t^2, \end{align*} where the second inequality follows from Proposition~\ref{thm:esterror}. Substituting the above into~\eqref{eq:cum-err} yields \[ \sum_{t=1}^T \| \bm{x}_t - \hat{\bm x}_t \| = \mathcal{O}(1 + V(T) + N_1(T) + N_2(T)). \] For the second term, we simply invoke Proposition~\ref{thm:esterror} to get \begin{align*} \sum_{t=1}^T \| \hat{\bm x}_t - \bm{x}_t^\star \| &\le K_1\sqrt{m} \sum_{t=1}^T \sigma_t + K_2 m \sum_{t=1}^T \sigma_t^2 \\ &= \mathcal{O}(N_1(T) + N_2(T)). \end{align*} The desired result now follows by substituting the above into~\eqref{eq:pre-ctte}. \end{proof} Theorem~\ref{thm:ogd-ctte} reveals that OGD can achieve sublinear CTTE when both the path length $V(T)$ and the cumulative noise power $N_2(T)$ grow sublinearly (note that the latter, together with the fact that $N_1(T) \le \sqrt{T \cdot N_2(T)}$, implies the sublinear growth of the cumulative noise standard deviation $N_1(T)$). Roughly speaking, this means that if the target is not moving too fast and the noise power decays at a sufficiently fast rate over time, then the target tracking error of OGD will vanish asymptotically. It is important to note that our CTTE bound is expressed in terms of the path length of the \emph{target trajectory} (i.e., $V(T) = \sum_{t=1}^{T-1} \| \bm{x}_{t+1}^\star - \bm{x}_t^\star \|$), not the path length of the \emph{optimal solution trajectory} of the time-varying loss function (i.e., $V'(T) := \sum_{t=1}^{T-1} \| \hat{\bm x}_{t+1} - \hat{\bm x}_t \|$). Although the latter is commonly used in existing performance analyses of online methods (see, e.g.,~\cite{MSJ+16,BSR18,LTS20}), the former captures the actual variations in the target trajectory and is thus more relevant to the tracking problem considered in this paper. It is also worth noting that our CTTE bound shows explicitly how the TOA measurement noise affects the tracking performance of OGD through the terms $N_1(T)$ and $N_2(T)$. \section{Numerical Simulations} \label{sec:sim} In this section, we present numerical results to demonstrate the efficacy of OGD for the TOA-based tracking problem and illustrate our theoretical findings. Specifically, we apply both OGD and ONM---the latter has previously been used in~\cite{LTS20} to tackle the TOA-based tracking problem---to various test instances and compare their tracking performance. In all the considered instances, there are $m=3$ sensors, which are located at $\bm{a}_1 = \begin{bmatrix} 0.5 & 0.5 \end{bmatrix} ^T$, $\bm{a}_2 = \begin{bmatrix} 0 & 0.5 \end{bmatrix} ^T$, and $\bm{a}_3 = \begin{bmatrix} 0.5 & 0 \end{bmatrix} ^T$. Given the time horizon of interest $T$ and the target trajectory $\{\bm{x}_t^\star\}_{t=1}^T$, the measurement noise $w_i^t$ in~\eqref{eq:toa-model} is generated according to the Gaussian distribution with mean zero and variance $\sigma_t^2$ for $i=1,\ldots,m$; $t=1,\ldots,T$, and the TOA-based range measurements $\{ r_i^t : i=1,\ldots,m; \, t=1,\ldots,T \}$ are then obtained using~\eqref{eq:toa-model}. We consider two initialization strategies for OGD and ONM. One is \emph{exact initialization}, which assumes that the true initial target position $\bm{x}_1^\star$ is known and takes $\bm{x}_0=\bm{x}_1^\star$ as the initial point. The other is \emph{ordinary least-squares (OLS) initialization}, which takes \begin{equation} \label{eq:OLS-init} \bm{x}_0 = (\bm{A}^T\bm{A})^{-1}\bm{A}^T\bm{b}_1 \end{equation} with \begin{align} \bm{A} &:= \begin{bmatrix} (\bm{a}_2-\bm{a}_1)^T\\ \vdots\\ (\bm{a}_m-\bm{a}_{m-1})^T\\ \end{bmatrix}, \label{eq:LS-A} \\ \bm{b}_1 &:= \frac{1}{2}\begin{bmatrix} \|\bm{a}_2\|^2 - \|\bm{a}_1\|^2 + (r_1^1)^2 - (r_2^1)^2 \\ \vdots\\ \|\bm{a}_m\|^2 - \|\bm{a}_{m-1}\|^2 + (r_{m-1}^1)^2 - (r_m^1)^2 \end{bmatrix} \label{eq:LS-b} \end{align} as the initial point; see~\cite{STK05}. The OLS estimate in~\eqref{eq:OLS-init} can be obtained as follows: Observe that any $\bm{x}$ satisfying \[ \| \bm{x} - \bm{a}_i \|^2 \approx (r_i^1)^2, \quad i=1,\ldots,m \] can serve as an estimate of the true initial target position $\bm{x}_1^\star$. Upon subtracting the $i$th equation from the $(i+1)$st, where $i=1,\ldots,m-1$, we get \[ 2(\bm{a}_{i+1}-\bm{a}_i)^T\bm{x} \approx \|\bm{a}_{i+1}\|^2 - \|\bm{a}_i\|^2 + (r_i^1)^2 - (r_{i+1}^1)^2.\] In particular, we can obtain an estimate of $\bm{x}_1^\star$ by solving \begin{align} \label{eq:LS} \min_{\bm{x} \in \mathbb{R}^n} \|\bm{A}\bm{x} - \bm{b}\|^2, \end{align} where $\bm{A}$ and $\bm{b}$ are given by~\eqref{eq:LS-A} and~\eqref{eq:LS-b}, respectively. Since the vectors $\{\bm{a}_{i}-\bm{a}_1\}_{i=2}^{m}$ span $\mathbb{R}^n$ by assumption, the solution to~\eqref{eq:LS} is readily given by~\eqref{eq:OLS-init}. It is worth noting that the OLS estimate in~\eqref{eq:OLS-init} can be computed simply by using the sensor positions $\{\bm{a}_i\}_{i=1}^m$ and noisy range measurements $\{r_i^1\}_{i=1}^m$. Thus, it is an attractive choice for initializing OGD and ONM. We use the step size $\eta_t = 0.1$ for $t=1,\ldots,T$ in OGD. Then, OGD generates the position estimates of the target via~\eqref{eq:toa-ogd}, while ONM generates those via \[ \bm{x}_t = \bm{x}_{t-1} - \left(\nabla^2f_t(\bm{x}_{t-1})\right)^{-1}\nabla f_t(\bm{x}_{t-1}), \quad t=1,\ldots,T. \] All computations were carried out in MATLAB on an Intel(R) Core(TM) i5-8600 CPU 3.10GHz CPU machine. The CTTE shown in the figures are averaged over 1000 Monte Carlo runs. \subsection{Small Noise Level and Path Variation} To begin, we construct the following set of test instances (cf.~\cite[Section IV]{LTS20}): The time horizon of interest $T$ is set to $500$. The target's initial position is set to $\bm{x}_1^\star = \begin{bmatrix} 2 & 1 \end{bmatrix}^T $ and its positions at subsequent time steps are given by \begin{equation}\label{eq:source-update} \bm{x}_{t+1}^\star = \bm{x}_t^\star + \frac{0.005}{\sqrt{2(t+1)}} \bm{u}_t, \quad t=1,\ldots,T-1, \end{equation} where $\bm{u}_1,\ldots,\bm{u}_{T-1} \in \mathbb R^2$ are independently and uniformly distributed on the unit circle centered at the origin. We consider three scenarios, which correspond to three different noise levels: (i) $\sigma_t = 0.0001$ for $t=1,\ldots,T$; (ii) $\sigma_t = 0.01$ for $t=1,\ldots,T$; (iii) $\sigma_t = \tfrac{0.01}{\sqrt{t}}$ for $t=1,\ldots,T$. Figures~\ref{fig:error_sigma1e-4}--\ref{fig:error_sigma1e-2oversqrt} show the CTTE of OGD and ONM with exact and OLS initialization at these three noise levels. Figures~\ref{fig:traj_sigma1e-4}--\ref{fig:traj_sigma1e-2oversqrt} show the tracking trajectories generated by OGD and ONM for particular instances at those noise levels with OLS initialization. We also include the trajectories of the least-squares estimates $\{\hat{\bm x}_t\}_{t=1}^T$ in the figures for reference. These trajectories are generated using gradient descent (GD) at each time step. Specifically, at time $t$, we use the true target position $\bm{x}_t^\star$ as the initial point and perform the GD updates using the constant step size $1/m$ until either the norm of the gradient is smaller than $10^{-8}$ or the number of iterations reaches 5000. We then declare the last iterate to be $\hat{\bm x}_t$. \begin{figure*}[!t] \centering \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/new_error_sig1e-4} \caption{$\sigma_t=0.0001$} \label{fig:error_sigma1e-4} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/error_sig1e-2} \caption{$\sigma_t=0.01$} \label{fig:error_sigma1e-2} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/error_sig1e-2oversqrt} \caption{$\sigma_t=0.01/\sqrt{t}$} \label{fig:error_sigma1e-2oversqrt} \end{subfigure} \vfill \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/GD1_traj_sig1e-4.png} \caption{$\sigma_t=0.0001$} \label{fig:traj_sigma1e-4} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/GD1_traj_sig1e-2.png} \caption{$\sigma_t=0.01$} \label{fig:traj_sigma1e-2} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=1.08\textwidth]{fig/GD1_traj_sig1e-2oversqrt.png} \caption{$\sigma_t=0.01/\sqrt{t}$} \label{fig:traj_sigma1e-2oversqrt} \end{subfigure} \caption{CTTE (top row) and tracking trajectories (bottom row) of OGD and ONM at different noise levels.} \label{fig:OGDvsONM} \end{figure*} In the first scenario, the noise level is small compared to the path variation (i.e., $\sigma_{t+1}=0.0001$ vs. $v_t = \tfrac{0.005}{\sqrt{2(t+1)}}$ for $t=1,\ldots,T-1$ with $T=500$). We see from Figure~\ref{fig:error_sigma1e-4} that ONM has a smaller CTTE than OGD. This can be explained as follows: First, Proposition~\ref{thm:esterror} implies that the least-squares estimate $\hat{\bm x}_t$ is close to the true target position $\bm{x}_t^\star$ for $t=1,\ldots,T$. Second, since ONM uses both first- and second-order information of the loss function $f_t$, the point it generates is closer to $\hat{\bm x}_t$ than that generated by OGD. This suggests that ONM is better at tracking the least-squares estimates than OGD. In fact, these two claims are corroborated by our numerical results; see Figure~\ref{fig:traj_sigma1e-4}. In the second scenario, the noise level increases relative to the path variation (i.e., $\sigma_{t+1}=0.01$ vs. $v_t = \tfrac{0.005}{\sqrt{2(t+1)}}$ for $t=1,\ldots,T-1$ with $T=500$). Here, the ability of ONM to track the least-squares estimates closely becomes a liability, because Proposition~\ref{thm:esterror} suggests that the true target position and the least-squares estimate will be further apart. Indeed, as shown in Figure~\ref{fig:error_sigma1e-2}, ONM has a larger CTTE than OGD, and the gap widens as time goes by. We see from Figure~\ref{fig:traj_sigma1e-2} that ONM is much better at tracking the least-squares estimates than OGD. However, the least-squares estimates are quite far from the true target positions, and OGD is better at tracking the latter. We note that in the above two scenarios, the noise level is constant, and the CTTE of OGD eventually grows linearly (see Figures~\ref{fig:error_sigma1e-4} and~\ref{fig:error_sigma1e-2}). This is consistent with the result in Theorem~\ref{thm:ogd-ctte}, as $N_1(T)=\Theta(T)$ and $N_2(T)=\Theta(T)$ and both terms dominate $V(T)=\Theta(\sqrt{T})$. In the third scenario, the noise level diminishes as time goes by, but the relative magnitude between noise level and path variation stays roughly constant (i.e., $\sigma_{t+1}=\tfrac{0.01}{\sqrt{t}}$ vs. $v_t = \tfrac{0.005}{\sqrt{2(t+1)}}$ for $t=1,\ldots,T-1$ with $T=500$). From Figure~\ref{fig:error_sigma1e-2oversqrt}, we see that with exact initialization, OGD has a smaller CTTE than ONM. This suggests that the high initial noise level, which causes the least-squares estimate to deviate from the true target position, throws off ONM and degrades its subsequent tracking performace even though the noise level is diminishing. Moreover, given the high initial noise level, the OLS initialization strategy tends to produce an inaccurate estimate of the true initial target position. Consequently, with OLS initialization, the CTTE of both OGD and ONM grow rapidly in the beginning, though the former is more affected by the quality of the OLS estimate than the latter. Nevertheless, we observe that the CTTE gap between OGD and ONM narrows as time goes by. This supports our earlier claim that OGD is better at tracking the true target positions than ONM; see also Figure~\ref{fig:traj_sigma1e-2oversqrt}. Lastly, we note that the CTTE of OGD grows sublinearly. This is consistent with the result in Theorem~\ref{thm:ogd-ctte}, as we have $V(T) = \Theta(\sqrt{T})$, $N_1(T) = \Theta(\sqrt{T})$, and $N_2(T)=\Theta(\log T)$. We also compare the per-iteration CPU time of OGD and ONM. As can be seen in Table~\ref{tab:CPUtime}, OGD is about 2-3 times faster than ONM. The higher runtime of the latter can be attributed to the computation of the inverse of the Hessian of the loss function. \begin{table}[htb] \fontsize{10}{12.5}\selectfont \centering \begin{tabular}{c|c|c} Noise Level & OGD & ONM\\ \hline $\sigma_t=0.0001$ & $5.23\times10^{-6}$s & $1.36\times10^{-5}$s\\ $\sigma_t=0.01$ & $5.11\times10^{-6}$s & $1.31\times10^{-5}$s\\ $\sigma_t=0.01/\sqrt{t}$ & $5.05\times10^{-6}$s & $1.29\times10^{-5}$s \end{tabular} \caption{Per-iteration CPU time of OGD and ONM.} \label{tab:CPUtime} \end{table} To better understand the effect of the relative magnitude between noise level and path variation on the tracking performance of OGD and ONM, let us plot Figure~\ref{fig:error_sigma1e-4} again but with the longer time horizon $T=10000$. The result is shown in Figure~\ref{fig:CTTE}. Although the CTTE of OGD is higher than that of ONM in the beginning, the latter eventually overtakes the former as $t$ increases. This is consistent with our earlier observation that ONM is better at tracking the least-squares estimates than OGD. Indeed, when $t$ is sufficiently large, the noise level $\sigma_{t+1}=0.0001$ is larger than the path variation $v_t = \tfrac{0.005}{\sqrt{2(t+1)}}$. Thus, as time goes by, the true target position and the least-squares estimate become further apart (see Proposition~\ref{thm:esterror}), and ONM starts to incur a higher target tracking error at each time step. This suggests that the performance of ONM is rather sensitive to the noise level, while that of OGD is quite stable. \begin{figure} \centering \includegraphics[scale=0.48]{fig/rand_sig1e-4T10000.png} \caption{CTTE of OGD and ONM at noise level $\sigma_t=0.0001$, $T=10000$.} \label{fig:CTTE} \end{figure} As a further illustration, we construct another set of test instances with $T=10000$, the same initial target position $\bm{x}_1^\star = \begin{bmatrix} 2 & 1 \end{bmatrix}^T $ and target trajectory~\eqref{eq:source-update} as before, and the following two different noise levels: (i) $\sigma_t = \frac{0.005}{\sqrt{2t}}$ for $t=1,\ldots,T$; (ii) $\sigma_t = \frac{0.008}{\sqrt{2t}}$ for $t=1,\ldots,T$. For $t=1,\ldots,T-1$, the ratios of noise level $\sigma_{t+1}$ to path variation $v_t$ in these two cases are 1 and 1.6, respectively. Figures~\ref{fig:rand_noise-same-variation}--\ref{fig:rand_noise-1pt6-variation} show the CTTE of OGD and ONM with exact and OLS initialization at these two noise levels. \begin{figure*}[htb] \centering \begin{subfigure}{.42\textwidth} \centering \includegraphics[width=0.92\textwidth]{fig/rand_noise-same-variation.png} \caption{$\bm{x}_{t+1}^\star = \bm{x}_t^\star + \frac{0.005}{\sqrt{2(t+1)}} \bm{u}_t,~\sigma_t = \frac{0.005}{\sqrt{2t}}$} \label{fig:rand_noise-same-variation} \end{subfigure} \begin{subfigure}{.42\textwidth} \centering \includegraphics[width=0.92\textwidth]{fig/rand_noise-1pt6-variation.png} \caption{$\bm{x}_{t+1}^\star = \bm{x}_t^\star + \frac{0.005}{\sqrt{2(t+1)}} \bm{u}_t,~\sigma_t = \frac{0.008}{\sqrt{2t}}$} \label{fig:rand_noise-1pt6-variation} \end{subfigure} \caption{CTTE of OGD and ONM when applied to different target trajectories and noise levels.} \label{fig:region_OGD} \end{figure*} When the noise level to path variation ratio is 1, Figure~\ref{fig:rand_noise-same-variation} shows that ONM performs better than OGD, regardless of whether exact or OLS initialization is used. However, when the ratio increases to $1.6$, Figure~\ref{fig:rand_noise-1pt6-variation} shows that OGD eventually performs better than ONM, regardless of whether exact or OLS initialization is used. These results corroborate our earlier account that OGD is better at tracking the true target positions, while ONM is better at tracking the least-squares estimates. \subsection{Large Noise Level and Path Variation} Next, we study the CTTE of OGD and ONM when the two methods are applied to test instances that violate one or more of the conditions~\eqref{eq:dist-asp-dyn},~\eqref{eq:eps-dyn}, and~\eqref{eq:eps-rad}. In particular, there is no guarantee that the iterate generated by OGD at the current time step lies in the strong convexity region of the loss function at the next time step. We first construct a test instance that has large noise level and path variation but the ratio between them is small. The time horizon of interest is set to $T=500$. The target's initial position is set to $\bm{x}_1^\star = \begin{bmatrix} 2 & 1 \end{bmatrix}^T $ and its subsequent positions are given by \[ \bm{x}_{t+1}^\star = \bm{x}_t^\star+\frac{0.1}{\sqrt{2(t+1)}}\bm{u}_t,\quad t = 1,\ldots,T-1. \] Here, as before, $\bm{u}_1,\ldots,\bm{u}_{T-1} \in \mathbb R^2$ are independently and uniformly distributed on the unit circle centered at the origin. The noise levels are given by $\sigma_t=\frac{0.1}{\sqrt{2t}}$ for $t = 1,\ldots,T$. Figure~\ref{fig:rand_noise-var_1e-1oversqrt} shows the CTTE of OGD and ONM. We observe that the CTTE of OGD is much lower than that of ONM with both exact and OLS initialization. One possible explanation is that the good performance of ONM relies heavily on the local strong convexity of the loss function, and the lack of such a property seriously affects its performance. Now, let us construct a test instance that has a small noise level but large path variation, so that the ratio between them is small. The time horizon of interest and the target's initial position are the same as before. The target trajectory is given by \[ \bm{x}_{t+1}^\star = \bm{x}_t^\star+\frac{0.5}{\sqrt{2(t+1)}}\bm{u}_t,\quad t = 1,\ldots,T-1, \] while the noise levels are given by $\sigma_t=\frac{0.001}{\sqrt{2t}}$ for $t = 1,\ldots,T$. Figure~\ref{fig:rand_noise_1e-3oversqrt-var_5e-1oversqrt} shows the CTTE of OGD and ONM. We see that the CTTE of OGD is much lower than that of ONM. In fact, when the iterates are no longer guaranteed to lie in the strong convexity regions of the loss functions, ONM becomes rather unstable regardless of the noise level to path variation ratio. This supports our earlier explanation that the local strong convexity of the loss function is crucial to the good performance of ONM. By contrast, OGD is much more robust and can better track the true target positions even when the conditions for local strong convexity are violated. \begin{figure*}[!t] \centering \begin{subfigure}{.42\textwidth} \centering \includegraphics[width=0.92\textwidth]{fig/rand_noise-var_1e-1oversqrt.png} \caption{$\bm{x}_{t+1}^\star = \bm{x}_t^\star+\frac{0.1}{\sqrt{2(t+1)}}\bm{u}_t,~\sigma_t=\frac{0.1}{\sqrt{2t}}$} \label{fig:rand_noise-var_1e-1oversqrt} \end{subfigure} \begin{subfigure}{.42\textwidth} \centering \includegraphics[width=0.92\textwidth]{fig/rand_noise_1e-3oversqrt-var_5e-1oversqrt.png} \caption{$\bm{x}_{t+1}^\star = \bm{x}_t^\star+\frac{0.5}{\sqrt{2(t+1)}}\bm{u}_t,~\sigma_t=\frac{0.001}{\sqrt{2t}}$} \label{fig:rand_noise_1e-3oversqrt-var_5e-1oversqrt} \end{subfigure} \caption{CTTE of OGD and ONM with large noise level and/or path variation.} \label{fig:large_var&noise} \end{figure*} \section{Conclusion} \label{sec:concl} In this paper, we established the first non-trivial performance bound for OGD when it is applied to a time-varying non-convex least-squares formulation of the TOA-based tracking problem. The performance metric we adopted is the CTTE, which measures the cumulative discrepancy between the trajectory of position estimates and that of the true target. To establish the said performance bound, we developed new results on the estimation and geometric properties of the classic static TOA-based source localization problem, which can be of independent interest. Our numerical results corroborate the theoretical findings and show that OGD can effectively track the target at different noise levels. A possible future direction is to design and analyze online methods for TDOA-based tracking, which corresponds to a sequential version of the TDOA-based source localization problem (see, e.g.,~\cite{LCS09} and the references therein). One possible approach is to combine the results in~\cite{LPS17} with the techniques developed in this paper. Another future direction is to study the performance of different online methods for solving the TOA-based tracking problem.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{To do} \section{Introduction} \IEEEPARstart{P}{erfectly} matched layer (PML)~\cite{Berenger1994, Chew1994} is often used in finite difference~\cite{Taflove2005}, finite element~\cite{Jin2015FEM}, and discontinuous Galerkin time-domain (DGTD) methods~\cite{Hesthaven2002, Cockburn2004, Lu2004, Hesthaven2008, Gedney2009, Cohen2017} to imitate/approximate the radiation boundary condition (i.e., truncate a unbounded physical domain to a finite computation domain) while solving Maxwell equations or the wave equation. The performance of the PML depends on the attenuation coefficient (which is implemented as conductivity in Maxwell equations) and the thickness of the layer. Increasing either one or both of them increases the absorption inside the PML. However, in practice, one cannot use a constant and high conductivity as it would increase the numerical reflection at the interface between the PML and computation domain, or use a very thick layer since it would increase the computational cost. Therefore, a smoothly increasing conductivity profile is often used to achieve both high absorption and small numerical reflection~\cite{Berenger1994, Chew1994, Chew1996, Taflove2005, Jin2015FEM, Berenger2007book, Gedney2011}. \begin{figure}[!b] \centerline{\includegraphics[width=0.79\columnwidth]{Article_PML.png}} \caption{Implementation of the PML with smoothly-increasing conductivity. (a) Conductivity is constant in elements (paved mesh). (b) Conductivity is constant in elements (layered mesh). (c) Conductivity is allowed to vary in elements (paved mesh). Mesh does not have to conform to the interface between the PML and the computation domain.} \label{Profile} \end{figure} In DGTD, the PML conductivity profile can be implemented in two different ways. The first method assumes that the conductivity is constant in a given element. This constant might for example be set to the value of the conductivity profile at the center of the element. Implementation of this method is rather straightforward since the mass matrices of different elements only differ by a constant (for linear elements)~\cite{Hesthaven2008}. However, the conductivity profile becomes discontinuous between neighboring elements [Fig.~\ref{Profile} (a)] and the element surfaces that are not parallel to the PML interface lead to large reflections and destroy the high-order accuracy of the solution. One workaround is to build layered (tetrahedral) meshes [Fig.~\ref{Profile} (b)] or use orthogonal (hybrid) meshes inside the PML~\cite{Lu2004, Chen2018}, and accordingly set a layered conductivity profile. But this makes the setups of the computation domain and the PML rather tedious since one needs to control the mesh and the conductivity on all face, edge, and corner regions of the PML. Moreover, to reduce the numerical reflection by decreasing the conductivity discontinuity between neighboring mesh (or conductivity) layers, their number has to be increased. The second method allows the conductivity to vary inside a given element (following the increasing conductivity profile inside the PML). It should be noted here that (higher-order) DGTD allows for sampling of the material properties at the sub-elemental level. In this case, the behavior of the PML is determined only by the conductivity samples within the elements and therefore the mesh interfaces can be aligned arbitrarily [Fig.~\ref{Profile} (c)] without adversely affecting its performance. {\color{black}Indeed, this second approach can improve the PML performance~\cite{Angulo2015, Angulo2014, Sankaran2007thesis}.} However, the conductivity varying within the elements results in an element-dependent mass matrix. Therefore, a direct implementation requires a different mass matrix (or its inverse) to be stored for every element~\cite{Gedney2009, Lu2004, Niegemann2009, Angulo2015, Angulo2014, Chen2018}. This significantly increases DGTD's memory footprint. For example, for the stretched-coordinate (SC)-PML~\cite{Gedney2009}, the memory required to store the mass matrices scales with $15K_{\mathrm{PML}} \times N_p^2$, where ${N_p}$ is the number of interpolating nodes in each element, $K_{\mathrm{PML}}$ is the number of elements in the PML, and $15$ comes from the five material-dependent coefficients in the update equations and three Cartesian components of the vector field. In contrast, for the first method, where the conductivity is assumed constant in a given element, this memory requirement scales with $K_{\mathrm{PML}}$ since only the constant conductivity of each element and a single reference mass matrix are stored. In this work, a memory-efficient method to implement the SC-PML with smoothly-varying attenuation coefficients in DGTD is developed. The proposed method allows the conductivity to vary inside the elements and constructs the resulting local mass matrices using a weight-adjusted approximation (WAA)~\cite{Chan2017}. Compared to the direct implementation that is briefly described above, the proposed method reduces the memory requirement to $15K_{\mathrm{PML}} \times N_q$, where $N_q\sim N_p$, while maintaining the PML performance. It should be noted here that the WAA has been proved to be energy-stable and preserve the high-order convergence of DGTD~\cite{Chan2017, Guo2020, Shukla2020}. Indeed, numerical examples presented here also show the proposed method maintains the higher-order accuracy of the solution. Additionally, it is demonstrated that the PML with smoothly-increasing conductivity profile as implemented with the proposed method performs better than the PML implemented using element-wise constant conductivity profile. \section{Formulation} \subsection{WAA-DGTD for SC-PML} Maxwell equations in stretched-coordinates for a source-free and lossless medium can be expressed as~\cite{Chew1994} \begin{align} \label{MaxwellSCE} - j\omega \mu {\mathbf{H}} = {\nabla _e} \times {\mathbf{E}}\\ \label{MaxwellSCH} j\omega \epsilon {\mathbf{E}} = {\nabla _h} \times {\mathbf{H}} \end{align} where $\mathbf{E}$ and $\mathbf{H}$ are the electric and magnetic fields, $\epsilon$ and $\mu$ are the permittivity and the permeability, $\omega$ is the frequency, and \begin{equation} {\nabla _e} = {\nabla _h} = \hat x\frac{1}{{{s_x}}}\frac{\partial }{{\partial x}} + \hat y\frac{1}{{{s_y}}}\frac{\partial }{{\partial y}} + \hat z\frac{1}{{{s_z}}}\frac{\partial }{{\partial z}}. \end{equation} The coordinate-stretching variables ${s_{u}}$, $u \in \{x,y,z\}$, are defined as~\cite{Chew1994, Gedney2009, Gedney2011} \begin{equation} \label{su} {s_u (u)} = {\kappa _u (u)} + \frac{{{\sigma _u (u)}}}{{j\omega {\varepsilon _0}}} \end{equation} where $\kappa _u$ and $\sigma _u$ are one-dimensional positive real functions along direction $u$. Here, $\sigma _u$ is the attenuation coefficient that ensures absorption inside the SC-PML and $\kappa _u$ changes the propagation speed {\color{black} (and also increases the absorption for evanescent waves)}. It is well known that using smoothly-increasing $\sigma _u$ and $\kappa _u$ reduces the numerical reflection from the interface between the PML and the computation domain while maintaining a high absorption rate, and therefore improves the PML performance~\cite{Chew1996, Berenger2007book, Gedney2011}. The time-domain update equations in SC-CPML can be expressed as~\cite{Gedney2009} \begin{align} \label{upH} & {\partial _t}\ddot a \cdot \mu {\mathbf{H}} = - \nabla \times {\mathbf{E}} - \ddot b \cdot \mu {\mathbf{H}} - \ddot c \cdot \mu {{\mathbf{P}}^H} \\ \label{upE} & {\partial _t}\ddot a \cdot \varepsilon {\mathbf{E}} = \nabla \times {\mathbf{H}} - \ddot b \cdot \varepsilon {\mathbf{E}} - \ddot c \cdot \mu {{\mathbf{P}}^E} \\ \label{upPH} & {\partial _t}{{\mathbf{P}}^H} = {\ddot \kappa ^{ - 1}}{\mathbf{H}} - \ddot d{{\mathbf{P}}^H} \\ \label{upPE} & {\partial _t}{{\mathbf{P}}^E} = {\ddot \kappa ^{ - 1}}{\mathbf{E}} - \ddot d{{\mathbf{P}}^E} \end{align} where ${{\mathbf{P}}^E}$ and ${{\mathbf{P}}^H}$ are auxiliary variables introduced to avoid computationally costly temporal convolutions while converting~\eqref{MaxwellSCE} and~\eqref{MaxwellSCH} into time domain~\cite{Gedney2009} and $\ddot a$, $\ddot b$, $\ddot c$, $\ddot d$, and $\ddot \kappa$ are diagonal tensors with entries defined as \begin{align} \nonumber & {a_{uu}} = \frac{{{\kappa _v}{\kappa _w}}}{{{\kappa _u}}}, \quad {b_{uu}} = \frac{1}{{{\kappa _u}{\varepsilon _0}}}({\sigma _v}{\kappa _w} + {\sigma _w}{\kappa _v} - {a_{uu}}{\sigma _u})\\ \label{tensor} & {c_{uu}} = \frac{{{\sigma _v}{\sigma _w}}}{{\varepsilon _0^2}} - {b_{uu}}\frac{{{\sigma _u}}}{{{\varepsilon _0}}}, \quad {d_{uu}} = \frac{{{\sigma _u}}}{{{\kappa _u}{\varepsilon _0}}}, \quad {\kappa _{uu}} = {\kappa _u}. \end{align} In~\eqref{tensor} and the rest of the text $(u,v,w)$ follows the permutation $(x,y,z)$ $\rightarrow$ $(y,z,x)$ $\rightarrow(z,x,y)$. Following the standard nodal discontinuous Galerkin method~\cite{Hesthaven2008}, the computation domain and the PML are discretized into $K$ elements with volumetric support $\Omega_k$ and boundary surface $\partial \Omega_k$, and in each element ${\mathbf{E}}$, ${\mathbf{H}}$, ${{\mathbf{P}}^E}$ and ${{\mathbf{P}}^H}$ are expanded using the Lagrange polynomials $\ell _i({\mathbf{r}})$~\cite{Hesthaven2008}, $i=1,\cdots,N_p$, where ${N_p} = (p + 1)(p + 2)(p + 3)/6$ is the number of interpolating nodes and $p$ is the order of the Lagrange polynomials. Finally, Galerkin testing yields the semi-discrete system of equations as \begin{align} \label{disH} & {\partial _t}{{\bar{H}}_k} = - ({\bar{M}}_k^a)^{-1} [{\bar{M}}_k^b{\bar{H}}_k + {\bar{M}}_k^c{{\bar{P}}_k^H} + {\mu_k^{-1}} \bar{\mathbb{C}}_k ({{\bar{E}}_k},{{\bar{E}}_{k'}},{{\bar{H}}_k},{{\bar{H}}_{k'}})] \\ \label{disE} & {\partial _t}{{\bar{E}}_k} = - ({\bar{M}}_k^a)^{-1} [{\bar{M}}_k^b{\bar{E}}_k + {\bar{M}}_k^c{{\bar{P}}_k^E} - {\varepsilon_k^{-1}} \bar{\mathbb{C}}_k({{\bar{H}}_k},{{\bar{H}}_{k'}}, {{\bar{E}}_k},{{\bar{E}}_{k'}})] \\ \label{disPH} & {\partial _t}{\bar{P}}_k^H = {{\bar{M}}_k^{-1}} ({\bar{M}}_k^{{1/\kappa}}{\bar{H}}_k - {\bar{M}}_k^d{{\bar{P}}_k^H}) \\ \label{disPE} & {\partial _t}{\bar{P}}_k^E = {{\bar{M}}_k^{-1}} ({\bar{M}}_k^{{1/\kappa}}{\bar{E}}_k - {\bar{M}}_k^d{{\bar{P}}_k^E}). \end{align} Here, $\bar{H}_k$, $\bar{E}_k$, $\bar{P}_k^H$, and $\bar{P}_k^E$ are vectors storing the unknown time-dependent coefficients of the relevant basis functions, $\bar{M}_k$ and $\bar{M}_k^{\alpha}$, $\alpha \in \{a,b,c,d,{1/\kappa}\}$, are the mass matrices with entries \begin{align} \label{mass0} & {\bar{M}_k}(i,j) = \int_{{\Omega _k}} {{\ell _i}({\mathbf{r}}){\ell _j}({\mathbf{r}})} d{\mathbf{r}} \\ \label{mass} & {\bar{M}_k^{\alpha,u}}(i,j) = \int_{{\Omega _k}} {\alpha_{uu}(\mathbf{r}){\ell _i}({\mathbf{r}}){\ell _j}({\mathbf{r}})} d{\mathbf{r}} \end{align} $\bar{\mathbb{C}}_k({f_k},{f_{k'}},{g_k},{g_{k'}})$ denotes the curl operator with its component along direction $u$ is given by \begin{equation*} \bar{\mathbb{C}}_k^u({f_k},{f_{k'}},{g_k},{g_{k'}}) = {{\bar{S}}_k^v}{f_k^w} - {{\bar{S}}_k^w}{f_k^v} + {{\bar{F}}_{k}}{\mathbb{F}^u}({f_k},{f_{k'}},{g_k},{g_{k'}}) \end{equation*} where $u \in \{ x,y,z\}$, $(f,g)\in\{(\bar{E}, \bar{H}), (\bar{H}, \bar{E})\}$, $\mathbb{F}^u$ is the component of the numerical flux along direction $u$, which in general involves unknowns from the current element $k$ and its neighboring element $k'$~\cite{Hesthaven2008, Chen2020steadystate, Chen2019multiphysics}, $\bar{S}_k$ and $\bar{F}_k$ are the stiffness and the face mass matrices with entries \begin{align} \label{stiff} & \bar{S}_k^u (i,j) = \int_{{\Omega _k}} {{\ell _i}({\mathbf{r}})\frac{{d{\ell _j}({\mathbf{r}})}}{{du }}} d{\mathbf{r}} \\ \label{lift} & {\bar{F}_k}(i,j) = \oint_{\partial {\Omega _k}} {{\ell _i}({\mathbf{r}}){\ell _j}({\mathbf{r}})d{\mathbf{r}}} \end{align} and $\epsilon _k$ and $\mu _k$ are the permittivity and the permeability (assumed constant) in each element. Note that, the nodal DG framework~\cite{Hesthaven2008} is used here, but the proposed method can be easily extended to vector DG methods~\cite{Cockburn2004, Lu2004, Gedney2009, Li2015IBC, Li2017dispersive, Li2018graphene}. For linear elements, the mass matrices ${\bar{M}_k}$ in~\eqref{mass0} are simply scaled versions of the mass matrix ${\bar{M}}$ defined on the reference element, ${\bar{M}_k = J_k {\bar{M}}}$, where $J_k$ is the Jacobian of the coordinate transformation between element $k$ and the reference element. Hence, only ${\bar{M}}$ and (scalar constant) $J_k$ are stored. Similarly, in~\eqref{mass}, if $\alpha_{uu}(\mathbf{r})$ is assumed constant inside the elements, i.e., $\alpha_{uu}(\mathbf{r}) = \alpha_{uu}^k$, then ${\bar{M}_k^{\alpha,u}}=\alpha_{uu}^k{\bar{M}_k}=\alpha_{uu}^k J_k {\bar{M}}$ and one only needs to store $\alpha_{uu}^k$ in addition to ${\bar{M}}$ and $J_k$. In this case, \eqref{disH}-\eqref{disPE} can be implemented as efficiently as the case without the PML~\cite{Hesthaven2008, Liu2012, Chen2019discontinuous}. However, if $\alpha_{uu}(\mathbf{r})$ is allowed to vary inside the elements, ${\bar{M}_k^{\alpha,u}}$ are different in different elements and in general there is no simple relationship between these different mass matrices. One has to store every one of these mass matrices (or their inverse). As an alternative, the mass matrix can be recomputed at each time step, but this would significantly increase the cost of time marching~\cite{Hesthaven2008}. The memory required to store the mass matrices ${\bar{M}_k^{\alpha,u}}$ in~\eqref{disH}-\eqref{disPE} scales with $3 \times 5\times N_p^2$ per element, where $3$ is the number of the $(x,y,z)$ components of the vector field, $5$ is the number of the coefficients $a(\mathbf{r})$, $b(\mathbf{r})$, $c(\mathbf{r})$, $d(\mathbf{r})$, and $\kappa(\mathbf{r})$. Note that this memory requirement is significantly higher than that of storing the unknowns coefficients of the basis functions, which scales with $12 \times N_p$ in the PML. To reduce the memory requirement of implementing~\eqref{mass} with $\alpha_{uu}(\mathbf{r})$ allowed to vary inside the elements, WAA~\cite{Chan2017} is used. It has been shown that with this approximation DGTD retains provable energy-stability and high-order accuracy~\cite{Chan2017, Guo2020, Shukla2020}. Note that in the above SC-PML formulation, directly multiplying~\eqref{upH} and \eqref{upE} with $\ddot{a}^{-1}$ on both sides reduces the number of element-dependent mass matrices to $4$. But this would result in a non-conservative form, whose solution is neither provably energy-stable nor provably high-order accurate~\cite{Hesthaven2008, Chan2017}. First, a weight-adjusted inner product is introduced to approximate the parameter-weighted inner product in the expression of the mass matrix~\cite{Chan2017}. The mass matrix, which is associated with the element $k$ and a locally varying coefficient $\alpha_k(\mathbf{r})$, is approximated as \begin{align} \label{WAmass} {\bar{M}_k^{\alpha}} \approx \bar{M}_k (\bar{M}_k^{1/\alpha})^{-1} \bar{M}_k. \end{align} Since $({\bar{M}_k^{\alpha}})^{-1}$ is used in~\eqref{disH}-\eqref{disPE} (for $\alpha=a$), one needs to calculate $\bar{M}_k^{1/\alpha}$. Under the nodal DG framework~\cite{Hesthaven2008}, \begin{align} \nonumber \bar{M}_k^{1/\alpha}(i,j) & = J_k \int_{{\Omega _k}} {\alpha_k^{-1}(\mathbf{r}){\ell _i}({\mathbf{r}}){\ell _j}({\mathbf{r}})} d{\mathbf{r}} \\ & \approx J_k \sum_q \ell_i(\mathbf{r}_q) {w_q}{\alpha_k^{-1}(\mathbf{r}_q)} \ell_j(\mathbf{r}_q) \end{align} where $\mathbf{r}_q$, $q=1, ..., N_q$, are the Gaussian quadrature nodes corresponding to the quadrature rules of degree $2p+1$ and $w_q$ are the corresponding weights. Hence, \begin{align} \label{WAmass1} \bar{M}_k^{1/\alpha} = J_k \bar{V}_q^T \bar{w}_q \bar{\alpha}_k^{-1} \bar{V}_q \end{align} where $\bar{V}_q$ is an interpolation matrix defined on the reference element, $\bar{V}_q = \bar{V}_I \bar{V}^{-1}$, $\bar{V}_I$ and $\bar{V}$ are generalized Vandermonde matrices with entries $\bar{V}_I(q,i)=\phi_i(\mathbf{r}_q)$ and $\bar{V}(j,i)=\phi_i(\mathbf{r}_j)$, respectively, $\phi_i(\mathbf{r})$ is the $i$-th orthonormal polynomial basis~\cite{Hesthaven2008}, $\bar{w}_q=\mathrm{diag} \{w_1, ..., w_{N_q}\}$ is also element-independent, and $\bar{\alpha}_k = \mathrm{diag} \{ \alpha_k(\mathbf{r}_1), ... , \alpha_k(\mathbf{r}_{N_q}) \}$ is a diagonal matrix containing the coefficients evaluated at the quadrature nodes. Inverting~\eqref{WAmass} and substituting~\eqref{WAmass1} yield \begin{align} \nonumber ({\bar{M}_k^{\alpha}})^{-1} & \approx \bar{M}_k^{-1} \bar{M}_k^{1/\alpha} \bar{M}_k^{-1}\\ \nonumber & = {\bar{M}^{-1}} \bar{V}_q^T \bar{w}_q \bar{\alpha}_k^{-1} \bar{V}_q \bar{M}_k^{-1} \\ \label{WAmassInv} & = \bar{P}_q \bar{\alpha}_k^{-1} \bar{V}_q \bar{M}_k^{-1}. \end{align} Here, $\bar{P}_q = \bar{M}^{-1}\bar{V}_q^T \bar{w}_q$ is introduced to simplify the implementation. In~\eqref{WAmassInv}, $\bar{P}_q$ and $\bar{V}_q$ are defined on the reference element, and ${\bar{M}_k^{-1}}$ is a scaled version of the reference matrix, ${\bar{M}_k^{-1}}= J_k^{-1} {\bar{M}^{-1}}$. The update equations~\eqref{disH}-\eqref{disPE} contain multiplications between element-dependent mass matrices. To reduce the number of arithmetic operations, the following operators are defined \begin{align} \label{Mb} & \tilde{M}_k^{b} = {({\bar{M}}_k^{a})^{-1} {\bar{M}}_k^{b}} = \bar{P}_q \bar{a}_k^{-1} \bar{V}_q \bar{P}_q \bar{b}_k \bar{V}_q \\ \label{Mc} & \tilde{M}_k^{c} = {({\bar{M}}_k^{a})^{-1} {\bar{M}}_k^{c}} = \bar{P}_q \bar{a}_k^{-1} \bar{V}_q \bar{P}_q \bar{c}_k \bar{V}_q \\ \label{Md} & \tilde{M}_k^{d} = {{\bar{M}}_k^{-1} {\bar{M}}_k^{d}} = \bar{P}_q \bar{d}_k \bar{V}_q\\ \label{Mkappa} & \tilde{M}_k^{1/\kappa} = {{\bar{M}}_k^{-1} {\bar{M}}_k^{1/\kappa}} = \bar{P}_q \bar{\kappa}_k^{-1} \bar{V}_q \end{align} where~\eqref{WAmassInv} is used for $\alpha=a$ and~\eqref{WAmass1} is used for $\alpha \in \{b, c, d, {1/\kappa}\}$. These operators can be directly used on the right hand sides of~\eqref{disH}-\eqref{disPE}. Substituting~\eqref{Mb}-\eqref{Mkappa} into~\eqref{disH}-\eqref{disPE} yields \begin{align} \label{disH3} & {\partial _t}{{\bar{H}}_k} = - \bar{P}_q \bar{a}_k^{-1} \bar{V}_q [\bar{P}_q (\bar{b}_k \bar{V}_q \bar{H}_k + \bar{c}_k \bar{V}_q \bar{P}_k^H ) + {\mu_k^{-1}} \bar{\mathbb{C}}_k ({{\bar{E}}_k},{{\bar{E}}_{k'}},{{\bar{H}}_k},{{\bar{H}}_{k'}}) ] \\ \label{disE3} & {\partial _t}{{\bar{E}}_k} = - \bar{P}_q \bar{a}_k^{-1} \bar{V}_q [\bar{P}_q (\bar{b}_k \bar{V}_q \bar{E}_k + \bar{c}_k \bar{V}_q \bar{P}_k^E ) - {\epsilon_k^{-1}} \bar{\mathbb{C}}_k ({{\bar{H}}_k},{{\bar{H}}_{k'}},{{\bar{E}}_k},{{\bar{E}}_{k'}}) ] \\ \label{disPH3} & {\partial _t}{\bar{P}}_k^H = \bar{P}_q (\bar{\kappa}_k^{-1} \bar{V}_q {\bar{H}}_k - \bar{d}_k \bar{V}_q {\bar{P}}_k^H ) \\ \label{disPE3} & {\partial _t}{\bar{P}}_k^E = \bar{P}_q (\bar{\kappa}_k^{-1} \bar{V}_q {\bar{E}}_k - \bar{d}_k \bar{V}_q {\bar{P}}_k^E ). \end{align} Equations~\eqref{disH3}-\eqref{disPE3} can be implemented in a matrix-free manner just like it is done in classical DG implementations~\cite{Hesthaven2008, Gedney2009, Cohen2017, Liu2012, Sirenko2012, Chen2019discontinuous, Chen2019unitcell}. \subsection{Computational complexity} \label{complexity} In DGTD with explicit time marching, all operations are localized within the elements. The memory required to store the mass matrices in the direct implementation of~\eqref{disH}-\eqref{disPE} scales with $K_{\mathrm{PML}} \times 15 N_p^2$, where $15$ comes from the number of unknown components times the number of different mass matrices associated with different coefficients and $K_{\mathrm{PML}}$ is the number of elements in the PML. In the WAA formulation~\eqref{disH3}-\eqref{disPE3}, the memory requirement reduces to $(K_{\mathrm{PML}} \times 15 N_q) + 2 N_p N_q$, where $15 N_q$ comes from the number of unknown components times the number of coefficient samples at the quadrature points and $2 N_p N_q$ comes from $\bar{V}_q$ and $\bar{P}_q$ defined on the reference element. For simplicial quadrature rules that are exact for up to polynomials of degree $2p+1$, $N_q\! \sim \! N_p$~\cite{Cools1999, Xiao2010}. To compare the number of arithmetic operations required by the two implementations, one should first note that the curl operator $\bar{\mathbb{C}}$ is the same in both formulations. Computation of $\bar{\mathbb{C}}$ requires those of the spatial derivatives and the numerical flux~\cite{Chen2020float, Sirenko2018, Chen2020hybridizable}. Here, the memory access time is much more significant than the time required to carry out these computations because data from neighboring elements, which are discontinuous in memory, is required. Therefore, only the times required to complete the arithmetic operations of the remaining terms are compared. For the same reason, in practice, the time required to compute $\bar{\mathbb{C}}$ dominates the overall time required by the time marching, and the difference in the numbers of arithmetic operations as estimated below for the remaining terms is less significant (see the example in Section~\ref{Examples}). In~\eqref{disH}, the three matrix-vector multiplications and two vector-vector additions require $3 N_p^2$ multiplication operations and $2 N_p$ addition operations, respectively. In~\eqref{disH3}, the multiplication of $\bar{V}_q$ with a vector of length $N_p$, and the multiplication of $\bar{P}_q$ with a vector of length $N_q$ require $N_q N_p$ multiplication operations. The multiplication of a diagonal matrix with a vector (such as $\bar{b}_k \bar{v}$) requires $N_q$ multiplication operations. As a result, excluding the computation of $\bar{\mathbb{C}}$,~\eqref{disH3} requires $5 N_q N_p + 3 N_q$ multiplications and $N_q + N_p$ additions. For the auxiliary variable, the cost of~\eqref{disPH} is $3 N_p^2$ multiplications and $N_p$ subtractions, while~\eqref{disPH3} requires $3 N_q N_p + 2 N_q$ multiplications and $N_q$ subtractions. One can see the number of operations in the WAA implementation is slightly higher than that in the direct implementation. But as mentioned above the time required by these operations is smaller than the time required to compute $\bar{\mathbb{C}}$, and therefore overall times required by the two implementations are not that different. \section{Numerical Examples}\label{Examples} In this section, the accuracy and the efficiency of the proposed WAA formulation are compared to those of the traditional PML implementations using numerical examples. To this end, four PML configurations/implementations are considered in these examples: (i) $\sigma_u$ and/or $\kappa_u$, $u \in \{x,y,z\}$, are assumed constant inside the elements on a paved mesh (EC-paved) [Fig.~\ref{Profile}(a)], (ii) $\sigma_u$ and/or $\kappa_u$, $u \in \{x,y,z\}$, are assumed constant inside the elements on a layered mesh (EC-layered) [Fig.~\ref{Profile}(b)], (iii) $\sigma_u$ and/or $\kappa_u$, $u \in \{x,y,z\}$, are allowed to vary inside the elements on a paved mesh (SV-paved) [Fig.~\ref{Profile}(c)], and (iv) same configuration in (iii) but implemented using the proposed method with the WAA (SV-WAA-paved) [Fig.~\ref{Profile}(c)]. In all implementations, the order of the Lagrange polynomials $p \in \{1, 2, 3, 4, 5,\}$, which results in $N_p \in \{4, 10, 20, 35, 56\}$. For configuration (i), the constant values in a given element are obtained by sampling $\sigma_u$ and $\kappa_u$, $u \in \{x,y,z\}$, at that element's node that is farthest away from the PML interface (along the $\pm u$-direction). For configuration (ii), to ensure that the element surfaces are strictly parallel to the axes, the PML mesh is built layer by layer and constant values in a given layer are obtained by sampling $\sigma_u$ and $\kappa_u$, $u \in \{x,y,z\}$, at the outermost surface of that layer (along the $\pm u$-direction). For the WAA in implementation (iv), the order of the Gaussian quadrature rule is $2p$, resulting in $N_q \in \{4, 11, 23, 44, 74\}$~\cite{Xiao2010}. In all examples, the background medium is free space and the excitation is a plane wave with electric field $\mathbf{E}(z,t)=E_0\mathbf{\hat{x}}G(t-z/c_0)$, where $E_0=1 \mathrm{V/m}$, $c_0$ is the speed of light in free space, and $G(t) = e^{(t-t_0)^2/4\tau^2}$ is a base-band Gaussian pulse with $\tau=66.67\,\mathrm{ps}$ and $t_0=15\tau$. The average edge lengths of all meshes used under this excitation are $0.4\,\mathrm{cm}$. First, the reflection of a plane wave normally incident on the PML is computed. The computation domain is a rectangular box with dimensions $1.2\,\mathrm{cm} \times 1.2\,\mathrm{cm} \times 60\, \mathrm{cm}$. Perfect electric conductor (PEC) and periodic boundary conditions are used on the outer boundary of the PML that is located perpendicular to the $z$ direction and on the computation domain boundaries perpendicular to the $x$ and $y$ directions, respectively. The plane wave excitation is introduced on surface $z=0$ and propagates in the $+z$-direction. The domain is long enough to ensure that the reflected field is well-separated from the incident one, and therefore the reflection from the PML is simply measured by the peak value of the reflected field's amplitude. The conductivity profile is described by $\sigma_z(z)=\sigma_{\mathrm{max}}[(z-z_0)/L_{z}]^{p_{\sigma}}$, where $z_0$ is the $z$-coordinate on the interface between PML and the computation domain, $L_z$ is the thickness of the PML and $ p_{\sigma}$ is the order of the profile. Note that $\sigma_z(z)$ is nonzero only when $|z|>|z_0|$. The values of these parameters are $z_0=\pm 30$ $\mathrm{cm}$, $L_{z}=1.6$ $\mathrm{cm}$, and $p_{\sigma}=1$, and also $\kappa_z(z)=1$ both inside the PML and the computation domain. In this example, four configurations/implementations are considered: EC-paved, EC-layered, SV-paved, and SV-WAA-paved. Their performances are compared for $p \in \{2, 3, 4, 5\}$. For all four groups of simulations, $\sigma_{\mathrm{max}}$ is scanned to find the minimum reflection that can be obtained for each case. Fig.~\ref{PW} shows that with increasing $\sigma_{\mathrm{max}}$, the reflection first decreases exponentially and then increases gradually. This is observed for all configurations/implementations and all values of $p$. When $\sigma_{\mathrm{max}}$ is small, the overall reflection is dominated by the reflection from the PEC boundary simply because the absorption inside the PML is not high enough. Therefore, in this regime, increasing $\sigma_{\mathrm{max}}$ elevates the absorption and reduces the amplitude of the wave reflected back into the computation domain exponentially. The numerical reflection (which is smaller than the reflection from the PEC boundary for small $\sigma_{\mathrm{max}}$) increases with increasing $\sigma_{\mathrm{max}}$~\cite{Chew1996}, and starts dominating the overall reflection as demonstrated in the figure by the gradual increase after the minimum point. \begin{figure}[!t] \centerline{\includegraphics[width=0.79\columnwidth]{Article_PW.png}} \caption{Peak value of the reflected field's amplitude versus $\sigma_{\mathrm{max}}$ for different PML configurations/implementations and orders of Lagrange polynomials ($p$). The plane wave is normally incident on the PML.} \label{PW} \end{figure} For the EC-paved configuration, the reflection stays at a high level and does not decrease with increasing $p$. This is because of the large reflections from unoriented internal element surfaces. For the EC-layered, SV-layered, and SV-WAA-paved configurations, high-order convergence is observed, i.e., the reflection keeps on decreasing exponentially with increasing $p$. Still, the reflection for the SV-paved and SV-WAA-paved configurations is about $15~\mathrm{dB}$ smaller than that for the EC-layered configuration. Note that this higher accuracy comes with the ease of meshing since a layered mesh (and conductivity profile) is not needed. Finally, Fig.~\ref{PW} also shows that the SV-WAA-paved implementation performs exactly the same as the SV-paved direct implementation, which verifies the accuracy of the proposed method. Next, scattering from a PEC sphere of radius $1$ $\mathrm{cm}$ is considered. The computation domains and the PMLs for the EC-layered and SV-paved and SV-WAA-paved configurations are shown in Figs.~\ref{Article_Sphere} (a) and (b), respectively. The plane wave excitation is introduced on the total-field scattered-field (TFSF) surface [shown in green in Figs.~\ref{Article_Sphere} (a) and (b)]. The conductivity function is $\sigma_u(u)=\sigma_{\mathrm{max}}[(u-u_0)/L_{u}]^{p_{\sigma}}$, $u \in \{x,y,z\}$, $u_0 \in \{x_0, y_0, z_0\}$, where $u_0$ is the $u$-coordinate on the interface between PML and the computation domain, $L_u$ is the thickness of the PML along the $\pm u$ direction, and $ p_{\sigma}$ is the order of the profile. Note that $\sigma_u(u)$ is nonzero only when $|u|>|u_0|$. The values of these parameters are $x_0=y_0=z_0=\pm 2.2$ $\mathrm{cm}$, and $L_x=L_y=L_z=1.2$ $\mathrm{cm}$. Because the distance between the sphere surface and the PML is short, possibly-evanescent scattered waves enter the PML with high grazing angles. A varying $\kappa_u$, $u \in \{x,y,z\}$, profile is employed to help with the absorption of these evanescent waves~\cite{Berenger2007book, Gedney2011}: $\kappa_u(u)=1+(\kappa_{\mathrm{max}}-1)[(u-u_0)/L_{u}]^{p_{\sigma}}$ with $\kappa_{\mathrm{max}}=2$. Note that inside the computation domain, $\kappa_u(u)=1$. In this example, using the PEC or the first-order absorbing boundary condition~\cite{Angulo2015} on the outer boundary of the PML gives similar results. The results presented here are obtained with the PEC boundary condition. Three configurations/implementations are considered here: EC-layered, SV-paved, and SV-WAA-paved. For the EC-layered configuration, the thickness of each PML mesh layer is $0.4$ $\mathrm{cm}$ [Fig.~\ref{Article_Sphere} (a)]. Note that, for this example, generation of these layers is rather tedious since in the corner region one has to align all layer/element surfaces in all three directions. In contrast, for the SV-paved and SV-WAA-paved configurations [same mesh is used -- Fig.~\ref{Article_Sphere} (b)], $\sigma_u$ and $\kappa_u$ values are simply obtained by sampling the corresponding profile functions at the nodes of the elements. This significantly simplifies the setups of the computation domain and the PML since even an explicit interface between the computation domain and the PML is not required [see Fig.~\ref{Article_Sphere} (b)]. The performances of the three configurations are compared for $p \in \{1, 3, 4\}$ and $ p_{\sigma} \in \{1, 2\}$. \begin{figure}[!t] \centering \subfloat[\label{Article_SphereL}]{\includegraphics[width=0.499\columnwidth]{Article_SphereL.png}} \subfloat[\label{Article_SphereR}]{\includegraphics[width=0.499\columnwidth]{Article_SphereR.png}} \caption{ Computation domains, meshes, and PML conductivity profiles (represented by color) used for the (a) EC-layered and (b) SV-paved and SV-WAA-paved configurations.} \label{Article_Sphere} \end{figure} \begin{figure}[!t] \centerline{\includegraphics[width=0.79\columnwidth]{SphereReflectionSigMax1.png}} \caption{Reflection of the scattered field versus $\sigma_{\mathrm{max}}$ for different PML configurations/implementations, orders of the Lagrange polynomials ($p$), and orders of the PML profile ($p_{\sigma}$). The scatterer is a PEC sphere of radius $1\, \mathrm{cm}$. } \label{SphereReflectionSigMax} \end{figure} $\sigma_{\mathrm{max}}$ is scanned to find the minimum reflection that could be reached for each case. Note that in this example ``reflection'' is defined as the peak value of the absolute difference between the fields computed at a probe point for the above cases and corresponding reference fields computed at the same point. $10$ different probe points (placed either in the TF or in the SF region) have been tested and the results are consistent for all of them. The results below correspond to the probe point at $(1.0 \,\mathrm{cm}, 1.0\, \mathrm{cm}, 1.0\, \mathrm{cm})$. These reference fields are computed under the same excitation but the distance between the sphere surface and the PML is extended to $12$ $\mathrm{cm}$. To ensure that the discretization errors are at the same level, the meshes in the overlapped regions between the actual computation domains and the extended ones are kept exactly the same, the average edge lengths of the meshes in the extended region are kept the same as those in the actual computation domains, and the solutions are obtained using the same value of $p$. Fig.~\ref{SphereReflectionSigMax} plots the reflection for the three cases with different values of $p$ and $p_{\sigma}$ versus $\sigma_{\mathrm{max}}$. Clearly, the SV-paved and SV-WAA-paved configurations perform better than the EC-layered configuration for every value of $p_{\sigma}$. The best performance is obtained with $p_{\sigma}=2$. Note that further increasing $p_{\sigma}$ degrades the PML performance for all configurations/implementations and all values of $p$ since high conductivity values only appear at the very end of the PML when $p_{\sigma}$ is high. Fig.~\ref{SphereReflectionSigMax} also shows that the SV-WAA-paved implementation performs exactly the same as the SV-paved direct implementation, which means the error caused by the WAA of the mass matrices is below the level of the discretization error. \begin{table}[!t] \centering \begin{threeparttable} \renewcommand{\arraystretch}{1.1} \centering \caption{Computational costs of the SV-paved and SV-WAA-paved implementations for different orders of the Lagrange polynomials ($p$) \tnote{*}.} \label{cost} \setlength{\tabcolsep}{3pt} \begin{tabular}{ c | c | c | c | c | c | c } \hline $p$ & $N_p$ & $N_q$ & \multicolumn{2}{|c|}{memory (KB)} & \multicolumn{2}{|c}{CPU time per step (s)} \\ \hline & & & SV-paved & SV-WAA-paved & SV-paved & SV-WAA-paved\\ \hline 1 & 4 & 4 & 378,660 & 267,928 & 1.652716 & 2.341525 \\ \hline 2 & 10 & 11 & 1,274,424 & 498,508 & 4.083981 & 5.960960 \\ \hline 3 & 20 & 23 & 4,126,640 & 894,936 & 9.606642 & 15.73330 \\ \hline 4 & 35 & 44 & 11,583,440 & 1,513,140 & 19.64900 & 30.16986 \\ \hline 5 & 56 & 74 & 28,410,000 & 2,291,608 & 78.56877 & 105.1317 \\ \hline \end{tabular} \smallskip \scriptsize \begin{tablenotes} \item[*] {Tested on a workstation with Intel Xeon(R) E5-2680 v4 CPU and 128GB memory. A single process is used. $K=72,762$ and $K_{\mathrm{PML}}=52,657$.} \end{tablenotes} \end{threeparttable} \end{table} Table.~\ref{cost} compares the computational cost of the SV-paved and SV-WAA-paved implementations. With increasing $p$, the memory requirement increases dramatically for the SV-paved direct implementation but only modestly for the SV-WAA implementation. For $p=5$, the memory requirement of the SV-paved implementation is $12.4$ times that of the SV-WAA-paved implementation. The computation time required by the SV-WAA-paved implementation per time step is slightly larger than that required by the SV-paved implementation due to the increased number of arithmetic operations (see Section.~\ref{complexity}). It should be also be noted here that, in practice, a DGTD algorithm is usually parallelized. The difference in times required for updating different elements is relatively small and can be easily compensated by allocating a smaller number of elements for those MPI processes containing PML elements. In the numerical results presented here, assigning a weight of $2$ for PML elements in ParMetis~\cite{metis1998, Chen2019parallel} yields a good load-balance. \section{Conclusion} A PML implementation that allows the attenuation coefficient to vary inside the discretization elements yields a smaller numerical reflection from the interface between the PML and the computation domain and significantly simplifies the meshing process. However, these advantages come at the cost of increased memory footprint since a different mass matrix has to be stored for every discretization element. In this work, this memory requirement is reduced by applying WAA to the mass matrices without abandoning the advantages listed above. Indeed, numerical results demonstrate that the PML with smoothly-increasing conductivity profile as implemented with the proposed method performs better than the PML implemented using element-wise constant conductivity profile and that the higher-order accuracy of the solution is maintained. The proposed method is especially useful for simulations running on shared-memory systems where the high memory requirement of smoothly-varying PMLs could be a bottleneck. For simulations running on distributed-memory systems, the memory requirement of a single computing node is also reduced and a better load-balance could be reached with a slightly adjusted weight in the domain partition.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction } The ``optimized $\delta $ - expansion'', also called ``linear $\delta $ - expansion'', or, more appropriately, ``variational perturbation theory'', is a powerful method which combines the merits of perturbation theory with those of variational approaches. The underlying idea is simple: The action $S$ is split into a free part $\lambda S_0$ and an interacting part $S-\lambda S_0$. Actually it is not necessary that $S_0$ describes a free action, but only that all relevant quantities can be calculated explicitely with $S_0$. The interacting part is multiplied by a factor $\delta $ which serves as expansion parameter and is put equal to one at the end. The exact result should be independent of the parameter $\lambda $ while any approximation will depend on it. The idea, often called ``Stevenson's principle of minimal sensitivity'' \cite{stev}, is that the approximate solution should depend as little as possible upon the parameter $\lambda $. This means that $\lambda $ should be chosen such that the quantity to be calculated has an extremum. In this way the result becomes non-perturbative because $\lambda $ becomes a non-linear function of the coupling constant. In every order of perturbation theory the parameter has to be calculated again. There are already many successful applications of this method in various fields of physics as well as rigorous convergence proofs for simple cases. We refer, e.g. to the references given in \cite{Gr}. In the present paper we concentrate on applications on the lattice. Up to now, three different types of actions $S_0$ have been used in this context. In the first paper on the subject by Duncan and Moshe \cite{DM}, which, among other topics, treats the plaquette energy for U(1) and $Z_2$ gauge theory in $d = 3$ space time dimensions, the action $S_0$ was chosen as a maximal tree of plaquettes. This is a set of plaquettes which does not contain a closed surface, whereas the addition of any further plaquette would lead to a closed surface. One of the reasons for this choice is, of course, that all integrations can be performed explicitely in this case. A maximal tree for $S_0$ was also used by Buckley and Jones \cite{BJ2} in their work on Z(2), U(1), and SU(2) in four dimensions. A second natural choice for $S_0$ is a quadratic action, typically the sum of the squares of the plaquette angles. Such an action was used by Duncan and Jones \cite{DJ} in their work on U(1) in $d = 4$ and by Buckley and Jones \cite{BJ1} for SU(2). A third choice, used by Akeyo and Jones \cite{AJ} for SU(2), as well as for the mixed SU(2) - SO(3) model, is a single link action, i.e. the sum of $\mbox{Tr}\;U_l$ over all links. A maximal tree is a good approximation to the original action in the strong coupling limit. A quadratic action or a single link action, on the other hand, is a good approximation in the weak coupling limit. The $\delta $ - expansion therefore behaves quite differently in the two cases: In the first case one obtains a good description of the Monte Carlo data for small $\beta $, roughly up to the phase transition, or the transition region, respectively. In the second case the same holds for large $\beta $ from the transition region up to infinity. The signal for the qualitative change in the behavior is the merging of two extrema (with respect to $\lambda $) into a point of inflexion with horizontal tangent and it's subsequent disappearance, when $\beta $ is changed near the critical region. We will, however, see, that this point of inflexion has no special relevance. In the present paper we use a fourth ansatz for $S_0$ which has several advantages compared to the previous ones (as well as a minor drawback). We enlarge the degrees of freedom of the system by embedding the lattice gauge theory into a continuum theory and use the free Maxwell Lagrangian for the vector potential as our interpolating action. The divergences which arise in the continuum are absorbed by splitting off a divergent factor from the action. The consequence thereof is that only originally divergent graphs, in which a photon line starts and ends at one and the same link, survive. This leads to a dramatic simplification. No graph has to be calculated explicitely, the whole game essentially becomes a problem of counting certain configurations of plaquettes. The advantages of the method are the following: \begin{itemize} \item In any order of the calculation we obtain neither integrals, nor infinite sums, nor special functions. \item In any order $n$ of the expansion only a finite number of configurations, consisting of $n+1$ connected plaquettes, has to be considered. \item In any order the result only contains polynomials in the variational parameter $\lambda $ and powers of $e^{-\lambda /4}$. \item The calculation can be easily performed for arbitrary dimension. \end{itemize} These features allow calculations to a comparatively high order. In the present paper we obtained explicit expressions for arbitrary dimension $d\geq 3$ up to third order of the $\delta $- expansion. For the cases $d = 3$ and $d = 4$, with the help of a computer program which searches the relevant configurations of plaquettes, we can go to fourth order, one order more than computed in \cite{DJ}. We also mention here a drawback of the method: \begin{itemize} \item The coefficient in front of $1/\beta $ in the weak coupling expansion is not correctly reproduced from the beginning, but it converges to the correct value in higher orders. \end{itemize} Since our main interest is in the transition region, this drawback can be easily tolerated. A priori it is anyhow impossible to tell which type of trial action is best suited in this region. In sect. 2 we explain our method for U(1) and give the relevant general formulae. In sect. 3 we present the results up to order 4. Here we also discuss possibilities to enlarge the region in $\beta $ where the principle of minimal sensitivity can be applied. In sect. 4 we extend the method to SU(2) and apply it up to second order. Sect. 5 summarizes our conclusions. \newpage \setcounter{equation}{0}\addtocounter{saveeqn}{1}% \section{The method for U(1) } We use the familiar formulation of U(1) on a $d$-dimensional lattice with lattice constant $a$, described by the partition function \begin{equation} Z=\int _{-\pi }^{\pi }\cdots \int _{-\pi }^{\pi }e^{-\beta S} \prod _l \frac{d\phi _l} {2\pi },\end{equation} with the action \begin{equation} S = -\sum_{p'}\cos \Theta _{p'}. \end{equation} Here $l$ runs over the links, $p'$ over the plaquettes, $\Theta _{p'}$ is the sum of the four (oriented) angles $\phi _l$ living on the links of the plaquette $p'$. The object we shall discuss is the average plaquette energy E, i.e. the expectation value of $\cos \Theta_p$. In the first step we want to extend the integrations over the angles $\phi _l$ to the interval from $-\infty$ to $\infty$. This can be done by using the following relation which holds for any periodic function $f(\phi )$: \begin{equation} \int _{-\pi }^\pi f(\phi ) \frac{d \phi } {2\pi } = \lim_{\gamma \rightarrow 0} \sqrt{2 \gamma } \int_{-\infty}^\infty e^{-\gamma \phi ^2} f(\phi ) \frac{d\phi } {2\pi }. \end{equation} This is easily proven by splitting the rhs into integrals of length $2\pi $, shifting the integration variable back into the interval from $-\pi $ to $\pi $, and replacing the sum over the intervals by an integral in the limit $\gamma \rightarrow 0$. Actually we will see later that in any finite order of our expansion we can simply put $\gamma = 0$ in the expectation value, which is a pleasant simplification. In the next step we enlarge the number of degrees of freedom drastically, by introducing a vector potential $A_\mu $ defined in the whole continuum. The connection with the link variables $\phi _l$ is, as usual, \begin{equation} \phi _l = e\int _l A_\mu d x_\mu ,\end{equation} whith $e^2 = 1/\beta $. Expectation values are not changed if we now replace the ordinary integrations $\prod _l d\phi _l/(2\pi )$ by the path integral ${\cal D}[A]$. The reason for this is easily understood: The fields which are not sitting on the links appear neither in the action nor in the plaquette energy. Therefore the corresponding integrations factorize both in the numerator and in the denominator and cancel. The same happens for the fields which sit on the links but are transversal to them. Finally, for the longitudinal fields on the links, one may go over to new variables by performing a linear transformation with constant coefficients in such a way that one of the variables becomes the integral $\phi _l$ in (2.4). The integrations over the remaining other variables, as well as the constant Jacobian, cancel again and we are left with the original expectation value. We can thus write the expectation value of the plaquette energy as an expectation value in the continuum theory: \begin{equation} E =\lim_{\gamma \rightarrow 0} \frac{1}{N_\gamma } \int E_p \;\exp[\beta \sum_{p'}E_{p'}] \;\exp[-\gamma e^2\sum_l(\int_l A_\mu d x_\mu )^2 ]{\cal D}A, \end{equation} with \begin{equation} E_p = \cos e\oint_p A_\mu d x_\mu . \end{equation} The normalization constant $N_\gamma $ is, of course, obtained by replacing $ E_p$ by 1 in the numerator of (2.5). We are now in the position to apply the optimized $\delta $-expansion by introducing the free continuum Lagrangian. All expressions in (2.5) are gauge invariant in the limit $\gamma \rightarrow 0$, so, for simplicity, we will use the Feynman gauge which has the advantage that all graphs connecting orthogonal links vanish from the beginning. While, on the lattice, everything is finite, we will immediately obtain divergences in the continuum from the singularity of the propagator at zero distance. At first sight this looks like an additional complication. It can, however, easily be overcome by splitting off an appropriate divergent constant from the free action. In fact, this leads to a drastic simplification, because only the divergent terms of ordinary perturbation theory survive. This, in turn, will have the consequence that in any order only finite connected sets of plaquettes are involved. As free interpolating action we choose \begin{equation} S_0 = -\frac{c} {\beta \lambda }\int A_\mu \Box A_\mu d^dx. \end{equation} Here $\lambda $ is the variational parameter. The constant $c$ is a positive parameter which is divergent for $d\geq 3$. It is formally defined by \begin{equation} c = -\int_{0}^a\int_{0}^a D(t-t')dt\; dt' \end{equation} with $D$ the Green function of the d'Alembert operator, e.g. in four dimensions $D(x)=-1/(4\pi ^2 x^2)$. Of course, one could easily use some regularization which would lead to a large but finite $c$, and later on perform the limit. Because the whole procedure is, however, very transparent, this intermediate step can be skipped. After introducing the free continuum action, the plaquette expectation value in the optimized $\delta $-expansion reads \begin{equation} E = \frac{1} {N(\delta )} \int E_p e^{-S_0} e^{\delta (S_0 +\beta \sum_{p'}E_{p'})}{\cal D}A. \end{equation} We have simplified the expression by taking the limit $\gamma \rightarrow 0$ in (2.5) which is now allowed in any finite order of the $\delta $-expansion. The problem has thus become a continuum problem of calculating expectation values of products of plaquettes. The calculations for higher orders are greatly simplified by a simple trick, essentially already used in \cite{DM}. One should {\em not} expand the expression (2.9) with respect to $\delta $ as it stands, because this would introduce all the mixing terms between $S$ and $S_0$. Things become much simpler if one leaves the term $(1-\delta )S_0$ together and performs the substitution \begin{equation} A_\mu = \sqrt{ \frac{\beta \lambda} {2c(1-\delta )}}\;A'_\mu. \end{equation} This brings the free action into the usual form $(1-\delta )S_0 \rightarrow S'_0 = -(1/2) \int A'_\mu \Box A'_\mu d^dx $. In $E_p$ and $E_{p'}$ as defined in (2.6) one has to make the replacement \begin{equation} e\rightarrow e' = \sqrt{\frac{\tilde{ \lambda }} {2c}}, \end{equation} where, for convenience, we have introduced the abbreviation \begin{equation} \tilde{\lambda } \equiv \lambda /(1-\delta ). \end{equation} The quantity $\beta $ stays as it was before. So we end up with the comparatively simple expression \begin{equation} E=\int E_p e^{-S'_0} e^{\delta \beta \sum_{p'} E_{p'}} {\cal D}A'/\int e^{-S'_0} e^{\delta \beta \sum_{p'}E_{p'}} {\cal D}A' \end{equation} in which now $e$ and $A_\mu $ are to be replaced by $e'$ and $A'_\mu $ in $E_p$ and $E_{p'}$. Note, that besides the explicit $\delta $-dependence of this expression, there is also an implicit $\delta $-dependence contained in $e'$ which has to be considered. Let us first look for the expectation value of the $1\times 1$ Wilson loop with respect to the free continuum action. There are two types of graphs: In the first type the propagator connects two different parallel lines of the loop and is finite. The coupling constant $e'^2$ multiplying the propagator vanishes due to the constant $c = + \infty $ in the denominator. Therefore the product is zero, i.e. the exponential becomes equal to 1. We therefore only get contributions from the self energy graphs where the propagator connects points on the same link. For the four links of the plaquette this gives \begin{equation} \exp\{\frac{4}{2}e'^2 \int_{0}^a\int_{0}^a D(t- t')dt\; dt'\} = \exp (-\tilde{\lambda }) = \exp (-\lambda /(1-\delta )). \end{equation} Our choice for the divergent constant $c$ in (2.8) becomes clear from this. In any order of the $\delta $-expansion the $\beta ^0$ term is immediately obtained by expanding (2.14) to the desired order in $\delta $. To obtain the complete result we have to expand (2.13). After symmetrization in the summation variables $p_k$ one obtains a series of the form \begin{equation} E^{(n)} = \sum_{\nu =0}^n \frac{\eta _\nu ^{[n]} } {\nu !} \delta ^\nu \beta ^\nu \end{equation} with \begin{eqnarray} \eta _0 & = & <E_p>\nonumber\\ \eta _1 & = & \sum_{p_1}[<E_p E_{p_1}> - <E_p><E_{p_1}>]\\ \eta _2 & = & \sum_{p_1,p_2}[<E_p E_{p_1}E_{p_2}> -<E_p>< E_{p_1}E_{p_2}> - <E_{p_1}><E_p E_{p_2}>\nonumber\\ & & - <E_{p_2}><E_p E_{p_1}> + 2 <E_p>< E_{p_1}><E_{p_2}>] \nonumber\\ \cdots & = & \cdots\cdots\cdots \nonumber\\ \eta _n & = & \sum_{p_1,\cdots ,p_n}[<E_pE_{p_1}\cdots E_{p_n}> - \mbox{\quad factorized contributions].\quad}\nonumber \end{eqnarray} Here $ <\cdots >$ denotes the normalized expectation value. The coefficients $\eta _\nu ^{[n]}$ in (2.15) are defined by expanding each $\eta _\nu (\tilde{\lambda })$ with respect to the $\delta $ contained in $\tilde{\lambda }$ up to order $n - \nu $. This means that in total one expands up to order $n$ of the $\delta $-expansion. Finally one has to put $\delta =1$. The calculation of the expectation values proceeds along the following scheme. Consider, e.g. the term $<E_pE_{p_1}\cdots E_{p_n}>$. Write the cosines in the $E_{p_k}$ as exponentials, $\cos \Theta_k = (1/2)\sum_{j_k=\pm 1}e^{ij_k \Theta_k}$, the cosine in $E_p$ can be simply replaced by $e^{i \Theta}$. In this way one obtains a sum of $2^n$ terms with a factor $2^{-n}$ in front. The expectation value above is then evaluated with the use of the formula \begin{equation}\frac{\int e^{-S'_0} e^{\int J'_\mu (x)A'_\mu (x)d^dx} {\cal D}A'} { \int e^{-S'_0} {\cal D}A'} = \exp\{-(1/2)\int J'_\mu (x)D(x-x')J'_\mu (x')d^dx\; d^dx'\}. \end{equation} In our case all currents $J'_\mu $ are localized on the links and the $d$-dimensional integrals above become one dimensional. The current on a link has the form \begin{equation} J_{link} = ie'\delta ^{(d-1)}(link)\sum_k j_k^{(link)},\end{equation} where $\delta ^{(d-1)}(link)$ is the $(d-1)$-dimensional $\delta $-function with support on the link while the sum runs over all plaquettes $p_k$ which share the considered link (put $p=p_0, j_0 = 1$ in this context). The further calculation is greatly simplified by the fact that we never need any mixing terms between different links, because these involve a finite propagator, so the exponent becomes zero due to the factor $c$ in the denominator of $e'^2$. For the singular diagonal contribution of a single link, on the other hand, the divergent constant $c$ cancels and we end up with $-(\sum_k j_k^{(link)})^2 \tilde{\lambda }/4$ in the exponent. Finally, we thus obtain the generic formula \begin{equation} <E_pE_{p_1}\cdots E_{p_n}> =\frac{1}{2^n} \sum_{j_{k_1},\cdots ,j_{k_n}=\pm 1} \exp\{ -\sum_{links}(\sum_k j_k^{(link)})^2 \tilde{\lambda }/4\}. \end{equation} This allows all expressions in (2.16) to be evaluated in a simple way. An enormous simplification arises through the fact that only connected configurations of plaquettes need to be considered, where two plaquettes are called connected if they share a common link (or are identical). The reason is simple. If we have a configuration of two sets of plaquettes which are disconnected from each other, there are, as shown above, no contributions where the propagator connects the two sets. The contributions thus factorize and are therefore canceled by the factorized terms in (2.16). Therefore the expansion is local in the sense that in any order $n$ there is only a finite number of plaquettes, coming from the expansion of the exponent of the lattice action, which needs to be considered. These plaquettes make up a connected set together with the plaquette $p$. Therefore we simply obtain a sum of expressions which only contain polynomials in $\lambda $ (from the expansion of $\tilde{ \lambda }$) times powers of $\exp(-\lambda /4)$. So, in any order we neither get integrations, nor infinite sums, nor special functions! This simple structure allows the calculation of comparatively high orders which would, e.g., become prohibitively complicated in any approach working with lattice propagators. \setcounter{equation}{0}\addtocounter{saveeqn}{1}% \section{Results for U(1)} For the first three orders the plaquette configurations which contribute in the sum and their contributions can be written down explicitely. Only some modest computer help was used just for convenience. In the following we will give the formulae for general dimension $d$ but first discuss only $d = 4$. Other dimensions are briefly treated at the end. \\[1ex] {\bf Order 1 and generalities}\\[1ex] There are only two types of configurations in the sum over $p_1$ which contribute (see fig. 1a). In the first type one has $p_1 = p$, its contribution to $\eta _1$, according to the foregoing considerations, is $<E_p^2> - <E_p>^2 \; = (1/2)(1+e^{-4\lambda } -2e^{-2\lambda}) $. The second type consists of all plaquettes $p_1$ which share just one link with $p$. Their number is $4(2d-3)$, where the factor 4 is, of course, due to the four links of $p$, while the second factor counts the possible orientations of $p_1$. All these plaquettes give the same contribution $<E_pE_{p_1}> - <E_p><E_{p_1}> \; = (1 /2) (e^{-3\lambda /2}+e^{-5\lambda /2}-2e^{-2\lambda })$. In this way one ends up with the following result. \alpheqn \begin{eqnarray} E^{(1)}(\beta, \lambda ) & = & \eta _0^{[1]} + \delta \beta \eta _1^{[1]} = (1-\delta \lambda ) e^{-\lambda} +\delta \beta \eta _1(\lambda ) \mbox{\quad with\quad}\\ \eta _1(\lambda ) & = & \frac{1}{2 }[1+e^{-4\lambda } -2e^{-2\lambda} +4(2d-3)(e^{-3\lambda /2}+e^{-5\lambda /2}-2e^{-2\lambda })].\end{eqnarray}\reseteqn At the end, $\delta $ has, of course, to be set equal to 1. According to the principle of minimal sensitivity we have to look for the extrema with respect to $\lambda $. There is always a local maximum at $\lambda = \infty $, corresponding to the limit, where we do not introduce an interpolating continuum action at all, i.e. to the ordinary strong coupling expansion. Choosing this extremum would obviously lead to the expected result $E^{(1)} = \beta /2$. For large $\beta $, on the other hand, there is always a minimum at small $\lambda $. This is found by expanding $E^{(1)}(\beta ,\lambda ) = 1 -2\lambda +(3/2)\lambda ^2 +\beta (d+1/2)\lambda ^2+O(\lambda ^3)$. The minimum is at $\lambda =1/(d+1/2)\beta +O(1/\beta ^2)$ and gives $E^{(1)}=1-1/(d+1/2)\beta +O(1/\beta ^2)$. Obviously the qualitative behavior in the weak coupling limit is correct, but the factor $1/d$ in front of $\beta $ is replaced by $1/(d+1/2)$ which means that it is too small by 11 \% compared to the correct factor. The reason is, that our continuum action is not an as appropriate approximation in the weak coupling limit as, e.g. the quadratic action used in \cite{DJ}, \cite{BJ1}. We will see how this factor converges towards the correct one in higher orders. Since our main interest is in the region of the phase transition, the fact that we do not reproduce the correct weak coupling limit in first order is only a minor drawback. In this context one should also mention the merit of the variational method that it anticipates to a large extend the higher order coefficients. In our case, although it gives - 2/9 for the leading coefficient instead of - 1/4, as just discussed, it gives, e.g. a second order coefficient of $-2/81$ for $d = 4$. This is $79\%$ of the correct second order term $-1/32$. The full structure of the extrema of (3.1) is easily discussed and essentially independent of the dimension $d$. For large $\beta $ there are 3 finite extrema (in addition to the one at infinity), the one with the smallest $\lambda $ is the minimum just discussed and has to be chosen. If $\beta $ is decreased, this minimum and the neighboring maximum merge into a turning point with horizontal tangent, i.e. a point of inflexion. In the sense of catastrophe theory one has a fold catastrophe there. The value where this happens is easily found by solving the simultaneous equations $\partial E/\partial \lambda =\partial^2 E/\partial \lambda ^2 = 0$. The solution is $\beta _{pi} = 0.9674$. In fig. 2a we show our results for order 1 to 4 together with the Monte Carlo data of Caldi \cite{Cal}. We followed the minimum with the smallest $\lambda $ when coming from large $\beta $ up to the point of inflexion at $\beta _{pi}$ where it disappears. The appearance or disappearance of extrema might be interpreted as a signal for the existence of a phase transition. In fact, the value of $\beta _{pi}$ found in this first order calculation is too small only by 4\% compared to the value $\beta _c = 1.0081 \pm 0.0067$ given in \cite{Cal}. But, on the other hand, one also finds a turning point for $d=3$ where there is no phase transition. There is a simple argument which shows that the point of inflexion has no direct relation to the position of the phase transition. If this were the case one should essentially obtain the same point of inflexion if, instead of $E$, ones calculates some function of $E$, say a power $E^\kappa $. In the spirit of the $\delta $-expansions one has to calculate $E^\kappa $ from (3.1), expand to first order in $\delta $, and finally put $\delta =1$. One finds that the position of the point of inflexion depends drastically on $\kappa $. For $\kappa =5$, e.g. one gets $\beta _{pi}=1.6089$. If $\kappa $ is decreased, $\beta _{pi}$ decreases monotonically. At $\kappa _0=0.2781$ one reaches $\beta _{pi} = 0.8252$, while for even smaller $\kappa $ there is no point of inflexion at all, while the extrema persist! If one chooses some $\kappa \leq \kappa _0$ one can therefore follow the minimum down to small $\beta $ and calculate $E^\kappa $. From this one may finally obtain $E$. Before we investigate, whether one can obtain reasonable results by this simple trick even below the transition region, let us first mention, that the above considerations can serve as an excellent test for the stability of the approach. Notwithstanding the fact that the point of inflexion moves with the power $\kappa $, the value of $E$ finally obtained, should be essentially independent of $\kappa $ within a reasonable range. This is indeed the case to an impressive accuracy. For illustration we choose the arbitrary value $\beta = 1.2$ and vary $\kappa $ in the range from $\kappa = -1$ to $\kappa \approx 2.3$ where $\beta _{pi}$ becomes equal to 1.2. The value for $E$ then only moves from 0.7903 to 0.7868! The test becomes a bit worse if we apply it to values of $\beta $ below $\beta _{pi}$, say $\beta =0.9$, which can only be reached by the above trick for $\kappa $ small enough. Varying $\kappa $ from -1 to 0.6, one finds that $E$ moves from 0.7008 to 0.6917. These values still lie considerably above the MC data but convergence to the correct values in higher orders can be expected. One may also perform a Pad\'e transformation with respect to $\delta $ before applying the principle of minimal sensitivity. This was done by Duncan and Moshe \cite{DM} for the second order, in order to obtain an extremum. In the first order discussed at the moment, the (0,1) Pad\'e approximation has the interesting property that there is no turning point where the extrema disappear. Therefore one can follow the minimum over the whole range of $\beta $. This again shows that the point of inflexion has no direct relevance. It demonstrates, however, once more the impressive stability of the method for the values of $\beta $ above the transition region. As seen in fig. 2b, for small $\beta $ the power curve lies below the Pad\'e curve and closer to the data, for $\beta >1$ both curves as well as the original first order curve differ by less than 0.003. \\[1ex] {\bf Order 2 }\\[1ex] In second order there are 5 types of connected plaquette configurations which contribute (fig. 1b). The first two of them are identical with the ones of the first order, with one plaquette occupied twice. To count the number of equivalent configurations belonging to every type, one has to note that one of the plaquettes is always identical to the fixed plaquette $p$, the other two have to be arranged in all possible ways. The discussion proceeds along the same lines as before. Contrary to the first order there is now {\em no} relevant minimum near $\lambda =0$ in the weak coupling limit, so that the principle of minimal sensitivity cannot be directly applied. This is a well known feature in simple models as the anharmonic oscillator in zero and one dimension \cite{Osc}, where all even orders show the same behavior. In our case, there is, however, a relevant minimum for values of $\beta $ around $\beta \approx 1$, which merges with a maximum at $\beta _{pi} =1.0168$. To this belongs the value $E_{pi} = 0.6783$ in good agreement with the MC data. If one follows the minimum from the point of inflexion to increasing $\beta $ one finds, however, that the curve no longer follows the data. There is no extremum which corresponds to the physical value. This is, of course, nothing but the afore mentioned absence of a reasonable weak coupling result in even orders. To extract more useful information from the second order calculation we apply the (1,1) Pad\'e transformation with respect to $\delta $ as done by Duncan and Moshe \cite{DM} (the (0,2) transformation gives no extrema at all). One finds a relevant minimum for all $\beta $ above $\beta _{pi} = 1.0486$. The results are again presented in fig. 2a. They show considerable improvement compared to the first order and already a very close agreement with the MC data. \\[1ex] {\bf Order 3 }\\[1ex] In order three there are 16 types of connected configurations, 7 of them are lower order configurations with multiply occupied plaquettes. The correct counting of the number of equivalent members of one type becomes delicate for some cases but is still feasible. We refer to fig. 1c for details. There is now again a reasonable weak coupling limit. For large $\beta $ there is an extremum at $\lambda \approx 0.2234/\beta $ for $d = 4$ which leads to $E \approx 1-0.2417/\beta $. The error in the coefficient of $1/\beta $ has become smaller by a factor of 3.4 compared to the first order. If one decreases $\beta $ and follows the minimum, the latter disappears at $\beta _{pi} = 1.2187$. But at some larger $\lambda $ there is another minimum. One may switch to this with practically no change in the plaquette energy, and go further down to $\beta _{pi} = 1.0625$ where this minimum also disappears. The result shows only a minor change compared to the second order and there is thus again excellent agreement with the data. As in the first order one can now again enlarge the region of applicability by the power trick or the Pad\'e transformation. Only the (0,3) transformation works, the (2,1) and the (1,2) transformation show no extrema below the transition region. The results are shown in fig. 2b. Again the power curve lies below the Pad\'e curve and both of them are much closer to the data below the phase transition than in first order. The discrepancy is, however still sizeable in this region. For $\beta > 1.1$ both curves practically agree with the ordinary third order curve. \\[1ex] {\bf Order 4 }\\[1ex] The simplicity of our approach permits to go up to fourth order with reasonable effort for $d = 3$ and $d = 4$. To do this we wrote a computer program in Mathematica. It searches all possibilities for connected plaquettes in a certain order. Some configurations which are obtained from others by permutations of $p_1,\cdots,p_n$ are not found in this way, while others which involve multiply coccupied plaquettes are obtained several times. This is taken into account by applying the appropriate factors. Finally the program calculates the contribution for each configuration and adds up everything. This program also served as a check for the lower order calculations. As in the second order we apply a Pad\'e transformation. The only useful one turned out to be the (2,2) diagonal transformation. There are two intervals in $\beta $ where the use of the relevant extrema gives good, respectively excellent, agreement with the data, as seen in fig. 2a. The first one is {\em below} the transition region and impressively reproduces the steep increase of the plaquette energy in this region. For $\beta > 1.08$, however, the curve lies above the data. There is a second interval, $1.4 < \beta < \infty $, where one has perfect agreement with the data. For $\beta < 1.4$, however, the curve lies again too high.\\[1ex] {\bf Other dimensions}\\[1ex] The discussion proceeds as before, therefore we can be very brief here. In first order the points where the minima disappear are at $\beta _{pi} = 1.1937$ for $d = 3$ and at $\beta _{pi} = 0.8099$ for $d = 5$. The results are shown in figs. 3a and 4a, the curves obtained with the power trick and the (0,1) Pad\'e transformation in figs. 3b and 4b. In second order the limiting values for the (1,1) Pad\'e transformations are $\beta _{pi}= 1.4495$ for $d = 3$ and $\beta _{pi} = 0.8090$ for $d = 5$. In third order one finds the following common feature for large $\beta $: For increasing $\lambda $ there are two minima and two maxima at finite $\lambda $, the extremum at lowest $\lambda $ is the relevant minimum. If one decreases $\beta $, there is a qualitative difference for different dimensions. In the case $d = 3$, the first minimum and maximum merge into a point of inflexion at some $\beta _{pi}$. The second pair merges already for slightly larger $\beta $, but this is of no importance, because these extrema are not relevant. For $d = 4$ and $d = 5$, on the other hand, the first pair merges at a larger $\beta $ than the second one, so one has to jump to the second minimum for a short interval till this also disappears. For $d\geq$ 6, finally, the irrelevant interior pair of extrema merges first, for still smaller $\beta $ the outer pair merges into a point of inflexion. This means that for $d = 3$, as well as for $d\geq$ 6, one can follow the minimum down to the critical value, while for $d = 4$ and $ d = 5$ one has to jump to the other minimum near the phase transition. Clearly the results are much better in five dimensions than in three. This is due to the above mentioned error in the coefficient of the weak coupling expansion which becomes smaller for larger dimension. \newpage \setcounter{equation}{0}\addtocounter{saveeqn}{1}% \section{SU(2) } We use the notation of Creutz \cite{Cr}, Lautrup and Nauenberg \cite{LN}, and Buckley and Jones \cite{BJ1}, with $(\beta /2) \sum_{p'} \mbox{Tr} U_{p'}$ in the exponent, and the plaquette energy defined as the expectation value of $(1/2) \mbox{Tr} U_p$. In the non abelian case it becomes crucial to choose an appropriate parametrization for the unitary matrices $U_l$ on the links. A very convenient parametrization with a simple behavior in the weak coupling limit is the one proposed by Buckley and Jones \cite{BJ1} \begin{equation} U=e^{i\sigma _1\varphi }e^{i\sigma _2\vartheta } e^{i\sigma _3\psi }. \end{equation} As parameter space one may use the region \begin{equation} -\pi <\varphi,\psi <\pi ,\quad -\pi /4 <\vartheta<\pi /4. \end{equation} Actually the group manifold is covered twice by this choice, but in this way we will already get periodicity in $\varphi $ and $\psi $ immediately. The Haar measure, in a normalization convenient for us, reads \begin{equation} H(\psi ,\vartheta ,\varphi ) = \frac{\pi }{2} \cos(2\vartheta). \end{equation} An efficient technique for the further procedure, which was also extensively used in \cite{BJ1}, is the splitting of the matrix exponentials in (4.1) into sums of ordinary exponentials times projection operators, in general \begin{equation} e^{i\sigma _k\alpha } = \sum_{s=\pm 1} e^{i s \alpha } P_s^{(k)},\mbox{\quad with\quad }P_s^{(k)} = \frac{1}{2 } (1+s\sigma _k).\end{equation} From this it is immediately clear that all traces are periodic functions of the three link angles $\varphi _l,\vartheta _l,\psi _l$ with period $2\pi $. So, for $\varphi $ and $\psi $ we may use the same procedure as in the U(1) case, in order to extend the integrations from $- \infty $ to $ \infty $. For the $\vartheta$ integration, on the other hand, the presence of the Haar measure and the limited integration region from $-\pi /4$ to $\pi /4$ enforce a special procedure. We have to continue the Haar measure periodically into the full interval from $-\pi $ to $\pi $ by expanding it into a Fourier series. This results in \begin{equation} \frac{\pi }{2 }\cos(2\vartheta)_{periodic} = \sum_{\nu =- \infty }^ \infty \frac{(-1)^\nu e^{4i\nu \vartheta}}{1-4\nu ^2}. \end{equation} The unitary matrices $U$ in (4.1) can be decomposed into a linear combination of $1,\sigma _k$. It is then easily seen that they are invariant under the substitution $\vartheta \rightarrow \pi /2 - \vartheta$, if, simultaneously, one substitutes $\varphi \rightarrow \varphi +\pi /2,\psi \rightarrow \psi +\pi /2$. The integral is invariant under the latter substitutions due to the periodicity in $\varphi $ and $\psi $. Together with similar relations, the $\vartheta$-integration can thus also be extended to the interval from $-\pi $ to $\pi $ and subsequently from $- \infty $ to $ \infty $ if we use the periodic Haar measure in (4.5). Next we introduce again the continuum fields $A_\mu ^{(a)}$, the longitudinal components on the links are connnected to the link angles by \begin{equation} \varphi =\frac{g}{2 }\int {\bf A}^{(1)}d{\bf x}, \quad \vartheta =\frac{g}{2 }\int {\bf A}^{(2)}d{\bf x}, \quad \psi =\frac{g}{2 }\int {\bf A}^{(3)}d{\bf x},\mbox{\quad with\quad }g^2=4/\beta . \end{equation} Contrary to the abelian U(1) case the procedure is now no longer gauge invariant, because we introduce an ordinary exponential, not a path ordered exponential, along the link. We have, of course, the freedom to to this. Next we introduce the free interpolating action as in (2.7), with the sole difference that we have to sum over the three SU(2) indices $a$ in the potentials $A_\mu ^{(a)}$. The situation is now quite similar to the abelian case. The expansion for $E^{(n)}$ looks as in (2.15), (2.16) where now $E_p = (1/2) \mbox{Tr} U_p$. Again only connected configurations of plaquettes contribute in the expansion. The plaquette actions, when evaluated by using (4.4), contain 10 (not 12) projection operators, because two of the unitary matrices enter as adjoints, and neighboring identical $\sigma $ - matrices can be combined. The traces are therefore 10 fold sums with $2^{10}$ terms. The plaquette angles appear in exponentials only. The path integrations over $\varphi $ and $\psi $ (more precisely, those over $ A_\mu ^{(1)}$ and $ A_\mu ^{(3)})$ can be performed as before. In the $\vartheta $ - integrations one has to consider in addition the periodic continuation of the Haar measure (4.5), which also only contains exponentials. In this way the following functions, defined by infinite sums, arise:\alpheqn \begin{eqnarray} f_m(\lambda ) & = &\sum_{\nu = - \infty }^ \infty \frac{(-1)^\nu e^{-(4\nu +m)^2\lambda /4}}{1-4\nu ^2 }\\ h_m(\lambda ) & = & f_m(\lambda )/f_0(\lambda ). \end{eqnarray} \reseteqn Up to second order we will only need the functions $h_m(\lambda )$ for $m = 1,2,3$, obviously $h_m(\lambda ) = h_{-m}(\lambda )$. The sums in (4.7) converge rapidly, therefore the consideration of a few terms is sufficient for the computation. The evaluation of the full traces is only necessary in the case of multiply occupied plaquettes. In all configurations where links belong to one plaquette only, it is much simpler to perform the integrations over the corresponding link variables first. This removes the corresponding $\sigma $ - matrices and leads to much simpler traces. For $m$-fold occupied plaquettes, on the other hand, a calculation by brute force becomes prohibitively complicated very soon, because it would involve a sum over $2^{10m}$ terms! Fortunately one can reduce the complexity of the problem by using a simple group theoretical relation: \begin{equation} [\mbox{Tr} U_{1/2}]^2 = 1 + \mbox{Tr} U_1. \end{equation} Here $U_{1/2}$ and $U_1$ denote the $SU(2)$ representation matrices in the spin 1/2 and 1 representation respectively. The latter are simply related to the former by replacing $e^{i\sigma _k\alpha }$ by $e^{i2J_k\alpha }$, with $J_k$ the $3\times 3$ representation matrices. Application of (4.8) reduces the complexity, but not quite as much as it appears at first sight. The matrices $J_k$ fulfil $J_k^{2n} = J_k^2$ for $n\geq 1$ and $J_k^{2n+1} = J_k$ for $n \geq 0$. Contrary to the spin 1/2 case, however, $J_k^2\neq 1$. Therefore the relation corresponding to (4.4) is slightly more complicated: \begin{equation} e^{i2J_k\alpha } = \sum_{s=0,\pm 1}e^{i2s\alpha } \bar{P}_s^{(k)},\quad \mbox{with}\quad \bar{P}_0^{(k)} = 1-J_k^2,\quad \bar{P}_{\pm 1}^{(k)} = \frac{1}{2 }(J_k^2\pm J_k). \end{equation} Nevertheless, the simplification obtained by using (4.8) is sizeable. E.g. for the twofold occupied plaquette it reduces the number of terms in the sum from $2^{20}$ to $3^{10}$, i.e. by a factor of $\approx 18$. In this way it was possible to calculate all contributions of the second order without special effort, except the threefold plaquette. But fortunately the latter appears only once and can be safely neglected among the 977 configurations which contribute in second order. After these remarks we can come to the results:\\[1ex] {\bf Order 1 }\\[1ex] The first order result reads \begin{eqnarray} \lefteqn{E^{(1)}(\beta ,\lambda ) = }\nonumber\\ & & [1 + \delta \lambda (-2 + 4 h_1'(\lambda )/h_1(\lambda )] e^{-2\lambda } h_1^4 (\lambda ) \nonumber\\ & & +\delta \beta [(1+e^{-8\lambda }+2 e^{-4\lambda } h_2^4(\lambda ))/4 - e^{-4\lambda } h_1^8(\lambda)] \nonumber\\ & & +\delta \beta (2d-3) [ \frac{1}{2 } e^{-3\lambda } h_1^6(\lambda ) \{(1+e^{-\lambda })^2 (1 + h_2(\lambda )) + (1-e^{-\lambda })^2 (1 - h_2(\lambda ))\} \nonumber\\ & & \quad \quad \quad \quad \quad \quad - 4 e^{-4\lambda } h_1^8(\lambda ) ]. \end{eqnarray} The first term with $\delta \beta $ arises from the double plaquette, the second one from the neighboring plaquettes. The whole situation is rather similar to the U(1) case. The minimum with respect to $\lambda $ now disappears at $\beta _{pi} = 2.1377$. In fig. 5 we show the result, together with the one obtained by the power trick and the (0,1) Pad\'e transformation. The quality of the results is comparable to the U(1) case. \\[1ex] {\bf Order 2 }\\[1ex] The extremum disappears at $\beta _{pi} = 2.2336$, the corresponding energy lies somewhat above the data. For increasing $\beta $, however, the curve drops down below the data as in the U(1) case which shows again the problematics of even orders of the expansion. We therefore performed the (1,1) Pad\'e transformation as before. It has an extremum for all $\beta $ above $\beta _{pi} = 2.2737$. The result is also shown in fig. 5. The second order Pad\'e transformation reduces the error by a factor of about one half compared to the first order. The agreement with the data is not as good as for U(1) in $d=4$. The poorer quality of the approximation in the non abelian case could be due to the specific gauge dependence introduced by our definition of the fields in (4.6), or to the non analyticity in $\vartheta $ resulting from the periodic continuation of the Haar measure in (4.5). \setcounter{equation}{0}\addtocounter{saveeqn}{1}% \section{Conclusions} The present work can be considered as an empirical study of the convergence properties of the optimized $\delta $ - expansion for non trivial systems with an infinite number of degrees of freedom and possibly a phase transition. Because we were able to go up to fourth order in the case of U(1) in $d=4$ dimensions, we believe that our conclusions can be considered as quite reliable. We begin with the positive aspects: For $\beta $ above the critical value $\beta _c$ of the phase transition, one has rapid convergence in the whole region. The second order (1,1) Pad\'e transformation gives already perfect agreement with the MC data. For $\beta <\beta _c$ one needs manipulations like the power trick or a suitable Pad\'e transformation in order that the principle of minimal sensitivity can be applied down to lower $\beta $. The discrepancy with the data is much larger in this region, but a clear tendency of convergence towards the latter is visible. The fourth order (2,2) Pad\'e transformation gives a remarkably good approximation down to $\beta > 0.95$ which hardly can be considered as accidental. The large increase of the energy within a small region of $\beta $ is clearly reproduced. This result exceeds the previous work in refs \cite{DM} -\cite{AJ} where good approximations where obtained up to the transition region (from above or from below, respectively) but not beyond. Unfortunately higher order calculations appear not feasible with reasonable effort without an additional idea. In the case of U(1) it is quite trivial to write down the contribution for any configuration with the help of (2.19); the only cumbersome task is the correct counting of equivalent configurations. Rigorous convergence proofs for complex systems as considered here are also not available. So one can only speculate that higher orders would stay stable above the transition region and further improve the results below. Let us finally mention the dubious aspects of the whole approach. Its ``distinctly alchemical flavor'' \cite{Osc} has clearly shown up again. The ambiguity in the choice of the interpolating action appears only as a minor deficiency; this choice is an art as in all variation methods. A really serious problem is that we have no a priori principle whatever, which tells us which of several extrema should be chosen. Even worse, there are cases, as in the even orders, where extrema exist in some $\beta $-interval, but none of them belongs to the physical situation. There are two possibilities how to find the relevant extremum, or, alternatively, to reject them all. The first one is to use additional, at least crude, information, say from MC data. The second one, which relies completely on the expansion itself, is to look for the convergence of the solution by comparing different orders of the expansion. Both criteria can be successfully applied to our figures. In view of the described general problematics it appears even more impressive, how the choice of the ``correct'' extremum leads to excellent results. This should encourage further theoretical work on the method. \newpage \setcounter{equation}{0}\addtocounter{saveeqn}{1}%
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Acknowledgement} This work is supported by \section{Background}\label{sec:background} \subsection{Object Detection} Object detection has received significant attention and achieved striking improvements in recent years, as demonstrated in popular object detection competitions such as PASCAL VOC detection challenge~\cite{everingham2010pascal, everingham2015pascal}, ILSVRC large scale detection challenge~\cite{russakovsky2015imagenet} and MS COCO large scale detection challenge~\cite{lin2014microsoft}. Object detection aims at outputting instances of semantic objects with a certain class label such as humans, cars. It has wide applications in many computer vision tasks including face detection, face recognition, pedestrian detection, video object co-segmentation, image retrieval, object tracking and video surveillance. Different from image classification, object detection is not to classify the whole image. Position and category information of the objects are both needed which means we have to segment instances of objects from backgrounds and label them with position and class. The inputs are images or video frames while the outputs are lists where each item represents position and category information of candidate objects. In general, object detection seeks to extract discriminative features to help in distinguishing the classes. Methods for object detection generally fall into 3 categories: 1) traditional machine learning based approaches; 2) region proposal based deep learning approaches; 3) end-to-end deep learning approaches. For traditional machine learning based approaches, one of the important steps is to design features. Many methods have been proposed to first design features~\cite{viola2001rapid,viola2004robust,lowe1999object,dalal2005histograms} and apply techniques such as support vector machine (SVM)~\cite{hearst1998support} to do the classification. The main steps of traditional machine learning based approaches are: \begin{itemize} \item Region Selection: using sliding windows at different sizes to select candidate regions from whole images or video frames; \item Feature Extraction: extract visual features from candidate regions using techniques such as Harr feature for face detection, HOG feature for pedestrian detection or general object detection; \item Classifier: train and test classifier using techniques such as SVM. \end{itemize} The tradition machine learning based approaches have their limitations. The scheme using sliding windows to select RoIs (Regions of Interests) increases computation time with a lot of window redundancies. On the other hand, these hand-crafted features are not robust due to the diversity of objects, deformation, lighting condition, background and etc., while the feature selection has a huge effect on classification performance of candidate regions. Recent advances in deep learning, especially in computer vision have shown that Convolutional Neural Networks (CNNs) have a strong capability of representing objects and help to boost the performance of numerous vision tasks, comparing to traditional heuristic features \cite{dalal2005histograms}. For deep learning based approaches, there are convolutional neural networks (CNN) to extract features of region proposals or end-to-end object detection without specifically defining features of a certain class. The well-performed deep learning based approaches of object detection includes Region Proposals (R-CNN)~\cite{girshick2014rich}, Fast R-CNN~\cite{girshick2015fast}, Faster R-CNN~\cite{ren2015faster}), Single Shot MultiBox Detector (SSD)~\cite{liu2016ssd}, and You Only Look Once (YOLO)~\cite{redmon2016you}. Usually, we adopt region proposal methods (Category 2) for producing multiple object proposals, and then apply a robust classifier to further refine the generated proposals, which are also referred as two-stage method. The first work of the region proposal based deep learning approaches is R-CNN~\cite{girshick2014rich} proposed to solve the problem of selecting a huge number of regions. The main pipeline of R-CNN~\cite{girshick2014rich} is: 1) gathering input images; 2) generating a number of region proposals (e.g. 2000); 3) extracting CNN features; 4) classifying regions using SVM. It usually adopts Selective Search (SS), one of the state-of-art object proposals method \cite{Uijlings13} applied in numerous detection task on several fascinating systems\cite{girshick2014rich,girshick2015fast,ren2015faster}, to extract these regions from the image and names them region proposals. Instead of trying to classify all the possible proposals, R-CNN select a fixed set of proposals (e.g. 2000) to work with. The selective search algorithm used to generate these region proposals includes: (1) Generate initial sub-segmentation, generate many candidate regions; (2) Use greedy algorithm to recursively combine similar regions into larger ones; (3) Use the generated regions to produce the final candidate region proposals. These candidate region proposals are warped into a square and fed into a convolutional neural network (CNN) which acts as the feature extractor. The output dense layer consists of the extracted features to be fed into an SVM~\cite{hearst1998support} to classify the presence of the object within that candidate region proposal. The main problem of R-CNN~\cite{girshick2014rich} is that it is limited by the inference speed, due to a huge amount of time spent on extracting features of each individual region proposal. And it cannot be applied in applications requiring a real-time performance (such as online video analysis). Later, Fast R-CNN~\cite{girshick2015fast} is proposed to improve the speed by avoiding feeding raw region proposals every time. Instead, the convolution operation is done only once per image and RoIs over the feature map are generated. Faster R-CNN \cite{ren2015faster} further exploits the shared convolutional features to extract region proposals used by the detector. Sharing convolutional features leads to substantially faster speed for object detection system. The third type is end-to-end deep learning approaches which do not need region proposals (also referred as one-stage method). The pioneer works are SSD~\cite{liu2016ssd} and YOLO~\cite{redmon2016you} . An SSD detector \cite{liu2016ssd} works by adding a sequence of feature maps of progressively decreasing the spatial resolution to replace the two stage's second classification stage, allowing a fast computation and multi-scale detection on one single input. YOLO detecor is an object detection algorithm much different from the region based algorithms. In YOLO~\cite{redmon2016you}, it regards object detection as an end-to-end regression problem and uses a single convolutional network to predict the bounding boxes and the corresponding class probabilities. It first takes the image and splits it into an $S \times S$ grid, within each of the grid we take $m$ bounding boxes. For each of the bounding box with multi scales, the convolutional neural network outputs a class probability and offset values for the bounding box. Then it selects bounding boxes which have the class probability above a threshold value and uses them to locate the object within the image. YOLO~\cite{redmon2016you} is orders of magnitude faster (45 frames per second) than other object detection approaches but the limitation is that it struggles with small objects within the image. \subsection{Multiple Object Tracking} Video object tracking is to locate objects over video frames and it has various important applications in robotics, video surveillance and video scene understanding. Based on the number of moving objects that we wish to track, there are Single Object Tracking (SOT) problem and Multiple Object Tracking (MOT) problem. In addition to detecting objects in video frame, the MOT solution requires to robustly associate multiple detected objects between frames to get a consistent tracking and this data association part remains very challenging. In MOT tasks, for each frame in a video, we aim at localizing and identifying all objects of interests, so that the identities are consistent throughout the video. Typically, the main challenge lies on speed, data association, appearance change, occlusions, disappear / re-enter objects and etc. In practice, it is desired that the tracking could be performed in real-time so as to run as fast as the frame-rate of the video. Also, it is challenging to provide a consistent labeling of the detected objects in complex scenarios such as objects change appearance, disappear, or involve severe occlusions. In general, Multiple Object Tracking (MOT) can be regarded as a multi-variable estimation problem~\cite{luo2014multiple}. The objective of multiple object tracking can be modeled by performing MAP (maximal a posteriori) estimation in order to find the \textit{optimal} sequential states of all the objects, from the conditional distribution of the sequential states given all the observations: \begin{equation} \label{eq:map} \widehat{\mathbf{S}}_{1:t} = \underset{\mathbf{S}_{1:t}}\argmax \ P\left(\mathbf{S}_{1:t}|\mathbf{O}_{1:t}\right). \end{equation} where $\mathbf{s}_t^i$ denotes the state of the $i$-th object in the $t$-th frame. $\mathbf{S}_t = (\mathbf{s}_t^1, \mathbf{s}_t^2, ..., \mathbf{s}_t^{M_t})$ denotes states of all the $M_t$ objects in the $t$-th frame. $\mathbf{S}_{1:t} = \{\mathbf{S}_1, \mathbf{S}_2, ..., \mathbf{S}_t\}$ denotes all the sequential states of all the objects from the first frame to the $t$-th frame. In tracking-by-detection, $\mathbf{o}_t^i$ denotes the collected observations for the $i$-th object in the $t$-th frame. $\mathbf{O}_t = (\mathbf{o}_t^1, \mathbf{o}_t^2, ..., \mathbf{o}_t^{M_t})$ denotes the collected observations for all the $M_t$ objects in the $t$-th frame. $\mathbf{O}_{1:t} = \{\mathbf{O}_1, \mathbf{O}_2, ..., \mathbf{O}_t\}$ denotes all the collected sequential observations of all the objects from the first frame to the $t$-th frame. Different Multiple Object Tracking (MOT) algorithms can be thought as designing different approaches to solving the above MAP problem, either from a \emph{probabilistic inference} perspective, e.g. Kalman filter or a \emph{deterministic optimization} perspective, e.g. Bipartite graph matching, and machine learning approaches. Multiple Object Tracking (MOT) approaches can be categorized by different types of models. A distinction based on \textit{Initialization Method} is that of Detection Based Tracking (DBT) versus Detection Free Tracking (DFT). DBT refers that before tracking, object detection is performed on video frames. DBT methods involve two distinct jobs between the detection and tracking of objects. In this paper, we focus on DBT, also refers as tracking-by-detection for MOT. The reason is that DBT methods are widely used due to excellent performance with deep learning based object detectors, while DFT methods require manually annotations of the targets and bad results could arise when a new unseen object appears. Another important distinction based on \textit{Processing Mode} is that of Online versus Offline models. An Online model receives video input on a frame-by-frame basis, and gives output per frame. This means only information from past frames and the current frame can be used. Offline models have access to the entire video, which means that information from both past and future frames can be used. Tracking-by-detection methods are usually utilized in online tracking models. A simple and classic pipeline is as (1) Detect objects of interest; (2) Predict new locations of objects from previous frames; (3) Associate objects between frames by similarity of detected and predicted locations. Well-performed CNN architectures can be used for object detection such as Faster R-CNN~\cite{ren2015faster}, YOLO~\cite{redmon2016you} and SSD~\cite{liu2016ssd}. For prediction of new locations of tracked objects, approaches model the velocity of objects, and predict the position in future frames using optical flow, or recurrent neural networks, or Kalman filters. The association task is to determine which detection corresponds to which object, or a detection represents a new object. One popular dataset for Multiple Object Tracking (MOT) is MOTChallenge~\cite{leal2015motchallenge}. In MOTChallenge~\cite{leal2015motchallenge}, detections for each frame are provided in the dataset, and the tracking capability is measured as opposed to the detection quality. Video sequences are labeled with bounding boxes for each pedestrian collected from multiple sources. This motivates the use of tracking-by-detection paradigm. MDPs~\cite{xiang2015learning} is a tracking-by-detection method and achieved the state-of-the-art performance on MOTChallenge~\cite{leal2015motchallenge} Benchmark when it was proposed. Major contributions can be solving MOT by learning a MDP policy in a reinforcement learning fashion which benefits from both advantages of offline-learning and online-learning for data association. It also can handle the birth / death and appearance / disappearance of targets by simply treating them as state transitions in the MDP while leveraging existing online single object tracking methods. SORT~\cite{bewley2016simple} is a simple and real-time Multiple Object Tracking (MOT) method where state-of-the-art tracking quality can be achieved with only classical tracking methods. It is the most widely used real-time online Multiple Object Tracking (MOT) method and is very efficient for real-time applications in practice. Due to the simplicity of SORT~\cite{bewley2016simple}, the tracker updates at a rate of 260 Hz which is over 20x faster than other state-of-the-art trackers. On the MOTChallenge~\cite{leal2015motchallenge}, SORT~\cite{bewley2016simple} with a state-of-the-art people detector ranks on average higher than MHT~\cite{kim2015multiple} on standard detections. DeepSort~\cite{wojke2017simple} is an extension of SORT~\cite{bewley2016simple} which integrates appearance information to improve the performance of SORT~\cite{bewley2016simple} which can track through longer periods of occlusion, making SORT~\cite{bewley2016simple} a strong competitor to state-of-the-art online tracking algorithms. \subsection{Near Accident Detection} In addition to vehicle detection and vehicle tracking, analysis of the interactions or behavior of tracked vehicles has emerged as an active and challenging research area in recent years~\cite{sivaraman2011learning,hermes2009long,wiest2012probabilistic}. Near Accident Detection is one of the highest levels of semantic interpretation in characterizing the interactions of vehicles on the road. The basic task of near accident detection is to locate near accident regions and report them over video frames. In order to detect near accident on traffic scenes, robust vehicle detection and vehicle tracking are the prerequisite tasks. Most of the near accident detection approaches are based on motion cues and trajectories. The most typical motion cues are optical flow and trajectory. Optical flow is widely utilized in video processing tasks such as video segmentation~\cite{huang2018supervoxel}. A trajectory is defined as a data sequence containing several concatenated state vectors from tracking, an indexed sequence of positions and velocities over a given time window. In recent years, researches have tried to make long-term classification and prediction of vehicle motion. Based on vehicle tracking algorithms such as Kalman filtering, optimal estimation of the vehicle state can be computed one frame ahead of time. Trajectory modeling approaches try to predict vehicle motion more frames ahead of time, based on models of typical vehicle trajectories~\cite{sivaraman2011learning,hermes2009long,wiest2012probabilistic}. In~\cite{sivaraman2011learning}, it used clustering to model the typical trajectories in highway driving and hidden Markov modeling for classification. In~\cite{hermes2009long}, trajectories are classified using a rotation-invariant version of the longest common subsequence as the similarity metric between trajectories. In~\cite{wiest2012probabilistic}, it uses variational Gaussian mixture modeling to classify and predict the long-term trajectories of vehicles. Over the past two decades, for automatic traffic accident detection, a great deal of literature emerged in various ways. Several approaches have been developed based on decision trees, Kalman filters, or time series analysis, with varying degrees of success in their performance~\cite{srinivasan2003traffic,srinivasan2001hybrid,xu1998real,shuming2002traffic,jiansheng2014vision,bhonsle2000database}. Ohe et al.~\cite{ohe1995method} use neural networks to detect traffic incidents immediately by utilizing one minute average traffic data as input, and determine whether an incident has occurred or not. In~\cite{ikeda1999abnormal}, the authors propose a system to distinguish between different types of incidents for automatic incident detection. In~\cite{kimachi1994incident}, it investigates the abnormal behavior of vehicle related to accident based on the concepts of fuzzy theory where accident occurrence relies on the behavioral abnormality of multiple continual images. Zeng et al.~\cite{zeng2008data} propose an automatic accident detection approach using D-S evidence theory data fusion based on the probabilistic output of multi-class SVMs. In~\cite{sadeky2010real}, it presents a real-time automatic traffic accidents detection method using Histogram of Flow Gradient (HFG) and the trajectory of vehicles by which the accident was occasioned is determined in case of occurrence of an accident. In~\cite{kamijo2000traffic}, it develops an extendable robust event recognition system for Traffic Monitoring and Accident Detection based on the hidden Markov model (HMM). ~\cite{chen2010automatic} proposed a method using SVM based on traffic flow measurement. A similar approach using BP-ANN for accident detection has been proposed in~\cite{srinivasan2004evaluation,ghosh2003wavelet}. In~\cite{saunier2010large}, it presented a refined probabilistic framework for the analysis of road-user interactions using the identification of potential collision points for estimating collision probabilities. Other methods for Traffic Accident Detection has also been presented using Matrix Approximation~\cite{xia2015vision}, optical flow and Scale Invariant Feature Transform (SIFT)~\cite{chen2016vision}, Smoothed Particles Hydrodynamics (SPH)~\cite{ullah2015traffic}, and adaptive traffic motion flow modeling~\cite{maaloul2017adaptive}. With advances in object detection with deep neural networks, several convolutional neural networks (CNNs) based automatic traffic accident detection methods~\cite{singh2018deep,sultani2018real} and recurrent neural networks (RNNs) based traffic accident anticipation methods~\cite{chan2016anticipating,suzuki2018anticipating} have been proposed along with some traffic accident dataset~\cite{sultani2018real,suzuki2018anticipating,kataoka2018drive,shah2018accident} of surveillance videos or dashcam videos~\cite{chan2016anticipating}. However, either most of these methods do not have real-time performances for online accident detection without using future frames, or most of these methods mentioned above give unsatisfactory results. Besides that, no proposed dataset contains videos with top-down views such as drone/Unmanned Aerial Vehicles (UAVs) videos, or omnidirectional camera videos for traffic analysis. \section{Conclusion}\label{sec:conclusion} We have proposed a two-stream Convolutional Network architecture that performs real-time detection, tracking, and near accident detection of road users in traffic video data. The two-stream Convolutional Networks consist of a spatial stream network to detect individual vehicles and likely near accident regions at the single frame level, by capturing appearance features with a state-of-the-art object detection method. The temporal stream network leverages motion features of detected candidates to perform multiple object Tracking and generate individual trajectories of each tracking target. We detect near accident by incorporating appearance features and motion features to compute probabilities of near accident candidate regions. We have present a challenging Traffic Near Accident dataset (TNAD), which contains different types of traffic interaction videos that can be used for several vision-based traffic analysis tasks. On the TNAD dataset, experiments have demonstrated the advantage of our framework with an overall competitive qualitative and quantitative performance at high frame rates. The future direction of the work is the image stitching mehtods for our proposed multi-camera fisheye videos. \begin{acks} The authors would like to thank City of Gainesville for providing real traffic fisheye video data. \end{acks} \section{Experiments}\label{sec:experiments} In this section, we first introduce our novel traffic near accident dataset (TNAD) and describe the preprocessing, implementation detail and experiments settings. Finally, we present qualitative and quantitative evaluation in terms of the performance of object detection, multiple object tracking, and near accident detection, and comparison between other methods and our framework. \subsection{Traffic Near Accident Dataset (TNAD)} As we mentioned in Section~\ref{sec:background}, there is no such a comprehensive traffic near accident dataset containing top-down views videos such as drone/Unmanned Aerial Vehicles (UAVs) videos, or omnidirectional camera videos for traffic analysis. Therefore, we have built our own dataset, traffic near accident dataset (TAND) which is depicted in Figure~\ref{fig:dataset}. Intersections tend to experience more and severe near accident due to factors such as angles and turning collisions. Traffic Near Accident Dataset (TNAD) containes 3 types of video data of traffic intersections that could be utilized for not only near accident detection but also other traffic surveillance tasks including turn movement counting. \begin{figure} \includegraphics[width=1\linewidth]{samples/dataset.png} \caption{Samples of Traffic Near Accident Dataset (TNAD). Our dataset consists of a large number of diverse intersection surveillance videos and different near accident (cars and motorcycles). Yellow rectangles and lines represent the same object in multi-camera video. White circles represent the near accident regions.} \label{fig:dataset} \end{figure} The first type is drone video that monitoring an intersection with top-down view. The second type of intersection videos is real traffic videos acquired by omnidirectional fisheye cameras that monitoring small or large intersections. It is widely used in transportation surveillance. These video data can be directly used as input for our vision-intelligent framework, and also pre-processing of fisheye correction can be applied to them for better surveillance performance. The third type of video is video data simulated by game engine for the purpose to train and test with more near accident samples. The traffic near accident dataset (TAND) consists of 106 videos with total duration over 75 minutes with frame rates between 20 fps to 50 fps. The drone video and fisheye surveillance videos are recorded in Gainesville, Florida at several different intersections. Our videos are challenging than videos in other datasets due to the following reasons: \begin{itemize} \item Diverse intersection scene and camera perspectives: The intersections in drone video, fisheye surveillance video, and simulation video are much different. Additionally, the fisheye surveillance video has distortion and fusion technique is needed for multi-camera fisheye videos. \item Crowded intersection and small object: The number of moving cars and motorbikes per frame are large and these objects are relatively smaller than normal traffic video. \item Diverse accidents: Accidents involving cars and motorbikes are all included in our dataset. \item Diverse Lighting condition: Different lighting conditions such as daylight and sunset are included in our dataset. \end{itemize} We manually annotate the spatial location and temporal locations of near accidents and the still/moving objects with different vehicle class in each video. 32 videos with sparse sampling frames (only 20\% frames of these 32 videos are used for supervision) are used only for training the object detector. The remaining 74 videos are used for testing. \subsection{Fisheye and multi-camera video} The fisheye surveillance videos are recorded from real traffic data in Gainesville. We have collected 29 single-camera fisheye surveillance videos and 19 multi-camera fisheye surveillance videos monitoring a large intersection. We conduct two experiments, one directly using these raw videos as input for our system and another is first to do preprocessing for correcting fisheye distortion on video level and feed them into our system. As the original survellance video has many visual distortions especially near the circular boundaries of cameras, our system performs better on these after preprocessing videos. Therefore we keep the distortion correction preprocessing in the experiments for fisheye videos. For large intersection, two fisheye cameras placed at opposite directions are used for surveillance and each of them mostly shows half of roads and real traffic for the large intersection. In this paper, we do not investigate the real stitching problem (we'll leave it for further work). First, we do fisheye distortion correction and combine the two video with similar points. Then we apply a simple object level stitching methods by assigning the object identity for the same objects across the left and right video using similar features and appearing/vanishing positions. \subsection{Model Training} The layer configuration of our spatial and temporal convolutional neural networks (based on Darknet-19~\cite{wojke2017simple}) is schematically shown in Table~\ref{net}. We adopt the Darknet-19~\cite{wojke2017simple} for classification and detection with deepSORT using data association metric combining deep appearance feature. We implement our framework on Tensorflow and do multi-scale training and testing with a single GPU (Nvidia Titan X Pascal). Training a single spatial convolutional network takes 1 day on our system with 1 Nvidia Titan X Pascal card. For classification and detection training, we use the same training strategy as YOLO9000~\cite{wojke2017simple}. We train the network on our TNAD dataset with 4 class of vehicle (motorcycle, bus, car, and truck) for 160 epochs using stochastic gradient descent with a starting learning rate of 0.1 for classification, and $10^{-3}$ for detection (dividing it by 10 at 60 and 90 epochs.), weight decay of 0.0005 and momentum of 0.9 using the Darknet neural network framework~\cite{wojke2017simple}. \begin{figure} \includegraphics[width=1\linewidth]{samples/objectdetection.png} \caption{Sample results of object detection on TNAD dataset. \textbf{Left and middle left:} results of directly using YOLOv2~\cite{redmon2017yolo9000} detector pretrained on generic objects (VOC dataset)~\cite{Everingham15}. \textbf{Middle right and right:} results of our spatial network with multi-scale training based on YOLOv2~\cite{redmon2017yolo9000}.} \label{fig:objectdetection} \end{figure} \begin{figure} \includegraphics[width=\linewidth, height=8cm]{samples/tracking.png} \caption{Tracking and trajectory comparison with Urban Tracker~\cite{jodoin2014urban} and TrafficIntelligence~\cite{jackson2013flexible} on drone videos of TNAD dataset.\textbf{Left:} results of Urban Tracker~\cite{jodoin2014urban}(BSG with Multilayer and Lobster Model). \textbf{Middle:} results of TrafficIntelligence~\cite{jackson2013flexible}. \textbf{Right:} results of our spatial network.} \label{fig:tracking} \end{figure} \begin{figure} \includegraphics[width=1\linewidth]{samples/nearaccident.png} \caption{Sample results of tracking, trajectory and near accident detection of our two-stream Convolutional Networks on simulation videos of TNAD dataset.\textbf{Left:} tracking results based on DeepSORT~\cite{wojke2017simple}. \textbf{Middle:} trajectory results of our spatial network. ~\cite{jackson2013flexible}.\textbf{Right:} near accident detection results of our two-stream Convolutional Networks.} \label{fig:nearaccident} \end{figure} \begin{table}[htbp] \caption{Frame level near accident detection results }\label{tab:quantitative} \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \hline Video ID & near Accident (pos/neg) & \vtop{\hbox{\strut \# of frame for positive Near accident}\hbox{\strut (groundtruth)/total frame}} & \vtop{\hbox{\strut \# of frame for correct localization }\hbox{\strut (IoU >= 0.6)}} & \vtop{\hbox{\strut \# of frame for incorent localization}\hbox{\strut (IoU<0.6)}}\tabularnewline \hline \hline 1 & pos & 12/245 & 12 & 0 \tabularnewline \hline 2 & neg & 0/259 & 0 &0 \tabularnewline \hline 3 & neg & 0/266 & 0 & 0 \tabularnewline \hline 4& pos & 16/267 & 13 & 0 \tabularnewline \hline 5 & pos & 6/246 & 4 & 0\tabularnewline \hline 6 & pos & 4/243 & 4 & 0\tabularnewline \hline 7 & neg & 0/286 & 0 & 0\tabularnewline \hline 8 & pos & 2/298 & 0 & 0\tabularnewline \hline 9 & pos & 27/351 & 23 & 6\tabularnewline \hline 10 & neg & 0/301 & 0 & 0\tabularnewline \hline 11 & neg & 0/294 & 0 & 0\tabularnewline \hline 12 & pos & 6/350 & 6 & 6 \tabularnewline \hline 13 & neg & 0/263 & 0 & 0 \tabularnewline \hline 14 & pos & 5/260 & 5 & 0 \tabularnewline \hline 15 & pos & 4/326 & 4 & 0 \tabularnewline \hline 16 & neg & 0/350 & 0 & 0 \tabularnewline \hline 17 & neg & 0/318 & 0 & 1\tabularnewline \hline 18 & pos & 10/340 & 8 & 0\tabularnewline \hline 19 & pos & 6/276 & 0 & 0\tabularnewline \hline 20 & pos & 8/428 & 4 & 0\tabularnewline \hline 21 & neg & 0/259 & 0 & 0 \tabularnewline \hline 22 & pos & 10/631 & 8 & 0 \tabularnewline \hline 23 & pos & 35/587 & 30 & 2 \tabularnewline \hline 24 & neg & 0/780 & 0 & 0 \tabularnewline \hline 25 & neg & 0/813 & 0 & 0\tabularnewline \hline 26 & neg & 0/765 & 0 & 0 \tabularnewline \hline 27 & pos & 8/616 & 8 & 0 \tabularnewline \hline 28 & pos & 10/243 & 10 & 1\tabularnewline \hline 29 & pos & 6/259 & 6 & 0 \tabularnewline \hline 30 & pos & 17/272 & 15 & 3 \tabularnewline \hline \end{tabular} } \end{table} \subsection{Qualitative results} We present some example experimental results of object detection, multiple object tracking and near accident detection on our traffic near accident dataset (TNAD) for drone videos, fisheye videos, and simulation videos. For object detection (Figure~\ref{fig:objectdetection}), we present some detection results of directly using YOLO detector~\cite{redmon2017yolo9000} trained on generic objects (VOC dataset)~\cite{Everingham15} and results of our spatial network with multi-scale training based on YOLOv2~\cite{redmon2017yolo9000}. For multiple object tracking (Figure~\ref{fig:tracking}), we present comparison of our temporal network based on DeepSORT~\cite{wojke2017simple} with Urban Tracker~\cite{jodoin2014urban} and TrafficIntelligence~\cite{jackson2013flexible}. For near accident detection (Figure~\ref{fig:nearaccident}), we present near accident detection results along with tracking and trajectories using our two-stream Convolutional Networks method. The object detection results shows that, with multi-scale training on TNAD dataset, the performance of the detector significant improves. It can perform well vehicle detection on top-down view surveillance videos even for small objects. In addition, we can achieve fast detection rate at 20 to 30 frame per second. Overall, this demonstrates the effectiveness of our spatial neural network. For the tracking part, since we use a tracking-by-detection paradigm, our methods can handle still objects and measure their state where Urban Tracker~\cite{jodoin2014urban} and TrafficIntelligence~\cite{jackson2013flexible} can only handle tracking for moving objects. On the other hand, Urban Tracker~\cite{jodoin2014urban} and TrafficIntelligence~\cite{jackson2013flexible} can compute dense trajectories of moving objects with good accuracy but they have slower tracking speed around 1 frame per second. For accident detection, our two-stream Convolutional Networks are able to do spatial localization and temporal localization for diverse accidents regions involving cars and motorcycles. The three sub-tasks (object detection, multiple object tracking and near accident detection) can always achieve real-time performance at high frame rate, 20 to 30 frame per second according to the frame resolution (e.g. 28 fps for 960$\times$480 image frame). Overall, the qualitative results demonstrate the effectiveness of our spatial neural network and temporal network respectively. \begin{table}[] \caption{Quantitative evaluation.}\label{tab:prerecall} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{Benchmark Result}} & \multicolumn{2}{c|}{Predicted} \\ \cline{3-4} \multicolumn{2}{|c|}{} & Negative & Positive \\ \hline \multirow{2}{*}{Actual} & Negative & 11081 & 19 \\ \cline{2-4} & Positive & 32 & 160 \\ \hline \end{tabular} \end{table} \subsection{Quantitative results} Since our frame has three tasks and our dataset are much different than other object detection dataset, tracking dataset and near accident dataset such as dashcam accident dataset~\cite{chan2016anticipating}, it is difficult to compare individual quantitative performance for all three tasks with other methods. One of our motivation is to propose a vision-based solution for Intelligent Transportation System, we focus more on near accident detection and present quantitative analysis of our two-stream Convolutional Networks. The simulation videos are for the purpose to train and test with more near accident samples and we have 57 simulation videos with a total over 51,123 video frames. We sparsely sample only 1087 frames from them for whole training processing. We present the analysis of near accident detection for 30 testing videos (18 has positive near accident, 12 has negative near accident). Table~\ref{tab:quantitative} shows frame level near accident detection performance on 30 testing simulation videos. The performance of precision, recall and F-measure are presented in Table~\ref{tab:prerecall}. If a frame contains a near accident scenario and we can successfully localize it with Intersection of union (IoU) is large or equal than 0.6, this is a True Positive (TP). If we cannot localize it or localize it with Intersection of Union (IoU) is less than 0.6, this is a False Negative (FN). If a frame has no near accident scenario but we detect a near accident region, this is a False Positive (FP). Otherwise, this is a True Negative (TN). We compute the $\text{precision} = \frac{\text{TP}}{\text{TP + FP}}$, $\text{recall} = \frac{\text{TP}}{\text{TP + FN}}$, and F-measure $\text{F-measure} = \frac{2\times \text{precision} \times \text{recall}}{\text{precision + recall}}= \frac{\text{2TP}}{\text{2TP + FP + FN}}$. Our precision is about 0.894, recall is about 0.8333 and F1 score is about 0.863. The three sub-tasks (object detection, multiple object tracking and near accident detection) can always achieve real-time performance at high frame rate, 20~30 frame per second according to the frame resolution (e.g. 28 fps for 960$\times$480 image frame). In conclusion, we have demonstrated that our two-stream Convolutional Networks have an overall competitive performance for near accident detection on our TNAD dataset. \section{Introduction}\label{sec:intro} The technologies of Artificial Intelligence (AI) and Internet of Things (IoTs) are ushering in a new promising era of ''Smart Cities'', where billions of people around the world can improve the quality of their life in aspects of transportation, security, information and communications and etc. One example of the data-centric AI solutions is computer vision technologies that enables vision-based intelligence at the edge devices across multiple architectures. Sensor data from smart devices or video cameras can be analyzed immediately to provide real-time analysis for the Intelligent Transportation System (ITS). At traffic intersections, it has more volume of road users (pedestrians, vehicles), traffic movement, dynamic traffic event, near accidents and etc. It is a critically important application to enable global monitoring of traffic flow, local analysis of road users, automatic near accident detection. As a new technology, vision-based intelligence has a wide range of applications in traffic surveillance and traffic management~\cite{coifman1998real,valera2005intelligent,buch2011review,kamijo2000traffic,veeraraghavan2003computer, he2017single}. Among them, many research works have focused on traffic data acquirement with aerial videos~\cite{angel2002methods,salvo2017traffic}, where the aerial view provides better perspectives to cover a large area and focus resources for surveillance tasks. Unmanned Aerial Vehicles (UAVs) and omnidirectional cameras can acquire useful aerial videos for traffic surveillance especially at intersections with a broader perspective of the traffic scene, with the advantage of being both mobile, and able to be present in both time and space. UAVs has been exploited in a wide range of transportation operations and planning applications including emergency vehicle guidance, track vehicle movements. A recent trend of vision-based intelligence is to apply computer vision technologies to these acquired intersection aerial videos ~\cite{scotti2005dual,wang2006intelligent} and process them at the edge across multiple ITS architecture. From global monitoring of traffic flow for solving traffic congestion to quest for better traffic information, an increasing reliance of ITS has resulted in a need for better object detection (such as wide-area detectors for pedestrian, vehicles), and multiple vehicle tracking that yields traffic parameters such as flow, velocity and vehicle trajectories. Tracks and trajectories are measures over a length of path rather than at a single point. It is possible to tackle related surveillance tasks including traffic movement measurements (e.g. turn movement counting) and routing information. The additional information from vehicle trajectories could be utilized to improve near accident detection, by either detecting stopped vehicles with their collision status or identifying acceleration / deceleration patterns or conflicting trajectories that are indicative of near accidents. Based on the trajectories, it is also possible to learn and forecast vehicle trajectory to enable near accident anticipation. Generally, a vision-based surveillance tool for intelligent transportation system should meet several requirements: \begin{enumerate} \item Segment vehicles from the background and from other vehicles so that all vehicles (stopped or moving) are detected; \item Classify detected vehicles into categories: cars, buses, trucks, motorcycles and etc; \item Extract spatial and temporal features (motion, velocity, trajectory) to enable more specific tasks including vehicle tracking, trajectory analysis, near accident detection, anomaly detection and etc; \item Function under a wide range of traffic conditions (light traffic, congestion, varying speeds in different lanes) and a wide variety of lighting conditions (sunny, overcast, twilight, night, rainy, etc.); \item Operate in real-time. \end{enumerate} Over the decades, although an increasing number of research on vision-based system for traffic surveillance have been proposed, many of these criteria still cannot be met. Early solutions~\cite{hoose1992impacts} do not identify individual vehicles as unique targets and progressively track their movements. Methods have been proposed to address individual vehicle detection and vehicles tracking problems~\cite{koller1993model,mclauchlan1997real,coifman1998real} with tracking strategies including model based tracking, region based tracking, active contour based tracking, feature based tracking and optical flow employment. Compared to traditional hand-crafted features, deep learning methods~\cite{ren2015faster,girshick2016region,redmon2016you, tian2016detecting} in object detection have illustrated the robustness with specialization of the generic detector to a specific scene. Leuck~\cite{leuck1999automatic} and Gardner~\cite{gardner1996interactive} use three-dimensional (3-D) models of vehicle shapes to estimate vehicle images projected onto a two-dimensional (2-D) image plane. Recently, automatic traffic accident detection has become an important topic. One typical approach uses object detection or tracking before detecting accident events~\cite{sadeky2010real,kamijo2000traffic,jiansheng2014vision, jiang2007abnormal,hommes2011detection}, with Histogram of Flow Gradient (HFG), Hidden Markov Model (HMM) or, Gaussian Mixture Model (GMM). Other approaches~\cite{liu2010anomaly,ihaddadene2008real,wang2010anomaly,wang2012real,tang2005traffic,karim2002incident,xia2015vision,chen2010automatic,chen2016vision} use low-level features (e.g.\ motion features) to demonstrate better robustness. Neural networks have also been employed to automatic accident detection~\cite{ohe1995method,yu2008back,srinivasan2004evaluation,ghosh2003wavelet}. In this paper, we first propose a Traffic Near Accident Dataset (TNAD). Intersections tend to experience more and severe near accident, due to factors such as angles and turning collisions. Observing this, the TNAD dataset is collected to contain three types of video data of traffic intersections that could be utilized for not only near accident detection but also other traffic surveillance tasks including turn movement counting. The first type is drone video that monitoring an intersections with top-down view. The second type of intersection videos is real traffic videos acquired by omnidirectional fisheye cameras that monitoring small or large intersections. It is widely used in transportation surveillance. These video data can be directly used as inputs for any vision-intelligent framework. The pre-processing of fisheye correction can be applied to them for better surveillance performance. As there exist only a few samples of near accident in the reality per hour. The third type of video is proposed by simulating with game engine for the purpose to train and test with more near accident samples. We propose a uniformed vision-based framework with the two-stream Convolutional Network architecture that performs real-time detection, tracking, and near accident detection of traffic road users. The two-stream Convolutional Networks consist of a spatial stream network to detect individual vehicles and likely near accident regions at the single frame level, by capturing appearance features with a state-of-the-art object detection method~\cite{redmon2016you}. The temporal stream network leverages motion features extracted from detected candidates to perform multiple object Tracking and generate corresponding trajectories of each tracking target. We detect near accident by incorporating appearance features and motion features to compute probabilities of near accident candidate regions. Experiments demonstrate the advantage of our framework with an overall competitive performance at high frame rates. The contributions of this work can be summarized as: \begin{itemize} \item A uniformed framework that performs real-time object detection, tracking and near accident detection. \item The first work of an end-to-end trainable two-stream deep models to detect near accident with good accuracy. \item A Traffic and Near Accident Detection Dataset (TNAD) containing different types of intersection videos that would be used for several vision-based traffic analysis tasks. \end{itemize} The organization of the paper is as follows. Section~\ref{sec:background} describes background on Object Detection, Multiple Object Tracking and Near Accident Detection. Section~\ref{sec:method} describes the overall architecture, methodologies, and implementation of our vision-based intelligent framework. This is followed in Section~\ref{sec:experiments} by an introduction of our Traffic Near Accident Detection Dataset (TNAD) and video preprocessing techniques. Section~\ref{sec:experiments} presents a comprehensive evaluation of our approach and other state-of-the-art near accident detection methods both qualitatively and quantitatively. Section~\ref{sec:conclusion} concludes by summarizing our contributions and also discusses the scope for future work. \section{Two-Stream architecture for Near Accident Detection}\label{sec:method} We present our vision-based two-stream architecture for real-time near accident detection based on real-time object detection and multiple object tracking. The goal of near accident detection is to detect likely collision scenarios across video frames and report these near accident records. As videos can be decomposed into spatial and temporal components. We divide our framework into a two-stream architecture as shown in Fig~\ref{fig:two}. The spatial part consists of individual frame appearance information about scenes and objects in the video. The temporal part contains motion information for moving objects. For spatial stream convolutional neural network, we utilized a standard convolutional network of a state-of-the-art object detection method~\cite{redmon2016you} to detect individual vehicles and likely near accident regions at the single frame level. The temporal stream network is leveraging object candidates from object detection CNNs and integrates their appearance information with a fast multiple object tracking method to extract motion features and compute trajectories. When two trajectories of individual objects start intersecting or become closer than a certain threshold, we'll label the region covering two objects as high probability near accident regions. Finally, we take average near accident probability of spatial stream network and temporal stream network and report the near accident record. \subsection{Preliminaries} \textbf{Convolutional Neural Networks:} Convolutional Neural Networks (CNNs) have a strong capability of representing objects and helps to boost the performance of numerous vision tasks, comparing to traditional heuristic features \cite{dalal2005histograms}. A Convolutional Neural Networks (CNN) is a a class of deep neural networks which is widely applied for visual imagery analysis in computer vision. A standard CNN usually consists of an input and an output layer, as well as multiple hidden layers (convolutional layers, pooling layers, fully connected layers and normalization layers) as shown in Figure~\ref{fig:cnn}. The input to a convolutional layer is an original image $\boldsymbol{X}$. We denote the feature map of $i$-th convolutional layer as $\boldsymbol{H}_i$ and $\boldsymbol{H}_0=\boldsymbol{X}$. Then $\boldsymbol{H}_i$ can be described as \begin{equation} {\boldsymbol{H}_i} = f\left( {{\boldsymbol{H}_{i - 1}} \otimes {\boldsymbol{W}_i} + {\boldsymbol{b}_i}} \right) \end{equation} where $\boldsymbol{W}_i$ is the weight for $i$-th convolutional kernel, and $\otimes$ is the convolution operation of the kernel and $i-1$-th image or feature map. Output of convolution operation are summed with a bias $\boldsymbol{b}_i$. Then the feature map for $i$-th layer can be computed by applying a nonlinear activation function to it. Take an example of using a $32\times32$ RGB image with a simple ConvNet for CIFAR-10 classification~\cite{krizhevsky2009learning}. \begin{itemize} \item Input layer: the original image with raw pixel values as width is 32, height is 32, and color channels (R,G,B) is 3. \item Convolutional layer: compute output of neurons which are connected to local regions in the image through activation functions. If we use 12 filters, we have result in volume such as $[32\times32\times12]$. \item Pooling layer: perform a downsampling operation, resulting in volume such as $[16\times16\times12]$. \item Fully connected layer: compute the class scores, resulting in volume of size $[1\times1\times10]$, where these 10 numbers are corresponding to 10 class score. \end{itemize} In this way, CNNs transform the original image into multiple high-level feature representations layer by layer and compute the final final class scores. \begin{figure} \includegraphics[width=\textwidth]{samples/cnn.png} \caption[]{Architecture of Convolutional Neural Networks for image classification.\footnotemark} \label{fig:cnn} \end{figure} \footnotetext{The credit should be given to Adit Deshpande and his blog.} \subsection{Spatial stream network} In our framework, each stream is implemented using a deep convolutional neural network. Near accident scores are combined by the averaging score. Since our spatial stream ConvNet is essentially an object detection architecture, we build it upon the recent advances in object detection with YOLO detector~\cite{redmon2016you}, and pre-train the network from scratch on our dataset containing multi-scale drone, fisheye and simulation videos. As most of our videos contain traffic scenes with vehicles and traffic movement in top-down view, we specify different vehicle classes such as motorcycle, car, bus, and truck as object classes for training the detector. Additionally, near accident or collision can be detected from single still frame either from the beginning of a video or stopped vehicles associated in an accident after collision. Therefore, we train our detector to localize these likely near accident scenarios. Since the static appearance is a useful cue, the spatial stream network effectively performs object detection by operating on individual video frames. \subsubsection{YOLO object detection} You Only Look Once (YOLO)~\cite{redmon2016you} is a state-of-the-art, real-time object detection system. This end-to-end deep learning approach does not need region proposals and is much different from the region based algorithms. The pipeline of YOLO~\cite{redmon2016you} is pretty straightforward: YOLO~\cite{redmon2016you} passes the whole image through the neural network only once where the title comes from (You Only Look Once) and returns bounding boxes and class probabilities for predictions. Figure~\ref{fig:2} demonstrates the detection model and system of YOLO~\cite{redmon2016you}. In YOLO~\cite{redmon2016you}, it regards object detection as an end-to-end regression problem and uses a single convolutional network predicts the bounding boxes and the class probabilities for these boxes. It first takes the image and split it into an $S \times S$ grid, within each of the grid we take $m$ bounding boxes. For each grid cell, \begin{itemize} \item it predicts B boundary boxes and each box has one box confidence score \item it detects one object only regardless of the number of boxes B \item it predicts C conditional class probabilities (one per class for the likeliness of the object class) \end{itemize} \begin{figure} \includegraphics[width=0.9\linewidth]{samples/cnn2.png} \caption{The YOLO Detection System~\cite{redmon2016you}. It (1) resizes the input image to $448 \times 448$, (2) runs a single convolutional network on the image, and (3) thresholds the resulting detections by the model's confidence.} \label{fig:2} \end{figure} For each of the bounding box, the convolutional neural network (CNN) outputs a class probability and offset values for the bounding box. Then it selects bounding boxes which have the class probability above a threshold value and uses them to locate the object within the image. In detail, each boundary box contains 5 elements: $(x, y, w, h)$ and a box confidence. The $(x, y)$ are coordinates which represent the center of the box relative to the bounds of the grid cell. The $(w,h)$ are width and height. These elements are normalized as $x$, $y$, $w$ and $h$ are all between 0 and 1. The confidence prediction represents the intersection over union (IoU) between the predicted box and any ground truth box which reflects how likely the box contains an object (objectness) and how accurate is the boundary box. The mathematical definitions of those scoring and probability terms are: \begin{center} box confidence score $\equiv P_{r}(object)\cdot IoU$\\ conditional class probability $\equiv P_{r}(class_{i}|object)$\\ class confidence score $\equiv P_{r}(class_{i})\cdot IoU$\\ class confidence score $=$ box confidence score $\times$ conditional class probability \end{center} where $\equiv P_{r}(object)$ is the probability the box contains an object. $IoU$ is the IoU between the predicted box and the ground truth. $\equiv P_{r}(class_{i})$ is the probability the object belongs to $class_{i}$. $\equiv P_{r}(class_{i}|object)$ is the probability the object belongs to $class_{i}$ given an object is presence. The network architecture of YOLO~\cite{redmon2016you} simply contains 24 convolutional layers followed by two fully connected layers, reminiscent of AlexNet and even earlier convolutional architectures. Some convolution layers use $1 \times 1$ reduction layers alternatively to reduce the depth of the features maps. For the last convolution layer, it outputs a tensor with shape $(7, 7, 1024)$ which is flattened. YOLO~\cite{redmon2016you} performs a linear regression using two fully connected layers to make boundary box predictions and to make a final prediction using threshold of box confidence scores. The final loss adds localization, confidence and classification losses together. \begin{table*}[h] \begin{center} \begin{tabular}{c|c|c|c} Type & Filters & Size/Stride & Output\\ \hline Convolutional & 32 & $3 \times 3$ & $224 \times 224 $ \\ Maxpool & &$2 \times 2 / 2$ & $112 \times 112 $ \\ Convolutional & 64 & $3 \times 3$ & $112 \times 112 $ \\ Maxpool & & $2 \times 2 / 2$ & $56 \times 56 $ \\ Convolutional & 128 &$3 \times 3$ & $56 \times 56 $ \\ Convolutional & 64 &$1 \times 1$ & $56 \times 56 $ \\ Convolutional & 128 &$3 \times 3$ & $56 \times 56 $ \\ Maxpool & & $2 \times 2 / 2$ & $28 \times 28 $ \\ Convolutional & 256 & $3 \times 3$ & $28 \times 28 $ \\ Convolutional & 128 & $1 \times 1$ & $28 \times 28 $ \\ Convolutional & 256& $3 \times 3$ & $28 \times 28 $ \\ Maxpool & & $2 \times 2 / 2$ & $14 \times 14 $ \\ Convolutional & 512 & $3 \times 3$ & $14 \times 14 $ \\ Convolutional & 256& $1 \times 1$ & $14 \times 14 $ \\ Convolutional & 512 & $3 \times 3$ & $14 \times 14$ \\ Convolutional & 256& $1 \times 1$ & $14 \times 14$ \\ Convolutional & 512 & $3 \times 3$ & $14 \times 14 $ \\ Maxpool & & $2 \times 2 / 2$ & $7 \times 7 $ \\ Convolutional & 1024 & $3 \times 3$ & $7 \times 7 $ \\ Convolutional & 512 & $1 \times 1$ & $7 \times 7 $ \\ Convolutional & 1024 & $3 \times 3$ & $7 \times 7$ \\ Convolutional & 512 & $1 \times 1$ & $7 \times 7$ \\ Convolutional & 1024 & $3 \times 3$ & $7 \times 7$ \\ \hline \hline Convolutional & 1000 & $1 \times 1$ & $7 \times 7$ \\ Avgpool & & Global & $1000$ \\ Softmax & & &\\ \end{tabular} \end{center} \caption{\small \textbf{Darknet-19~\cite{redmon2017yolo9000}.}} \label{net} \end{table*} \scriptsize \begin{multline} \lambda_\textbf{coord} \sum_{i = 0}^{S^2} \sum_{j = 0}^{B} \mathlarger{\mathbbm{1}}_{ij}^{\text{obj}} \left[ \left( x_i - \hat{x}_i \right)^2 + \left( y_i - \hat{y}_i \right)^2 \right] + \lambda_\textbf{coord} \sum_{i = 0}^{S^2} \sum_{j = 0}^{B} \mathlarger{\mathbbm{1}}_{ij}^{\text{obj}} \left[ \left( \sqrt{w_i} - \sqrt{\hat{w}_i} \right)^2 + \left( \sqrt{h_i} - \sqrt{\hat{h}_i} \right)^2 \right] \\ + \sum_{i = 0}^{S^2} \sum_{j = 0}^{B} \mathlarger{\mathbbm{1}}_{ij}^{\text{obj}} \left( C_i - \hat{C}_i \right)^2 + \lambda_\textrm{noobj} \sum_{i = 0}^{S^2} \sum_{j = 0}^{B} \mathlarger{\mathbbm{1}}_{ij}^{\text{noobj}} \left( C_i - \hat{C}_i \right)^2 + \sum_{i = 0}^{S^2} \mathlarger{\mathbbm{1}}_i^{\text{obj}} \sum_{c \in \textrm{classes}} \left( p_i(c) - \hat{p}_i(c) \right)^2 \end{multline} \normalsize where $\mathbbm{1}_i^{\text{obj}}$ denotes if object appears in cell $i$ and $\mathbbm{1}_{ij}^{\text{obj}}$ denotes that the $j$th bounding box predictor in cell $i$ is ``responsible'' for that prediction. YOLO~\cite{redmon2016you} is orders of magnitude faster (45 frames per second) than other object detection approaches which means it can process streaming video in realtime and achieves more than twice the mean average precision of other real-time systems. For the implementation, we leverage the extension of YOLO~\cite{redmon2016you}, Darknet-19, a classification model that used as the base of YOLOv2~\cite{redmon2017yolo9000}. The full network description of it is shown in Table~\ref{net}. Darknet-19~\cite{redmon2017yolo9000} has 19 convolutional layers and 5 maxpooling layers and it uses batch normalization to stabilize training, speed up convergence, and regularize the model~\cite{ioffe2015batch}. \subsection{Temporal stream network} The spatial stream network is not able to extract motion features and compute trajectories due to single-frame inputs. To leverage these useful information, we present our temporal stream network, a ConvNet model which performs a tracking-by-detection multiple object tracking algorithm~\cite{bewley2016simple,wojke2017simple} with data association metric combining deep appearance features. The inputs are identical to the spatial stream network using the original video. Detected object candidates (only vehicle classes) are used to for tracking handling, state estimation, and frame-by-frame data association using SORT~\cite{bewley2016simple} and DeepSORT~\cite{wojke2017simple}, the real-time multiple object tracking methods. The multiple object tracking models each state of objects and describes the motion of objects across video frames. With tracking, we obtain results by stacking trajectories of moving objects between several consecutive frames which are useful cues for near accident detection. \subsubsection{SORT} Simple Online Realtime Tracking (SORT)~\cite{bewley2016simple} is a simple, popular and fast Multiple Object Tracking (MOT) algorithm. The core idea is to perform a Kalman filtering~\cite{kalman1960new} in image space and do frame-by-frame data association using the Hungarian methods~\cite{kuhn1955hungarian} with an association metric that measures bounding box overlap. Despite only using a rudimentary combination of the Kalman Filter~\cite{kalman1960new} and Hungarian algorithm~\cite{kuhn1955hungarian} for the tracking components, SORT~\cite{bewley2016simple} achieves an accuracy comparable to state-of-the-art online trackers. Moreover, due to the simplicity of it, SORT~\cite{bewley2016simple} can updates at a rate of 260 Hz on single machine which is over 20x faster than other state-of-the-art trackers. \textbf{Estimation Model.} The state of each target is modelled as: \begin{equation} \mathbf{x} = [u,v,s,r,\dot{u},\dot{v},\dot{s}]^T, \end{equation} where $u$ and $v$ represent the horizontal and vertical pixel location of the centre of the target, while the scale $s$ and $r$ represent the scale (area) and the aspect ratio (usually considered to be constant) of the target's bounding box respectively. When a detection is associated to a target, it updates the target state using the detected bounding box where the velocity components are solved optimally via a Kalman filter framework~\cite{kalman1960new}. If no detection is associated to the target, its state is simply predicted without correction using the linear velocity model. \textbf{Data Association.} In order to assign detections to existing targets, each target's bounding box geometry is estimated by predicting its new location in the current frame. The assignment cost matrix is defined as the IoU distance between each detection and all predicted bounding boxes from the existing targets. Then the assignment problem is solved optimally using the Hungarian algorithm~\cite{kuhn1955hungarian}. Additionally, a minimum IoU is imposed to reject assignments where the detection to target overlap is less than $IoU_{min}$. The IoU distances of the bounding boxes are found so as to handle short term occlusion caused by passing targets. \textbf{Creation and Deletion of Track Identities.} When new objects enter or old objects vanish in video frames, unique identities for objects need to be created or destroyed accordingly. For creating trackers, we consider any detection with an overlap less than $IoU_{min}$ to signify the existence of an untracked object. Then the new tracker undergoes a probationary period where the target needs to be associated with detections to accumulate enough evidence in order to prevent tracking of false positives. Tracks could be terminated if they are not detected for $T_{Lost}$ frames to prevent an unbounded growth in the number of trackers and localization errors caused by predictions over long durations without corrections from the detector. \subsubsection{DeepSORT} DeepSORT~\cite{wojke2017simple} is an extension of SORT~\cite{bewley2016simple} which integrates appearance information through a pre-trained association metric to improve the performance of SORT~\cite{bewley2016simple}. It adopts a conventional single hypothesis tracking methodology with recursive Kalman filtering~\cite{kalman1960new} and frame-by-frame data association. DeepSORT~\cite{wojke2017simple} helps to solve a large number of identities switching problem in SORT~\cite{bewley2016simple} and it can track objects through longer periods of occlusions. During online application, it establishs measurement-to-track associations using nearest neighbor queries in visual appearance space. \textbf{Track Handling and State Estimation}. The track handling and state estimation using Kalman filtering~\cite{kalman1960new} is mostly identical to the SORT~\cite{bewley2016simple}. The tracking scenario is defined using eight dimensional state space~$(u, v, \gamma, h, \dot{x}, \dot{y}, \dot{\gamma}, \dot{h})$ that contains the bounding box center position $(u, v)$, aspect ratio $\gamma$, height $h$, and their respective velocities in image coordinates. It uses a standard Kalman filter~\cite{kalman1960new} with a constant velocity motion and linear observation model, where it takes the bounding coordinates~$(u, v, \gamma, h)$ as direct observations of the object state. \textbf{Data Association}. To solve the frame-by-frame association problem between the predicted Kalman states and the newly arrived measurements, it uses the Hungarian algorithm~\cite{kuhn1955hungarian}. In formulation, it integrates both motion and appearance information through combination of two appropriate metrics. For motion information, the (squared) Mahalanobis distance between predicted Kalman states and newly arrived measurements is utilized: \begin{equation} d^{(1)}(i,j) = (\bm{d}_j - \bm{y}_i)^{\bm{T}}(\bm{S}_i)^{-1}(\bm{d}_j - \bm{y}_i) \end{equation} where the projection of the $i$-th track distribution into measurement space is ~$(\bm{y}_i, \bm{S}_i)$ and the $j$-th bounding box detection is $\bm{d}_j$. The second metric measures the smallest cosine distance between the~$i$-th track and~$j$-th detection in appearance space: \begin{equation} d^{(2)}(i, j) = \min \left \{ 1 - {\bm{r}_j}^{T} \bm{r}^{(i)}_k | \bm{r}^{(i)}_k\in \mathcal{R}_i \right \} \end{equation} Then this association problem is built with combination of both metrics using a weighted sum where the influence of each metric on combined association cost can be controlled through hyperparameter $\lambda$. \begin{equation} c_{i,j} = \lambda \, d^{(1)}(i, j) + (1 - \lambda) d^{(2)}(i, j) \end{equation} \textbf{Matching Cascade}. Rather than solving measurement-to-track associations in a global way, it adopts a matching cascade introduced in~\cite{wojke2017simple} to solve a series of subproblems. In some situation, when occlusion happens to a object for a longer period of time, the subsequent Kalman filter~\cite{kalman1960new} predictions would increase the uncertainty associated with the object location. In consequent, probability mass spreads out in state space and the observation likelihood becomes less peaked. Intuitively, the association metric should account for this spread of probability mass by increasing the measurement-to-track distance. Therefore, the matching cascade strategy gives priority to more frequently seen objects to encode the notion of probability spread in the association likelihood. \subsection{Near Accident Detection} When utilizing the multiple object tracking algorithm, we compute the center of each object in several consecutive frames to form stacking trajectories as our motion representation. These stacking trajectories can provide accumulated information through image frames, including the number of objects, their motion history and timing of their interactions such as near accident. We stack the trajectories of all object by every $L$ consecutive frames as illustrated in Figure~\ref{fig:traj} where $\mathbf{p}_t^i$ denotes the center position of the $i$-th object in the $t$-th frame. $\mathbf{P}_t = (\mathbf{p}_t^1, \mathbf{p}_t^2, ..., \mathbf{p}_t^{M_t})$ denotes trajectories of all the $M_t$ objects in the $t$-th frame. $\mathbf{P}_{1:t} = \{\mathbf{P}_1, \mathbf{P}_2, ..., \mathbf{P}_t\}$ denotes all the sequential trajectories of all the objects from the first frame to the $t$-th frame. As we only exam every $L$ consecutive frames, the stacking trajectories are sequentially as \begin{equation} \mathbf{P}_{1:L} = \{\mathbf{P}_1, \mathbf{P}_2, ..., \mathbf{P}_L\}, \mathbf{P}_{L+1:2L} = \{\mathbf{P}_{L+1}, \mathbf{P}_{L+2}, ..., \mathbf{P}_{2L}\},\cdots \end{equation} $\mathbf{O}_t = (\mathbf{o}_t^1, \mathbf{o}_t^2, ..., \mathbf{o}_t^{M_t})$ denotes the collected observations for all the $M_t$ objects in the $t$-th frame. $\mathbf{O}_{1:t} = \{\mathbf{O}_1, \mathbf{O}_2, ..., \mathbf{O}_t\}$ denotes all the collected sequential observations of all the objects from the first frame to the $t$-th frame. We use a simple detection algorithm which finds collisions between simplified forms of the objects, using the center of bounding boxes. Our algorithm is depicted in Algorithm~\ref{alg:detect}. Once the collision is detected, we set the region covering collision associated objects to be a new bounding box with class probability of near accident to be 1. By averaging the near accident probability of output from spatial stream network and temporal stream network, we are able to compute the final outputs of near accident detection. \begin{figure} \includegraphics[width=0.9\linewidth]{samples/trajectory.png} \caption{The stacking trajectories extracted from tracking. Consecutive frames and the corresponding displacement vectors are shown with the same colour.} \label{fig:traj} \end{figure} \begin{algorithm} \KwIn{current frame $t_{current}$, collision state list $Collision$} \KwOut{collision state list $Collision$} \For{$t_{L} \leftarrow t_{previous}$ to $t_{current}$ in steps of $L$ frames}{ \For{each pair of object trajectory ($\mathbf{p}_{:t_{L}}^1$, $\mathbf{p}_{:t_{L}}^2)$}{ \If{($\mathbf{p}_{:t_{L}}^1$ intersects $\mathbf{p}_{:t_{L}}^2$ as of $t_{L}$)}{add $\mathbf{o}_1$, $\mathbf{o}_2$ to $Collision$} } \If{($Collisions$)}{$t_{previous} \leftarrow t_{L}$; return TRUE} } $t_{previous} \leftarrow t_{d}$; return FALSE \caption{Collision Detection}\label{alg:detect} \end{algorithm}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} This paper is concerned with two different ways of transferring Riesz projection to the infinite-dimensional setting of Dirichlet series: first, by lifting it in a multiplicative way to the infinite-dimensional torus $\mathbb{T}^{\infty}$ and second, by using one-dimensional Riesz projection to study the partial sum operator acting on Dirichlet series. In either case, we will be interested in studying the action of the operator in question on functions in $L^p$ or $H^p$ spaces. By Fefferman's duality theorem \cite{F}, Riesz projection $P_{1}^{+}$ on the unit circle $\mathbb{T}$, formally defined as \[ P_{1}^{+}\big(\sum_{k\in \mathbb{Z}}c_k z^k\big):=\sum_{k\geq 0}c_k z^k , \] maps $L^{\infty}(\mathbb{T})$ into and onto $\operatorname{BMOA}(\mathbb{T})$, i.e., the space of analytic functions of bounded mean oscillation. We may thus think of the image of $L^{\infty}(\mathbb{T}^{\infty})$ under Riesz projection on $\mathbb{T}^{\infty}$ (or equivalently, in view of the Hahn--Banach theorem, the dual space $H^1(\mathbb{T}^\infty)^*$) as a possible infinite-dimensional counterpart to $\operatorname{BMOA}(\mathbb{T})$. This brings us to the second main topic of this paper which is to describe some of the main properties of this space. Our main result, given in Section~\ref{sec:Riesz}, verifies that Riesz projection does not map $L^{\infty}(\mathbb{T}^{\infty})$ into $H^p(\mathbb{T}^{\infty})$ for any $p>2$, whence $H^1(\mathbb{T}^\infty)^*$ is not embedded in $H^p(\mathbb{T}^{\infty})$ for any $p>2$. This result solves a problem posed in \cite{MS} and contrasts the familiar inclusion of $\operatorname{BMOA}(\mathbb{T})$ in $H^p(\mathbb{T})$ for every $p<\infty$. The key idea of the proof is to first show that the norm of a Fourier multiplier $M_{\chi_A}:L^p(\mathbb{T}^n)\to L^q(\mathbb{T}^n)$ corresponding to a bounded convex domain $A$ in $\mathbb{R}^n$ is dominated by the norm of the Riesz projection on $\mathbb{T}^{n+m}$ for $m$ sufficiently large, depending on $A$. Another crucial ingredient is Babenko's well-known lower estimate for spherical Lebesgue constants. We then proceed to view $H^1(\mathbb{T}^\infty)^*$ as a space of Dirichlet series, employing as usual the Bohr lift. This leads us in Section \ref{sec:BMOA} to a distinguished subspace of $H^1(\mathbb{T}^\infty)^*$ which is indeed a ``true'' $\operatorname{BMO}$ space, namely the family of Dirichlet series that belong to $\operatorname{BMOA}$ of the right half-plane. By analogy with classical results on $\mathbb{T}$, we give several conditions for membership in this space, also for randomized Dirichlet series, and we describe how this $\operatorname{BMOA}$ space relates to some other function spaces of Dirichlet series. In Section \ref{sec:compare}, we study Dirichlet polynomials of fixed length $N$ and compare the size of their norms in $H^p$, $\operatorname{BMOA}$, and the Bloch space. One of these results is then applied in the final Section \ref{sec:partial}, where we turn to our second usage of Riesz projection. Here we present an explicit device for expressing the $N$th partial sum of a Dirichlet series in terms of one-dimensional Riesz projection and give some $L^p$ estimates for the associated partial sum operator. We refer the reader to \cite{HLS} and \cite{MAHE} (see especially \cite[Section 6]{MAHE}) for definitions and basics on Hardy spaces of Dirichlet series of Hardy spaces on $\mathbb{T}^\infty$. \subsection*{Notation} We will use the notation $f(x)\ll g(x)$ if there is some constant $C>0$ such that $|f(x)|\le C|g(x)|$ for all (appropriate) $x$. If we have both $f(x)\ll g(x)$ and $g(x)\ll f(x)$, then we will write $f(x)\asymp g(x)$. If $\lim_{x\to \infty} f(x)/g(x)=1$, then we write $f(x)\sim g(x)$. \subsection*{Acknowledgements} We thank Ole Fredrik Brevig for allowing us to include an unpublished argument of his in this paper. We are also grateful to the referees for a number of valuable comments that helped improve the presentation. \section{The norm of the Riesz projection from $L^\infty(\Bbb{T}^n)$ to $L^p(\Bbb{T}^n)$}\label{sec:Riesz} The norm $\|f\|_p$ of a function $f$ in $L^p(\mathbb{T}^\infty)$ is computed with respect to Haar measure $m_\infty$ on $\mathbb{T}^\infty$, which is the countable product of one-dimensional normalized Lebesgue measures on $\mathbb{T}$. We denote by $m_n$ the measure on $\mathbb{T}^n$ that is the $n$-fold product of the normalised one-dimensional measures, and $L^p(\mathbb{T}^n)$ is defined with respect to this measure. We write the Fourier series of a function $f$ in $L^1(\mathbb{T}^n)$ on the $n$-torus $\mathbb{T}^n$ as \begin{equation} \label{def_expan} f(\zeta) = \sum_{\alpha\in\mathbb{Z}^n} \hat f(\alpha) \zeta^{\alpha}. \end{equation} For a function $f$ in $L^1(\mathbb{T}^\infty)$ the Fourier series takes the form $f(\zeta) = \sum_{\alpha\in\mathbb{Z}^\infty_{ fin}} \hat f(\alpha) \zeta^{\alpha}$, where $\mathbb{Z}^\infty_{ fin}$ stands for infinite multi-indices such that all but finitely many indices are zero. We also set $\mathbb{Z}_+:=\{0,1,\dots\}$ so that $\mathbb{Z}_+^n$ (respectively $\mathbb{Z}_+^\infty$) is the positive cone in $\mathbb{Z}^n$ (respectively $\mathbb{Z}^\infty$). The operator \[ P_{n}^+ f(\zeta) := \sum_{\alpha\in\mathbb{Z}_+^n} \hat f(\alpha) \zeta^{\alpha} \] is the Riesz projection on $\mathbb{T}^n$, and, as an operator on $L^2(\mathbb{T}^n)$, it has norm $1$. If we instead view $P_{n}^+$ as an operator on $L^p(\mathbb{T}^n)$ for $1<p<\infty$, then a theorem of Hollenbeck and Verbitsky \cite{HV} asserts that its norm equals $(\sin(\pi/p))^{-n}$. In an analogous way we denote by $P^+_\infty$ the Riesz projection on $\mathbb{T}^\infty,$ and obviously $P^+_\infty$ is bounded on $L^p(\mathbb{T}^\infty)$ only for $p=2$, when its norm equals 1. Using this normalization, we let $\|P_n^+\|_{q,p}$ denote the norm of the operator $P_n^+:\,L^q(\mathbb{T}^n)\to L^p(\mathbb{T}^n)$ for $q\ge p$. By H\"older's inequality, $p\to \|P_n^+\|_{\infty,p}$ is a continuous and nondecreasing function, and obviously $\|P^+_n\|_{\infty,p}\le(\sin(\pi/p))^{-n}$. Consider the quantity \[ p_n := \sup\left\{p\ge2:\,\|P_n^+\|_{\infty,p}\leq1\right\},\] which we following \cite{FIP} call the critical exponent of $P_n^+$. The critical exponent is well-defined since clearly $\|P_n^+\|_{\infty,2}=1$. We also set \[p_\infty := \sup\left\{p\ge2:\,\|P_\infty^+\|_{\infty,p}\leq1\right\}. \] Defining $A_m f(z_1,z_2,\ldots):=f(z_1,\ldots, z_m,0,0,\ldots)$ and using that $\|A_m f\|_p\to \| f\|_p$ as $m\to \infty$ for every $f$ in $L^p(\mathbb{T}^\infty)$ and $1\leq p\leq \infty$, we see that in fact \[ p_\infty= \lim_{n\to\infty} p_n. \] This also follows from the proof of Theorem \ref{n_to_infty} below. Marzo and Seip \cite{MS} proved that the critical exponent of $P_1^+$ is $4$ and moreover that \[ 2+2/(2^n-1)\le p_n< 3.67632\] for $n>1$. Recently, Brevig \cite{B} showed that $\lim_{n\to \infty} p_n \le 3.31138$. The following theorem settles the asymptotic behavior of the critical exponent of $P_n^+$ when $n\to\infty$. \begin{theorem}\label{n_to_infty} We have $p_\infty=\lim_{n\to\infty} p_n=2$. \end{theorem} By considering a product of functions in disjoint variables, we obtain the following immediate consequence concerning the Riesz projection $P_{\infty}^{+}$ on the infinite-dimensional torus, formally defined as \[ P_{\infty}^{+}\Big(\sum_{k\in \mathbb{Z}^{(\infty)}}c_\alpha z^\alpha\Big):=\sum_{\alpha\in \mathbb{N}^{(\infty)}}c_\alpha z^\alpha. \] \begin{corollary}\label{inf_dim} The Riesz projection $P_{\infty}^{+}$ is not bounded from $L^q$ to $L^p$ when $2<p<q\le \infty$. \end{corollary} In turn, since the ``analytic'' dual of $H^1$ obviously equals $P^+_\infty(L^\infty(\mathbb{T}^\infty))$, we obtain a further interesting consequence. \begin{corollary}\label{cor:dual_non_embedding} The dual space $H^1(\mathbb{T}^\infty)^*$ is not contained in $H^p(\mathbb{T}^\infty)$ for any $p>2.$ \end{corollary} The latter result has an immediate translation in terms of Hardy spaces of Dirichlet series, as will be recorded in Corollary \ref{cor:dual_non_embedding2} below. The proof of Theorem~\ref{n_to_infty} deals with the (pre)dual operator $P^+_\infty:L^q(\mathbb{T}^n)\to L^1(\mathbb{T}^n)$, where $q<2.$ The idea is to prove first that for the characteristic function $\chi_A$ of a bounded convex domain $A$ in $\mathbb{R}^n$, the norm of the Fourier multiplier $M_{\chi_{A}}$ on $\mathbb{T}^n$ is actually bounded by that of $P^+_{n+m}$ for large enough $m$, depending on $A$. This key observation will be applied when $A$ is a large ball $B(0,R)$ in $\mathbb{R}^n$, and the desired result is deduced by invoking the following result of Ilyin \cite{I1}. \begin{theorem}\label{thm:babenko} The circular Dirichlet kernel \[ D_{R,n}(\zeta) := \sum_{\alpha\in \mathbb{Z}^n:\,\|\alpha\|\le R} \zeta^{\alpha} \] on $\mathbb{T}^n$ satisfies $\| D_{R,n}\|_{L^1(\mathbb{T}^n)}\geq cR^{(n-1)/2},$ where $c=c(n)>0$ and $\|\cdot\|$ stands for the standard Euclidean norm. \end{theorem} Babenko's famous 1971 preprint (see \cite{BA,Li2}) gives another proof. Moreover, it establishes a comparable upper bound, which can also be found in Ilyin and Alimov's paper \cite{I2}. We refer to Liflyand's review \cite{Li1} for further information on the related literature and for a simple proof of Theorem \ref{thm:babenko}. \begin{proof}[Proof of Theorem \ref{n_to_infty}.] Fix $n\geq 2$ and $\alpha =(\alpha_1,\dots,\alpha_n)\in\mathbb{Z}^n$ together with $\beta^j\in\mathbb{Z}^n$ and $ b_j\in\mathbb{Z}$ for $j=1,\dots,m$, where $m\in\mathbb{N}$ is also fixed. We consider $n+m$ linear functions $\phi_j:\,\mathbb{Z}^n\to\mathbb{Z}$, with $j=1,\dots,n+m$, where \begin{align*} \phi_j(\alpha) & := \alpha_j, \quad j=1,\dots,n, \\ \phi_{n+j}(\alpha) &:= (\alpha,\beta^j) + b_j,\quad j=1,\dots,m. \end{align*} We associate with any trigonometric polynomial $f$ as in \eqref{def_expan} (that is, any $f$ of the form \eqref{def_expan} with finitely many non-zero terms) the function \[ g(\eta) := \sum_{\alpha\in\mathbb{Z}^n} \hat f(\alpha) \prod_{j=1}^{n+m} \eta_j^{\phi_j(\alpha)},\] where $\eta = (\eta_1,\dots,\eta_{n+m})\in\mathbb{T}^{n+m}$. \begin{lemma}\label{equal_norms} We have $\|g\|_p = \|f\|_p$ for $0<p\le \infty$. \end{lemma} \begin{proof} Set \[ \eta' :=(\eta_1,\dots,\eta_n),\quad \eta'' :=(\eta_{n+1},\dots,\eta_{n+m}).\] We have \[ g(\eta) = \psi_0(\eta'') \sum_{\alpha\in\mathbb{Z}^n} \hat f(\alpha)\prod_{j=1}^n (\psi_j(\eta'')\eta_j)^{\alpha_j},\] where \[ \psi_0(\eta''):=\prod_{k=1}^m\eta_{n+k}^{b_k}\; ,\quad\textrm{and}\quad \psi_j(\eta''):=\prod_{k=1}^m\eta_{n+k}^{\beta^{k}_j}\;\;\textrm{for}\;\; j=1,\ldots, n. \] We clearly have $\psi_j(\eta'')\in\mathbb{T}$ for $j=0,\dots,n$. For a fixed $\eta''$ in $\mathbb{T}^m$ consider $g$ as a function of $\eta'$: \[ g(\eta) =g_{\eta''}(\eta').\] Set $\tilde\eta'=(\tilde\eta_1,\dots,\tilde\eta_n)$, where $\tilde\eta_j = \psi_j(\eta'')\eta_j$ for $j=1,\dots,n$. We see that \[ g_{\eta''}(\eta') = \psi_0(\eta'') f(\tilde\eta').\] We therefore obtain the asserted isometry: \[ \|g_{\eta''}\|_p = \|f\|_p \] \end{proof} By duality, for any positive integer $N$ and $p>2$, we have $\|P_N^+\|_{\infty,p} = \|P_N^+\|_{p',1}$ where $p'=p/(p-1)$. Hence, to prove Theorem \ref{n_to_infty}, we have to show that for any $q$ in $(1,2)$ there exist a positive integer $N$ and $g$ in $L^q(\mathbb{T}^N)$ such that \begin{equation} \label{refor_Theorem} \|g\|_q=1,\quad \|P_N^+ g\|_1 >1. \end{equation} Indeed, by duality, this will imply the existence of a function $h$ in $L^\infty(\mathbb{T}^n)$ such that \[ \|h\|_\infty = 1,\quad \|P_N^+(h)\|_{q'} >1,\] where $q' = q/(q-1)$. Since $q<2$ is arbitrary, Theorem~\ref{n_to_infty} then follows. For a bounded set $E$ in $\mathbb{R}^n$ and a function $f$ in $L^{1}(\mathbb{T}^n)$, we consider a partial sum of the Fourier series of $f$: \[ \big(S_Ef\big)(\zeta) := \sum_{\alpha\in E\cap\mathbb{Z}^n} \hat f(\alpha) \zeta^{\alpha}.\] Note that as an operator, $S_E$ coincides with the Fourier multiplier $M_{\chi_E}$. We say that a polytope $E$ in $\mathbb{R}^n$ is non-degenerate if it is not contained in a hyperplane. \begin{lemma}\label{reduc_polyt} Let $1<q<2$. Assume that there is a non-degenerate convex polytope $E$ in $\mathbb{R}^n$ with integral vertices such that, for some $f$ in $L^q(\mathbb{T}^n)$ with a finite set of non-zero Fourier coefficients $\hat f(\alpha)$, we have \[ \|f\|_q=1,\quad \|S_E(f)\|_1 >1.\] Then there are a positive integer $N\in\mathbb{N}$ and a function $g$ in $L^q(\mathbb{T}^N)$ satisfying \eqref{refor_Theorem}. \end{lemma} \begin{proof} Let $e:=(1,1,\ldots, 1)\in\mathbb{Z}^+_n$. By considering instead $E+Ne$ and $(\eta_1\ldots \eta_n)^Nf(\eta)$ with large enough $N\in \mathbb{N}$, if necessary, we may assume that $E$ and the Fourier coefficients of $f$ satisfy \begin{equation} \label{assum_posit} E\subset\mathbb{Z}_n^+\quad\textrm{and}\quad \hat f(\alpha)\not=0 \;\;\Rightarrow\;\; \alpha_j\geq 0 \;\,\textrm{for all}\;\;j=1,\ldots , n. \end{equation} It is known that $E$ is the intersection of closed semispaces, bounded by the hyperplanes containing the faces of $E$ of dimension $n-1$ (see \cite[Ch. 1, Thm. 5.6]{L}). All hyperplanes are determined by their intersections with the set of the vertices of $E$. Since the vertices are integral, the semispaces can be defined by inequalities \[ (\alpha,\beta^j) + b_j\ge0,\quad j=1,\dots,m,\] where $\beta^j\in\mathbb{Z}^n, b_j\in\mathbb{Z}$ for $j=1,\dots,m$. Thus \[ E=\bigcap_{j=n+1}^{n+m} \{\alpha\in\mathbb{R}^m:\,\phi_{j}(\alpha)\ge0\},\] where $\phi_{j}(\alpha) = (\alpha,\beta^{j-n}) + b_{j-n},\quad j=n+1,\dots,n+m$. We set $N:=n+m$ and construct the function $g$ from $f$ as in Lemma~\ref{equal_norms}. Using that lemma, we get \[ \|g\|_q = \|f\|_q =1,\quad \|P_N^+ g\|_1 = \|S_E(f)\|_1 >1, \] and Lemma \ref{reduc_polyt} follows. \end{proof} To construct an integer $n$, a polytope $E$, and a function $f$ satisfying Lemma \ref{reduc_polyt}, we take first $n$ satisfying the inequality \begin{equation} \label{choice_n} n>q/(2-q). \end{equation} For sufficiently large $R$, let $E$ be the convex hull of the integral points contained in the euclidean ball $\{\alpha\in\mathbb{R}^n:\,\|\alpha\|\le R\}$. Hence for any function $f$ in $L^{1}(\mathbb{T}^n)$, we have \[ (S_Ef)(\zeta) = \sum_{\alpha\in \mathbb{Z}^n:\,\|\alpha\|\le R} \hat f(\alpha) \zeta^{\alpha}.\] Recall the circular Dirichlet kernel from Theorem \ref{thm:babenko}: \[ D_{R,n}(\zeta) = \sum_{\alpha\in \mathbb{Z}^n:\,\|\alpha\|\le R} \zeta^{\alpha}.\] Define the function $\displaystyle \widetilde f (\zeta):=\sum_{|\alpha_1|\le R}\dots \sum_{|\alpha_n|\le R}\zeta^\alpha$ so that $S_R \widetilde f = D_{R,n}.$ It is easy to see that \[ \|\widetilde f\|_q = \left\|\sum_{|\alpha_1|\le R} \zeta_1^{\alpha_1}\right\|_q^n \le C R^{n(1-1/q)},\] where $C= C(q,n) >0$. In view of \eqref{choice_n}, which amounts to $\frac{n-1}{2}>n(1-\frac{1}{q}),$ and by recalling Theorem \ref {thm:babenko}, we obtain \[ \|S_E(\widetilde f)\|_1 > \|\tilde f\|_q. \] for sufficiently large $R$. Taking $f := \widetilde f\big/\|\widetilde f\|_q,$ we get a function $f$ satisfying the conditions of Lemma \ref{reduc_polyt}, and this completes the proof of Theorem \ref{n_to_infty}. \end{proof} \section{The space of Dirichlet series in $\operatorname{BMOA}$}\label{sec:BMOA} The result of the preceding section is purely multiplicative in the sense that it only involves analysis on the product space $\mathbb{T}^n$. Function spaces on $\mathbb{T}^{n}$ or on $\mathbb{T}^{\infty}$ may however, by a device known as the Bohr lift (see below for details), also be viewed as spaces of Dirichlet series. From an abstract point of view (see for example \cite[Ch. 8]{R}), this means that we equip our function spaces with an additive structure that reflects the additive order of the multiplicative group of positive rational numbers $\mathbb{Q}_+$. This results in interesting interaction between function theory in polydiscs and half-planes that sometimes involves nontrivial number theory. As we will see in the next subsection, this point of view leads us naturally from $H^1(\mathbb{T}^{\infty})^*$ to the space of ordinary Dirichlet series $\sum_{n=1}^{\infty} a_n n^{-s}$ that belong to $\operatorname{BMOA}$, i.e., the space of analytic functions $f(s)$ in the right half-plane $\operatorname{Re} s> 0$ satisfying \begin{equation} \label{eq:int1} \sup_{\sigma>0} \int_{-\infty}^{\infty} \frac{|f(\sigma+it)|^2}{1+\sigma^2+t^2} dt < \infty \end{equation} and \[\|f\|_{\operatorname{BMO}} := \sup_{I \subset \mathbb{R}} \frac{1}{|I|}\int_{I}\left|f(it)-\frac{1}{|I|}\int_I f(i\tau)\,d\tau\right|\,dt < \infty.\] Here the supremum is taken over all finite intervals $I$; \eqref{eq:int1} means that $g(s):=f(s)/(s+1)$ belongs to the Hardy space $H^{2}(\mathbb{C}_0)$ of the right half-plane $\mathbb{C}_0$, and then $f(it):=\lim_{\sigma\to 0^+} f(\sigma+it)$ exists for almost all real $t$ by Fatou's theorem applied to $g$. We will use the notation $\operatorname{BMOA} \cap \mathcal{D}$ for this $\operatorname{BMOA}$ space, where $\mathcal{D}$ is the class of functions expressible as a convergent Dirichlet series in some half-plane $\operatorname{Re} s > \sigma_0$. The space $\operatorname{BMOA} \cap \mathcal{D}$ arose naturally in a recent study of multiplicative Volterra operators \cite{BPS}. We refer to that paper for a complementary discussion of bounded mean oscillation in the context of Dirichlet series. By combining \cite[Cor. 6.4]{BPS} and \cite[Thm. 5.3]{BPS}, we may conclude that $\operatorname{BMOA} \cap \mathcal{D}$ can be viewed, via the Bohr lift, as a subspace of $H^{1}(\mathbb{T}^{\infty})^\ast$. This inclusion may however be proved in a direct way by an argument that we will present in the next subsection. \subsection{The Bohr lift and the inclusion $\operatorname{BMOA} \cap \mathcal{D}\subset (\mathcal{H}^1)^*$} We begin by considering an ordinary Dirichlet series of the form \begin{equation} \label{eq:f} f(s)=\sum_{n=1}^\infty a_n n^{-s}. \end{equation} By the transformation $z_j=p_j^{-s}$ (here $p_j$ is the $j$th prime number) and the fundamental theorem of arithmetic, we have the Bohr correspondence, \begin{equation}\label{eq:bohr} f(s):= \sum_{n=1}^\infty a_{n} n^{-s}\quad\longleftrightarrow\quad \mathcal{B}f(z):=\sum_{n=1}^{\infty} a_n z^{\kappa(n)}, \end{equation} where $\kappa(n)=(\kappa_1,\ldots,\kappa_j,0,0,\ldots)$ is the multi-index such that $n = p_1^{\kappa_1} \cdots p_j^{\kappa_j}$. The transformation $\mathcal{B}$ is known as the Bohr lift. For $0 < p < \infty$, we define $\mathcal{H}^p$ as the space of Dirichlet series $f$ such that $\mathcal{B}f$ is in $H^p(\mathbb{T}^\infty)$, and we set \[\|f\|_{\mathcal{H}^p} := \|\mathcal{B}f\|_{H^p(\mathbb{T}^\infty)} = \left(\int_{\mathbb{T}^\infty} |\mathcal{B}f(z)|^p\,dm_\infty(z)\right)^\frac{1}{p}.\] Note that for $p=2$, we have \[\|f\|_{\mathcal{H}^2} = \left(\sum_{n=1}^\infty |a_n|^2\right)^\frac{1}{2}.\] In terms of the spaces ${\mathcal{H}^p}$, Corollary \ref{cor:dual_non_embedding} takes the form \begin{corollary}\label{cor:dual_non_embedding2} The dual space $(\mathscr{H}^1)^*$ is not contained in $\mathscr{H}^p$ for any $p>2.$ \end{corollary} We will now use the notation $\mathbb{C}_{\theta}:=\{s=\sigma+it: \sigma>\theta\}$. The conformally invariant Hardy space $H_{\operatorname{i}}^p(\mathbb{C}_\theta)$ consists of functions $f$ that are analytic on $\mathbb{C}_\theta$ and satisfy \[\|f\|_{H_{\operatorname{i}}^p(\mathbb{C}_\theta)} := \sup_{\sigma>\theta} \left(\frac{1}{\pi}\int_{\mathbb{R}} |f(\sigma+it)|^p\,\frac{dt}{1+t^2}\right)^\frac{1}{p} <\infty.\] These spaces show up naturally in our discussion in the following two ways. First, we will repeatedly use that a function $g$ analytic on $\mathbb{C}_0$ is in $\operatorname{BMOA}$ if and only if the measure \[ d\mu(s):=|g'(\sigma+it)|^2 \sigma d\sigma \frac{ dt}{1+t^2} \] is a Carleson measure for $H_{\operatorname{i}}^1(\mathbb{C}_0)$, which means that there is a constant $C$ such \[ \int_{\mathbb{C}_0} |f(s)| d\mu(s) \le C \| f \|_{H_{\operatorname{i}}^1(\mathbb{C}_0)} \] for all $f$ in $H_{\operatorname{i}}^1(\mathbb{C}_0)$. The smallest such constant $C$ is called the Carleson norm of the measure. Second, by Fubini's theorem, we have the following connection between $\mathcal{H}^p$ and $H_{{\operatorname{i}}}^p(\mathbb{C}_0)$: \begin{equation} \label{eq:avgrotemb} \big\|f\big\|_{\mathcal{H}^p}^p = \int_{\mathbb{T}^\infty} \|f_\chi\|^p_{H^p_{\operatorname{i}}(\mathbb{C}_0)} \, dm_\infty(\chi), \end{equation} where $\chi$ is a character on $\mathbb{Q}^+$, i.e., a completely multiplicative function taking only unimodular values, and \[ f_{\chi}(s):=\sum_{n=1}^{\infty} \chi(n) a_n n^{-s}. \] Here we recall that an arithmetic function $g:\mathbb{N}\to\mathbb{C}$ is completely multiplicative if it satisfies $g(nm)$ $=g(n)g(m)$ for all integers $m,n\geq 1$. A completely multiplicative function $g$ satisfies $g(1)=1$ unless $g$ vanishes identically, and it is completely determined by its values at the primes. Note that we identify via the Bohr lift $\alpha\mapsto p^{\alpha}$ the group $\mathbb{Z}^{(\infty)}$ with the group $Q^+$, and by duality the group $\mathbb{T}^\infty$ with the group of completely multiplicative functions $\chi:\mathbb{N}\to \mathbb{T}$. Accordingly, we identify the Haar measures $dm_{\infty}(z)$ and $dm_{\infty}(\chi)$ of both groups. We also used in \eqref{eq:avgrotemb} the fact that, for almost every character $\chi$ and $\sum_{n=1}^\infty a_n n^{-s}$ in $\mathcal{H}^p$, the series $\sum_{n=1}^\infty a_n \chi(n) n^{-s}$ converges $m_{\infty}$-almost everywhere in $\mathbb{C}_0$, and defines an element in $H_{{\operatorname{i}}}^p(\mathbb{C}_0)$. For these facts, we refer e.g. to \cite[Section 4.2]{HLS} and \cite[Thm 5]{Ba}. From \eqref{eq:avgrotemb} we may deduce Littlewood--Paley type expressions for the norms of $\mathcal{H}^p$. This was first done for $p=2$ in \cite[Prop.~4]{Ba}, and later for $0<p < \infty$ in \cite[Thm.~5.1]{BQS}, where the formula \begin{equation} \label{eq:LPp} \|f\|_{\mathcal{H}^p}^p \asymp |f(+\infty)|^p + \frac{4}{\pi}\int_{\mathbb{T}^\infty} \int_{\mathbb{R}}\int_0^\infty |f_\chi(\sigma+it)|^{p-2}|f_\chi'(\sigma+it)|^2 \sigma d\sigma\frac{dt}{1+t^2}dm_\infty(\chi) \end{equation} was obtained. When $p=2$, we have equality between the two sides of \eqref{eq:LPp}. The Littlewood--Paley formula \eqref{eq:LPp} for $p=2$ may be polarized, so that we have \[ \langle f, g \rangle_{\mathcal{H}^2} = f(+\infty)\overline{g(+\infty)} + \frac{4}{\pi} \int_{\mathbb{T}^\infty} \int_\mathbb{R} \int_0^\infty f_\chi'(\sigma+it) \overline{g_\chi'(\sigma+it)} \sigma\,d\sigma\,\frac{dt}{1+t^2}\,dm_\infty(\chi). \label{eq:LPpolar} \] Hence, by the Cauchy--Schwarz inequality and \eqref{eq:LPp}, we have for $f$ in $\mathcal{H}^1$ and $g$ in $\operatorname{BMOA}\cap \mathcal{D}$, \begin{align*} \big|\langle f, g \rangle_{\mathcal{H}^2} - f(+\infty)\overline{g(+\infty)}\big|^2 & \le \frac{4}{\pi}\int_{\mathbb{T}^\infty} \int_\mathbb{R} \int_0^\infty |f_\chi(\sigma+it)|^{-1} |f_\chi'(\sigma+it)|^2 \sigma\,d\sigma\,\frac{dt}{1+t^2}\,dm_\infty(\chi) \\ & \times \int_{\mathbb{T}^\infty} \int_\mathbb{R} \int_0^\infty |f_\chi(\sigma+it)| |g_\chi'(\sigma+it)|^2 \sigma\,d\sigma\,\frac{dt}{1+t^2}\,dm_\infty(\chi) \\ & \ll \| f \|_{\mathcal{H}^1} \int_{\mathbb{T}^{\infty}}\|f_\chi\|_{H^1_{\operatorname{i}}(\mathbb{C}_0)} \, dm_\infty(\chi) = \| f \|_{\mathcal{H}^1}^2, \end{align*} where we in the second step used the Littlewood--Paley formula for $p=1$ and that \[ |g'_{\chi}(\sigma+it)|^2 \sigma d\sigma \frac{ dt}{1+t^2} \] is a Carleson measure for $H_{\operatorname{i}}^1(\mathbb{C}_0)$, with Carleson constant uniformly bounded in $\chi$, as follows from \cite[Lem. 2.1 (ii) and Lem. 2.2]{BPS}. Hence we conclude that a Dirichlet series $g$ in $\operatorname{BMOA}\cap \mathcal{D}$ belongs to $(\mathcal{H}^1)^*$. The ``reverse'' problem\label{brevig} of finding an embedding of $(\mathcal{H}^1)^*$ into a ``natural'' space of functions analytic in $\mathbb{C}_{1/2}$ appears challenging. (This is a reverse question only in a rather loose sense as we are now considering functions defined in $\mathbb{C}_{1/2}$.) It was mentioned in \cite[Quest. 4]{SaSe} that $(\mathcal{H}^1)^*$ is not contained in $H_{\operatorname{i}}^q(\mathbb{C}_{1/2})$ for any $q>4$. Since no argument for this assertion was given in \cite{SaSe}, we take this opportunity to offer a proof\footnote{We thank Ole Fredrik Brevig for showing us this argument and allowing us to include it in this paper.}. To begin with, let us consider the interval from $1/2-i$ to $1/2+i$ and let $E$ denote the corresponding local embedding of $\mathscr{H}^2$ into $L^2(-1,1)$, given by $Ef(t) := f(1/2+it)$, so that \[\|E f\|_{L^2(-1,1)}^2 = \int_{-1}^1 |f(1/2+it)|^2 \,dt.\] Then the adjoint $E^\ast \colon L^2(-1,1) \to \mathscr{H}^2$ is \[E^{\ast}g(s) := \sum_{n=1}^\infty \frac{\widehat{g}(\log{n})}{\sqrt{n}} n^{-s},\] where $\widehat{g}(\xi) = \int_{-1}^1 e^{-i \xi t} g(t)\,dt$. Fix $0<\beta<1$ and set $g_\beta(t): = |t|^{\beta-1}$. Plainly, $g_\beta$ is in $L^q(-1,1)$ if and only if $\beta>1-1/q$. Moreover, if $\xi\geq \delta>0$, then $\widehat g_\beta(\xi) \asymp \xi^{-\beta}$, where the implied constants depend only on $\delta$ and $\beta$. We now invoke Helson's inequality \cite[p. 89]{He2} \[ \Big\| \sum_{n=1}^{\infty} a_n n^{-s} \Big\|_1 \ge \left(\sum_{n=1}^{\infty} \frac{|a_n|^2}{d(n)} \right)^{1/2}, \] where $d(n)$ is the divisor function. We then use the classical fact that $\sum_{n\le x} 1/d(n)$ is of size $x (\log x)^{-1/2}$; the precise asymptotics of this summatory function was first computed by Wilson \cite[Formula (3.10)]{W} and may now be obtained as a simple consequence of a general formula of Selberg \cite{Se}. Taking $\beta=1/4$, we may therefore infer by partial summation that $E^\ast$ is unbounded from $L^q(-1,1)$ to $\mathscr{H}^1$ whenever $q < 4/3$. By duality we conclude that for any $q>4$, there are $\varphi$ in $(\mathscr{H}^1)^\ast$ that are not locally embedded in $L^q(-1,1)$ and hence do not belong to $H_{\operatorname{i}}^q(\mathbb{C}_{1/2})$. Note that here $(\mathscr{H}^1)^\ast$ is identified as a subspace of $H^2$ (with respect to the natural pairings of $L^2(-1,1)$ and $L^2(\mathbb{T}^\infty)$) ,whence $E^{**}g=Eg$ for $(\mathscr{H}^1)^\ast$. In view of Corollary~\ref{cor:dual_non_embedding}, it is natural to ask if the situation is even worse, namely that $(\mathcal{H}^1)^*$ fails to be contained in $H_{\operatorname{i}}^q(\mathbb{C}_{1/2})$ for any $q>2$. We conclude from the preceding argument that there is no simple relation between $(\mathcal{H}^1)^*$ and $\operatorname{BMOA}(\mathbb{C}_{1/2})$. We may further illustrate this point by the following example. The Dirichlet series \[ h(s):=\sum_{n=2}^{\infty} \frac{1}{\log n} n^{-s-1/2} \] belongs to $\operatorname{BMOA}(\mathbb{C}_{1/2})$ (see \eqref{eq:hilbert}) below), but it is unknown whether it is in $(\mathcal{H}^1)^*$. It would be interesting to settle this question about membership in $(\mathcal{H}^1)^*$, as $h$ is both a primitive of $\zeta(s+1/2)-1$ and the analytic symbol of the multiplicative Hilbert matrix \cite{BPSSV}. \subsection{Fefferman's condition for membership in $\operatorname{BMOA}\cap \mathcal{D}$} The following theorem gives interesting information about Dirichlet series in $\operatorname{BMOA}$. It is an immediate consequence of existing results, as will be explained in the subsequent discussion. \begin{theorem} \label{thm:fefferman} \begin{itemize} \item[(i)] Suppose that $a_n\ge 0$ for every $n\ge 1$. Then $f(s):=\sum_{n=1}^{\infty} a_n n^{-s}$ is in $\operatorname{BMOA}$ if and only if \begin{equation} \label{eq:feff} S^2:=\sup_{x\ge e } \sum_{k=1}^{\infty} \Big(\sum_{x^k\le n <x^{k+1}} a_{n} \Big)^2 < \infty, \end{equation} and we have $S\asymp \Vert f\Vert_{\operatorname{BMOA}}$. \item[(ii)] If \ $\sum_{n=1}^{\infty} |a_n| n^{-s}$ is in $\operatorname{BMOA}$, then $\sum_{n=1}^{\infty} a_n n^{-s}$ is in $\operatorname{BMOA}$. \end{itemize} \end{theorem} It is immediate from (i) that \begin{equation} \label{eq:hilbert} \sum_{n=2}^{\infty} \frac{1}{\log n} n^{-s-1} \end{equation} is in $\operatorname{BMOA}$ (see \cite[Thm. 2.5]{BPS}). By Mertens's formula \begin{equation} \label{eq:mertens} \sum_{p\le x} \frac{1}{p} = \log\log x + M +O\left((\log x)^{-1}\right), \end{equation} where the sum is over the primes $p$, part (i) also implies that $ \sum_p p^{-1-s} $ is in $\operatorname{BMOA}$, and consequently $\log \zeta(s+1)$ is a function in $\operatorname{BMOA}$, where $\zeta(s)$ is now the Riemann zeta function. Then part (ii) of Theorem \ref{thm:fefferman} implies also that $\sum_p \chi(p) p^{-1-s}$ is in $\operatorname{BMOA}$ for any sequence of unimodular numbers $\chi(p)$. In fact, we have more generally: \begin{corollary}\label{cor:prime} A Dirichlet series $\sum_{p} a_p p^{-s}$ over the primes $p$ is in $\operatorname{BMOA}$ if and only if \begin{equation} \label{eq:bmo} \sup_{x\ge e} \sum_{k=1}^{\infty} \Big(\sum_{x^k\le p < x^{k+1}} |a_{p}| \Big)^2 < \infty.\end{equation} \end{corollary} \noindent Corollary~\ref{cor:prime} is a consequence of part (i) of Theorem~\ref{thm:fefferman} and the fact (see \cite[Lem. 2.1]{BPS}) that $\sum_p a_p p^{-s}$ is in $\operatorname{BMOA}$ if and only if $\sum_p a_p \chi(p) p^{-s}$ is in $\operatorname{BMOA}$ for every sequence of unimodular numbers $\chi(p)$. The sufficiency of condition \eqref{eq:feff} in Theorem~\ref{thm:fefferman})(i) follows as a corollary to an $H^1$ multiplier theorem of Sledd and Stegenga \cite[Thm. 1]{SS} via Fefferman's duality theorem \cite{F, FS} and Parseval's theorem. The necessity also follows from \cite[Thm. 1]{SS} if we first note that for any $f$ in $H^1(\mathbb{C}_0)$, using the standard $H^2$ factorization of $H^1$, we may construct $g$ in $H^1(\mathbb{C}_0)$ with $\| g\|_{H^1(\mathbb{C}_0)}=\| f\|_{H^1(\mathbb{C}_0)}$ and $\widehat g(\xi)\geq |\widehat f(\xi)|\geq 0$ for all $\xi\in\mathbb{R}$. Here $\widehat f,\widehat g$ refer to the Fourier transforms of the boundary values on the imaginary axis. A corresponding result for $\operatorname{BMO}$ in the unit disc is stated in \cite[Cor. 2]{SS}: The Taylor series $\sum_{m=0}^{\infty} c_m z^m$ with $c_m\ge 0$ belongs to $\operatorname{BMO}$ of the unit circle $\mathbb{T}$ if and only if \[ \sup_{m\ge 1} \sum_{j=0}^{\infty} \left(\sum_{r=0}^{m-1} c_{mj+r}\right)^2 < \infty. \] Other proofs of this result, relying more directly on Hankel operators, can be found in \cite{Bon, HW}. This result is commonly known to have appeared in unpublished work of Fefferman. To establish part (ii) of Theorem~\ref{thm:fefferman}, we use the following Carleson measure characterization of $\operatorname{BMOA}\cap \mathcal{D}$ which could be used to give an alternative proof of part (i) of Theorem~\ref{thm:fefferman}. \begin{lemma}\label{basaux} Suppose that $f$ is in $H_{\operatorname{i}}^{2}(\mathbb{C}_0)\cap \mathcal{D}$. Then $f$ is in $\operatorname{BMOA}\cap\mathcal{D}$ if and only if there exists a positive constant $C$ such \begin{equation} \label{eq:carl} \sup_{t\in \mathbb{R}} \int_{0}^h \int_{t}^{t+h} |f'(\sigma+i \tau)|^2 \sigma d\tau d\sigma \le C h \end{equation} for $0\le h \le 1$. Moreover, the best constant $C$ in ( \ref{eq:carl}) and $\Vert f\Vert_{\operatorname{BMO}}^{2}$ are equivalent. \end{lemma} \begin{proof} We first observe that \eqref{eq:carl} and the assumption that $f$ is in $H_{i}^{2}(\mathbb{C}_0)$ imply, by the maximum modulus principle, that $f'(\sigma+it)$ is uniformly bounded by $O(\sqrt{C})$ for $\sigma\geq 1$. Then, if $h>1$ and $t\in \mathbb{R}$ are given and \begin{align*} I & :=\int_{0}^h \int_{t}^{t+h} |f'(\sigma+i\tau)|^2 \sigma d\tau d\sigma \\ &\ =\int_{0}^{1}\Big[\int_{t}^{t+h}|f'(\sigma+i\tau)|^2 d\tau\Big]\sigma d\sigma+ \int_{1}^{h}\Big[\int_{t}^{t+h}|f'(\sigma+i\tau)|^2 d\tau\Big]\sigma d\sigma=:I_1+I_2,\end{align*} we have $I_1\ll Ch$ by \eqref{eq:carl}, while \[ I_2 \ll \int_{1}^{\infty}\Big[\int_{t}^{t+h}|f'(\sigma+i\tau)|^2 d\tau\Big]\sigma d\sigma\ll \int_{t}^{t+h}\Big[\int_{1}^{\infty} \sigma C4^{-\sigma}d\sigma\Big]d\tau \ll Ch.\] To obtain the final estimate above, we used that $f'(\sigma+it)=O(\sqrt{C} 2^{-\sigma})$, which holds uniformly in $t$ when $\sigma\geq 1$ because $f$ is a Dirichlet series. \end{proof} Part (ii) of Theorem~\ref{thm:fefferman} is immediate from this lemma along with a property of almost periodic functions established by Montgomery \cite[p. 131]{MO} (see also \cite[p. 4]{M}) which asserts that if $|a_n|\leq b_n$, then for sums with a finite number of non-zero terms \[ \int_{T_1-T}^{T_1+T} \big|\sum a_n e^{i\lambda_n t}\big|^2dt\leq 3\int_{-T}^T \big|\sum b_n e^{i\lambda_n t}\big|^2dt.\] Here $T>0$, $T_1$ is a real number, $a_n, b_n$ respectively complex and nonnegative coefficients, and $\lambda_n$ are distinct real frequencies. We will now apply Theorem~\ref{thm:fefferman} to see how our $\operatorname{BMOA}$ space of Dirichlet series relates to Hardy spaces and the Bloch space. We denote as usual $H^{\infty}(\mathbb{C}_0) \cap \mathcal{D}$ by $\mathcal{H}^\infty$, and we say that a function $f(s)$ analytic in $\operatorname{Re} s >0$ is in the Bloch space $\mathfrak{B}$ if \[ \| f \|_{\mathfrak{B}}:=\sup_{\sigma+it: \sigma>0} \sigma |f'(\sigma+i t)| < \infty . \] We have \[ \mathcal{H}^{\infty} \subset \operatorname{BMOA} \cap \mathcal{D} \subset \bigcap_{0<q<\infty} \mathcal{H}^q, \] where the inclusion to the left is trivial and that to the right was established in \cite[Lem. 2.1]{BPS}. Hence, in contrast to $(\mathcal{H}^1)^*$ itself, the subspace $\operatorname{BMOA}\cap \mathcal{D}$ is included in $ \bigcap_{0<q<\infty} \mathcal{H}^q$. Moreover, is a classical fact and easy to see that $\operatorname{BMOA} \subset \mathfrak{B}$. The following consequence of Corollary~\ref{cor:prime} is a Dirichlet series counterpart to a result of Campbell, Cima, and Stephenson \cite{CCS} that further enunciates the relation between the spaces in question. Our proof is close to that found in \cite{HT}. \begin{corollary}\label{cor:bloch}There exist Dirichlet series that belong to $\mathfrak{B}$ and $\bigcap_{0<q<\infty} \mathcal{H}^q$ but not to $\operatorname{BMOA}$. \end{corollary} \begin{proof} It is an easy consequence of the definition of the Bloch space that $\sum_{n=1}^{\infty} a_n n^{-s}$ with $a_n\ge 0$ is in $\mathfrak{B}$ if and only if \begin{equation}\label{eq:Bloch} \sup_{x\ge 2} \sum_{x\le n <x^2} a_n < \infty. \end{equation} Indeed, if \eqref{eq:Bloch} holds, then we use it with $x_j=\exp(2^{j}/\sigma),\ x_{j+1}=x_{j}^{2}$, to show that for $\sigma>0$, \[ \sum_{n\geq 2} a_n\, \sigma \log n\,e^{-\sigma\log n}\le \sum_{j} 2^{-j} \big(\sum_{x_j\leq n<x_{j+1}}a_n\big)\ll \sum_{j} 2^{-j}.\] Conversely, if $\sum_{n\geq 2} a_n\, \sigma \log ne^{-\sigma\log n}\leq C$ for all $\sigma>0$, then choosing $\sigma=1/\log x$, we see that the sum on the left-hand side of \eqref{eq:Bloch} is bounded by $C e^2/2$. Let $\mathbb{P}_j$ be the primes in the interval $[e^{2^j}, e^{2^j+1}]$. Then $|\mathbb{P}_j|\sim (e-1) e^{2^j} 2^{-j}$ by the prime number theorem. Setting $a_p:=e^{-2^j} 2^j$ if $p$ is in $\mathbb{P}_j$ and $a_p=0$ otherwise, we see from \eqref{eq:Bloch} that $\sum_p a_p p^{-s}$ is in the Bloch space, but from part (i) of Theorem~\ref{thm:fefferman} that it fails to be in $\operatorname{BMOA}$. We next recall Khinchin's inequality for the Steinhaus variables $Z_p$ (that are i.i.d. random variables with uniform distribution on $\mathbb{T}$): \[ \mathbb{E} \big| \sum_pa_p Z_p\big|^q\asymp \big(\sum_p|a_p|^2 \big)^{q/2}, \] with the implied constants only depending on $q>0$ (see \cite[Thm. 1]{K}). Since in the Bohr correspondence $p_k^{-s}$ corresponds to the independent variable $z_k$, we see that they form a sequence of Steinhaus variables with respect to the Haar measure on $\mathbb{T}^\infty$. Thus, in view of the bound \[ \sum_p a_p^2 \ll \sum_{j=0}^{\infty} e^{-2^j} 2^j < \infty, \] Khinchin's inequality implies that $\sum_p a_p p^{-s}$ belongs to $\mathcal{H}^q$. \end{proof} \subsection{The relation between Dirichlet series in $H^{\infty}$, $\operatorname{BMOA}$, and $\mathfrak{B}$} \label{sec:relation} We turn to some further comparisons between the three spaces $\mathcal{H}^{\infty}$, $\operatorname{BMOA}\cap \mathcal{D}$, and $\mathfrak{B}\cap \mathcal{D}$. We begin with a discussion of uniform and absolute convergence of Dirichlet series in $\mathfrak{B}\cap \mathcal{D}$. The following lemma will be useful in this discussion. Here we use the notation $\log_+ x:=\max(0,\log x)$ for $x>0$, and we will also write $(T_c f)(s):=f(s+c)$ in what follows. \begin{lemma}\label{vari}Suppose that $f(s)=\sum_{n=1}^\infty a_n n^{-s}$ is in $\mathfrak{B}\cap \mathcal{D}$. Then \begin{align} \label{eq:coeff} |a_n| & \le e \| f \|_{\mathfrak{B}}, \quad n\ge 2, \\ \label{eq:point} |f(\sigma+it)-a_1| & \le \left(\log_+\frac{1}{\sigma} + C 2^{-\sigma} \right) \| f \|_{\mathfrak{B}}, \quad \sigma>0, \end{align} for some absolute constant $C$. Up to the precise value of $C$, these bounds are both optimal. \end{lemma} \begin{proof} To prove \eqref{eq:coeff}, we use that $T_{\varepsilon}f'$ is in ${\mathscr{H}}^\infty$ for every $\varepsilon>0$. By either viewing the coefficients of a Dirichlet series as Fourier coefficients or using that $\| f \|_{\mathcal{H}^2}\le \| f \|_{\mathcal{H}^{\infty}}$, we see that they are dominated by its $\mathcal{H}^\infty$ norm. We therefore have \[ |a_n|(\log n) n^{-\varepsilon}\leq \Vert T_{\varepsilon}f' \Vert_\infty \leq \frac{\Vert f\Vert_{\mathfrak{B}}}{\varepsilon} \] and hence \[ |a_n|\leq \frac{n^{\varepsilon}\Vert f\Vert_{\mathfrak{B}}}{\varepsilon \log n}.\] We conclude by taking $\varepsilon=1/\log n$. In addition, we notice that the bound is optimal because $\| n^{-s} \|_{\mathfrak{B}}=1/e$. To prove \eqref{eq:point}, we begin by noticing that \eqref{eq:coeff} implies that \begin{equation} \label{eq:large} |f(\sigma+it)-a_1| \le \sum_{n=2}^{\infty} |a_n| n^{-\sigma} \le e (\zeta(\sigma)-1) \| f \|_{\mathfrak{B}} \end{equation} holds for $\sigma\ge 2$. For $\sigma\le 2$, we use that \[ |f(\sigma+it)-a_1| \le |f(2+it)-a_1|+ \int_{\sigma}^2 \| f \|_{\mathfrak{B}} \frac{d\alpha}{\alpha} \le \left(\log \frac{1}{\sigma} + C\right)\| f \|_{\mathfrak{B}}, \] where we in the final step used \eqref{eq:large} with $\sigma=2$. The example $\sum_{n=2}^{\infty} n^{-1-s}/\log n$ shows that the inequality is optimal, up to the precise value of $C$. \end{proof} The pointwise bound \eqref{eq:point} implies that what is known about uniform and absolute convergence of Dirichlet series in $\mathcal{H}^{\infty}$ carries over in a painless way to $\mathfrak{B}\cap \mathcal{D}$. In fact, a rather weak bound of the form \begin{equation} \label{eq:gen} |f(\sigma+it)|\le C(\sigma), \quad \sigma>0, \end{equation} suffices to draw such a conclusion, as will now be explained. To begin with we will assume that $C(\sigma)$ is an arbitrary positive function and later specify its required behavior as $\sigma\to 0^+$. First, by a classical theorem of Bohr \cite[p. 145]{MAHE}, a bound like \eqref{eq:gen} implies that the Dirichlet series of $f(s)$ converges uniformly in every half-plane $\operatorname{Re} s \ge \sigma_0>0.$ Following Bohr, we then see that $\sigma_{u}(f)\leq 0$, where $\sigma_u(f)$ is the abscissa of uniform convergence, defined as the infimum over those $\sigma_0$ such that the Dirichlet series of $f(s)$ converges uniformly in $\operatorname{Re} s \ge \sigma_0$. Second, as observed by Bohr, it is immediate that $\sigma_{u}(f)\leq 0$ implies $\sigma_{a}(f)\leq 1/2$, where $\sigma_{a}(f)$ is the abscissa of absolute convergence of $f$, i.e., the infimum over those $\sigma_0$ such that the Dirichlet series of $f(s)$ converges absolutely in $\operatorname{Re} s \ge \sigma_0$. Thanks to more recent work originating in \cite{BCQ}, an interesting refinement of this result holds when $C(\sigma)$ does not grow too fast as $\sigma\searrow 0$. To arrive at that refinement, we set $(S_N f)(s):=\sum_{n=1}^{N} a_n n^{-s} $ and recall that \begin{equation}\label{improv2}\sum_{n=1}^N|a_n| \leq \sqrt{N} e^{-c_N \sqrt{\log N \log\log N}}\Vert S_N f\Vert_\infty \end{equation} with $c_N\to 1/\sqrt{2}$ when $N\to \infty$. This ``Sidon constant'' estimate was proved in \cite{KQ} with a smaller value of $c_N$. The proof from \cite{KQ}, using at one point the hypercontractive Bohnenblust--Hille inequality from \cite{DFOOS}, yields \eqref{improv2} with $c_N\to 1/\sqrt{2}$, which is stated as Theorem 3 in \cite{DFOOS}. This is optimal by \cite{dB}. It was proved in \cite{BCQ} that there exists an absolute constant $C$ such that if $f(s):=\sum_{n=1}^{\infty}{a_nn^{-s}}$ is in $\mathcal{H}^{\infty}$, then $ \| S_N f\|_{\infty}\le C \log N \| f\|_{\infty} $. See also Section~\ref{sec:partial}, where an alternate proof of this bound will be given. Using this fact, we obtain from \eqref{improv2} that \begin{equation}\label{improv3}\sum_{n=1}^N|a_n| \leq \sqrt{N} e^{-c_N \sqrt{\log N \log\log N}}\Vert f \Vert_\infty , \end{equation} still with $c_N\to 1/\sqrt{2}$ when $N\to \infty$. Now applying \eqref{improv3} to $T_{\varepsilon} f $ with $\varepsilon=1/\log N$ and taking into account \eqref{eq:gen}, we get \[ \sum_{n=1}^N |a_n| \le e \sum_{n=1}^N |a_n| n^{-\varepsilon} \le \sqrt{N} e^{-c_N \sqrt{\log N \log\log N}} C(1/\log N). \] We now see that if $\log C(\sigma)=o(\sqrt{|\log \sigma |/\sigma }) $ when $\sigma\searrow 0$, then \begin{equation} \label{eq:imp}\sum_{n=1}^N |a_n| \le \sqrt{N} e^{-c_N \sqrt{\log N \log\log N}} \end{equation} with $c_N\to 1/\sqrt{2}$. When $f$ is in $\mathfrak{B}$, we have $C(\sigma)=O(|\log \sigma|)$ and hence \eqref{eq:imp} clearly holds. Summing by parts and using \eqref{eq:imp}, we get \begin{equation} \label{eq:sum} \sum_{n=3}^{\infty} \frac{|a_n|}{\sqrt{n}} e^{{c \sqrt{\log n \log\log n}}} < \infty \end{equation} for every $c<1/\sqrt{2}$. This is a bound previously known to hold for functions $f$ in $\mathcal{H}^{\infty}$ (see \cite{BCQ, DFOOS}). As shown in \cite{DFOOS}, the result is optimal in the sense that there exist functions $f$ in $\mathcal{H}^{\infty}$ for which the series in \eqref{eq:sum} diverges when $c>1/\sqrt{2}$. In Section~\ref{sec:poly}, we will establish ``reverse'' inequalities to $\| f \|_{\mathfrak{B}} \le \| f \|_{\infty} $ and $\| f \|_{\mathfrak{B}} \ll \| f\|_{\operatorname{BMOA}}$ when $f(s)=\sum_{n=1}^N a_n n^{-s}$ and $N$ is fixed. \subsection{A condition for random membership in $\operatorname{BMOA} \cap \mathcal{D}$} In the sequel, if $f(s)=\sum_{n=1}^\infty a_n n^{-s}$ is a Dirichlet series, we denote by $f_\omega$ the corresponding randomized Dirichlet series, namely $f_{\omega}(s):=\sum_{n=1}^\infty \varepsilon_{n}(\omega)a_n n^{-s}$ where $(\varepsilon_n)$ is a standard Rademacher sequence. We are interested in extending the following result of Sledd \cite{S} (see also \cite{DUR}) to the setting of ordinary Dirichlet series: \begin{theorem}\label{dur}Suppose $\sum_{n=1}^\infty |a_n|^2 \log n<\infty$. Then, the power series $\sum \varepsilon_n a_n z^n$ is almost surely in $\operatorname{BMOA}$. \end{theorem} This result is optimal in a rather strong sense as shown in \cite{ACP}: If one replaces $\log n$ by any sequence growing at a slower rate, then the condition does not guarantee membership even in the Bloch space. We see from Theorem~\ref{dur} that if we require slightly more than $\ell^2$ decay of the coefficients, then we may expect that a ``generic'' analytic function in the unit disc will be in $\operatorname{BMOA}$. The results of the preceding sections show in two respects that a similarly strong result can not hold in the context of Hardy spaces of Dirichlet series. First, we know that $f(s)=\sum_{p} a_p p^{-s}$ is in $\operatorname{BMOA}\cap \mathcal{D}$ if and only if \eqref{eq:bmo} of Corollary~\ref{cor:prime} holds, and by the Cauchy--Schwarz inequality, this implies in particular that the abscissa of absolute convergence is $0$. Hence \[ \sum_{p} \pm p^{-\alpha-s} \] can not be in $\operatorname{BMOA}\cap \mathcal{D} $ for any choice of the signs $\pm$ when $1/2<\alpha<1$, although, from an $\ell^2$ point of view, the coefficients decay fast when $\alpha$ is close to $1$. Second, in view of \eqref{eq:sum}, none of the Dirichlet series \[f(s):=\sum_{n=2}^\infty \pm \frac{1}{\sqrt {n}} \exp\Big(-c \sqrt{\log n\log\log n}\Big)\, n^{-s},\quad 0<c<1/\sqrt 2, \] with random signs $\pm$ can be in $\operatorname{BMOA}\cap \mathcal{D}$, again in spite of fairly good $\ell^2$ decay of the coefficients. These observations indicate that we should impose an extra condition to obtain a result of the same strength as that of Theorem~\ref{dur}. In fact, they suggest that a possible remedy could be to consider integers generated by a very thin sequence of primes. We will therefore assume that we are in this situation with a fixed set $\mathcal{P}_0$ (finite or not) of prime numbers. We will measure the thinness of this set in terms of its distribution function \[ \pi_{0}(x):=\sum_{p\in \mathcal{P}_0 p\leq x} 1. \] We will say that $\mathcal{P}_0$ is an ultra-thin set of primes if \begin{equation} \label{eq:ultra} \int_{3}^{\infty} \frac{\pi_{0}(x)\log\log x}{x\log^{3}x}dx<\infty , \end{equation} and we declare the numbers $w_1=w_2=1$, \[ w_n:=\int_{n}^{\infty} \frac{\pi_{0}(x)\log\log x}{x\log^{3}x}dx , \quad n\ge 3,\] to constitute the weight sequence of $\mathcal{P}_0$. We denote by $\mathcal{N}_0$ the set of all $\mathcal{P}_0$-smooth integers, i.e., the set of positive integers with all their prime divisors belonging to $\mathcal{P}_0$. Our extension of Theorem~\ref{dur} now reads as follows. \begin{theorem}\label{proba} Let $\mathcal{P}_0$ be an ultra-thin set of primes with weight sequence $(w_n)$. If \begin{equation} \label{eq:durendir} \sum_{n\in \mathcal{N}_0} |a_n|^2 w_n \log^{2} n <\infty, \end{equation} then the Dirichlet series $ f_{\omega}(s)=\sum_{n\in \mathcal{N}_0} \varepsilon_n a_n n^{-s}$ is almost surely in $\operatorname{BMOA}\cap\mathcal{D}$. \end{theorem} Let us first note that this is in fact a true extension of Theorem~\ref{dur}, i.e., it reduces to Theorem~\ref{dur} when $\mathcal{P}_0$ consists of a single prime. To see this, we first observe that if $\pi_0(x) \ll \log^{\delta} x$ for some $\delta$, $0\le \delta < 2$, then $\mathcal{P}_0$ is ultra-thin and $w_n \ll (\log\log n)/\log^{2-\delta} n$. In particular, in the special case when $\mathcal{P}_0$ is a finite set, we find that $w_n\asymp (\log\log n)/\log^2 n$ and hence the series in \eqref{eq:durendir} becomes $\sum_{n\in \mathcal{N}_0} |a_n|^2 \log\log n $. If $\mathcal{P}_0$ consists of a single prime $p$, then the Dirichlet series over $\mathcal{N}_0$ becomes a Taylor series in the variable $z:=p^{-s}$ and $\log\log n=\log k + \log\log p\sim \log k$ for $n=p^k$, and hence \eqref{eq:durendir} becomes the condition of Theorem~\ref{dur}. Finally, we note that, plainly, the Dirichlet series over the numbers $p^k$ will be in $\operatorname{BMOA}(\mathbb{C}_0)$ if and only if the corresponding Taylor series in the variable $z$ is in $\operatorname{BMOA}(\mathbb{T})$. In view of this relation between Theorem~\ref{dur} and Theorem~\ref{proba}, we see by again appealing to \cite{ACP} that we cannot replace $\log^2 n$ by any sequence growing at a slower rate. For the proof of Theorem~\ref{proba}, we begin by observing that for fixed $\sigma>0$, we have \[ \mathbb{E}\Big(\int_{-\infty}^\infty \frac{|f_{\omega}(\sigma+it)|^2}{t^2+1}dt\Big)=\pi \sum_{n=1}^\infty |a_n|^2 n^{-2\sigma}\leq \pi \sum_{n=1}^\infty |a_n|^2, \] and hence $f_{\omega}$ is almost surely in $H_{\operatorname{i}}^{2}(\mathbb{C}_0)$. This means that we may base our proof on Lemma~\ref{basaux}. The rest of the proof of Theorem~\ref{proba} relies on a lemma from \cite {BCQ} (see also \cite[Theorem 5.3.4]{MAHE}) which is deduced, via the Bohr lift, from a multivariate analogue of a classical inequality of Salem and Zygmund due to Kahane \cite[Thm. 3, Sect. 6]{KAH2}. \begin{lemma}\label{trois}There exists an absolute constant $C$ such that if $P(s)=\sum_{k=1}^n a_k k^{-s}$ is a $\mathcal{P}_0$-smooth Dirichlet polynomial of length $n \ge 3$ and $P_\omega$ the corresponding randomized polynomial, then \[ \mathbb{E}(\Vert P_\omega\Vert_\infty) \le C \big(\sum_{k=1}^n |a_k|^2\big)^{1/2}\sqrt{\pi_{0}(n)}\sqrt{\log\log n}.\] \end{lemma} Here the price we pay for estimating the uniform norm on the whole of $\mathbb{R}$ is this additional factor $\sqrt{\pi_{0}(n)}$. By considering the randomization (i.e. adding random signs) of the Dirichlet polynomial $\sum_{1\leq k\leq N}p_k^{-s}$ (or randomizing more complicated polynomials of the form $\sum_{1\leq k\leq N}p_k^{-s}g(p_{N+k}^{-s})$), with a fixed standard polynomial $g$, we see that this extra factor is more or less mandatory. \begin{proof}[Proof of Theorem~\ref{proba}] We may for convenience assume that $a_2=0$. Let $X$ be the random variable defined by \begin{equation}\label{rava}X(\omega):=\int_{0}^1 \sigma\, \Vert T_{\sigma} f'_{\omega} \Vert_{\infty}^{2}d\sigma.\end{equation} We will prove that $\mathbb{E}(X)<\infty$. This will imply that $X(\omega)<\infty$ a.s., hence that $f_\omega$ is in $\operatorname{BMOA}\cap \mathcal{D}$ a.s. in view of Lemma \ref{basaux}. We fix $\sigma>0$ and set \[ S(x,t):=-\sum_{3\le j\le x} \varepsilon_{j} a_j (\log j) j^{-it} \quad \text{and} \quad B(x):=\Big(\sum_{3\le j\le x} |a_j|^2 \log^{2}j\Big)^{1/2}. \] Since $(T_{\sigma} f'_{\omega})(it)=-\sum_{n=3}^\infty \varepsilon_n\,a_n (\log n)\, n^{-it} n^{-\sigma}$, we find by partial summation that \[ \big|(T_{\sigma} f'_{\omega})(it)\big|\le \int_3^\infty \sigma x^{-\sigma-1}|S(x,t)| dx.\] Now using the $L^1-L^2$ Khintchin--Kahane inequality and Lemma~\ref{trois}, we find that \begin{equation}\label{khka}\mathbb{E}\big(\big\Vert T_{\sigma} f'_{\omega}\big\Vert_{\infty}^{2}\big)\ll \big(\mathbb{E}\big\Vert T_{\sigma} f'_{\omega}\big\Vert_{\infty}\big)^{2}\ll \Big(\int_{3}^\infty \sigma x^{-\sigma-1} B(x) \sqrt{\pi_{0}(x)} \sqrt{\log\log x} \, dx\Big)^{2} ,\end{equation} whence \begin{equation} \label{eq:wh} \mathbb{E}(X) \ll \int_0^1 \sigma \Big(\int_{3}^\infty \sigma x^{-\sigma-1} B(x) \sqrt{\pi_{0}(x)} \sqrt{\log\log x} \, dx\Big)^{2} d\sigma. \end{equation} Setting for convenience $h(x):=B(x) \sqrt{\pi_{0}(x)} \sqrt{\log\log x}$ and using that for $x,y>1$ \[ \int_{0}^{1} \sigma^3 (xy)^{-\sigma}d\sigma\leq \int_{0}^{\infty} \sigma^3 (xy)^{-\sigma}d\sigma=\frac{6}{\log^{4}(xy)}, \] we find by Fubini's theorem that \begin{align*} \int_0^1 \sigma^3 \Big(\int_{3}^\infty x^{-\sigma-1} h(x) \, dx\Big)^{2} d\sigma & \le 6 \int_{3}^{\infty}\int_{3}^{\infty} \frac{h(x) h(y)}{xy \log^4 (xy)} dxdy \\ &\le \frac{3}{4} \int_{3}^{\infty}\int_{3}^{\infty} \frac{h(x) h(y)}{(\log x \log y)^{3/2}} \frac{dxdy}{xy \log (xy)} \le \frac{3 \pi}{4} \int_{3}^{\infty} \frac{h(x)^2}{x\log^3 x} dx. \end{align*} Here we used in the last step that \[ \int_{1}^\infty \int_{1}^\infty\psi(x)\psi(y)\frac{dxdy}{xy (\log xy)}\leq \pi \int_{1}^\infty\psi^{2}(x) \frac{dx}{x}\] holds for a nonnegative function $\psi$, which we recognize as Hilbert's inequality \cite[Thm. 316]{HLP} \[ \int_{0}^\infty \int_{0}^\infty\varphi(u)\varphi(v)\frac{dudv}{u+v}\leq \pi \int_{0}^\infty \varphi^{2}(u)du\] for $\varphi(u):=\psi(e^{u})$, after the change of variables $u=\log x$, $v=\log y$. Hence, returning to \eqref{eq:wh}, we see that \begin{equation}\label{eq:change} \mathbb{E}(X) \ll \int_3^{\infty} \frac{B^2(x) \pi_0(x) \log\log x}{x \log^3 x } dx. \end{equation} Now using the definition of $B^2(x)$ as a finite sum and changing the order of integration and summation, we observe that the right-hand side of \eqref{eq:change} equals the series in \eqref{eq:durendir}, and hence we conclude that $\mathbb{E}(X)<\infty$. \end{proof} \section{Comparison of norms for Dirichlet polynomials}\label{sec:poly}\label{sec:compare} We will now establish some relations between the various norms considered so far, when computed for Dirichlet polynomials of fixed length. Throughout this section, our Dirichlet polynomials will be denoted by $f$ and not $P$ as before. Our results complement the main result of \cite{DP} which shows that the supremum of the ratio $\| f \|_q/\| f \|_{q'}$ for nonzero Dirichlet polynomials $f$ of length $N$ is \begin{equation} \label{eq:comp} \exp\left((1+o(1))\frac{\log N}{\log\log N}\log \sqrt{q/q'} \right) \end{equation} when $1\le q'<q < \infty$. We begin with comparisons involving $\operatorname{BMOA}$ and $\mathfrak{B}$. For the purpose of this discussion, it will be convenient to agree that \[ \| f \|_{\operatorname{BMOA}}^2:= \sup_{h>0} \frac{1}{h} \sup_{t\in \mathbb{R}} \int_{0}^h \int_{t}^{t+h} |f'(\sigma+i \tau)|^2 \sigma d\tau d\sigma,\] in accordance with the Carleson measure condition of Lemma~\ref{basaux}. We denote by $\mathcal{D}_N$ the space of Dirichlet polynomials of length $N$ vanishing at $+\infty$. The respective ratios $\| f \|_{\infty}/\|f\|_{\mathfrak{B}}$ and $\| f \|_{\operatorname{BMOA}}/\| f\|_{\mathfrak{B}}$ are quite modest compared to \eqref{eq:comp}: \begin{theorem}\label{unus} When $N\to \infty$, we have \begin{align} \label{eq:asymp} \sup_{f\in \mathcal{D}_N\setminus \{0\}} \frac{\| f \|_{\infty}}{\| f \|_{\mathfrak{B}}} & \sim \log\log N, \\ \label{eq:asymp1} \sup_{f\in \mathcal{D}_N\setminus \{0\}} \frac{\| f \|_{\operatorname{BMOA}}}{\| f \|_{\mathfrak{B}}} & \asymp \sqrt{\log\log N},\\ \label{eq:asymp2}\sup_{f\in \mathcal{D}_N\setminus\{0\}} \frac{\Vert f\Vert_\infty}{\Vert f\Vert_{BMOA}} & \asymp \log \log N . \end{align} \end{theorem} We require two new lemmas. The first contains two versions of Bernstein's inequality. \begin{lemma}[Bernstein inequalities] \label{es} We have \begin{equation} \label{eq:bern} \| f' \|_\infty \le \log N \| f \|_{\infty} \quad \text{and} \quad \Vert f'\Vert_\infty \leq 4\log N \Vert f\Vert_{\mathfrak{B}}\end{equation} for every $f$ in $\mathcal{D}_N$. \end{lemma} The first inequality in \eqref{eq:bern} is a special case of a general version of Bernstein's inequality for finite sums of purely imaginary exponentials (see \cite[p. 30]{KAH}). We will find that the second inequality is a consequence of the next lemma. \begin{lemma}\label{mardi} We have \[ \| f \|_{\infty} \le \frac{1}{(1-c)} \| T_{c/\log N}f \|_{\infty} \] for every Dirichlet polynomial $f$ in $\mathcal{D}_N$, when $0<c<1$ and $N\ge 2$. \end{lemma} \begin{proof} The first inequality in \eqref{eq:bern} and the maximum modulus principle give for any fixed $\sigma>0$ \[ |f(it)-f(\sigma+it)|\leq \sigma \| f'\Vert_{\infty}\leq \sigma \log N\Vert f\Vert_{\infty}. \] Hence, setting $\sigma=c/\log N$, we see that \[ |f(it)|\leq \big|\big(T_{c/\log N}f\big)(it)\big|+c \| f \|_{\infty} \] from which the result follows. \end{proof} \begin{proof}[Proof of the second inequality in \eqref{eq:bern}] Using the definition of the Bloch norm, we see for any fixed $\sigma>0$ that \[ \| f\|_{\mathfrak{B}}\geq \sup_{t\in \mathbb{R}} \sigma |f'(\sigma+it)|. \] Setting $\sigma=c/\log N$ and applying Lemma~\ref{mardi} to $f'$, we then get \[ \| f\|_{\mathfrak{B}} \ge \frac{c(1-c)}{\log N} \| f' \|_{\infty} .\] Choosing $c=1/2$, we obtain the asserted result. \end{proof} \begin{proof}[Proof of Theorem~\ref{unus}] Combining \eqref{eq:point} and Lemma~\ref{mardi}, we find that if $f(+\infty)=0$, then \begin{equation} \label{eq:bloch} \|f\|_{\infty} \le \frac{\log\log N + \log (1/c) + C}{(1-c)} \| f \|_{\mathfrak{B}}. \end{equation} Choosing $c=1/\log\log N$, we obtain \[ \frac{\|f\|_{\infty}}{\| f\|_{\mathfrak{B}}}\le \log\log N +O(\log\log\log N), \] assuming that $f\neq 0$. On the other hand, the polynomial $f(s)=\sum_{n=2}^N \frac{1}{n\log n} n^{-s}$ satisfies $\| f \|_{\infty} = \log\log N + O(1)$, while \[ |f'(s)| \le \sum_{n=2}^{\infty} n^{-\sigma-1}\leq \zeta(\sigma+1)-1\leq \frac{1}{\sigma}, \] so that $\| f \|_{\mathfrak{B}} \le 1$. Hence we have shown that the supremum over $f$ of the left-hand side of \eqref{eq:bloch} exceeds $\log\log N+O(1)$. We conclude that \eqref{eq:asymp} holds. We now use Lemma \ref{basaux} to estimate $\| f \|_{\operatorname{BMOA}}$ under the assumption that $f$ is in $\mathcal{D}_N$ and $\| f \|_{\mathfrak{B}}=1$. We first observe that if $h\le 1/\log N$, then by the second Bernstein inequality of Lemma~\ref{es}, \[ \int_{0}^h \int_{t}^{t+h} |f'(\sigma+i \tau)|^2 \sigma d\tau d\sigma \le 16 (\log N)^2 h \int_{0}^h \sigma d\sigma \le 8h.\] On the other hand, if $1/\log N<h \le 1$, then we obtain by the same argument \[ \int_{0}^h \int_{t}^{t+h} |f'(\sigma+i \tau)|^2 \sigma d\tau d\sigma \le 8h + \int_{1/\log N}^h \int_{t}^{t+h} |f'(\sigma+i \tau)|^2 \sigma d\tau d\sigma .\] Using the bound $|f'(\sigma+i \tau)| \le 1/\sigma$ in the integral term, where $1/\log N \le \sigma \le h\le1$, we infer from this that \[ \| f \|_{\operatorname{BMOA}}^2 \le \log\log N+O(1). \] The optimality of the latter bound is seen by considering the function \[ g(s):=\sum_{k\le \log\log N} \left[e^{e^k}\right]^{-s} , \] that satisfies $\| g \|_{\mathfrak{B}} \asymp 1$ and $\| g \|_{\operatorname{BMOA}}^2 \asymp \log \log N$. Here the first relation is trivial, and the second follows from \eqref{eq:feff} of Theorem~\ref{thm:fefferman}. Hence \eqref{eq:asymp1} has been established. Finally, to prove \eqref{eq:asymp2}, we first infer from \eqref{eq:asymp} that \[ \Vert f\Vert_\infty\ll \log\log N\Vert f\Vert_{\mathcal{B}}\ll \log\log N \Vert f\Vert_{BMOA}. \] The example $f(s)=\sum_{2\leq n\leq N} \frac{1}{n\log n} n^{-s}$ used above satisfies $\Vert f\Vert_{BMOA}\asymp 1$ by \eqref{eq:hilbert} and trivially $\Vert f\Vert_{\infty}\asymp \log\log N$. This establishes the reverse inequality in \eqref{eq:asymp2}. \end{proof} We close this section by establishing a lemma that will be used in two different ways in the next section. In contrast to the preceding comparison results, as well as those of \cite{DP}, Lemma~\ref{lem:finite} is a purely multiplicative result, and we therefore state it for polynomials in several complex variables. \begin{lemma}\label{lem:finite} There exists an absolute constant $C$ such that if $F$ is a holomorphic polynomial of degree $d\ge 2$ in $n\ge 1$ complex variables, then \begin{equation} \label{eq:sub} \| F \|_{\infty} \le C \| F\|_{n \log d}. \end{equation} \end{lemma} \begin{proof} We now apply a multi-dimensional version of Bernstein's inequality, namely \[ |F(z)-F(w)|\leq \frac{\pi}{2} d \Vert z-w\Vert_\infty \Vert F\Vert_\infty ,\] which holds for holomorphic polynomials $F$ in $n$ complex variables and all points $z=(z_j)$ and $ w=(w_j)$ on $\mathbb{T}^n$ (see \cite[pp. 125--126]{MAHE}). This implies that if $w$ is a point on $\mathbb{T}^n$ at which $|F(w)|=\| F\|_\infty $, then $|F(z)|\ge \| F \|_\infty/2$ whenever we have $|w_j-z_j|\le \frac{c}{d} $ for $j\le \pi_0(n)$ with $c:=1/\pi$. It follows that \[ \| F \|_q\ge \frac{1}{2} (2c)^{n/q} d^{-n/q} \| F\|_\infty \] and hence we get \[ \| F\|_\infty \le 2e (2c)^{-1/\log d} \| F\|_{n \log d} \le 2 \pi^{1/\log 2} \| F\|_{n \log d}. \] \end{proof} \section{The partial sum operator for Dirichlet series and Riesz projection on $\mathbb{T}$}\label{sec:partial} We will now make some remarks about the partial sum operator $S_N$ which is defined by the formula \[ (S_N f)(s):=\sum_{n\le N} a_n n^{-s} \] for $f(s)=\sum_{n=1}^{\infty} a_n n^{-s}$. We are interested in computing the norm of $S_N$ when it acts on $\mathcal{H}^q$. In what follows, we denote this norm by $\| S_N \|_q$. Most of what is known about $\|S_N\|_q$ for different values of $q$ and $N$ can be deduced from an idea that goes back to Helson \cite{He}, by which we may effectively rewrite $S_N$ as a one-dimensional Riesz projection. We will now state and prove a theorem in this vein that can be obtained almost immediately by combining \cite[Thm. 8.7.2]{R} with the optimal bounds of Hollenbeck and Verbitsky \cite{HV} for Riesz projection on $\mathbb{T}$. We choose to offer a detailed proof, however, because it makes the transference to one-dimensional Riesz projection explicit and leads to nontrivial quantitative estimates. We will consider a somewhat more general situation to emphasize the main idea of the transference to the unit circle. To this end, we fix a completely multiplicative function $g(n)\ge 1$ such that $g(n)\to \infty$ when $n\to \infty$. By considering $g(p^k)$ for $k\geq 1$, we see that this means that $g(p)>1$ for all primes $p$ and that $\lim_{p\to\infty} g(p)=\infty.$ We then introduce the projection \[ P_{g,x} \left(\sum_{n=1}^\infty a_n n^{-s}\right) := \sum_{g(n)\le x} a_n n^{-s}. \] We see that $S_N=P_{g,N}$ in the special case when $g(n)=n$. \begin{theorem}\label{show} Suppose that $g$ is a completely multiplicative function taking only positive values and that $g(n)\to\infty$ when $n\to\infty$. Then \begin{equation}\label{key} \sup_{x\ge 1} \| P_{g,x} \|_{\mathcal{H}^q}= \frac{1}{\sin(\pi/q)} \end{equation} for $1<q<\infty$. \end{theorem} \begin{proof} We consider first the easy direction, namely that $\sup_{x\geq 1} \| P_{g,x} \|_q\geq \frac{1}{\sin(\pi/q)}$. It is classical and straightforward to check that the norm of the Riesz projection equals $\sup_{N\geq 1} \| \widetilde S_N \|_q,$ where $\widetilde S_N$ is the 1-dimensional partial sum operator acting on $H^q(\mathbb{T})$. On the other hand, clearly $\| P_{g,g(2^N)} \|_q\ge \| \widetilde S_{N} \|_q$, so the claim follows from the fact that the bound of Hollenbeck and Verbitsky is optimal. In order to treat the more interesting direction, we begin by fixing a positive integer $Q$ that will be specified later, depending on $x$. Then for every prime $p$, we choose a positive integer $m_p$ such that \[ \left| Q \log g(p) -m_p\right| \le \frac{1}{2}. \] This is possible because $g(p)>1$ by the assumption that $g(n)\to \infty$. Now let $z$ be a point on the unit circle. Write $n$ in multi-index notation as $n=p^{\alpha(n)}=\prod_{p} p^{\alpha_p(n)}$, set accordingly $\beta(n)=\sum_{p} \alpha_{p}(n) m_p$ and consider the transformation \[ T_{g,Q,z} \left(\sum_{n=1}^\infty a_n n^{-s}\right)=\sum_{n=1}^\infty a_n z^{\beta(n)} n^{-s}. \] Taking the Bohr lift, we see that the effect of $T_{g,Q,z}$ acting on $f$ is that each variable is multiplied by a unimodular number. This shows that $T_{g,Q,z}$ acts isometrically on $\mathscr{H}^q$ for every $q>0$. Note that by construction \[ \left| \beta(n) - Q \log g(n) \right|\le \frac{1}{2}\big|\alpha(n)\big|=\frac{1}{2}\Omega(n), \] where $\Omega(n)$ is the number of prime factors of $n$ counting with multiplicity. We now choose the parameter $Q$ so large that \begin{equation}\label{eq:req} \max_{g(n)\leq x }\beta(n)< \inf_{g(n)>x}\beta(n). \end{equation} This is obtained if \begin{equation} \label{eq:sep} \inf_{g(n)>x} \big(Q\log g(n)- \frac{1}{2} \Omega(n) \big) > \max_{g(n)\le x} \big(Q \log g(n) + \frac{1}{2} \Omega(n) \big). \end{equation} We may achieve \eqref{eq:sep} because the assumptions on $g$ imply that $\log g(n) \ge c \Omega(n) $ for some $c>0$. Namely, this inequality clearly yields that \[ \inf_{g(n)>x} \big(Q\log g(n)- \frac{1}{2} \Omega(n) \big)\ge (Q-c^{-1}/2) \log (x+1) \] for some $\varepsilon>0$, while on the other hand \[ \max_{g(n)\le x} \big(Q \log g(n) + \frac{1}{2} \Omega(n) \big) \le (Q+c^{-1}/2) \log x.\] Having made this choice of $Q$, we see that \eqref{eq:req} ensures that we may write \[ (T_{g,Q,z} P_{g,x} f)(s)=\sum_{\beta(n)\le x'} a_n z^{\beta(n)} n^{-s}\] for a suitable $x'$. Hence, using the Bohr lift $B$, the translation invariance of $m_\infty$ under $T_z$ with $ T_{z}(w)=(w_p z^{m_p})$, Fubini's theorem, and Hollenbeck and Verbitsky's theorem \cite{HV} on the $L^q$ norm of the Riesz projection on $\mathbb{T}$, we get successively: \begin{align*} \Vert S_{N}(f)\Vert_{q}^{q}& =\int_{\mathbb{T}^\infty} \big|S_{N}(Bf)(w)\big|^{q}dm_{\infty}(w)\\ &=\int_{\mathbb{T}}\Big(\int_{\mathbb{T}^\infty} \big|S_{N}(Bf)(T_{z} w)\big|^{q}dm_{\infty}(w)\Big)dm(z)\\ &=\int_{\ T^\infty}\Big(\int_{\mathbb{T}} \big|\sum_{n=1}^N a_n w^{\alpha(n)}z^{\beta(n)} \big|^{q}dm(z)\Big)dm_{\infty}(w)\\ &\leq \Big(\frac{1}{\sin \pi/q}\Big)^q \int_{\mathbb{T}^\infty}\Big(\int_{\mathbb{T}} \big|\sum_{n=1}^\infty a_n w^{\alpha(n)}z^{\beta(n)}\big|^{q}dm(z)\Big)dm_{\infty}(w)\\ &= \Big(\frac{1}{\sin \pi/q}\Big)^q \Vert f\Vert_{q}^{q}.\end{align*} \end{proof} If we specialize to the case when $g(n)=n$ and $x= N$, it is of interest to see how large the intermediate parameter $Q$ has to be to ensure that \eqref{eq:sep} holds. We see that this happens if \begin{equation} \label{eq:special} Q\log(N+j)-\frac{\log (N+j)}{2\log 2}>Q\log N+\frac{\log N}{2\log 2}.\end{equation} when $j=1,2 ...$. We may assume that $Q>1/(2\log 2)$ so that \[ Q\log(N+j)-\frac{\log (N+j)}{2\log 2} \ge Q \log N - \frac{\log N}{2\log 2} + \Big(Q- \frac{1}{2\log 2}\Big) \frac{1}{2N}. \] This shows that \eqref{eq:special} holds if we choose \begin{equation} \label{eq:require} Q\ge c N \log N \end{equation} with $c>0$ large enough. Since $T_{n,Q} S_N f$ will be a polynomial of degree at most $Q \log N+(\log N)(2\log 2)$ in the dummy variable $z$, we may now, following again the reasoning of the above proof, use Lemma~\ref{lem:finite} with $n=1$ and $d= Q \log N+O(\log N)$ to deduce that $\| S_N \|_{\infty} \ll \log N $. We thus recapture a result that was first established in \cite{BCQ} by use of Perron's formula and contour integration. The bound just obtained remains the best known upper bound for $\| S_N\|_{\infty} $. On the other hand, it is known that $\| S_N \|_{\infty}\gg \log \log N$ (obtained for Dirichlet series over powers of a single prime). We are thus far from knowing the right order of magnitude of $\| S_N\|_{\infty}$. A similar situation persists when $q=1$ in which case we have $\log\log N \ll \| S_N\|_1 \ll \log N/\log\log N$ by a result of \cite{BBSS}. We will now show that if we specialize to Dirichlet series over $\mathcal{P}_0$-smooth numbers, then the estimate in the case $q=\infty$ can be improved for certain ultra-thin sets of primes $\mathcal{P}_0$. To this end, we denote by $\mathcal{H}^q(\mathcal{P}_0)$ the subspace of $\mathcal{H}^{q}$ consisting of Dirichlet series over the sequence $\mathcal{N}_0$ of $\mathcal{P}_0$-smooth numbers, and we let $\| S_{N}\|_{\mathcal{H}^q(\mathcal{P})_0}$ be the norm of $S_N$ when restricted to $\mathcal{H}^q(\mathcal{P}_0)$. The crucial observation is that it may now be profitable to apply Lemma~\ref{lem:finite} \emph{before} we make the transference to one-dimensional Riesz projection. Indeed, we observe that the Bohr lift of a Dirichlet polynomial of length $N$ over $\mathcal{P}_0$-smooth numbers will be a polynomial of degree at most $\log N/\log 2$ in $\pi_0(N)$ complex variables. Hence the norm on the right-hand side of \eqref{eq:sub} can be taken to be the $\pi_0(N) \log \log N$-norm. Combining this observation with Theorem~\ref{show}, we then get the following result which yields an improvement when $\pi_0(x)=o(\log x/\log\log x)$. \begin{theorem}\label{thm:reis} There exists an absolute constant $C$ such that \begin{equation}\label{d}\Vert S_N\Vert_{\mathcal{H}^{\infty}(\mathcal{P}_0)} \leq C \pi_0(N) \log\log N \end{equation} when $\pi_0(N)\ge 1$ and $\log\log N\ge 2$. \end{theorem} Following the proof of \cite[Thm. 5.2]{BBSS} word for word, we may obtain a similar result for $\| S_N \|_{\mathcal{H}^{1}(\mathcal{P}_0)}$ with $\pi_0(N)\log\log N$ replaced by the logarithm of the maximal order of the divisor function at $N$ when restricted to $\mathcal{N}_0$. In contrast to \eqref{d}, this bound is nontrivial for all sets of primes $\mathcal{P}_0$. In particular, it yields $\| S_N\|_1 \ll \log\log N$ when $\mathcal{P}_0$ is a finite set and $\| S_N\|_1\ll \log N/\log\log N$ when $\mathcal{P}_0$ is the set of all primes, since then the logarithm of the maximal order of the divisor function at $N$ is $O(\log N/\log\log N)$.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In \cite{JePu}, the notion of a dimension effect algebra was introduced as a counterpart of the notion of a dimension group. Recall that a dimension group (or a Riesz group) is a directed, unperforated interpolation group. By \cite{EHS}, dimension groups can be also characterized as direct limits of directed systems of simplicial groups. In analogy with the latter characterization, dimension effect algebras were defined as direct limits of directed systems of finite effect algebras with the Riesz decomposition property. It is well known that the latter class of effect algebras corresponds to the class of finite MV-algebras, and in analogy with simplicial groups, we call them simplicial effect algebras. It turns out that dimension effect algebras are exactly the unit intervals in unital dimension groups, and simplicial effect algebras are exactly the unit intervals in unital simplicial groups. In \cite {JePu}, an intrinsic characterization of dimension effect algebras was found, and also a categorical equivalence between countable dimension effect algebras and unital AF C*-algebras was shown \cite[Theorem 5.2]{JePu}. In this paper we continue the study of dimension effect algebras. In particular, we study the tensor product of dimension effect algebras in the category of effect algebras. We recall that the tensor product in the category of effect algebras exists, and its construction was described in \cite{Dvu}. We first prove that the tensor product of simplicial effect algebras is again a simplicial effect algebra and is (up to isomorphism) the unit interval in the tensor product of the corresponding unital simplicial groups (Theorem \ref{th:tenprodfmv}). Then we extend this result to any dimension effect algebras, using the fact that every dimension effect algebra is a direct limit of a directed system of simplicial effect algebras. Namely, we prove that the tensor product of dimension effect algebras is a dimension effect algebra (Theorem \ref{th:tenproddimea}), and is (up to isomorphism) the unit interval in the tensor product of the corresponding dimension groups (Corollary \ref{cor:unigroup}). We conjecture that this last statement holds more generally for tensor products of interval effect algebras and their universal groups. We note that the categorical equivalence between effect algebras with RDP and interpolation groups proved in \cite[Theorem 3.8]{JePu}, or the known constructions of tensor products in the category of interval effect algebras \cite{FGB} cannot be applied here, since the category of effect algebras is much larger than the category of effect algebras with RDP or interval effect algebras. In the last section, we apply our results to the interval ${\mathbb R}^+[0,1]$ and construct a directed system of simplicial groups that has this interval as its direct limit. \section{Preliminaries} The notion of an effect algebra was introduced by D.J. Foulis and M.K. Bennett in \cite{FoBe}. An alternative definition of so called \emph{D-poset} was introduced in \cite{KCh}. Effect algebras and D-posets are categorically equivalent structures \cite{DvPu}. \begin{definition}\label{de:ea} An \emph{effect algebra} is an algebraic system $(E;0,1,\oplus)$, where $\oplus$ is a partial binary operation and $0$ and $1$ are constants, such that the following axioms are satisfied for $a,b,c\in E$: \begin{enumerate} \item[{\rm(i)}] if $a\oplus b$ is defined the $b\oplus a$ is defined and $a\oplus b=b\oplus a$ (commutativity); \item[{\rm(ii)}] if $a\oplus b$ and $(a\oplus b)\oplus c$ are defined, then $a\oplus(b\oplus c)$ is defined and $(a\oplus b)\oplus c=a\oplus(b\oplus c)$ (associativity); \item[{\rm(iii)}] for every $a\in E$ there is a unique $a^{\perp}\in E$ such that $a\oplus a^{\perp}=1$; \item[{\rm(iv)}] if $a\oplus 1$ is defined then $a=0$. \end{enumerate} \end{definition} In what follows, if we write $a\oplus b$, $a,b\in E$, we tacitly assume that $a\oplus b$ is defined in $E$. The operation $\oplus$ can be extended to the $\oplus$-sum of finitely many elements by recurrence in an obvious way. Owing to commutativity and associativity, the element $a_1\oplus a_2\oplus \cdots \oplus a_n$ is ambiguously defined. In any effect algebra a partial order can be defined as follows: $a\leq b$ if there is $c\in E$ with $a\oplus c=b$. In this partial order, $0$ is the smallest and $1$ is the greatest element in $E$. Moreover, if $a\oplus c_1=a\oplus c_2$, then $c_1=c_2$, and we define $c=b\ominus a$ iff $a\oplus c=b$. In particular, $1\ominus a=a^{\perp}$ is called the \emph{orthosupplement} of $a$. We say that $a,b\in E$ are \emph{orthogonal}, written $a\perp b$, iff $a\oplus b$ exists in $E$. It can be shown that $a\perp b$ iff $a\leq b^{\perp}$. An effect algebra which is a lattice with respect to the above ordering is called a \emph{lattice effect algebra}. Let $E$ and $F$ be effect algebras. A mapping $\phi:E\to F$ is an \emph{effect algebra morphism} iff $\phi (1)=1$ and $\phi(e\oplus f)=\phi(e)\oplus \phi(f)$ whenever $e\oplus f$ is defined in $E$. The category of effect algebras with effect algebra morphisms will be denoted by $\ea$. \subsection{Interval effect algebras and RDP} Important examples of effect algebras are obtained in the following way. Let $(G,G^+,0)$ be a (additively written) partially ordered abelian group with a positive cone $G^+$ and neutral element $0$. For $a\in G^+$ define the interval $G[0,a]:=\{ x\in G: 0\leq x\leq a\}$. Then $G[0,a]$ can be endowed with a structure of an effect algebra by defining $x\perp y$ iff $x+y\leq a$, and then putting $a\oplus b:=a+b$. Effect algebras obtained in this way are called \emph{interval effect algebras}. We note that a prototype of effect algebras is the interval $[0,I]$ in the group of self-adjoint operators on a Hilbert space, so-called algebra of Hilbert space effects. Hilbert space effects play an important role in quantum measurement theory, and the abstract definition was motivated by this example. On the other hand, there are effect algebras that are not interval effect algebras, see e.g. \cite{Na}. The partially ordered abelian group $G$ is \emph{directed} if $G=G^+-G^+$. An element $u\in G^+$ is an \emph{order unit} if for all $a\in G$, $a\leq nu$ for some $n\in {\mathbb N}$. If $G$ has an order unit $u$, it is directed, indeed, if $g\leq nu$, then $g=nu-(nu-g)$. An element $u\in G^+$ is called a \emph{generating unit} if every $a\in G^+$ is a finite sum of (not necessarily different) elements of the interval $G[0,u]$. Clearly, a generating unit is an order unit, the converse may be false. If $G$ and $H$ are partially ordered abelian groups, then a group homomorphism $\phi:G\to H$ is \emph{positive} if $\phi(G^+)\subseteq H^+$. An isomorphism $\phi : G\to H$ is an \emph{order isomorphism} if $\phi(G^+)=H^+$. If $G$ and $H$ have order units $u$ and $v$, respectively, then a positive homomorphism $\phi:G\to H$ is called \emph{unital} if $\phi(u)=v$. The category of partially ordered abelian groups having an order unit, with positive unital homomorphisms will be denoted by $\pog$. Relations between interval effect algebras and partially ordered abelian groups are described in the following theorem, proved in \cite{BeFo}. Recall that a mapping $\phi:E\to K$, where $E$ is an effect algebra and $K$ is any abelian group, is called a \emph{$K$-valued measure} on $E$ if $\phi(a\oplus b)=\phi(a)+\phi(b)$ whenever $a\oplus b$ is defined in $E$. \begin{theorem}\label{th:unigroup} Let $E$ be an interval effect algebra. Then there exists a unique (up to isomorphism) partially ordered directed abelian group $(G,G^+)$ and an element $u\in G^+$ such that the following conditions are satisfied: \begin{enumerate} \item[{\rm(i)}] $E$ is isomorphic to the interval effect algebra $G^+[0,u]$. \item[{\rm(ii)}] $u$ is a generating unit. \item[{\rm(iii)}] Every $K$-valued measure $\phi:E\to K$ can be extended uniquely to a group homomorphism $\phi^*:G\to K$. \end{enumerate} \end{theorem} The group $G$ in the preceding theorem is called a \emph{universal group} for $E$, and will be denoted by $G_E$. In what follows we consider a property that ensures that a partially ordered group with order unit is the universal group for its unit interval. There are examples (see \cite[Example 11.3, 11.5]{FoGr}) that show that this is not true in general. A partially ordered abelian group $G$ is said to have the \emph{Riesz interpolation property} (RIP), or to be an \emph{interpolation group}, if given $a_i,b_j$ ($1\leq i\leq m, 1\leq j\leq n$) with $a_i\leq b_j$ for all $i,j$, there exists $c\in G$ such that $a_i\leq c\leq b_j$ for all $i,j$. The Riesz interpolation property is equivalent to the \emph{Riesz decomposition property} (RDP): given $a_i,b_j\in G^+$, ($1\leq i\leq m, 1\leq j\leq n$) with $\sum a_i=\sum b_j$, there exist $c_{ij}\in G^+$ with $a_i=\sum_jc_{ij}, b_j=\sum_ic_{ij}$. An equivalent definition of the RDP is as follows: given $a,b_i$ in $G^+$, $i\leq n$ with $a\leq \sum_{i\leq n}b_i$, there exist $a_i\in G^+$ with $a_i\leq b_i, i\leq n$, and $a=\sum_{i\leq n}a_i$. To verify these properties, it is only necessary to consider the case $m=n=2$ (cf. \cite{Fuchs, Good}). For interpolation groups we have the following theorem \cite{Pu}, \cite[Theorem 3.5]{JePu}. \begin{theorem}\label{th:rdpunigroup} Let $G$ be an interpolation group with order unit $u$. Put $E:=G^+[0,u]$. Then $(G,u)$ is the universal group for $E$. \end{theorem} In a similar way as for partially ordered abelian groups, RDP can be defined for effect algebras. We say that an effect algebra $E$ has the \emph{Riesz decomposition property} (RDP) if one of the following equivalent properties is satisfied: \begin{enumerate} \item[(R1)] $a\leq b_1\oplus b_2\oplus \cdots \oplus b_n$ implies $a=a_1\oplus a_2\oplus \cdots \oplus a_n$ with $a_i\leq b_i, i\leq n$; \item[(R2)] $\oplus_{i\leq m}a_i=\oplus_{j\leq m} b_j$, $m,n\in {\mathbb N}$, implies $a_i=\oplus_jc_{ij}, i\leq m$, and $b_j=\oplus_ic_{ij}, j\leq n$, where $c_{ij}\in E$. \end{enumerate} Similarly as for partially ordered groups, it suffices to consider the case $m=n=2$. Let us remark that RIP can be also defined for effect algebras. In contrast with the case of partially ordered abelian groups, RIP and RDP are not equivalent for effect algebras: RDP implies RIP, but there are examples of effect algebras with RIP which do not have RDP (e.g., the "diamond" is lattice ordered effect algebra that does not satisfy RDP, \cite{DvPu}). It was proved by Ravindran \cite{Rav}, that every effect algebra with RDP is an interval effect algebra, and its universal group is an interpolation group. Ravindran's result can be extended to a categorical equivalence between the category of effect algebras with RDP with effect algebra morphisms and the category of interpolation groups with order unit with positive unital group homomorphisms, \cite[Theorem 3.8]{JePu}. \section{Dimension groups and dimension effect algebras} In this section, we study dimension groups and their effect algebra counterpart, introduced in \cite{JePu}. These are interpolation groups with some additional properties. A partially ordered abelian group $G$ is called \emph{unperforated} if given $n\in {\mathbb N}$ and $a\in G$, then $na\in G^+$ implies $a\in G^+$. Every Archimedean, directed abelian group is unperforated \cite[Proposition 1.24]{Good}, and also every lattice ordered abelian group is unperforated \cite[Proposition 1.22]{Good}. \begin{definition}\label{de:dingr} {\rm \cite{Good}} A partially ordered group $G$ is a \emph{dimension group} (or a \emph{Riesz group}) if it is directed, unperforated and has the interpolation property. \end{definition} A simple example of a dimension group is as follows. \begin{definition}\label{de:simplicgr} {\rm \cite[Definition p. 183]{Good}, \cite{GoHa}} A \emph{simplicial group} is any partially ordered abelian group that is isomorphic (as partially ordered abelian group) to ${\mathbb Z}^n$ (with the product ordering) for some nonnegative integer $n$. A \emph{simplicial basis} for a simplicial group $G$ is any basis $(x_1,\ldots,x_n)$ for $G$ as a free abelian group such that $G^+={\mathbb Z}^+x_1+\cdots +{\mathbb Z}^+x_n$. \end{definition} It was proved by Effros, Handelman and Shen \cite{EHS} that the dimension groups with order unit are precisely the direct limits of directed systems of simplicial groups with an order unit in the category $\pog$. Note that an element $v\in {\mathbb Z}^r$ is an order unit if and only if all of its coordinates are strictly positive. In this case, the interval $({\mathbb Z}^+)^r[0,v]$ is the direct product of finite chains $(0,1,\ldots,v_i), i=1,2,\ldots,r$ and therefore is a finite effect algebra with RDP. Conversely, every finite effect algebra with RDP is a unit interval in a simplicial group. Below, such effect algebras will be called \emph{simplicial}. In analogy with dimension groups, in \cite{JePu}, direct limits of directed systems of simplicial effect algebras have been called \emph{dimension effect algebras}. It was shown that an effect algebra is a dimension effect algebra if and only if its universal group is a dimension group. An intrinsic characterization of dimension effect algebras was found in \cite[Theorem 4.2]{JePu}. For the convenience of the readers, we give a short description of the directed system and direct limit of effect algebras \cite[Definition 1.9.36]{DvPu}. A \emph{directed system of effect algebras} is a family $A_I:=(A_i; (f_{ij}: A_j\to A_i, i,j\in I, j\leq i)$ where $(I,\leq)$ is a directed set, $A_i$ is an effect algebra for each $i\in I$, and $f_{ij}$ is a morphism such that \begin{enumerate} \item[(i1)] $f_{ii}=id_{A_i}$ fir every $i\in I$; \item[(i2)] if $m\leq j\leq i$ in $I$, then $f_{ij}f_{jm}=f_{im}$. \end{enumerate} Let $A_I$ be a directed system of effect algebras, then $\underline{f}:=(A; (f_i:A_i\to A; i\in I))$ is called the \emph{direct limit} of $A_I$ iff the following conditions hold: \begin{enumerate} \item[(ii1)] $A$ is an effect algebra; $f_i$ is a morphism for each $i\in I$; \item[(ii2)] if $j\leq i$ in $I$, then $f_if_{ij}=f_j$ (i.e., $\underline{f}$ is compatible with $A_I$); \item[(ii3)] if $\underline{g}:=(B; (g_i:A_i\to B, i\in I))$ is any system compatible with $A_I$, then there exists exactly one morphism $g:A\to B$ such that $gf_i=g_i$, for every $i\in I$. \end{enumerate} It was proved (cf. \cite[Theorem 1.9.27]{DvPu}) that the direct limit in the category of effect algebras exists. A sketch of the construction of the direct limit is as follows. Let $A= \dot{\cup}_{i\in I}A_i$ be the disjoint union of $A_i, i\in I$. Define a relation $\equiv$ on $A$ as follows. Put $a\equiv b$ ($a\in A_i, b\in A_j$) if there exists a $k\in I$ with $i,j\leq k$ such that $f_{ki}(a)=f_{kj}(b)$ in $A_k$. Then $\equiv$ is an equivalence relation, and the quotient $\bar{A}:=A/\equiv$ can be organized into an effect algebra with the operation $\oplus$ defined as follows: let $\bar{a}$ denotes the equivalence class corresponding to $a$. For $a\in A_I, b\in A_j$, $\bar{a}\oplus \bar{b}$ is defined iff there is $k\in I$, $i,j\leq k$ such that $f_{ki}(a)\oplus f_{kj}(b)$ exists in $A_k$, and then $\bar{a}\oplus \bar{b}=\overline{(f_{ki}(a)\oplus f_{kj}(b))}$ in $\bar{A}$. For every $i\in I$, define $f_i: A_i\to A/\equiv$ as the natural projection $f_i(a)=\bar{a}$. Then $\lim_{\rightarrow} A:=(\bar{A}; f_i:A_i\to \bar{A}, i\in I)$ is the desired direct limit.\label{pg:direct} From this construction, it can be derived that properties involving finite number of elements, such as RDP or being a dimension effect algebra (cf. the characterization in \cite[Thm. 4.2]{JePu}), are preserved under direct limits in $\ea$. \section{Tensor product of dimension effect algebras} The tensor product in the category ${\bf EA}$ is defined below as an universal bimorphism. We will show that such a tensor product always exists and that it is essentially given by the construction in \cite{Dvu}, see also \cite[Chap. 4.2]{DvPu}. Let $E,F,L$ be effect algebras. A mapping $\beta: E\times F\to L$ is called a \emph{bimorphism} if \begin{enumerate} \item[(i)] $a,b\in E$ with $a\perp b$, $q\in F$ imply $\beta(a,q)\perp \beta(b,q)$ and $\beta(a\oplus b,q)=\beta(a,q)\oplus \beta(b,q)$; \item[(ii)] $c,d\in F$ with $c\perp d$, $p\in E$ imply $\beta(p,c)\perp \beta(p,d)$ and $\beta(p,(c\oplus d))=\beta(p,c)\oplus \beta(p,d)$; \item[(iii)] $\beta(1,1)=1$. \end{enumerate} \begin{definition}\label{de:tenprodea} Let $E$ and $F$ be effect algebras. A pair $(T,\tau)$ consisting of an effect algebra $T$ and a bimorphism $\tau:E\times F \to T$ is said to be the \emph{tensor product} of $E$ and $F$ if whenever $L$ is an effect algebra and $\beta : E\times F \to L$ is a bimorphism, there exists a unique morphism $\phi :T\to L$ such that $\beta=\phi \circ \tau$. \end{definition} It is clear that if the tensor product exists, it is unique up to isomorphism. We will use the notation $E\otimes F$ for the effect algebra $T$ and $\otimes$ for the bimorphism $\tau$: $\tau(e,f)=e\otimes f\in E\otimes F$. \begin{theorem}\label{th:tenprodea} The tensor product always exists in ${\bf EA}$. \end{theorem} \begin{proof} The theorem was essentially proved in \cite[Theorem 7.2]{Dvu}, see also \cite[Theorem 4.2.2]{DvPu}. There a somewhat different definition of a tensor product is considered and the bimorphisms are assumed nontrivial, that is, the target algebra is required to satisfy $0\ne 1$. If at least one such bimorphism exists, it is easy to see that \cite{Dvu} provides a construction of a tensor product in our sense. On the other hand, if there are no nontrivial bimorphisms, then the tensor product is given by the one-element effect algebra $\{0=1\}$ and the unique bimorphism $E\times F\to \{0\}$. \end{proof} The tensor product of dimension groups in the category $\pog$ was studied by Goodearl and Handelman \cite{GoHa} and it was proved that such a tensor product is a dimension group as well. Recall that the tensor product of $G_1$ and $G_2$ in $\pog$ can be constructed as the abelian group tensor product $G_1\otimes G_2$, endowed with the positive cone $G_1^+\otimes G_2^+$, generated by simple tensors of positive elements. Our aim in this section is to describe the tensor product of dimension effect algebras in the category ${\bf EA}$. Note that we cannot directly apply the above result via the categorical equivalence of \cite[Theorem 3.8]{JePu}, since the category ${\bf EA}$ is much larger than the category of effect algebras with RDP. We first consider the case of simplicial effect algebras. Let $E$ and $F$ be simplicial effect algebras, with atoms \[ (e_1,\dots,e_n),\qquad (f_1,\dots,f_m) \] and unit elements \[ u=\sum_iu_ie_i, \qquad v=\sum_jv_jf_j, \] respectively. Then $G_E$ and $G_F$ are simplicial groups and $G_E\otimes G_F$ is a simplicial group with generators \[ g_{ij}=e_i\otimes f_j, i=1,\dots,n; j=1,\dots,m. \] Hence the unit interval $G_E\otimes G_F[0,u\otimes v]$ is a simplicial effect algebra with atoms $g_{ij}$ and and unit element $w=\sum_{i,j} u_iv_jg_{ij}$. \begin{theorem}\label{th:tenprodfmv} Tensor product of simplicial effect algebras in the category ${\bf EA}$ is a simplicial effect algebra, namely \[ E\otimes F\simeq G_E\otimes G_F[0,u\otimes v]. \] \end{theorem} \begin{proof} Let $G$ denote the simplicial effect algebra on the right hand side. Obviously, (bi)morphisms on simplicial effect algebras are uniquely determined by their values on the atoms. Let $\tau: E\times F\to G$ be the bimorphism determined by \[ \tau(e_i,f_j)=g_{ij}, \qquad i=1,\dots,n, \ j=1,\dots,m. \] We need to prove that for any effect algebra $H$ and bimorphism $\beta: E\times F\to H$, there is a morphism $\psi: G\to H$, such that \[ \psi(g_{ij})=\beta(e_i,f_j),\qquad i=1,\dots,n, \ j=1,\dots,m. \] Since $g_{ij}$ generate $G$, uniqueness of such a morphism is clear. So let $z\in G$, then $z=\sum_{i,j}z_{ij}g_{ij}$, for $z_{ij}\le u_iv_j$ for all $i$ and $j$. There are nonnegative integers $q_{ij}$, $r_{ij}$ such that \[ z_{ij}= v_jq_{ij}+r_{ij},\qquad r_{ij}< v_j, \] then since $v_j q_{ij}\le z_{ij}\le u_iv_j$, we have $q_{ij}\le u_i$, with equality only if $r_{ij}=0$. Then $a_j:=\sum_iq_{ij} e_i\in E$ and $r_{ij}f_j\in F$. We have \begin{align*} z&=\sum_j (\sum_iq_{ij}v_j g_{ij}+ \sum_{i} r_{ij}g_{ij})\\ &= \sum_j \tau(a_j,v_j f_j)+ \sum_{i, r_{ij}>0}\tau(e_i,r_{ij}f_j) \end{align*} Put $a'_j:=\sum_{i, r_{ij}>0} e_i$, then $a_j\perp a_j'$. Now we can write \begin{align*} H\ni 1=\beta(u,v)&=\sum_j\beta(u,v_jf_j)\\ &= \sum_j \left[\beta(a_j, v_jf_j)+ \beta(a'_j, v_jf_j)+\beta(u-(a_j+a_j'), v_jf_j)\right]\\ &= \sum_j [\beta(a_j, v_jf_j)+\sum_{i}\beta (e_i, r_{ij}f_j)+ \sum_{i, r_{ij}>0}\beta(e_i, (v_j-r_{ij})f_j)\\ &+\beta(u-(a_j+a_j'), v_jf_j)] \end{align*} It follows that \begin{align*} \sum_{i,j} z_{ij}\beta(e_i,f_j)&=\sum_{i,j} [q_{ij}v_j\beta(e_i,f_j)+r_{ij}\beta(e_i,f_j)]\\ &=\sum_j [\beta(a_j, v_jf_j)+\sum_i\beta (e_i, r_{ij}f_j)] \end{align*} is a well defined element in $H$ and we may put \[ \psi(z)=\sum_{i,j} z_{ij}\beta(e_i,f_j), \] which clearly defines a morphism $G\to H$. \end{proof} Let \begin{align*} A_I&=(A_i; (f_{ij}:A_j \to A_i); i,j\in I, j\leq i),\\ B_J&=(B_k; (g_{k\ell}:B_\ell \to B_{k}); k,\ell \in J, \ell\leq k) \end{align*} be directed systems of simplicial effect algebras. Let us define the index set $(\mathcal I,\leq)$ as the product $I\times J$ with pointwise ordering. By the previous theorem, each $A_i\otimes B_k$, $(i,k)\in \mathcal I$ is a simplicial effect algebra. Let $(j,\ell)\in \mathcal I$ be such that $(j,\ell)\leq (i,k)$, then we have morphisms $f_{ij}:A_j\to A_i$ and $g_{k\ell}: B_\ell\to B_k$. For $a\in A_j$, $b\in B_\ell$, put $\beta(a,b)=f_{ij}(a)\otimes g_{k\ell}(b)\in A_i\otimes B_k$, this defines a bimorphism $A_j\times B_\ell\to A_i\otimes B_k$. By properties of tensor product, this extends to a unique morphism $f_{ij}\otimes g_{k\ell}: A_j\otimes B_{\ell}\to A_i\otimes B_k$. \begin{theorem}\label{th:directed} Let \begin{eqnarray*} & A_I\otimes B_J:=(A_i\otimes B_k; (f_{ij}\otimes g_{k\ell}:A_j\otimes B_\ell \to A_i\otimes B_{k}), \\ & (i,k), (j,\ell)\in \mathcal I, (j,\ell)\leq (i,k)). \end{eqnarray*} Then $A_I\otimes B_J$ is a directed system of simplicial effect algebras. \end{theorem} \begin{proof} We have to check properties (i1) and (i2). For (i1), note that $f_{ii}=id_{A_i}$, $g_{kk}=id_{B_k}$ imply $f_{ii}\otimes g_{kk}=id_{A_i\otimes B_k}$. For (i2), let $(m,n)\leq (j,\ell)\leq (i,k)$. Then \[ m\leq j\leq i \ \implies \ f_{ij}f_{jm}=f_{im} \] \[n\leq \ell \leq k\ \implies \ g_{k\ell} g_{\ell n}=g_{kn} \] and for $a_m\in A_m, b_n\in B_n$, \begin{eqnarray*} (f_{ij}\otimes g_{k\ell})(f_{jm}\otimes g_{\ell n})(a_m\otimes b_n) &=& (f_{ij}\otimes g_{k\ell})(f_{jm}(a_m)\otimes g_{\ell n}(b_n))\\ &=& f_{ij}f_{jm}(a_m)\otimes g_{k\ell}g_{\ell n}(b_n)\\ &=& f_{im}(a_m)\otimes g_{kn}(b_n)\\ &=& f_{im}\otimes g_{kn}(a_m\otimes b_n). \end{eqnarray*} Since this holds on simple tensors, it extends to whole $A_m\otimes B_n$. \end{proof} \begin{theorem}\label{th:tenproddimea} Let $A_I, B_J$ be directed systems of simplicial effect algebras, and let $(\bar{A};(f_i:A_i\to \bar{A}, i\in I))$ and $(\bar{B}; (g_j:B_j\to \bar{B}, j\in J))$ be their corresponding direct limits. Then $(\bar{A}\otimes \bar{B}; (f_i\otimes g_j:A_i\otimes B_j \to \bar{A}\otimes \bar{B}, i\in I, j\in J))$ is the direct limit of $A_I\otimes B_J$. \end{theorem} \begin{proof} We have to check properties (ii1), (ii2) and (ii3). The first one is clear: since $\bar{A}$, $\bar{B}$ are effect algebras, $\bar{A}\otimes \bar{B}$ is an effect algebra as well. To prove compatibility, let $(j,\ell) \leq(i,k)$. Then $j\leq i, \ell\leq k$ and we have \[ (f_i\otimes g_k)(f_{ij}\otimes g_{k\ell})= f_if_{ij}\otimes g_kg_{k\ell}=f_j\otimes g_{\ell}. \] For (ii3), let $(C; (h_{ij}:A_i\otimes B_j \to C, i\in I, j\in J))$ be another system compatible with $A_I\otimes B_J$ (i.e., $h_{ik}(f_{ij}\otimes g_{k\ell})=h_{j\ell}, j\leq i, \ell\leq k$). Let $a\in \bar A$, $b\in \bar B$. Since $\bar A$ and $\bar B$ are direct limits, there are some indices $i\in I$, $k\in J$ and elements $a_i\in A_i$ and $b_k\in B_k$ such that $a=f_i(a_i)$ and $b=g_k(b_k)$, see the construction on page \pageref{pg:direct}. Define $h(a,b):=h_{ik}(a_i\otimes b_k)$. Then $h:\bar{A}\times \bar{B} \to C$ is a bimorphism, which extends to a morphism $\bar{h}:\bar{A}\otimes \bar{B} \to C$. \end{proof} \begin{corollary}\label{cor:unigroup} Let $E$ and $F$ be dimension effect algebras, and let $G_E$ and $G_F$ be their universal groups with units $u_E$ and $u_F$. Then the tensor product $E\otimes F$ is isomorphic to the unit interval $[0,u_E\otimes u_F]$ in the tensor product $G_E\otimes G_F$ of their universal groups, that is \[ G_E[0,u_E]\otimes G_F[0,v_F]\simeq G_E\otimes G_F[0,u_E\otimes u_F]. \] \end{corollary} \begin{proof} Let $E=\bar A$, $F=\bar B$ be direct limits of directed systems $A_I$ and $B_J$. Each $A_i$, $i\in I$ and $B_k$, $k\in J$ is a simplicial effect algebra and $G_{A_i}$, $G_{B_k}$ are simplicial groups. By \cite[Theorem 4.1]{JePu}, we obtain that $G_E$ is a direct limit of $(G_{A_i}, f_{ij}^*)$, where $f_{ij}^*$ are the unique morphisms in $\pog$, extending $f_{ij}$, similarly for $G_F$. By Theorem \ref{th:tenprodfmv}, $A_i\otimes B_k$ is a simplicial effect algebra and $G_{A_i\otimes B_k}\simeq G_{A_i}\otimes G_{B_k}$. By Theorem \ref{th:tenproddimea}, $E\otimes F$ is the direct limit of the directed system $A_I\otimes B_J$. Since $A_I\otimes B_J$ has RDP, it follows by \cite[Theorem 4.1]{JePu} that the universal group $G_{E\otimes F}$ is a direct limit of the system of universal groups \[ \{ G_{A_i\otimes B_k}\simeq G_{A_i}\otimes G_{B_k}, (f_{ij}\otimes g_{k\ell})^*\simeq f_{ij}^*\otimes g_{k\ell}^*\}, \] Using \cite[Lemma 2.2]{GoHa}, we obtain \[ G_{E\otimes F}\simeq G_E\otimes G_F,\qquad u_{E\otimes F}=u_E\otimes u_F. \] \end{proof} \section{Conclusions and a conjecture} We have proved that the $\ea$ tensor product of dimension effect algebras is again a dimension effect algebra. The tensor product $E\otimes F$ is proved to be the direct limit of a directed system of simplicial effect algebras, obtained as a ''tensor product'' of the directed systems corresponding to dimension effect algebras $E$ and $F$. It is also proved that $E\otimes F$ is (isomorphic to) the unit interval in the $\pog$ tensor product of the corresponding universal groups $G_E$ and $G_F$. We conjecture that this is true for general interval effect algebras. Note that in the category of \emph{interval} effect algebras, the tensor product exists \cite[Theorem 9.1]{FGB} and our conjecture says that it is (isomorphic to) the $\ea$ tensor product. A special class of interval effect algebras are the algebras with RDP. It is again an open question whether in this case the $\ea$ tensor product has RDP. If our conjecture is true, $E\otimes F$ is the unit interval in the $\pog$ tensor product of groups with RDP. As it was shown in \cite[cf. Remark 2.13]{W}, the $\pog$ tensor product of groups with RDP might not have RDP, but in the presence of generating units, RDP holds in an asymptotic form in the sense of \cite{Par}. \section{An example: $\mathbb R[0,1]$} Let us consider the interval $[0,1]$ in $(\mathbb R,\mathbb R^+,0)$. This is clearly a dimension group with order unit $1$ and hence the interval $[0,1]$ is a dimension effect algebra. It was proved in \cite{Pu2} that the ${\bf EA}$ tensor product $[0,1]\otimes [0,1]$ is not lattice ordered and thus not isomorphic to $[0,1]$. By our results, $[0,1]\otimes [0,1]$ is a dimension effect algebra, which is the interval $R\otimes R[0,1\otimes 1]$. Note that the fact that the $\pog$ tensor product $R\otimes R$ is not lattice ordered was shown in \cite{W}. As an example, we will present $[0,1]$ as a direct limit of a directed system of simplicial effect algebras. The tensor product $[0,1]\otimes [0,1]$ is then obtained as a direct limit as in Theorem \ref{th:tenproddimea}. We first need to introduce some notations. For any $n$-tuple \[ A=(x_1,\dots,x_n) \] of elements in $\mathbb R^+$, let $f_A$ denote the positive group homomorphism \[ f_A: \mathbb Z^n\to \mathbb R,\quad e^n_i\mapsto x_i,\ i=1,\dots,n \] and let \[ L(A):= f_A(\mathbb Z^n), \quad L(A)^+:= f_A((\mathbb Z^n)^+),\quad L_>(A)^+:=f_A(\mathbb (Z_>^n)^+), \] where $\mathbb (Z_>^n)^+:= \{\sum_i z_i e^n_i$ with $z_i>0$ for all $i=1,\dots,n\}$. We also use the notations \[ Q(A):=Lin_{\mathbb Q}(A),\quad Q(A)^+:=Q(A)\cap\mathbb R^+. \] Let us define the index set as \[ \mathcal I:=\{A\subset [0,1], \mbox{finite, }\mathbb Q-\mbox{linearly independent, } 1\in L_>(A)^+\}. \] Any $A\subset \mathbb R^+$ with cardinality $n$ can be identified with the $n$-tuple of its elements $(x_1,\dots,x_n)$, indexed so that $x_1<\dots <x_n$. For $A,B\in \mathcal I$, write $B\preceq A$ if $B\subset L(A)^+$. It is easy to see that $\preceq$ is a preorder in $\mathcal I$. \begin{prop}\label{prop:directed} $(\mathcal I,\preceq)$ is directed. \end{prop} For the proof, we need some lemmas. \begin{lemma}\label{lemma:sums} Let $B=(y_1,\dots,y_k)$ be a tuple of elements in $\mathbb R^+$. Assume that for some $1\le N< k$, \[ \sum_{i=1}^Ny_i=\sum_{i=N+1}^ky_i. \] Then there is some tuple $A=(x_1,\dots, x_l)$ of elements in $Q(B)^+$ such that $l<k$ and $y_i\in L(A)^+$, $i=1,\dots,k$. \end{lemma} \begin{proof} We proceed by induction on $k$. By the assumptions, we see that $k$ is at least 2, in which case we have $y_1=y_2$. Put $A:=\{y_2\}$ and we are done. Now let $k>2$ and assume that the assertion is true for tuples of length $k-1$. By reindexing and rearranging the sums, we may assume that $y_k=\min\{y_1,\dots,y_k\}$. Put $y_1':=y_1-y_k$, then $y_1'\in Q(B)^+$ and we have the equality \[ y_1'+y_2+\dots +y_N=y_{N+1}+\dots+ y_{k-1} \] containing only $k-1$ elements. By the induction hypothesis, there is some tuple $A'=(x_1,\dots,x_{l'})$ with elements in $Q(B)^+$ and $l'< k-1$, and some $(k-1)\times l'$ matrix $Z'$ with values in nonnegative integers such that \[ y_1'=f_{A'}(z'_{1\cdot}),\quad y_i=f_{A'}(z'_{i\cdot}),\ i=2,\dots,k-1, \] here $z'_{i\cdot}$ denotes the $i$-th row of $Z'$. Let $A=(x_1,\dots,x_{l'},y_k)$ and \[ Z=\left(\begin{array}{cc} Z' & \begin{array}{c} 1\\ 0\\ \vdots\\ 0\end{array}\\ 0 & 1 \end{array}\right). \] Then $A$ is an $l$-tuple of elements in $Q(B)^+$, $l=l'+1<k$ and $y_i=f_A(z_{i\cdot})\in L(A)^+$ for all $i$. \end{proof} \begin{lemma}\label{lemma:basis_positive} Let $B=(y_1,\dots,y_k)$ be a tuple of elements in $\mathbb R^+$. Then there is a $\mathbb Q$-linearly independent tuple $A=(x_1,\dots,x_n)$ of elements in $Q(B)^+$ such that $y_i\in L(A)^+$, $i=1,\dots,k$. \end{lemma} \begin{proof} If $B$ is $\mathbb Q$-linearly independent, there is nothing to do. Otherwise, there are some $r_i\in\mathbb Q$ such that $\sum_i r_i y_i=0$ with some $r_i\ne 0$. Clearly, by multiplying by a common denominator, we may assume that $r_i\in \mathbb Z$. Assume that the elements are arranged in such a way that \[ r_i\left\{\begin{array}{cc} >0 & \mbox{ for } i=1,\dots,N\\ <0 & \mbox{ for } i=N+1,\dots M\\ =0 & \mbox{ for } i=M+1,\dots k. \end{array}\right. \] Put $p_i=\Pi_{i\ne j\le M} |r_j|$ and let $y'_i=\frac{y_i}{p_i}$ for $i=1,\dots,M$. Clearly, $y_1',\dots, y_M'\in Q(B)^+$. Then by multiplying the equality by $\Pi_{j=1}^M|r_i|^{-1}$, we obtain \[ \sum_{i=1}^Ny'_i=\sum_{i=N+1}^My'_i. \] Applying Lemma \ref{lemma:sums}, there is some $l$-tuple $A'=(x_1',\dots, x_l')\in Q(B)^+$ with $l<M$ such that $y_i'\in L(A')^+$ for $i=1,\dots,M$, so that also $y_i=p_iy_i'\in L(A')^+$, $i=1,\dots,M$. We now repeat the same process with $B'=(x_1',\dots,x_l',y_{M+1},\dots,y_k)$. Since $Q(B')=Q(B)$ and $|B'|<k$, after a finite number of steps we obtain a $\mathbb Q$-linearly independent set $A=\{x_1,\dots,x_n\}$ with the required properties. \end{proof} \noindent \textit{Proof of Proposition \ref{prop:directed}}. Let $B,C\in \mathcal I$, then by Lemma \ref{lemma:basis_positive} there is some $\mathbb Q$-linearly independent tuple $A=(x_1<\dots<x_n)$ of elements in $Q(B\cup C)^+$ such that $B\cup C\subset L(A)^+$. By assumptions, $1\in L_>(B)^+\subset L(A)^+$, so that $1=\sum_i z_ix_i$ for unique coefficients $z_1,\dots, z_n\in \mathbb Z^+$. Assume that $z_{i_0}=0$ for some $i_0$. Let $B=(y_1<\dots<y_k)$. There are some positive integers $v_1,\dots,v_k$ such that $1=\sum_{j=1}^k v_jy_j$ and some nonnegative integers $w^j_1,\dots,w^j_n$ such that $y_j=\sum_iw^j_ix_i$. It follows that \[ 1=\sum_{j=1}^k v_jy_j=\sum_i(\sum_j v_j w^j_i )x_i=\sum_i z_ix_i, \] so that $\sum_j v_j w^j_{i}=z_i$, in particular, $\sum_j v_j w^j_{i_0}=0$. Since all $v_j$ are positive, this implies that $w^j_{i_0}=0$ for all $j$ and we have \[ y_j=\sum_{i\ne i_0} w^j_ix_i. \] Hence $B\subset L(A\setminus \{x_{i_0}\})^+$, similarly also $C\subset L(A\setminus \{x_{i_0}\})^+$. It follows that we may assume that $1\in L_>(A)^+$. This means that $1=\sum_i z_i x_i$ for positive integers $z_i$, which implies that we must have $0<x_i\le 1$. It follows that $A\in \mathcal I$ and $\mathcal I$ is directed. \qed We now construct a directed system of simplicial effect algebras. Let $A\in \mathcal I$. Since $A$ is $\mathbb Q$-linearly independent, $f_A$ is a $\pog$ isomorphism onto its range. Let $E_A$ be the interval $[0,f_A^{-1}(1)]$ in $\mathbb Z^{|A|}$ and let $g_A=f_A|_{E_A}$. Then $g_A$ is an effect algebra isomorphism onto the interval $[0,1]$ in $(L(A),L(A)^+,0)$. Let $B\in \mathcal I$, $B\preceq A$, then since $L(B)^+\subseteq L(A)^+$, we have $g_B(E_B)\subseteq g_A(E_A)$. Put \[ g_{AB}: E_B\to E_A, \qquad g_{AB}=g_A^{-1}g_B, \] then it is clear that \[ \mathcal E=(E_A, A\in \mathcal I; g_{AB}, B\preceq A) \] is a directed system of simplicial effect algebras. \begin{prop} $([0,1]; g_A, A\in \mathcal I)$ is the direct limit of $\mathcal E$. \end{prop} \begin{proof} It is clear that $([0,1]; g_A, A\in \mathcal I)$ is compatible with $\mathcal E$. Note also that any $x\in [0,1]$ is contained in the range of some $g_A$. Indeed, assume that $x\in \mathbb Q\cap[0,1]$, then $x=\tfrac mn$ for $n\in \mathbb N$, $m\in \mathbb Z^+$. Let $A=\{\tfrac1n\}$, then $A\in \mathcal I$ and we have $E_A=[0,n]_{\mathbb Z}$, $x=g_A(m)$. If $x\notin \mathbb Q$, then $A=\{x,1-x\}\in \mathcal I$ and $x\in A\subset g_A(E_A)$. Now let $E$ be an effect algebra and let $k_A: E_A\to E$ be a morphisms for $A\in \mathcal I$, such that $(E; k_A, A\in \mathcal I)$ is compatible with $\mathcal E$. Let $x\in [0,1]$ be in the range of $g_A$ and put \[ \psi(x)=k_A(g_A^{-1}(x)). \] Assume that $B\in \mathcal I$ is such that $x$ is also in the range of $g_B$ and let $C\in \mathcal I$ be such that $A,B\preceq C$. Then $g_A(E_A)\subseteq g_C(E_C)$ and by compatibility \[ k_A(g_A^{-1}(x))=k_Cg_{CA}(g_A^{-1}(x))=k_C(g_C^{-1}(x)). \] Similarly we obtain that $k_B(g_B^{-1}(x))=k_C(g_C^{-1}(x))$, hence $\psi$ is a well defined map. Let $I=\{1\}$, then clearly $I\in \mathcal I$, $E_I=\{0,1\}\subset \mathbb Z$ and we have \[ \psi(0)=k_I(0)=0,\qquad \psi(1)=k_I(1)=1, \] since $k_I$ is an effect algebra morphism. Further, let $x_1,x_2,x\in [0,1]$ be such that $x=x_1+x_2$. Let $A\in \mathcal I$ be such that $x_1,x_2\in g_A(E_A)$, then clearly also $x\in g_A(E_A)$ and we have $g_A^{-1}(x_1)+g_A^{-1}(x_2)=g_A^{-1}(x)$, since $g_A$ is an isomorphism onto its range. Hence \[ \psi(x)=k_A(g_A^{-1}(x))=k_A(g_A^{-1}(x_1)+g_A^{-1}(x_2))=\psi(x_1)+\psi(x_2). \] This proves that $\psi$ is an effect algebra morphism $[0,1]\to E$. Further, for any $A\in \mathcal I$ and $z\in E_A$, \[ k_A(z)=k_A(g_A^{-1}g_A(z))=\psi g_A(z), \] so that $k_A=\psi g_A$. Since $\psi$ is obviously the unique map $[0,1]\to E$ with this property, this proves the statement. \end{proof}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The formation of solar type protostars is triggered by the gravitational collapse of dense fragments of molecular clouds, the so-called prestellar cores. In molecular clouds and prestellar cores, the low temperature and interstellar UV flux promote the formation of icy mantles around the dust grains. These mantles are mainly composed of H$_2$O, CO, CO$_2$, H$_2$CO, or CH$_3$OH \citep[see][]{Boogert2004}. The two last become abundant with the freeze-out of CO. {In addition, the CO freeze-out coupled with the cold conditions enhance the abundance of the deuterated molecules \citep[see][]{Ceccarelli2007, Caselli2008}. } Several theoretical studies have shown that the molecular deuteration is very sensitive to the physical conditions. For instance, the deuteration of gaseous species increases with the total density $n_{\textrm{H}}$ and decreases with the temperature \citep[see][]{Millar1989, Roberts2003}. Similarly, the deuteration of icy species formed on the grain surfaces by H and D atom addition reactions, like H$_2$O, H$_2$CO and CH$_3$OH, depends on the gaseous atomic [D]/[H] ratio, which also increases with the density and the CO freeze-out at low temperatures \citep{Cazaux2011, Taquet2012b, Taquet2013}. In theory, therefore, the deuteration of different mantle species can be used to reconstruct the history of the ice formation and, consequently, of the protostar \citep[e.g.][]{Taquet2013}. In practice, unfortunately, the direct measurement of the deuteration of frozen species is not possible. {Observations of solid HDO towards protostars only yielded upper limits \citep[with HDO/H$_2$O $\lesssim$ few percent,][]{Dartois2003, Parise2003}.} However, one can observe these species where the icy mantles sublimate, for example in the hot corino regions. Since the timescale needed to significantly alter the deuteration after the ice sublimation is longer than the typical age of Class 0 protostars \citep[$\sim 10^5$ versus $\sim 10^4$ yr,][]{Charnley1997, Andre2000}, the measured deuteration of the gaseous mantle species likely reflects that in the ices prior to the sublimation. A comparison between the measured and predicted deuteration in interstellar ices is, therefore, possible. In \citet{Taquet2013}, we did a first study by comparing the predictions of our gas-grain GRAINOBLE model \citep{Taquet2012a} with the observations towards the protostar IRAS16293-2422 (hereafter, IRAS16293). This source displays a very high deuterium fractionation of formaldehyde and methanol \citep[with D/H ratios of 15 and 40 \%, respectively, see][]{Loinard2001, Parise2002, Parise2004} and a lower fractionation of water \citep[0.1 - 3 \%, see][]{Butner2007, Vastel2010, Coutens2012, Coutens2013, Persson2013}. We concluded that the lower fractionation of water with respect to that of formaldehyde and methanol is likely due to a different epoch of formation of the three species. Water is predicted to be mainly produced during the molecular cloud phase, while most of formaldehyde and methanol is formed during the colder and denser prestellar phase. We carried out a similar study using the measured deuteration of H$_2$O, H$_2$CO, and CH$_3$OH towards the outflow shock L1157-B1 and concluded that this site had a similar sequence for the formation of the ice, but in a less dense environment \citep{Codella2012}. Encouraged by these two studies, we want here to extend the analysis to other solar type protostars with the goal to reconstruct the formation history of their ices and compare it with the two previous cases. Ultimately, a similar study in a large sample of solar type protostars will provide us with a more complete picture of how the environment influences the chemical composition of the ices and will supply strong constraints to the theory. Although the fractionation of formaldehyde and methanol has been measured towards several solar type protostars \citep{Parise2002, Parise2004, Parise2005,Parise2006}, observational studies of deuterated water are scarce. In NGC1333-IRAS4B, the non detection of the HDO line at 225.6 GHz yields an upper limit to the [HDO]/[H$_2$O] ratio of $< 6 \times 10^{-4}$ \citep{Jorgensen2010}. In NGC1333-IRAS2A, several HDO and H$_2$O lines have been observed with single-dish telescopes. Using the \textit{Herschel Space Observatory}, \citet{Kristensen2010} observed a broad outflow component {for several H$_2$O lines}, but could not accurately estimate the water abundance in the warm compact region. In contrast, \citet{Liu2011} derived the HDO abundance profile in the warm and cold regions of the envelope. {However, single-dish telescopes also encompass the cold envelope, and the possible outflow component. Complementary interferometric observations, with arcsecond resolutions, are needed to resolve the emission coming from the hot corinos, where the ices are sublimated and where the deuteration likely reflects the ice pristine deuteration \citep{Jorgensen2010, Persson2013}. } In this Letter, we present interferometric IRAM Plateau de Bure observations of the HDO $4_{2,2}$-$4_{2,3}$ line at 143 GHz towards NGC1333-IRAS2A (hereinafter IRAS2A) and NGC1333-IRAS4A (hereinafter IRAS4A). These sources are located in the Perseus complex, in the NGC1333 cloud, whose distance is about 220 pc \citep{Cernis1990}. They were selected because they are the two line brightest low-mass protostars after IRAS16293 due to their distance and their luminosity and because interferometric observations of H$_2^{18}$O have been recently obtained by \citet{Persson2012} towards them. The [HDO]/[H$_2$O] ratio derived in the present work, combined with previous observations of deuterated formaldehyde and methanol, will be compared with the predictions of our gas-grain model GRAINOBLE \citep{Taquet2013} to reconstruct the chemical history of these two protostars. \section{Observations and results} \label{obs} The two low-mass Class 0 protostars IRAS2A and IRAS4A were observed with the IRAM Plateau de Bure Interferometer on 2010 August 1, August 3 and 2011 March 10 and in the C and D configurations of the array. Due to the proximity to each other, the two sources were observed in the same track. The $4_{2,2}$-$4_{2,3}$ HDO transition at 143.727 GHz and the 2 mm continuum emission have been obtained simultaneously using the WIDEX correlator, with a 1.8 GHz bandwidth centered at 143.5 GHz, and providing a spectral resolution of 1.95 MHz (4 km s$^{-1}$). Phase and amplitude were calibrated by performing regular observations of the nearby point sources 3C454.3, 3C84, and 0333+321. The amplitude calibration uncertainty is estimated to be $\sim 20$ \%. The data calibration and imaging were performed using the CLIC, and MAPPING packages of the GILDAS software\footnote{The GILDAS package is available at http://www.iram.fr/IRAMFR/GILDAS}. Continuum images were produced by averaging line-free channels in the WIDEX correlator before the Fourier transformation of the data. The coordinates of the sources, and the size of the synthesized beams are reported in Table \ref{description}. \begin{table}[h] \centering \caption{Coordinates, synthesized beams, continuum fluxes, and sizes of the observed low-mass protostars.} \begin{tabular}{l c c c c c} \tableline \tableline Source & IRAS2A & IRAS4A \\ \tableline RA & 03:28:55.56 & 03:29:10.45 \\ Dec & 31:14:37.05 & 31:13:31.18 \\ Synthesized Beam (\arcsec) & 2.16 x 1.73 (25$\ensuremath{^\circ}$) & 2.18 x 1.76 (25$\ensuremath{^\circ}$) \\ Continuum flux (Jy) \tablenotemark{a} & 0.13 & 1.08 \\ Source size (\arcsec) \tablenotemark{a} & 1.75 x 1.69 & 2.13 x 1.72 \\ \tableline \end{tabular} \tablecomments{$^a$Continuum fluxes and sizes are obtained from elliptical Gaussian fits in the (u,v) plane (i.e., deconvolved full width at half-maximum (FWHM) size).} \label{description} \end{table} Figure \ref{maps} shows the maps of the continuum emission at 2 mm of IRAS2A and IRAS4A obtained after natural weighted cleaning. Parameters of the continuum emission (flux and deconvolved FWHM size), obtained from elliptical Gaussian fits, are given in Table \ref{description}. For the two sources, since the FWHM size of the continuum emission is very similar to the size of the synthesized beam, the continuum emission is, therefore, not resolved. In particular, IRAS4A is known to be a binary system with a 1.8\arcsec~separation \citep{Looney2000}, as depicted in Figure \ref{maps}. Although the continuum emission of IRAS4A is peaked at the southeast (SE) position rather than at the northwest (NW) position, we cannot resolve the two sources. \begin{figure*}[h] \centering \includegraphics[width=50mm]{fig1a.eps} \includegraphics[width=50mm]{fig1b.eps} \\ \includegraphics[width=50mm]{fig1c.eps} \includegraphics[width=50mm]{fig1d.eps} \\ \includegraphics[width=50mm]{fig1e.eps} \includegraphics[width=50mm]{fig1f.eps} \caption{Maps and spectra towards IRAS2A (left) and IRAS4A (right). Upper panels: HDO spectra integrated over the emission region of each source (middle panels). The velocity resolution is 4 km s$^{-1}$ and the dashed blue lines mark the $V_{LSR}$ at 7 km s$^{-1}$. Middle panels: Continuum maps at 143 GHz of IRAS2A (rms of 1.7 mJy/beam, contour levels are in steps of 4 $\sigma$), and of IRAS4A (rms of 9.6 mJy/beam, contour levels are in steps of 4 $\sigma$). Green contours depict the deconvolved full width at half-maximum size. % Bottom panels: Maps of the HDO $4_{2,2}$-$4_{2,3}$ line towards IRAS2A (with a rms of 2.9 mJy/beam km s$^{-1}$, contour levels are in steps of 3 $\sigma$) and in IRAS4A (with a rms of 2.8 mJy/beam km s$^{-1}$, contour levels are in steps of 3 $\sigma$). Green contours depict the region showing a flux at half maximum of the line peak. The red crosses mark the source positions measured by \citet{Looney2000}. The bottom-left ellipses represent the beam sizes. } \label{maps} \end{figure*} Figure \ref{maps} shows the maps of the integrated HDO $4_{2,2}$-$4_{2,3}$ line towards the two sources obtained after natural weighted cleaning. For both sources, the FWHM size of the HDO line is very similar to the size of the synthesized beam, as shown in Figure \ref{maps}. The emission clearly originates in compact regions, that are confined within the synthesized beam of the telescope. In particular, although the SE position of IRAS4A is the brightest in the continuum, the HDO line emission comes from the NW position. The spectra of the HDO transition integrated within the FWHM size are shown in Figure \ref{maps} assuming $V_{LSR} = 7$ km s$^{-1}$ for the two sources. Table \ref{description2} gives the flux and the brightness temperature of the HDO line transition integrated inside the FWHM size. \begin{table*}[h] \centering \footnotesize \caption[Parameters of the HDO and H$_2^{18}$O lines observed towards IRAS2A and IRAS4A.]{Parameters of the HDO and H$_2^{18}$O lines observed towards IRAS2A and IRAS4A.} \begin{tabular}{l c c c c c c c c c c} \tableline \tableline Transition & Frequency & $E_{up}$ & $A_{ij}$ & Flux & $(\int T_{\textrm{B}} dv)_{obs}$ & Beam & Telescope & $\Delta v$ & Ref. & \\ & (GHz) & (K) & (s$^{-1}$) & (Jy km s$^{-1}$) & (K km s$^{-1}$) & (\arcsec) & & (km s$^{-1}$) & & \\ \tableline & & & & \multicolumn{6}{c}{NGC1333 IRAS2A} \\ \cline{4-9} HDO $4_{2,2}$-$4_{2,3}$ & 143.727 & 319.2 & $3.5 \times 10^{-6}$ & 0.43 & $6.8 \pm 1.4$ & 2.2 x 1.8 & IRAM PdBi & 7 & 1 & \\ HDO $1_{1,0}$-$1_{1,1}$ & 80.578 & 46.8 & $1.3 \times 10^{-6}$ & 0.37 & $0.07 \pm 0.02$ & 31.2 & IRAM 30m & 3.9 & 2 & \\ HDO $2_{1,1}$-$2_{1,2}$ & 241.561 & 95.3 & $1.2 \times 10^{-5}$ & 2.2 & $0.43 \pm 0.05$ & 10.4 & IRAM 30m & 4.1 & 2 & \\ HDO $3_{1,2}$-$2_{2,1}$ & 225.896 & 167.7 &$1.3 \times 10^{-5}$ & 2.6 & $0.50 \pm 0.03$ & 11.1 & IRAM 30m & 4.2 & 2 & \\ H$_2^{18}$O $3_{1,3} - 2_{2,0}$ & 203.408 & 203.7 & $4.8 \times 10^{-6}$ & 0.98 & $46 \pm 9$ & 0.9 x 0.7 & IRAM PdBi & 4.0 & 3 & \\ \tableline & & & & \multicolumn{6}{c}{NGC1333 IRAS4A} \\ \cline{4-9} HDO $4_{2,2}$-$4_{2,3}$ & 143.727 & 319.2 & $3.5 \times 10^{-6}$ & 0.21 & $3.2 \pm 0.6$ & 2.2 x 1.8 & IRAM PdBi & 6 & 1 & \\ H$_2^{18}$O $3_{1,3} - 2_{2,0}$ & 203.408 & 203.7 & $4.8 \times 10^{-6}$ & 0.27 & $13 \pm 3$ & 0.9 x 0.7 & IRAM PdBi & 2.9 & 3 & \\ \tableline \end{tabular} \\ \tablerefs{1: This work, 2: \citet{Liu2011}, 3: \citet{Persson2012}.} \tablecomments{The flux uncertainties include the calibration uncertainties, estimated to be $\sim 20$ \%.} \label{description2} \end{table*} \normalsize Comparison with maps of the H$_2^{18}$O $3_{1,3}$-$2_{2,0}$ line transition by \citet{Persson2012} shows that, although our beam is two times larger, most of the HDO and H$_2$O emissions originate in the same region. \citet{Persson2012} estimated the FWHM size of the H$_2^{18}$O emission, from a Gaussian fit in the $(u,v)$ plane, and they found that most of the emission originates in a 0.8\arcsec~ellipse, similar to their synthesized beam. Observational data derived by \citet{Persson2012} are given in Table \ref{description2}. {The HDO lines observed in this work are broader, by a factor of 2, than the other HDO and H$_2^{18}$O lines, due to the low spectral resolution of our observations.} In IRAS2A, even if the bulk of the H$_2^{18}$O emission is associated with the central warm envelope, an outflow component towards the southwest is also observed \citep[see Fig. 2 of][]{Persson2012} whereas the HDO line only originates in the compact region located within the synthesized beam. Our maps are, therefore, in good agreement with {previous single-dish observations of H$_2^{16}$O and HDO} towards IRAS2A by \citet{Kristensen2010} and \citet{Liu2011}, described in the Introduction, which show that H$_2$O {mostly} traces the outflow whereas HDO only traces the central envelope. In IRAS4A, both the HDO and H$_2^{18}$O emission come from the NW position and the FWHM sizes are similar to the respective synthesized beams, suggesting that the emission originates in the central region. Therefore, it is meaningful to compare the flux of the HDO and H$_2^{18}$O lines originating in the FWHM central regions. The fluxes are given in Table \ref{description2} and are used in the next section to estimate the [HDO]/[H$_2$O] ratio. \section{Deuterium fractionation of water} \label{wat_deut} \subsection{Method} A single transition line of HDO and H$_2^{18}$O does not allow us to derive an accurate estimate of the HDO and H$_2$O column densities towards the two protostars and, therefore, of [HDO]/[H$_2$O]. In order to derive the physical conditions of the line emitting gas and the relevant column densities, we compared the predictions from a non-LTE LVG code \citep{Ceccarelli2003} with our observations and the observations by \citet{Liu2011} of several HDO lines towards IRAS2A, obtained with the IRAM 30m, JCMT, and APEX telescopes. We considered the collisional coefficients from \citet{Daniel2011} for H$_2^{18}$O and from \citet{Faure2012} for HDO. The Einstein coefficients are from the Jet Propulsion Laboratory molecular database \citep{Pickett1998}. We ran a grid of models covering a large parameter space in kinetic temperature $T_{kin}$ (15 values from 70 to 220 K), $n_{\textrm{H}}$ (15 values from $1 \times10^6$ to $1\times10^9$ cm$^{-3}$), HDO column density $N$(HDO) (15 values from $8\times10^{14}$ to $1\times10^{17}$ cm$^{-2}$), and source size $\theta_s$ (30 values from 0.1 to 200 arcsec). In addition, we considered three values for the ortho to para ratio (opr) of H$_2$: $10^{-2}$ (namely all H$_2$ molecules are in the para state), 1, and 3 (thermal equilibrium value at $T > 50$ K). To find the best fit to the data, we excluded the 464 GHz line observed by \citet{Liu2011} as it may be contaminated by the cold envelope emission, given its low energy level (22 K). \subsection{Results} \paragraph{IRAS2A} We ran the grid of models to reproduce the emission of the HDO lines towards IRAS2A. The H$_2$ opr has a low influence on the column densities derived from the observations. Varying the H$_2$ opr between 0.1 and 3 causes a small variation of the results, by no more than 20 \%, namely within the uncertainties of the observations. In the following, we consider an H$_2$ opr of 3. The flux of all the HDO lines are well reproduced (reduced $\chi^2 < 1$) for $T_{kin} \sim$ 75-80 K, $\theta_s = 0.4$\arcsec, and a wide range of $n_{\textrm{H}}$ between $6 \times 10^5$ and $2 \times 10^8$ cm$^{-3}$. The derived $N$(HDO), varies between $5 \times 10^{17}$ and $10^{19}$ cm$^{-2}$ and decreases with $n_{\textrm{H}}$. To evaluate the H$_2$O column density $N$(H$_2$O) from the 203.4 GHz transition, we considered three physical cases that reproduce the emission of the HDO lines (see Table \ref{table:bestfit}). The density used in Case 1 ($6 \times 10^5$ cm$^{-3}$) is similar to the density used by \citet{Maret2004} for reproducing the H$_2$CO emission with a non-LTE LVG analysis. The densities used in Cases 2 and 3 are slightly lower than the density in the hot corino region (where the temperature is higher than 100 K) of IRAS2A derived by \citet{Jorgensen2002}. Higher densities do not reproduce the observed HDO emission (the reduced $\chi^2$ increases to values much higher than 1). Regardless of the density, the derived column density of H$_2^{18}$O is equal to $6-7 \times 10^{16}$ cm$^{-2}$. Note that at $n_\textrm{H} = 6 \times 10^5$ cm$^{-3}$, the line weakly masers \citep[see also][]{Neufeld1991}. The column densities we obtain are slightly higher, by a factor of two, than that derived by \citet{Persson2012}. {The difference can, therefore, come from a combination of the LTE versus non-LTE population, gas temperature and line opacity (in our model, it is 1.4). The low temperature could indicate that the gas is thermally decoupled from the dust.} $N$(H$_2^{16}$O) can then be derived by assuming an isotopic abundance ratio $^{16}$O/$^{18}$O of 560 \citep{Wilson1994} and an opr of 3 \citep[see][]{Emprechtinger2010, Emprechtinger2013}. Depending on $n_{\textrm{H}}$, we derive an [HDO]/[H$_2$O] abundance ratio between 0.3 and 8 \% (see Table \ref{table:bestfit}). \paragraph{IRAS4A} For IRAS4A, no other HDO lines but the line observed in this work are available. The flux of the HDO and H$_2^{18}$O lines are, therefore, compared with the predictions obtained by using the same set of physical conditions as for IRAS2A. We also used another set of parameters presenting a larger source size $\theta_s$ of 0.8\arcsec, consistent with the upper limit given by \citet{Persson2012} for the H$_2^{18}$O transition. {The increase in $\theta_s$ slightly decreases the column density of HDO and H$_2^{18}$O by approximately the same factor (2-3), giving similar results to those by \citet{Persson2012}. The [HDO]/[H$_2$O] ratio, therefore, decreases by a factor of two, at maximum.} For both sets of physical conditions, we predict an [HDO]/[H$_2$O] abundance ratio between 0.5 and 3 \% (see Table \ref{table:bestfit}). \begin{table}[htp] \centering \caption[Physical conditions and column densities of HDO and H$_2^{18}$O in IRAS2A and IRAS4A.] {LVG best fit parameters for HDO and H$_2^{18}$O emissions.} \begin{tabular}{l c c c} \hline \hline Case & 1 & 2 & 3 \\ Density (cm$^{-3}$) & $6 \times 10^5$ & $2 \times 10^7$ & $1 \times 10^8$ \\ \hline \multicolumn{4}{c}{IRAS2A ($T = 80$ K, $\theta_s = 0.4$ \arcsec)} \\ \cline{1-4} $N$(HDO) (cm$^{-2}$) & $1 \times 10^{19}$& $1 \times 10^{18}$ & $6 \times 10^{17}$ \\ $\tau$(HDO $4_{2,2}$-$4_{2,3}$) & 23 & 2 & 1 \\ $N$(p-H$_2^{18}$O) (cm$^{-2}$) & $6 \times 10^{16}$ & $7 \times 10^{16}$ & $7 \times 10^{16}$ \\ $\tau$(p-H$_2^{18}$O) & -0.4 & 0.8 & 1.5 \\ $N$(H$_2$O) (cm$^{-2}$) & $1.3 \times 10^{20}$ & $1.6 \times 10^{20}$ & $1.6 \times 10^{20}$ \\ HDO/H$_2$O & $0.08$ & $0.006$ & $0.003$ \\ \hline \multicolumn{4}{c}{IRAS4A} \\ \hline & \multicolumn{3}{c}{$T = 80$ K, $\theta_s = 0.4$ \arcsec} \\ \cline{2-4} $N$(HDO) (cm$^{-2}$) & $1.5 \times 10^{18}$& $3 \times 10^{17}$ & $2 \times 10^{17}$ \\ $\tau$(HDO $4_{2,2}$-$4_{2,3}$) & 3 & 0.8 & 0.4 \\ $N$(p-H$_2^{18}$O) (cm$^{-2}$) & $2.5 \times 10^{16}$ & $1.5 \times 10^{16}$ & $1.5 \times 10^{16}$ \\ $\tau$(p-H$_2^{18}$O) & -0.4 & 0.05 & 0.15 \\ $N$(H$_2$O) (cm$^{-2}$) & $5 \times 10^{19}$ & $3 \times 10^{19}$ & $3 \times 10^{19}$ \\ HDO/H$_2$O & $0.03$ & $0.01$ & $0.007$ \\ \cline{2-4} & \multicolumn{3}{c}{$T = 80$ K, $\theta_s = 0.8$ \arcsec} \\ \cline{2-4} $N$(HDO) (cm$^{-2}$) & $5 \times 10^{17}$& $1 \times 10^{17}$ & $6 \times 10^{16}$ \\ $\tau$(HDO $4_{2,2}$-$4_{2,3}$) & 0.8 & 0.3 & 0.2 \\ $N$(p-H$_2^{18}$O) (cm$^{-2}$) & $1.5 \times 10^{16}$ & $6 \times 10^{15}$ & $5.5 \times 10^{15}$ \\ $\tau$(p-H$_2^{18}$O) & -0.2 & -0.02 & 0.08 \\ $N$(H$_2$O) (cm$^{-2}$) & $3 \times 10^{19}$ & $1.3 \times 10^{19}$ & $1.3 \times 10^{19}$ \\ HDO/H$_2$O & $0.016$ & $0.008$ & $0.005$ \\ \hline \end{tabular} \\ \label{table:bestfit} \end{table} \section{Discussion and conclusions} \label{discussion} The first result of this work is the relatively high water deuteration, $\sim1$ \%, in IRAS2A and IRAS4A. In IRAS2A, this value is compatible with the lower limit derived by \citet{Liu2011} in the same source (see Introduction). In IRAS4A, this is the first published estimate. Second, as in IRAS16293 and L1157-B1, the water deuteration is lower, by about one order of magnitude, than the deuteration of formaldehyde and methanol in the same sources, previously measured by \citet{Parise2006}. Third, the water deuteration in IRAS2A and IRAS4A is very similar to that measured in IRAS16293 by \citet{Coutens2012}, $\sim$3 \%{, but higher than the ratio derived by \citet{Persson2013} in the same source. The difference between the two results might come from the choice of the method. \citet{Persson2013} derived the [HDO]/[H$_2$O] ratio from few lines by assuming LTE population and optically thin emission, whereas the quoted column density implies a line opacity $\sim 5$ and the 203 GHz line may maser (see above). On the contrary, \citet{Coutens2012} uses single-dish observations which also encompass the cold envelope even though most of the lines have $E_{up} >$ 50K, so that the contamination from the outer cold envelope is accounted for. } The ratio is at least one order of magnitude larger than the value measured in IRAS4B, $< 6 \times 10^{-4}$, by \citet{Jorgensen2010}, despite the fact that this source lies in the same molecular cloud, NGC1333, as IRAS2A and IRAS4A and it is only $\sim 15$\arcsec~away from IRAS4A \citep{Sandell1991}. To add to this oddity, the deuteration of formaldehyde and methanol in IRAS4B is very similar to that measured in IRAS2A and IRAS4. Figure \ref{obs_mod} summarizes the situation, with a plot of the measured deuteration of water, formaldehyde and methanol in the outflow shock L1157-B1 and in the protostars IRAS 16293, IRAS2A, IRAS4A and IRAS4B. \begin{figure}[tb] \centering \includegraphics[width=88mm]{fig2_new.ps} \caption{Deuterium fractionation of water, formaldehyde, methanol of simply (top) and doubly (bottom) deuterated species. From left to (with only one exception right: predictions of GRAINOBLE model \citep{Taquet2013} at 20 K (grey) and 10 K (black), and observations towards L1157-B1 \citep[red, from][]{Codella2012}, IRAS16293 \citep[solid blue, from][and dashed blue from Persson et al. 2013]{Loinard2001, Parise2002, Parise2004, Vastel2010, Coutens2012}, IRAS2A and IRAS4A \citep[green, from this work, for water, and][for formaldehyde and methanol]{Parise2006}, and IRAS4B \citep[purple, from][]{Parise2006, Jorgensen2010}. } \label{obs_mod} \end{figure} { In the same figure, we also show the theoretical predictions by the gas-grain model GRAINOBLE \citep{Taquet2013}. Briefly, the model follows the multilayer formation of deuterated ices with a pseudo time-dependent approach. We report the icy [HDO]/[H$_2$O] ratio computed at $3 \times 10^5$ yr (the typical age of prestellar cores) for different constant densities and temperatures, $A_V = 10$ mag. {The H$_2$ opr, which is difficult to constrain observationally, is one of the key parameters in setting the [HDO]/[H$_2$O] ratio \citep{Taquet2013}. Following the value derived by \citet{Dislaire2012} towards IRAS 16293, we used a H$_2$ opr of $10^{-3}$ . } The comparison between the observations and the theoretical predictions shows that the [HDO]/[H$_2$O] ratio measured in IRAS2A and IRAS4A is reproduced for a large range of physical conditions: $n_H \sim 10^3 - 10^5$ cm$^{-3}$ for $T=10$ K and $n_H \sim 10^3 - 10^6$ cm$^{-3}$ for $T=20$ K. On the contrary, our model cannot reproduce the [HDO]/[H$_2$O] value reported by \citet{Jorgensen2010} for densities larger than $10^3$ cm$^{-3}$. {One possible explanation is that water ice has formed at a lower H$_2$ opr \citep[see][]{Taquet2013} or the model is missing some ingredients regarding the deuterated ice formation.} As in our previous work, we note that the larger deuteration of formaldehyde and methanol testifies to a formation of these species on the grain surfaces at a later and higher density stage than water, likely the prestellar core phase. Finally, NGC1333 is a very active star forming region undergoing the destruction and alteration from various outflows of the first generation stars {that might have initiated the formation of IRAS2A and IRAS4A} \citep{Liseau1988, Warin1996, Lefloch1998, Knee2000} whereas the cloud containing IRAS16293 is relatively quiescent \citep{Mizuno1990}. Nevertheless, the similar deuterium fractionation derived in IRAS2A, IRAS4A and IRAS 16293 suggests that these protostars have followed a similar chemical history even though they are located in very different environments. {However, the [HDO]/[H$_2$O] ratio observed in IRAS4B by \citet{Jorgensen2010} and in IRAS 16293 by \citet{Persson2013} does not fit with this conclusion and remains puzzling.} \begin{acknowledgements} This work has been supported by l\textquoteright Agence Nationale pour la Recherche (ANR), France (project FORCOMS, contracts ANR-08-BLAN-022). \end{acknowledgements}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The \emph{Bayesian Brain} hypothesis has led to a number of important insights about neural coding in the brain \citep{knill2004bayesian, friston2010free, friston2012history, pouget2013probabilistic, lee2003hierarchical} by characterizing neural representation and processing in terms of formal probabilistic inference and sampling. Furthermore, the introduction of related probabilistic representations and sampling processes in modern deep learning \emph{variational} models has led to improved performance on a range of different tasks \citep{zhang2018advances, blei2017variational, kingma2013auto, detorakis2019inherent}. The widely-used \emph{dropout} technique in deep learning can be seen as a form of variational inference and sampling \citep{srivastava2014dropout, gal2016dropout} with direct analogy to the random failure of synapses in the brain. This link has led to biologically-motivated models of variational deep learning that use network weight dropout to simulate synaptic failure \citep{mostafa2018learning, wan2013regularization, neftci2016stochastic}. In this paper, we build on these and other recent findings in machine learning and neurobiology to show how the brain can accurately represent the two primary components of probabilistic inference, distributions of observed data and distributions of unobserved values (such as model parameters), with the single, biologically established mechanism of synaptic failure. In the first section, we introduce relevant concepts of probability and review recent developments. Next, we propose a theory of neural sampling by synaptic failure and lateral inhibition and show that under that theory, network weights can be analytically mapped to transmission probabilities that result in unbiased samples from arbitrary distributions. Our result further leads to a local, biologically-realizable learning rule for synaptic failure probabilities, consistent with recent evidence that the rate of synaptic failures appears to be under adaptive control. Finally, we use simulations to demonstrate complete posterior sampling in an abstracted network using only locally-learned transmission probabilities. \subsection{Probabilistic reasoning} To introduce the relevant concepts, consider a simple network that learns about the sizes and weights of different dog breeds. Any particular sample dog has a well-defined size, but across the population, there is a distribution of sizes. A \emph{probability distribution} assigns likelihood to each size. If we consider only a particular breed, then we would obtain a likelihood from the \emph{conditional distribution} of size given that breed. It would be useful for a neural system to be able to represent the conditional distributions of sizes, rather than just a central tendency such as the most frequently observed size. Conditional distributions, as a one-to-many mapping, allow the neural system to explore possibilities and plan accordingly, for instance, when designing dog houses and toys for the most likely range of end users. There are two primary sources of variance or uncertainty in the target distribution (e.g., dog sizes): the distribution of the data (e.g., the observed sizes across all dogs of a given breed), and the distribution of the parameters in a model of that data (e.g., the network weights or synapses linking size to breed). Accordingly, any prediction about the size of a particular dog must come in the form of a distribution that draws from both. The first type of variance is often called the \textit{residual} variance in classical statistics because it refers to observed variation in the dependent variables that is left over after conditioning them on the independent variables. It is called the \textit{risk} in machine learning in reference to a model's probability of misclassification despite converging on a solution in training, and more recently as the \textit{aleatory} (i.e, random) uncertainty in variational deep learning. Conversely, uncertainty in parameters is known in classical statistics, as the \textit{sampling error} or by its inverse, precision, and serves as the basis for drawing scientific inferences from data. In recent variational deep learning literature, parameter imprecision has been called the \textit{epistemic uncertainty}.\footnote{This use of epistemic uncertainty to refer only to parameter imprecision is a misnomer in that the residual distribution also depends upon epistemic degrees of freedom, namely what data are chosen and how the models are specified. More broadly, epistemic uncertainty includes problems of measurement, experimental design, interpretation of model results, and many other sources that are unlikely to be represented in the model itself. \cite{o2016weapons} cites wide-spread, misguided confidence in machine learning models as a source of systemic bias and inequity in its societal impacts, which are not likely to be resolved by variational methods alone.} In Bayesian modeling, the goal is typically to estimate the posterior distribution of the parameters given data and a prior distribution, which can then be used to draw inferences about the unknown quantities in question. As residual variance reflects observations that the model cannot ``explain away,'' learning to sample from the residual distribution can allow a network to explore the range of events that are known to be possible. To the extent that different regions of a distribution lead to markedly different processes in subsequent layers, and ultimately different behavioral outcomes, activation of only the mean value or other central tendency will severely limit the capabilities of the network. Parameter uncertainty, as one form of epistemic uncertainty, produces additional noise that propagates through the network into inferences. While we might wish to eliminate noise and be maximally certain in any situation, parameter uncertainty itself likely serves the human brain the same way it serves scientific inference: we can avoid drawing premature conclusions and acting too soon. Furthermore, whereas the conditional distribution of the data is a kind of record of previous observations, the uncertainty associated with unobserved parameters can extend indefinitely beyond the bounds of previous or future experience. For instance, adults learn that there is a finite range of colors and patterns in dogs' fur, and this knowledge only becomes more bounded with additional experience. Young children, however, may readily imagine dogs with blue, red, or green fur, with less concern for plausibility. In this way, the process of sampling from what we call epistemic uncertainty may hold implications for the search space of human imagination. While adults grow to be less interested in outlandishly colored dogs, we remain no less imaginative about other unknown aspects of nature. While the relationship between a conditional distribution of data and any associated parameter distributions can be derived analytically for simple, parametric models like linear regression, doing so is difficult or intractable for models in general, requiring numeric estimation instead. In classical inference with maximum likelihood estimation, the Fisher information is computed from the matrix of parameter second derivatives with respect to the likelihood function and provides estimates of the complete parameter covariance matrix \citep{lehmann2006theory}. In Bayesian modeling, Markov Chain Monte Carlo is used to draw samples from the posterior distribution of the parameters, which can then be used to calculate means, modes, and variances. \paragraph{Variational inference} Deep learning models are too complex for exact analytic or numeric inference to be practical or even possible, so recent developments have primarily focused on \emph{variational inference} as an alternative. Variational inference refers to the use of greatly simplified, approximate distributions to draw inferences from complex models. Variational neural networks were pioneered with the Boltzmann machine \citep{hinton2007boltzmann}, Helmholtz machine \citep{dayan1995helmholtz}, and their later generalization, the variational autoencoder \citep[VAE,][]{kingma2013auto} and have since been further generalized to networks of many kinds \citep{zhang2018advances}, including networks that imitate neurobiology \citep{mostafa2018learning, neftci2016stochastic}. In these models, sampling from the variational distributions is used to approximate an intractable integral in the objective function, to learn regularized solutions, and to generate plausible out-of-sample realizations from the learned, latent distributions. These models generally rely on several mechanisms that are not biologically plausible, including negative weight values, parametric distributions, and global objective functions, typically a KL-divergence function. \paragraph{Dropout} Recent innovations in sampling in machine learning have focused on dropout, the random masking of network activations \citep{srivastava2014dropout}, weights \citep{wan2013regularization}, or both, as a form of variational inference \citep{labach2019survey, neftci2016stochastic}. \cite{gal2016dropout} demonstrated that minimizing the KL-divergence of a Binomial-Gaussian approximation of each network weight to a standard normal prior is functionally equivalent to $\mathcal L^2$ regularization. As regularization implies a distribution of the parameters within which they may be shifted without incurring excess error, the dropout algorithm allows one method of sampling from that distribution, and hence sampling from distributions of the subsequent node activations that condition on those parameters. Under that application, the dropout distribution represents uncertainty in the network weights, which then propagates to the activations and final predictions. To date, it remains to be demonstrated and implemented that weight dropout can be used to sample from the conditional distribution of the data. \subsection{Biological neural networks} The critical question addressed in this paper concerns how a population of neurons represents distributions and propagates uncertainty toward the end goals of action and perception. Although it is possible to consider individual neurons as representing entire distributions, it is more likely that the relevant information is distributed across populations of neurons, in a \emph{population code} \citep{zemel1998probabilistic}. Population codes can be understood as transformations of an input to a set of neural activations according to the neurons' \textit{tuning curves} (Figure \ref{fig:popcode1}). A widely-used tuning curve is the Gaussian function, which has a central, preferred value of the stimulus, and a width representing the range of stimuli that activate the neuron to a lesser degree. Input values closer to the center of a Gaussian tuning curve cause the neuron to fire more frequently. By taking the the point corresponding to the maximum of the weighted sum of many neurons' firing rates, and hence, the heights of their respective tuning curves, values of the inputs can recovered. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{popcode.pdf} \caption{Dual representation of a stimulus as a real-valued number, the position of the vertical line on the left, and as neural firing rates, the heights of the lines on the right. Gray background curves represent the tuning curves of the sensory receptors. Asterisks shows how each representation is understood in the space of the other.} \label{fig:popcode1} \end{figure} Such codes are well-established in the brain, for example in the population coding of visual orientation angle, or motor variables such as velocity or angle of a saccade \citep{Albright84,GeorgopoulosSchwartzKettner86,SommerWurtz00}. As shown in Figure \ref{fig:tuningcurve} \citep[taken from][]{hubel1968receptive}, the orientation of a line is represented by the firing rates of several neurons with distinct, preferred values. If the receiving layer, or network output representing a saccade is similarly population coded, then the projections from the input to the target layer define a probability distribution over the output layer. We can simplify the representation further by discretizing the space and considering the inputs and outputs as binned ranges, i.e., as a bivariate histogram. \begin{figure}[!ht] \centering \includegraphics[width=0.7\linewidth]{angle.png} \caption{Gaussian kernel population code for visual orientation. Each neuron has a tuning curve around a preferred orientation, and the entire population of neural activity encodes the current orientation as a weighted average of the neuron activity times its preferred value \citep{hubel1968receptive}.} \label{fig:tuningcurve} \end{figure} Population codes lend to a natural interpretation of neural activity in terms of the primary operations used in probabilistic reasoning: marginalization and inference. Because a receiving layer integrates over input from a sending layer, co-activation of multiple inputs to a single set of outputs is equivalent to marginalization (also known as mixture) over the conditional distributions produced by each neuron. Each individual input may be further regarded as a histogram that marginalizes over the probability mass values represented by its projections. Evidence accumulation for inferences may be best understood in terms of the cumulative change in synapses over long-term potentiation (LTP) and long-term depression (LTD). In statistical terms, synaptic plasticity itself serves as the multiplication of likelihoods, including priors, leading the posterior distribution encoded over synaptic connections. The prior distribution, under this interpretation, is just the previous state of the synapses, or the cumulative result of learning up that point. \cite{aitchison2021synaptic} discuss the relationship between evidence and Bayesian inference as it pertains to synapses. Their analysis is directly related to the property of maximum likelihood estimation in which the second partial derivatives of the likelihood function with respect to the parameters computes the variances and covariances of those parameters at the solution, and therefore the first derivatives, or step sizes, are a function of underlying parameter uncertainty \citep{lehmann2006theory}. Inference, more broadly, can validly refer to both this abstracted relationship between evidence and learned synaptic states and the active use of those synaptic states by the network to process ambiguous stimuli. \paragraph{Population codes as probability mass} By combining population codes with sampling, the brain can represent probability distributions over two dimensions: space and time. Across space, the overall distributed pattern of neural activity across the population code can represent variance in terms of the breadth of neurons active: if only a single neuron is strongly active, the corresponding value is represented with low variance (high certainty); and if many neurons are weakly active, that corresponds to a high variance (low certainty). The nature of the learned weights clearly will drive the shape of this neural activity distribution. Formally, one only needs to normalize the neural activations by the total activity to have a well-defined probability distribution (Figure \ref{fig:neuron2hist}). \begin{figure}[!ht] \centering \includegraphics[width=.65\linewidth]{neuron2hist.pdf} \caption{Neural activations may be interpreted as probability mass functions on a distribution that marginalizes over the receiving layer. Probabilities are ultimately encoded in the projections from the sending layer.} \label{fig:neuron2hist} \end{figure} Furthermore, with a few additional assumptions, the propagation of a population of distributed neural activity from one layer to the next will appropriately propagate the relative level of uncertainty into subsequent stages of processing. For example, if the size of a particular dog is uncertain, then many neurons representing different possible sizes will be active. The widespread activity will cause further widespread activation in a subsequent layer tasked with deciding best size for a dog house, effectively \emph{convolving} uncertainty estimates computed at the next level with the uncertainty present in the lower level. It has been postulated that neural activations map directly to likelihoods or log-likelihoods, a concept known as probabilistic population codes \citep{pouget2013probabilistic, pouget2003inference, beck2012complex, ma2006bayesian}. Though several interpretations exist, the common theme is that each provides a way to decode neural activations in terms of statistical formalisms and operations. \cite{zemel1998probabilistic} examined three possible schemes for decoding neural activity as probabilistic inference. In the first, distributions are implied by the likelihood function of a Poisson rate parameter conditional on the rate of the input neuron. This approach is limited in the range of variances that can be represented, and functions more literally as an analysis of signal fidelity among neurons rather than a language of probability among them. Furthermore, it only applies to inputs that are point values. To remedy these issues, they extend the approach such that the Poisson rate parameter is the inner product of the tuning curves with an arbitrary input function, effectively allowing simultaneity among many inputs. A somewhat different approach considered by the same authors is to represent distributions as a weighted sum of tuning curves, a form of kernel density estimation, but that approach entails implausible lower bounds on the precision and constraints between precision and location. Generally, these approaches focus on inference in the space of neural activations, rather than through the long-term course of learning. The purely spatially distributed population coding case represents a kind of static, ``omniscient'' perspective on probabilistic processing, where an entire distribution is represented at once. At the other extreme is a purely time-based encoding of the distribution, which is synonymous with representing discrete \emph{samples} from the overall distribution \citep{HoyerHyvarinen2003, fiser2010statistically, orban2016neural}. For example, at the relevant sampling timescale, the population code could engage in a strong inhibitory competition to select a single neuron, with the statistics of this competitive process following the relevant underlying probability distribution (i.e., higher probability neurons are activated more frequently). This is the idea behind the widely-used SoftMax distribution, for example, where the likelihood of activating each neuron in a layer is normalized by the sum over all neural activations in that same layer. Theories of such probabilistic sampling over time underlie proposals for how noisy neural activity might implement non-synaptic-failure-based inference algorithms \citep{Buesing11, Hennequin14_fastsampling, SavinDeneve14}, have been used to explain stochasticity in synaptic weights \citep{Kappel15, aitchison2021synaptic}, and can explain aspects of response variability in sensory cortices \citep{orban2016neural, Echeveste20_corticallike}. \paragraph{Synaptic failure as dropout sampling} It is in this time-based sampling case that synaptic dropout can play a critical role, acting as the source of stochastic variability that drives the sampling process. As noted above, there is extensive evidence that synapses in the mammalian neocortex exhibit high rates of communication failure, specifically the failure of presynaptic vesicles to release neurotransmitters \citep{allen1994evaluation}. In vivo, release probabilities vary widely as a result of maturational differences, extracellular calcium \citep{dodge1967co}, inhibition by ambient neurotransmitters, synapse type, and postsynaptic cell type, with most falling well below 50\% \citep{borst2010low, branco2009probability}. \cite{huang1997estimating} estimated that two-thirds of synapses transmit subsequent to fewer than 17-33\% of presynaptic action potentials. Interestingly, a purely temporal sampling scheme does not directly convey the uncertainty associated with a particular sample to subsequent processing stages, in contrast to the space-based representation. Repeated samples over time are required to gather a sense of the relative variance in neural activities. For this reason, it may be particularly powerful to combine some of both the space and time representations of a distribution, representing variance across a relatively sparse population of neural activity, while selecting a sample from the larger overall distribution. It is difficult to analyze such a hybrid scheme, but given our ability to apply the same framework to both the purely spatial and temporal schemes separately, there is no obvious reason why it should not work in practice. \paragraph{Lateral inhibition} In addition to synaptic dropout, sampling in the brain is going to be affected by inhibitory connections among the receiving neurons. The effect of these connections, called \emph{lateral inhibition}, can produce contrast and winner-take-all dynamics. Without sampling, a complex, multi-modal distribution subject to lateral inhibition may disproportionately express only the mode represented by the few strongest synapses. If synapses fail randomly, then competition is randomly reduced and all neurons constituting the distribution are given a chance to fire over the course of repeated samples. In fact, the network cannot be said to sample from an encoded distribution unless each neuron's ultimate chance of out-competing the rest of the sample is exactly the probability encoded by its incoming synapses. If transmission probabilities are simply uniform or proportional to the synaptic weights, then the likelihood of sampling the tails of the distribution, encoded by the smallest weights, vanishes as the size of the network grows. For the theory we have outlined to be possible, the probability that a synapse fails must be systematically related to its efficacy, but no functional mappings have been found previously. This final observation motivates the central aims of this paper: to discover such a mapping and demonstrate biologically plausible sampling of all the distributional components relevant for Bayesian inference. In summary, we make the following theoretical proposals: (1) Probability distributions are encoded in the sending projections and manifested in the receiving layer as population codes; (2) Synaptic plasticity constitutes the accumulation of evidence and inference between layers; (3) Integration over a sending layer constitutes marginalization over conditional distributions in the receiving layer; (4) Synaptic failure serves as a dropout-type sampling algorithm; (5) Transmission probabilities adapt in tandem with weights during learning to sample from distributions that account for both observed and unobserved variance, i.e., that of sensed data and model parameters (in neurons, synaptic efficacy itself). All together, these propositions allow for the complete realization of approximate Bayesian inference in the brain. In what follows, we describe a formal framework based on the histogram-like interpretation of population codes that enables both spatial and temporal representations of arbitrary probability distributions. We formulate a sampling scheme that consists of iterations of synaptic dropout followed by lateral inhibition. Within this framework, we find a mapping from synaptic weights to transmission probabilities that enables sampling from encoded distributions that include both observed and epistemic variance components. From our result, we then derive and demonstrate a locally-computable learning rule for synaptic transmission probabilities that is consistent with recent biological findings. \section{Theoretical model} Our theoretical model incorporates several constraints and design considerations motivated by an attempt to capture some basic properties of biological networks: (1) Continuous inputs and outputs are represented in the space of neural rate codes by way of neural tuning curves; (2) Both firing rates and synaptic weights exhibit physical lower and upper bounds, or saturation points; (3) Weights representing the encoded distribution must be non-negative, consisting of excitatory connections; (4) Neuron-to-neuron operations in learning and inference must be local, using only the available state of the network at a given time. Additionally, it is important to consider the effects of lateral inhibition and the resulting in winner-take-all dynamics among receiving layers, which is common throughout the cortex. \subsection{Evidence accumulation} To start, a neural network could be considered Bayesian if the distribution of free parameters, i.e., the weights or synapses, $P(\mathbf{W})$, updates according to Bayes' theorem as new data are introduced. It is not the goal of this paper to show that this is true of synapses, but it is a prerequisite for epistemic uncertainty and the premise of ongoing speculation \citep[e.g.,][]{aitchison2021synaptic}. Bayes' theorem for updating weights $\mathbf{W}$, given input activations $\mathbf{X}$ and output activations $\mathbf{Y}$ is given by: \begin{align} P_{t+1}(\mathbf{W}|\mathbf{X}, \mathbf{Y}) &= \frac{P_t(\mathbf{X}, \mathbf{Y} | \mathbf{W})P_t(\mathbf{W})}{P_t(\mathbf{X, Y})}, \end{align} The model is conditioned as $\mathbb{E}[\mathbf{Y}]=f(\mathbf{X}, \mathbf{W})$, where $f$ is most often a sigmoidal, reLU, or radial link function in artificial neural networks. For simplicity, take $\mathbf{X,Y,W}\in[0,1]$, as both biological synaptic weights and neural firing rates are positive valued with an upper saturation boundary. Under these constraints, $f$ may simply be the dot product $\mathbf{XW}^\intercal$. The beta distribution then gives us a mean field approximation of $P(\mathbf{W})$ that respects the [0,1] interval: \begin{align} \text{Beta}(p; \alpha, \beta) &= \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\int_0^1 x^{\alpha-1}(1-x)^{\beta-1}dx}, \\ P(\mathbf{W}) &\sim \text{Beta}(\mathbf{A}+1,\mathbf{B}+1), \label{eq:beta} \end{align} such that weights are distributed according to the cumulative evidence for and against each association represented by the cells of matrices $\mathbf{A}$ and $\mathbf{B}$, respectively: \begin{align} \mathbf{A} &= \mathbf{A_0}+\lambda \mathbf{X^\intercal Y},\quad \mathbf{B} = \mathbf{B_0}+\lambda \mathbf{X^\intercal (1-Y)}. \end{align} Zero-subscripted matrices represent the prior cumulative evidence, and $\lambda$ is a learning rate or weight assigned to the new evidence. The matrices of evidence for and against each possible input-output association are based on the logic of long-term potentiation and depression, respectively. $\mathbf{A}$ sums over the pairwise products of input and output activations, representing the extent to which inputs and outputs fired jointly, while the matrix of evidence against each association is given by $\mathbf{B}$, the extent to which outputs did not fire following input activations. The matrix of mean weight values is calculated as, \begin{align} \mathbb{E}[P(\mathbf{W})] &= \frac{\mathbf{A}}{\mathbf{A}+\mathbf{B}}. \label{eq:betamean} \end{align} Note that whereas the mean of $\mathbf{W}$ is simply the normalized evidence for association, the precision of $\mathbf{W}$ scales with the non-normalized, cumulative values in $\mathbf{A}$ and $\mathbf{B}$ as \begin{align} \text{var}(\mathbf{W}) &=\frac{\mathbf{A\odot B}}{(\mathbf{A+B})^2(\mathbf{A+B}+1).}\label{eq:betavar} \end{align} The above equations define a variational model in which the solution and approximate distribution of the parameters are analytically computed from the data. Weights may be sampled from Equation \ref{eq:beta} and used to produce a posterior distribution of the expected values for the output layer given any particular input vector. Variational approximations of $P(\mathbf{W})$ do not, however, provide the complete conditional distribution of output activations. Rather, they give the distribution of the \emph{mean} activation conditional on the parameters, which will depend on $\lambda$ and the sample size. This beta-distribution based model conveniently serves our purposes in three ways: (1) It simplifies our task of abstracting from biological networks and preserving their domain constraints; (2) It allows learning to be directly derived from the rules of long-term potentiation and depression; (3) It provides a quasi-analytic expectation for parameter variance that we will later use as a point of reference for the biologically-motivated alternative of weight dropout. Notably, variational, mean-field approximations such as this do \emph{not} give the true, ``full Bayes'' posterior of the weights, which is implausible in the brain and intractable in artificial networks. \subsection{Epistemic uncertainty} We can use the mean-field beta distribution above, namely Equation \ref{eq:betavar}, to derive transmission probabilities that produce roughly the same epistemic uncertainty. That is, we want to transfer the properties of the beta model to an equivalent mean-field Bernoulli approximation. With dropout, the actual distribution of the parameters is a scaled Bernoulli: \begin{align} w_i &= \alpha_i z_i,\quad z_i \in \{0,1\},\quad P(z_i=1) = \phi_i. \end{align} To produce an analogous distribution to the beta model in terms of dropout, we must equate the variances and solve for transmission probability $\phi_i$. Note that the mean value of the weight given dropout is $w_i\phi_i$. If we are equating the mean and variance to the beta model, we must use $\hat w_i = w_i / \phi_i$. With the matrix of transmission probabilities, $\Phi$, and denoting the element-wise Hadamard product with $\odot$, \begin{align} \mathbf{\hat W}^2 \odot \mathbf{\Phi \odot (1-\Phi)} &= \frac{\mathbf{A\odot B}}{(\mathbf{A+B})^2(\mathbf{A+B}+1).}. \nonumber \\ \left(\frac{\mathbf{A}}{\mathbf{\Phi\odot(A+B)}}\right)^2 \mathbf{\Phi\odot(1-\Phi)} &= \frac{\mathbf{A\odot B}}{(\mathbf{A+B})^2(\mathbf{A+B}+1)}. \end{align} Solving for $\mathbf{\Phi}$ when $\mathbf{\Phi, A,B}>0$, gives us \begin{align} \mathbf{\Phi} &= \frac{\sqrt{(-\mathbf{A}^2-\mathbf{AB-A})^2 + 4\mathbf{B(A}^2+\mathbf{AB+A})} - \mathbf{A}^2-\mathbf{AB-A} }{2\mathbf{B}} \label{eq:beta2dropoutvar} \end{align} We can then perform dropout sampling of the posterior distribution over unobserved values, or epistemic uncertainty, by generating binary mask matrices from $\mathbf{\Phi}$ that then element-wise multiply by the weight matrix. For minimal bias, the weight matrix should be element-wise divided over $\mathbf{\Phi}$, such that the mean value of each weight is $\mathbf{W}$. Simplified approximations of the above function are possible. For instance, the special case of $\mathbf{A}=\mathbf{B}$ is close to $\Phi=1-\frac{1}{\mathbf{A+B+3}}$. Whereas these results produce the kind of epistemic uncertainty familiar in classical statistics, i.e., sampling error of the parameters, in the brain, epistemic uncertainty may be modulated in response to numerous contextual factors, making the above result only a special case. \subsection{Transmission probability from synaptic weight} Unlike epistemic uncertainty, there is no clear precedent for deriving the observed data distribution using variational modeling principles alone. Here we will consider a first-principles approach using the abstract model defined so far. If we take $\mathbf{X}, \mathbf{Y} \in \{0, 1\}$, i.e., a discrete activation space, and we consider the special case of $m=1$ input neuron $x$, then the $1\times n$ weight vector $\mathbf{w}$ is exactly a vector of observed frequencies resulting from the conditional distribution $P(\mathbf{Y}|x=1)$. Activation of multiple inputs, each representing a unique conditional distribution, therefore corresponds to a marginalization (i.e, mixture) of each in the output layer. We will use this fact to determine a rule for mapping weight values to transmission probabilities such that the frequencies of output samples produced by a dropout algorithm are their learned conditional probabilities. In biological networks, we propose that sampling of the conditional distribution involves two mechanisms. First, random synaptic failure results in only a subset of active outputs. Then, the most active output neuron among the subset suppresses the others by lateral inhibition. This process is approximated by an artificial sampling scheme in which weights are randomly set to zero, and then the maximum output activation is chosen from the resulting subset. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{sf_diagrams_2.pdf} \caption{A sample of the encoded posterior distribution is drawn through synaptic failure followed by lateral inhibition. A subset of activations in the receiving layer first results from the random failure of several sending synapses. Then, lateral connections from the most active receiving neuron inhibit the competing receiving neurons, resulting in a single, random realization from the complete distribution.} \label{fig:samplediagram} \end{figure} To find a mapping of weights to transmission probabilities for this subset-max scheme, consider the encoded histogram in descending order, where $i\in 1,...,n$, and $w_1 > w_i > w_n$, and let $q_i$ be the final transmission probability for weight $i$. Let $p_i$ represent the encoded probability of sampling output activation $i$, defined as the normalized synaptic weight $w_i/\sum_{i=1}^n w_i$. Because we are taking only the maximum weight from each subset, the probability of sampling from the largest of all weights is just its normalized value: $q_i=p_i$. For each successive weight of rank $i>1$, the encoded probability must be equal to the probability that weight $i$ transmits and all larger weights fail: \begin{align} p_i &= q_i\prod_{j=0}^{i-1}(1-q_j), \end{align} where $q_0=0$. Solving recursively for $q$ in terms of $p$, we find \begin{align} p_1 &= q_1, \nonumber\\ p_2 &= q_2(1-p_1), \nonumber\\ p_3 &= q_3(1-p_1)(1-q_2)\nonumber\\ &=q_3(1-p_1)(1-\frac{p_2}{1-p_1}) \nonumber \\ &=q_3(1-p_1-p_2),\nonumber\\ p_4 &= q_4(1-p_1)(1-q_2)(1-q_3) \nonumber \\ &=q_4(1-p_1)(1-\frac{p_2}{1-p_1})(1-\frac{p_3}{1-p_1-p_2}),\nonumber\\ &=q_4(1-p_1-p_2-p_3), \end{align} and so on for all $p_i$, giving us \begin{align} p_i &= q_i(1-\sum_{j=0}^{i-1}p_j), \end{align} which is the probability that weight $i$ does not fail and no larger weights are ultimately sampled. A complete proof of this result by induction is given in the appendix. The transmission probability for weight $i$ is therefore \begin{align} q_i &= \frac{p_i}{1-\sum_{j=0}^{i-1}p_j} \nonumber \\ &= \frac{w_i}{(\sum_i w_i)(1-\sum_{j=0}^{i-1}\frac{w_j}{\sum_i w_i})} \nonumber \\ &=\frac{w_i}{\sum_i w_i - \sum_{j=0}^{i-1}w_j }\nonumber \\ &= \frac{w_i}{\sum_{j=i}^n w_j}. \label{eq:mapping} \end{align} For $m>1$ input neurons sharing an output layer, we conjecture that no general solution exists such that transmission probabilities are held constant across combinations of active inputs. The above single-input mapping simplifies our derivation because the output activation $y_i = w_i$, so $p_i$ is taken to be $w_i$ normalized over the sole vector of input weights to which it belongs. For multiple inputs vectors, $p_i = f(\mathbf{W}^\intercal \mathbf{x})$, and so normalization of a particular weight is dependent on $\mathbf{x}$ and no longer constant. It may be possible that separate solutions exist for every possible combination of input activations over $\mathbf{x}$, in which case transmission probabilities are dynamic with respect to input activations. When multiple inputs are active, probabilities must account for the possibility of being out-competed by projections from other inputs. The simplest approximation is to compute Equation \ref{eq:mapping} for each vector of weights, then divide all probabilities by the total number of active input neurons. In a model of continuous data with tuning curves such that activations are continuous between zero and one, the denominator may be the sum of the input vector. Numerically and biologically, the same problem may be solved by simultaneous, local learning applied across the set of all relevant weights, as we will see later. \paragraph{Combined uncertainty} Altogether, samples from the posterior distribution over the receiving layer should vary according to both the observed data and unobserved values in the model, i.e., both aleatoric and epistemic sources of uncertainty. As we have now derived particular probability functions for both, the functions can be multiplied, i.e., $\mathbf{\bar Q} = \mathbf{\Phi \odot \mathbf{Q}}$, where $\mathbf{\bar Q}$, to produce the final dropout probability matrix used to generate a mask over $\mathbf{W}$. \subsection{Local, biologically plausible learning rules} Equation \ref{eq:mapping} compels us to search for a biologically plausible algorithm by which synaptic release probabilities are updated. The algorithm must not involve global operations and draw only from the present state of the network. We propose two algorithms by which probabilities converge to their analytically shown solution over the course of posterior sampling. To start, a generic recurrence equation for learning of transmission probability $i$ at sampling iteration $t$ is \begin{align} \hat q_{i,t} &= \hat q_{i, t-1} + \gamma (q^*_i - \hat q_{i,t-1}), \label{eq:learningRule_form} \end{align} where $q^*_i$ is the target value of the learned transmission probability $\hat q_i$. With the previous rank-order notation, $w_1 > w_i > w_n$, the set $\mathbf{s}_t$ of non-zero input weights after dropout allows for a biased approximation to Equation \ref{eq:mapping}, which we can use as target $\hat q^*_i$ of our learning rule: \begin{align} \hat q^*_i = \frac{w_i}{\sum_{j \in \mathbf{s}_t} w_j}. \end{align} Critically, $w_i$ is the maximum weight in $\mathbf{s}_t$, and so only the maximum is updated per sample. The form of this target is possible to instantiate biologically if vesicle release probability is modulated by an extracellular agent that is available in limited supply during each iteration of sampling. If synapses compete to take up the agent at rates that are a function of their efficacy, then the amount taken up by the largest synapse corresponds to the normalized value above. The approximate target given above differs from the analytic target in that the denominator sums over only a subset of $w_i...w_n$, and so estimates will be upwardly biased. There may be many ways to correct for that bias with varying degrees of biological plausibility and effectiveness. \paragraph{Fixed adjustment} The simplest method of correcting bias may be to subtract a fixed value from the target: \begin{align} \hat q^*_{i,t} = \frac{w_i}{\sum_{j \in \mathbf{s}_t} w_j}-c. \end{align} This may correct for overall probability but result in inaccurate scaling of the relative differences among $\hat q_i$. \paragraph{Scale adjustment} As $|\mathbf{s}_t|\leq (n-i+1)$ and $\mathbf{s}_t \subset \{i...n\}$, $\hat q^*_i$ is larger than $q_i$ on average by a factor of $\frac{n-i+1}{|\mathbf{s}_t|}.$ We could multiply $\hat q_i$ by the inverse of this factor to correct for average bias across $\hat q_i$. \begin{align} \hat q^*_{i,t} = \frac{|\mathbf{s}_t|w_i}{(n-i+1) \sum_{j \in \mathbf{s}_t} w_j} \end{align} To the extent that the weights are not uniform, this will over-correct relative differences among $\hat q_i$, resulting in shrinkage in the tails of the distribution. \paragraph{Exponential adjustment} Raising the approximate target to an exponent, $\psi$ results in shrinkage of the target that preserves the relative differences among transmission probabilities. \begin{align} \hat q^*_{i,t} = \left(\frac{w_i}{\sum_{j \in \mathbf{s}_t} w_j}\right)^\psi. \end{align} There may be many other possible learning rules that achieve adequate approximation of $q_i$, with varying degrees of biological plausibility. We consider only these few to show that it is possible. The form of the target that we have outlined here, $\hat q^*_i$, makes Equation \ref{eq:learningRule_form} a nonlinear stochastic recurrence equation, which has no explicit form for its equilibrium state and is difficult to analyze. For further details, see Appendix B. \paragraph{Algorithms} In the first algorithm, the update is performed once per sample, affecting only the largest member: \begin{lstlisting} for i iterations: Generate Mask from Bernoulli(q) Subset = Mask*Weight m = max weight index from Subset g = Subset/Sum(Subset) q[m] = q[m] - LR(q[m] - g[m]^psi) \end{lstlisting} In the second algorithm, the update is performed in descending rank order on all members of the subset per sample. Each maximum is masked after its update, producing a smaller subset of size $|\mathbf{s}_t|-1$ and an update to the next largest member: \clearpage \begin{lstlisting} for i iterations: Generate Mask from Bernoulli(q) Subset = Mask*Weights while Subset contains non-zero values: m = max weight index from Subset g = Subset/Sum(Subset) q[m] = q[m] - LR(q[m] - g[m]^psi) Subset[m] = 0 \end{lstlisting} This second algorithm converges to a reasonable approximation of $\mathbf{q}$ in far fewer samples than the first, but requires a more complex sequence of events within-sample. Either algorithm may involve a gradually decreasing learning rate (\texttt{LR}), and both may include boundaries to fix $q$ between zero and one. A small, non-zero lower bound is preferable, as a true zero will eliminate the synapse from all future samples, and consequently, any further learning. Whereas analytically obtained dropout weights must be down-scaled to account for multiple active inputs, the above algorithms may be applied to a pooled set of multiple weight vectors so that no adjustment is necessary. \paragraph{Learning rule simulations} To compare learning rules, a weight vector representing an encoded bimodal distribution was generated. Each learning rule was used to train a vector of transmission probabilities, that was then used to draw random samples from the weight vector. Sampling was performed by first applying the dropout mask, then taking the maximum of the remaining weights. Transmission probability start values were set to $\hat q_{i,0} \sim N(0.3, 0.1)$, and each learning rule was applied with Algorithm 2 for 10,000 iterations at learning rate $\gamma=0.0025$. Figure \ref{fig:learningRules} compares the results of five possible learning rules under Algorithm 2, each with a different strategy for adjusting the target bias. In the left panel, analytic and learned transmission probabilities are plotted against the rank of their associated weights, in order for largest to smallest. In the center panel, learned probabilities are compared against the analytic values. In the right panel, the true distribution is compared to each learned result. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{learningRules_line.pdf} \caption{Comparison of learning rules with different bias adjustments to the target. Subtractive: a constant (0.35) is subtracted from $\hat q^*_{i,t}$. Rescale: $\hat q^*_{i,t}$ is rescaled by $|\mathbf{s}_t|/(n-i+1)$. Power (Constant): $\hat q^*_{i,t}$ is raised to a fixed exponent ($\psi=7$). Power (Variable): $\hat q^*_{i,t}$ is raised to a varying exponent ($\psi=(n-i)\hat q_{i,t-1}+1$).} \label{fig:learningRules} \end{figure} First, it can be seen in the left and center panels that using the unadjusted target for learning results in probabilities (in purple) that retain the form of the analytic probabilities (black) but differ in both scale and offset. Subtracting a small value from the target preserves but exaggerates features of the distribution (orange). Rescaling the target by the given subset size over its max value results in transmission probabilities biased toward their mean value (blue). Raising the target to a constant exponent $\psi$ results in substantially less biased results (green) in all but the smallest weights, which would otherwise be assigned the highest transmission probability. Using a variable exponent $\psi_t=(n-i)\hat q_{i,t-1} +1$ (red) corrects for all bias, including the smallest weights. The panel on the right shows that subtractive and scalar adjustments to the learning target do not adequately suppress the largest activations, resulting in under-represented tails. Conversely, both exponent based adjustments produce very close approximations to the full distribution. Bias in the smallest weights has little effect on the final distribution, so the gains of using a variable exponent over a fixed exponent are negligible. Interestingly, the fixed exponent adjustment has particular support from \cite{dodge1967co}, who found that the postsynaptic potential of muscle fibers was related to the fraction of an inferred receptor type bound by calcium ions raised to the fourth power. They infer that this relationship results from the cooperative action of four calcium ions per vesicle release. \subsection{Network Simulations} Monte Carlo simulations were used to look at the average tendency of dropout to sample both the conditional distribution of the output data (i.e., Equation \ref{eq:mapping}) and epistemic uncertainty resulting from the parameter distributions (Equation \ref{eq:beta2dropoutvar}) in the model described previously. Weight samples from the beta distribution were used as a semi-analytic point of comparison for the latter. Analytic transmission probabilities and the local, iterative learning methods were compared. \paragraph{Data} Two data-generating models were used. The first involved mapping discrete input values to output distributions with increasing variance, i.e., heteroskedasticity, to test the accuracy of estimated residual variances by the dropout model. The second involved both heteroskedasticity and bimodality over a continuous range to examine the overall versatility of the model. The first input and output data were generated from continuous normal distributions as $y\sim \mathcal N(0, 0.2+0.2(x+4)), x\in \{-4, -2, 0, 2, 4\}$, with $N=4,000$ rows of data total, with an equal number per value of $x, N_x = 800$. The second model was a mixture distribution with $N=4,000$ rows of data total: \begin{align} y&\sim p(x)f_1(x) + [1-p(x)]f_2(x), \quad x\in [-4, 4] \nonumber \\ p(x)&=\text{logit}^{-1}(x/2),\nonumber \\ f_1(x)&= \mathcal{N}(-2, 0.2),\nonumber \\ f_2(x)&= \mathcal{N}(x/4, 0.2 + 0.0625(x+4)). \end{align} In this scenario, one distribution has a fixed mean and standard deviation, but with a declining density, while the other has a positive linear trend in the mean, variance, and density. For all simulations, $x$ and $y$ were coordinate transformed to multivariate tuning curve activation matrices $\mathbf{X}$ and $\mathbf{Y}$ using a kernel of 100 Gaussian curves, equally spaced in $[-6, 6]$ with $\sigma=0.05$. To transform random posterior samples $\mathbf{\hat Y}=f[(\mathbf{M \odot W})^\intercal \mathbf{X}]$, where $\mathbf{M}\sim \text{Bernoulli}(\mathbf{\bar Q})$, to real numbers $\hat y$, we defined a sequence $z\in [-6,6] \subset \mathbb{R}$, a Gaussian kernel matrix $\mathbf{H}$ defined over $z$, and used \begin{align} \hat y_i &= \underset{z_i}{\text{arg max}}\, \mathbf{H Y_i^\intercal }. \end{align} \paragraph{Models} Learning network weights involved summing the outer products of $\mathbf{X,Y}$ as described previously. The initial values, i.e., priors, for the evidence matrices were generated as $\mathbf{A_0} \sim \mathcal U(0.025, 0.026)$ and $\mathbf{B_0} \sim \mathcal U(0.100, 0.101)$, and the learning rate was set to 1. 1,000 samples per $x$ value were generated, and 200 repetitions of the simulation were run to produce distributions of each estimated variance component. The beta distribution model was used as a reference to produce posterior samples from the distributions of weights, representing epistemic uncertainty, where $\mathbf{W}\sim\text{Beta}(\mathbf{A}+1,\mathbf{B}+1)$. The primary model of interest used the fixed matrix of weight means, $\mathbf{W}=\mathbb{E}[\text{Beta}(\mathbf{A}+1,\mathbf{B}+1)]$ (Equation \ref{eq:betamean}), and relied only on dropout for both epistemic and residual variance components. For each posterior sample, this weight matrix was masked by probability matrix $\mathbf{\bar Q}=\Phi \odot \mathbf{Q}$, i.e., transmission probabilities combining both empirical (Equation \ref{eq:mapping}) and epistemic (Equation \ref{eq:beta2dropoutvar}) distributions. The model was run with both locally learned probability matrix $\mathbf{\bar Q}$ using Algorithm 2 with 5,000 iterations and the analytically obtained probability matrix, i.e, rows computed according to Equation \ref{eq:mapping}. The learning rule with fixed exponential bias adjustment was used, with $\psi=8$ and $\gamma=0.01$. \paragraph{Results} In our primary results, we visually and numerically compare simulated data to samples drawn from the posterior distribution encoded in the learned network weights. If the network is accurately representing the complete Bayesian posterior, then the network samples should be distributed approximately according to the original data but with additional variance corresponding to epistemic uncertainty, i.e., inflation in regions with fewer data points. Figure \ref{fig:simdata1} shows the simulated data (left) with the associated posterior samples over the full domain of $x\in[-6,6]$ (right). \begin{figure*}[!p] \centering \includegraphics[width=\linewidth]{simresults1.pdf} \includegraphics[width=\linewidth]{simresults2.pdf} \caption{Simulated data versus dropout samples from the network. Top row: simple variance model using only five input values. Bottom: bimodal, heteroskedastic model. Left column: One trial of simulated data from each model. Right column: Samples from the complete estimated posterior distribution (blue) and samples of the MAP only (red). X-axis jitter added to better show sample density. Epistemic variance is shown to vary with the amount of available data from which to learn each part of the input-output domain. Uniform priors are sampled where no data were available.} \label{fig:simdata1} \end{figure*} Samples from the complete posterior distribution are shown in blue. The additional epistemic uncertainty is overlaid in red as samples of the \textit{maximum a posteriori} (MAP). The MAP was sampled by setting transmission probabilities to only Equation \ref{eq:beta2dropoutvar} such that the observed data distribution was excluded. The network draws uniformly distributed samples where no data are available to inform input-output associations, such as between activated inputs values in the first simulation, and toward the edges of the input-output domains in both simulations. The uniformity of the samples in these regions demonstrates maximal epistemic uncertainty. Specifically, the network samples from the uniform priors of the network weights in the absence of any data to update those priors. Conversely, the variance of MAP samples is smallest where the training data are most available, reflecting a high certainty in the most central or likely output value. The gradations from maximal to minimal uncertainty visible around each input value in the first simulation reflects the weaker co-activation of tuning curves peripheral to each input value. The first simulation was used to make numeric comparisons of the posterior distribution to the data-generating model. The left panel of Figure \ref{fig:simresults} shows the average estimated standard deviations of output samples for each active input value. These samples used only the analytic and learned probabilities representing the observed distribution, whereas epistemic uncertainty was excluded. Results for both the analytic and iteratively learned transmission probabilities are plotted against their respective data-generating values. Both methods closely approximate the true standard deviations with minor bias. An overall upward bias in the local learning reflects imperfect convergence to the analytic probability function, which can be improved with more iterations and a lower learning rate. The analytically derived transmission probabilities were less biased overall, but with slight downward bias in the highest variances, and slight upward bias in the lowest. These biases may reflect either or both imperfect choices for normalizing over multiple inputs or an artifact from our chosen strategy for inverting the tuning curves to produce the final continuous samples. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{simresults_both_c.pdf} \caption{Left: Estimated standard deviations of posterior samples, excluding epistemic uncertainty, versus the true, data-generating standard deviations. Both analytic and iteratively learned transmission probabilities produce accurate samples of the observed distribution with minor bias. Right: Epistemic uncertainty comparisons as MAP standard error estimates obtained by sampling from the Beta model (x-axis) versus by dropout (y-axis). Open symbols represent 95\% quantiles.} \label{fig:simresults} \label{fig:weights} \end{figure} The right pane of Figure \ref{fig:simresults} compares the standard error, i.e., standard deviation of MAP samples, from the transmission probabilities to estimates given by samples from the beta distribution model. Here, too, results fell along the identity line, showing that dropout produces a variational model with comparable results to the beta distribution. This result is expected because the epistemic adjustment to transmission probabilities was derived from the variance equation for the beta distribution. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{weightmaps_11_24.pdf} \includegraphics[width=\linewidth]{weightMaps2_11_24.pdf} \caption{From left to right: (1) Weights relating inputs $\mathbf{X}$ (x-axis) to outputs $\mathbf{Y}$ (y-axis); (2) Analytic release probabilities per input neuron; (3) Release probabilities iteratively learned per neuron; (4) Release probabilities iteratively learned across pooled inputs; Lighter corresponds to higher values in all plots. } \end{figure} The matrices of learned weights and their release probabilities from a single iteration of simulation are shown in Figure \ref{fig:weights}, with lighter gray representing higher values in all plots. In the first panels to the left, the data distributions are clearly encoded by the learned weight values for the active values of the input. For input domains where no data were simulated, the priors appear as flat, uniform weights over the output domain. Under all learning schemes for transmission probabilities, the probabilities are highest among weights representing the tails of the distributions. The heightened transmission probability in the distributional tails serves to counteract the inhibitory effects of the most active output neurons, which would otherwise prevent less likely network states from being sampled. In coordinates where no data were observed, release probabilities only reflect the random noise of the priors. The smallest network weights show the same dropout tendencies here as were apparent in the previous simulations of local learning. Under the analytic mapping, the smallest weights are assigned the highest probabilities. Under the iterative, fixed exponent learning rule used here, the smallest weights are neglected in favor of those that most define the observed distribution. All methods result in the absolute smallest weights transmitting 100\% of the the time, shown as the scattered white pixels, but this aspect is artifactual to our abstracted derivations and is not assigned any practical or theoretical importance. \section{Discussion} Our simulations demonstrate that neural networks subject to a range of biological constraints can represent and sample from a complete posterior distribution that includes both observed variation and epistemic uncertainty by synaptic failure alone when release probabilities are derived from weight values. Furthermore, the mapping from weights to release probabilities can be learned locally, using only information available from the present state of the network during each iteration of sampling. The analytic mapping from weights to release probabilities can be stated concisely: the release probability of each synapse is its strength normalized by the sum over itself and all weaker synapses. Intuitively, this means that the normalization factor is largest when there are many projections with just slightly weaker connections. The strange consequence is that for a nearly uniform conditional distribution, the transmission probabilities will not at all be uniform, but rather hyperbolic. This distribution is apparent as the sparse noise in the probability matrices shown in Figure \ref{fig:weights} where no data were available. Further analysis may be warranted to rigorously generalize both our analytic derivation of release probabilities and the local learning rule to the multi-input case, though we found that such rules derived from the single-input simplification appear to work well in simulation. It is unclear how much tolerance to the inevitable but small approximation errors we should expect from the brain. For initial analytic steps and notes toward multivariate generalization, see Appendix C. \paragraph{Biological plausibility} In this paper, we do not comprehensively establish a biological basis of the local learning rule, but only show that such a rule is possible. \cite{branco2009probability} name several mechanisms that appear to be involved in rapidly modulating dropout rates, including Ca\textsuperscript{2+} ions, astrocytes \citep{semyanov2020making}, postsynaptic endocannabinoids, and other ambient neurotransmitters. Of particular interest is the similarity of our fixed exponent learning rule, which best balanced simplicity and effectiveness, to the model given by \citep{dodge1967co} in which postsynaptic potential in muscle fibers is related to the fraction of calcium ions bound in the synapse to the fourth power. For our learning rule, a power of four is suitable for an accurate distribution over an output layer represented by 10-50 neurons, i.e., projections per input neuron (For more notes and analysis of learning rules, see Appendix B). We derived a conventional, Bayesian form of epistemic uncertainty for neural networks that maps cumulative synaptic activity to an additional factor in the transmission probabilities. Our mapping achieves the basic principle of increasing precision as more evidence is gathered. However, by searching for the simplest mapping or learning rule for epistemic uncertainty, we risk overlooking an important theoretical point. In the the brain, several advantages may be conferred by situationally modulating epistemic uncertainty. Whereas observed distributions are, according to our learning rules, shaped relatively slowly over the course of sampling, epistemic uncertainty may be modulated rapidly in response to external contexts and network states more broadly. In dangerous circumstances, the network may increase transmission probabilities to limit sampling and act swiftly according to only the most likely inferences. Biological evidence suggests that astrocytes respond to external stimuli and behavior with increased Ca\textsuperscript{2+} signaling by way of neurotransmitters such as noradrenaline, dopamine, and acetylecholine \citep{semyanov2020making, paukert2014norepinephrine}. Furthermore, hippocampal and cortical astrocytes modulate vesicle release probabilities and plasticity, and may be key to establishing biological control over statistical modes of processing. In particular, the distal branches of astrocytes undergo rapid, externally induced Ca\textsuperscript{2+} transients as a function of their morphology, which changes in response to local neural activity \citep{semyanov2020making, bazargani2016astrocyte}. Subsequent learning rules for synaptic failure probabilities should consider mathematical constraints based on the morphological and signaling dynamics of astrocytes at the synapses. Likewise, our current findings may provide one functional interpretation of such activity. More broadly, it has been suggested that Ca\textsuperscript{2+} signaling among astrocytes constitutes an additional, complementary pathway for longer-term information processing and modulation of mental states \citep{kastanenka2020roadmap}. In theory, states of creative problem solving, idle thought, and rumination may be a few examples of posterior-sampling processes that are evoked by restricting Ca\textsuperscript{2+} and broadly reducing vesicle release. In this mode, as we have shown, the network may conduct searches across its encoded distributions. By reducing probabilities below the optimal rates for sampling from observed distributions, a network can sample from priors that extend beyond the boundaries of its encoded experiences. This requires that such priors are true, literal priors, or preexisting synaptic connections that did not result from what we typically regard as learning. With regard to sampling from observed distributions, the key information used in our local learning rule is the size of the maximum active synapse relative to the sum of all active synapses. This term is motivated in part by mathematical considerations but is plausible if we assume that the active synapses uptake an extracellular agent at rates related to their sizes. The agent may be Ca\textsuperscript{2+} or an ambient neurotransmitter, but most importantly it must only be available in limited supply during each sample. That way, synapses compete and the amount taken up by each is approximately the aforementioned term in the learning rule: its size or rank relative to the active subset. Other mechanisms of normalization may exist in the cell bodies of astrocytes with branches extending to the set of relevant, competing synapses in a receiving layer. \paragraph{Salience and epistemic uncertainty} In our models, the coefficient $\lambda$ represents the salience assigned to data relative to a regularizing prior. It is well known that perceptual salience is flexible in the human brain and controlled by many factors, including prior expectations, dangers, and emotional states \citep{kanouse1987negativity, baumeister2001bad}. This is unlike most classical statistical analyses in which rows are given equal weight, though inverse probability-weighting and matching schemes are popular among epidemiological research \citep{Rosenbaum87, PaulRubin83, li2018balancing, mansournia2016inverse}. Under Bayes' theorem, different data points may have different epistemic weight, as each is itself a likelihood with a certain precision corresponding to its perceptual salience. \paragraph{Spatial vs temporal representation} For simple models, population codes may simultaneously represent the complete conditional distribution, making sampling an unnecessarily slow mode of processing. In more complex models, sampling may be necessary in the same way it is necessary for serial computing: to approximate a complete distribution. Population codes provide the capability to fully encode highly complex distributions, but in practice, applying those distributions to a particular action or perception may not be straightforward or possible. Different intervals of a distribution over one layer may correspond to mutually exclusive endpoints in downstream layers. Sampling allows the network to control its search across arbitrarily complex domains by engaging in random activity in the simpler, earlier stages of processing leading up to them. This idea is analogous to sampling from the lowest-dimensional encoding layer of a variational autoencoder \citep{kingma2013auto} to search the final layer of complex reconstructions. As the network samples, it can use the relative frequency of downstream activations to weigh the likelihoods or value attributions of outcomes before making a final selection. Additionally, resources in the brain may be too limited for complex simultaneous representations. \cite{levy2002energy} shows that synapses fail at rates that are optimal to promote energy efficiency. Indeed, it may be natural to link regularization of computational models to resource efficiency of the model architecture, as the motivation behind the former is to produce models with fewer connections and better generalization to out-of-sample data. These ideas can be further explored by simulations of neural networks with reinforcement learning and appropriately complex, probabilistic tasks. Future implementations of our model in spiking neural networks are planned, namely in the Axon framework that is currently under development \citep{axonGithub}. Preliminary tests of an analogous network structure in the same data model presented here have been successful at recovering conditional distributions from spike rates, but more work is needed to further refine and establish our proposed learning rules according to biological findings and limitations. \subsection{Conclusion} In summary, this paper demonstrates that the two primary components of Bayesian inference can be represented by population codes and sampled using synaptic failure as the only source of randomness in a network. We further demonstrate that in a biologically motivated sampling scheme consisting of synaptic failure and lateral inhibition, correct failure probabilities can be learned using only the current, local state of the network during each iteration of sampling. Rapid modulation of sampling behavior may allow networks to situationally search and evaluate likelihoods over complex, learned distributions, with possible implications for complex planning, problem solving, and creativity. \bibliographystyle{apalike}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The fact that the Laplace spectrum of a compact Riemannian manifold does not determine its geometry became popularly known with the 1992 announcement by Gordon, Webb and Wolpert \cite{GWW} that ``One cannot hear the shape of the drum." Determining the information that the Laplace spectrum of a manifold \emph{does} contain about the geometry or topology of that manifold has been a productive endeavor for many decades. One way to approach this question is to quantify the similarities shared by manifolds with a given spectrum. For example, Osgood, Phillips and Sarnak \cite{OPS} showed that the spectrum of a closed surface determines the metric of the surface up to a family of metrics which are compact in the $C^{\infty}$-topology. In the presence of a uniform lower bound on sectional curvature, Brooks, Perry and Petersen \cite{BPP} showed that isospectral sets of compact Riemannian manifolds, of dimension different than four, are finite up to diffeomorphism type. In this paper we extend the result of Brooks, Perry and Petersen mentioned above to the category of two dimensional Riemannian orbifolds. We prove the following theorem for compact, closed Riemannian 2-orbifolds: \medskip \noindent \bf{Main Theorem 1:} \it For a fixed real number $k$, let $\mathcal{S}(2,k)$ denote the set of isospectral Riemannian 2-orbifolds with sectional curvature uniformly bounded below by $k$. The collection $\mathcal{S}(2,k)$ contains orbifolds of only finitely many orbifold diffeomorphism types. \rm \medskip \noindent In the process of obtaining this first theorem we prove a more general finiteness theorem with geometric bounds rather than spectral ones. In particular we obtain the following result for compact, closed Riemannian 2-orbifolds: \medskip \noindent \bf{Main Theorem 2:} \it For $D>0$, $v >0$ and $k$ fixed real numbers, let $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ denote the set of Riemannian 2-orbifolds with sectional curvature uniformly bounded below by $k$, diameter bounded above by $D$, and volume bounded below by $v$. The collection $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ contains orbifolds of only finitely many orbifold diffeomorphism types. \rm \medskip \noindent The notation $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ follows that in \cite{GP}. By adding and deleting bounds, this notation can be used to express related statements. For example, to formulate an orbifold analogue of the Cheeger finiteness theorem \cite{Ch} one would add an upper bound $K$ on sectional curvature and consider the set of $n$-orbifolds denoted $\mathcal{O}_{k, \cdot, v}^{K, D, \cdot}(n)$. \medskip An orbifold is a mild generalization of a manifold obtained by allowing coordinate patches to be modeled on $\mathbb{R}^n$ modulo the action of a finite group. Orbifolds first arose as objects of study in algebraic geometry over a century ago. Satake's \cite{S1} formulation of orbifolds in the language of differential geometry, under the name of $V$-manifold, appeared in 1956. Later Thurston \cite{T} popularized $V$-manifolds among topologists and differential geometers under the name ``orbifolds." Recently, interest in orbifolds has risen markedly due to their use in string theory (see \cite{ALR} for example). This paper contributes to a new and expanding literature in the spectral geometry of Riemannian orbifolds. For a concise survey of this literature the authors recommend the introduction to \cite{DGGW}. We begin with a review of orbifold structures in Section \ref{background}. In Section \ref{singularbounds} we show how the geometric bounds on $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ imply finiteness for two aspects of the topology of an orbifold's singular set. An application of Perelman's Stability Theorem in Section \ref{topology} shows that the underlying space of an orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ has one of only finitely many homeomorphism types. These controls on the singular set and the underlying space of an orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ are combined to prove Main Theorem 2 in Section \ref{finiteoflddiffeo}. Main Theorem 1 then follows from a short argument, recalled from \cite{St}, which shows that $\mathcal{S}(2,k)$ is a subset of $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$. \subsection{Acknowledgments} We thank Carolyn Gordon and David Webb for helpful discussions. We also thank Peter Storm for a suggestion that clarified the proof of Proposition~\ref{okdvtprop2}. \eject \section{Orbifold background}\label{background} In this section we detail the orbifold related definitions and notation that are used in what follows. \subsection{Definition of a Riemannian orbifold} Just as a manifold is a topological space which locally has the structure of $\mathbb{R}^n$, an orbifold is a topological space which locally has the structure of $\mathbb{R}^n$ modulo the action of a finite group. We have the following, which is a direct generalization of the definition of a manifold chart. \begin{definition} Let $X_{O}$ be a second countable Hausdorff topological space. Given an open set $U$ contained in $X_{O}$, an \textit{orbifold chart} over $U$ is a triple $(\widetilde{U}, \Gamma_U, \pi_U)$ such that: \begin{enumerate} \item $\widetilde{U}$ is a connected open subset of $\mathbb{R}^n$, \item\label{groupaction} $\Gamma_U$ is a finite group which acts on $\widetilde{U}$ by diffeomorphisms, \item $\pi_U: \widetilde{U} \to U$ is a continuous map such that $\pi_U\circ\gamma = \pi_U$ for all $\gamma\in \Gamma_U$ and which induces a homeomorphism from $\widetilde{U}/\Gamma_U$ to $U$. \end{enumerate} \end{definition} As with manifolds, we cover the space $X_{O}$ with orbifold charts subject to a suitable compatibility condition (see page 2 in \cite{ALR}). A smooth orbifold $O$ is the topological space $X_{O}$ together with a maximal atlas of orbifold charts. The topological space $X_{O}$ is called the \textit{underlying space} of the orbifold. We note that if a group $\Gamma$ acts properly discontinuously on a manifold $M$, then the quotient space $M/\Gamma$ is an orbifold. Any orbifold which can be realized as a quotient of a group action on a manifold in this way is called a \textit{good} orbifold. Otherwise, the orbifold is called a \textit{bad} orbifold. A Riemannian structure on an orbifold is defined by endowing the local cover $\widetilde{U}$ of each orbifold chart $(\widetilde{U},\Gamma_U, \pi_U)$ with a $\Gamma_U$-invariant Riemannian metric. By patching these local metrics together with a partition of unity we obtain a \emph{Riemannian orbifold}. \subsection{Local structures on Riemannian orbifolds} Let $p$ be a point in an orbifold $O$ and take $(\widetilde{U},\Gamma_U, \pi_U)$ an orbifold chart over a neighborhood $U$ of $p$. If a point $\tilde{p}$ in $\pi_U^{-1}(p)$ has nontrivial isotropy, we say that $p$ is a \emph{singular} point. The isomorphism class of the isotropy group of $\tilde{p}$ is independent of both choice of element of $\pi_U^{-1}(p)$ and choice of orbifold chart about $p$. This isomorphism class is called the \emph{isotropy type} of $p$. As in \cite{DGGW}, we call a chart about $p$ in a Riemannian orbifold a \emph{distinguished chart of radius $r$} if $\widetilde{U}$ is a convex geodesic ball of radius $r$ centered at point $\tilde{p}$ with $\pi_U(\tilde{p}) = p$. In this situation the isotropy type of $p$ is the isomorphism class of the group coming from the chart, $\Gamma_U$. We denote the tangent bundle of an orbifold $O$ by $TO$. Here we shall simply recall the structure of a fiber of $TO$ over point $p\in O$, but refer the interested reader to \cite{S2} for more details. Take $(\widetilde{U},\Gamma_U, \pi_U)$ a distinguished chart about $p$ and let $\gamma \in \Gamma_U$. The differential of $\gamma$ at $\tilde{p}$ acts on $T_{\tilde{p}}\widetilde{U}$. Let $\Gamma_{U*\tilde{p}}$ denote the set of all such differentials. The fiber of $TO$ over $p$, denoted $T_pO$, is defined to be $T_{\tilde{p}}\widetilde{U}/\Gamma_{U*\tilde{p}}$. Fiber $T_pO$ is independent of choice of orbifold chart and is called the \emph{tangent cone to} $O$ \emph{at} $p$. When $O$ is a Riemannian orbifold, the set of unit vectors in $T_pO$ is called the \emph{unit tangent cone to} $O$ \emph{at} $p$ and is denoted $S_pO$. The tangent cone at a point in an orbifold need not be a vector space. One consequence of this for Riemannian orbifolds is that the measure of the angle between vectors in a tangent cone needs a careful definition. \begin{defn}\label{angledef} Let $p$ be a point in a Riemannian orbifold that lies in an orbifold chart $(\widetilde{U},\Gamma_U, \pi_U)$. Take point $\tilde{p} \in \pi_U^{-1}(p)$. Let $\pi_{U*\tilde{p}}$ denote the differential of $\pi_U$ at $\tilde{p}$. For vectors $v$ and $w$ in $T_pO$, let $\tilde{v}_1, \tilde{v}_2, \dots, \tilde{v}_r$ denote the elements of the set $(\pi_{U*\tilde{p}})^{-1}(v)$, and $\tilde{w}_1, \tilde{w}_2, \dots, \tilde{w}_s$ denote the elements of the set $(\pi_{U*\tilde{p}})^{-1}(w)$. The angle between $v$ and $w$ in $T_pO$ is defined to be \begin{align*} \angle(v,w) = \min_{\substack{ i = 1, 2 \dots, r \\ j = 1, 2 \dots, s }} \{\angle(\tilde{v}_i, \tilde{w}_j)\}. \end{align*} \end{defn} Finally, we are able to discuss curvature on a Riemannian orbifold by using local manifold covers. We say that a Riemannian orbifold has sectional (resp. Ricci) curvature bounded below by $k$ if each point in the orbifold can be locally covered by a manifold with sectional (resp. Ricci) curvature bounded below by $k$. \subsection{Global structures on Riemannian orbifolds}\label{globalstructures} We give a length space structure to a Riemannian orbifold using the distance function, \[d(p,q) = \inf\{\text{Length}(c) : c \ \text{is a continuous curve from} \ p \ \text{to} \ q\}.\] When an orbifold $O$ is complete with respect to this metric, any two points in $O$ can be joined by a curve that achieves the distance between them. Such a curve, parametrized with respect to arclength, is called a \emph{segment} in $O$. Details on these ideas are given in \cite{Bz}. Smooth functions on an orbifold, as well as the Laplace operator acting on those functions, are described in \cite{Chi}. For compact Riemannian orbifolds, by \cite{Chi} and \cite{DGGW}, the eigenvalue spectrum of the Laplace operator is a discrete set of positive real numbers, tending to infinity. Two orbifolds are said to be \textit{isospectral} if they have the same eigenvalue spectrum. \begin{remark}\label{spectrumremark} Chiang's original proof that the eigenvalue spectrum is a discrete set tending to infinity is based on Satake's original definition of $V$-manifold, for which the singular set has codimension at least 2. An orbifold is a slight generalization of the notion of a $V$-manifold which has no restriction on the singular set. Chiang's proof is extended in \cite{DGGW} to include all compact Riemannian orbifolds. At the time that \cite{St} was published, only Chiang's result was known and hence many of the results in \cite{St} appear to depend on the codimension $\geq 2$ condition, though in fact, they do not. We use some of these results from \cite{St} in Sections~\ref{singularbounds} and \ref{finiteoflddiffeo} below. However, for this paper we state them using the most general definition of a Riemannian orbifold. \end{remark} \subsection{Smooth maps between orbifolds} We now generalize the notion of a diffeomorphism of manifolds to the orbifold setting. An orbifold diffeomorphism represents an equivalence of the smooth orbifold structure as well as of the underlying topological space. The two definitions below come from \cite{ALR}. \begin{definition}\label{weakmap} Let $O_1$ and $O_2$ be orbifolds. A \textit{smooth orbifold map} $f:O_1\to O_2$ consists of a continuous map from $X_{O_1}$ to $X_{O_2}$ such that for any $x\in O_1$ there are orbifold charts $(\widetilde U, \Gamma_U, \pi_U)$ over neighborhood $U$ of $x$ and $(\widetilde V, \Gamma_V, \pi_V)$ over neighborhood $V$ of $f(x)$ such that: \begin{enumerate} \item $f(U) \subset V$, \item there exists a smooth lift $\tilde f$ of $f$ carrying $\widetilde U$ to $\widetilde V$ for which $\pi_V \circ \tilde f = f \circ \pi_U$. \end{enumerate} \end{definition} \begin{definition}\label{diffeo} Orbifolds $O_1$ and $O_2$ are diffeomorphic if there exist smooth orbifold maps $f:O_1\to O_2$ and $g:O_2\to O_1$ such that $f \circ g = 1_{O_2}$ and $g \circ f = 1_{O_1}$. \end{definition} \begin{remark} The literature contains several different definitions of maps between orbifolds. Although maps given by Definition~\ref{weakmap} are weak in the sense that they behave poorly with respect to bundles, they suffice for the present discussion. In particular, effective orbifolds which are diffeomorphic via Definition~\ref{diffeo} have strongly diffeomorphic groupoid presentations. \end{remark} \vspace{15mm} \section{Bounds on the singular set of a 2-orbifold}\label{singularbounds} Singular points in a smooth 2-orbifold have one of only three possible forms. A cone point is locally modeled on a disk in the plane modulo the action of a cyclic group of rotations, a mirror point is modeled on a disk modulo a reflection, and a dihedral point is modeled on a disk modulo the action of a dihedral group. There are also only three forms that a connected component of the singular set of a compact 2-orbifold without boundary can take. If the underlying space of the 2-orbifold is a surface without boundary, the only singular points are isolated cone points. When the underlying space has non-empty boundary, each component of its boundary is circular. These circles do not form an orbifold boundary, however, because they are either reflector circles, made up entirely of mirror points, or reflector crowns, which consist of a finite number of dihedral points linked together by continua of mirror points. Thus, when the underlying space of a compact 2-orbifold has boundary, the connected components of its singular set are some combination of cone points, mirror circles and reflector crowns. For further details, see Section 13.3 in \cite{T}. In this section we establish a universal upper bound on the number of connected components of the singular set of any orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$. In addition we prove that the number of dihedral points in an orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ is universally bounded above. These controls, combined results from Section \ref{topology} and \cite{St}, will be used in Section \ref{finiteoflddiffeo} to prove the two main theorems of this paper. We begin with two technical lemmas. Suppose that $K$ is a compact subset of complete Riemannian orbifold $O$. For $p \in O$, let $\mathfrak{d}_{\scriptscriptstyle{pK}} \subset S_pO$ denote the set of initial velocity vectors of segments running from $p$ to $K$. We call $\mathfrak{d}_{\scriptscriptstyle{pK}}$ the set of directions from $p$ to $K$. Also, given a subset ${\mathfrak a}$ of the unit $n$-sphere $S^n$, we define \begin{align*} {\mathfrak a}(\theta) &= \{v \in S^n : \angle({\mathfrak a},v) < \theta\}. \end{align*} \begin{lemma}\label{technicallemma1} Let $O \in \mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(n)$ and $p, q \in O$. Then there exist $\alpha \in (0, \frac{\pi}{2})$ and $r>0$ such that if \begin{align*} \frak{d}_{pq}(\tfrac{\pi}{2} + \alpha) = S_pO, \ \text{and} \ \frak{d}_{qp}(\tfrac{\pi}{2} + \alpha) = S_qO, \end{align*} then $d(p, q) \ge r$. The constants $\alpha$ and $r$ depend only on $k$, $D$, $v$ and $n$. \end{lemma} \begin{proof} The statement of this lemma is precisely that of Lemma 8.2 in \cite{St}, using the more general definition of an orbifold (see Remark~\ref{spectrumremark}) and without the requirement that $O$ have only isolated singularities. The assumption about isolated singularities is never used in the proof given in \cite{St}, so the result holds for orbifolds with general singularities as well. \end{proof} The second technical lemma shows that, for $r>0$, there is a universal upper bound on the size of a set of pairwise $\ge r$-apart points in $O \in \mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(n)$. We recall that a \emph{minimal $\varepsilon$-net} is an ordered set of points $p_1, \dots, p_N$ in a metric space such that the open balls $B(p_i,\varepsilon)$ cover the metric space, but the open balls $B(p_i, \tfrac{\varepsilon}{2})$ are pairwise disjoint. When the metric space is compact and connected, it is known that one can find a minimal $\varepsilon$-net in that space for any $\varepsilon>0$. \begin{lemma}\label{technicallemma2} Suppose that $O$ is an $n$-dimensional orbifold with diameter bounded above by $D>0$ and Ricci curvature greater than or equal to $(n-1)k$. Also suppose that $\{p_1, p_2, \dots, p_m\}$ is a set of points in $O$ for which $d(p_i,p_j) \ge r>0$ with $i,j = 1, 2, \dots, m$, $i \ne j$. Then there is a constant $C(r, k, D, n)$ such that $m \le C$. \end{lemma} \begin{proof} Let $\{x_1, x_2, \dots, x_N\}$ be a minimal $\frac{r}{2}$-net in $O$. Without loss of generality assume $B(x_1, \tfrac{r}{4})$ has the minimal volume among all of the $\frac{r}{4}$-balls about points in this net. Because the $\frac{r}{4}$-balls are disjoint, \[N \vol B(x_1, \tfrac{r}{4}) \le \sum_{i=1}^N \vol B(x_i, \tfrac{r}{4}) \le \vol O.\] Thus $\vol B(x_1, \frac{r}{4}) \le \vol O/N$. Recall for $p \in O$ and $0 \le s \le S$, the Relative Volume Comparison Theorem for orbifolds in \cite{Bz} implies, \begin{align}\label{inequality} \frac{\vol B(p, S)}{\text{Vol}B(p, s)} \le \frac{\vol B^n_{k}(S)}{\text{Vol}B^n_{k}(s)} \end{align} where $B^n_{k}(r)$ denotes the geodesic ball of radius $r$ in the simply connected $n$-dimensional space form of constant curvature $k$. Now apply line~ (\ref{inequality}) with $p=x_1$, $s= \frac{r}{4}$ and $S=D$. This yields \begin{align}\label{volfact2} \frac{\vol B(x_1, D)}{\text{Vol}B(x_1, \frac{r}{4})} \le \frac{\vol B^n_{k}(D)}{\text{Vol}B^n_{k}(\frac{r}{4})}. \end{align} Using $\vol B(x_1,D) = \vol O$ and $\vol B(x_1, r/4) \le \vol O/N$ we find that line (\ref{volfact2}) becomes \begin{align*} N \le \frac{\vol B^n_{k}(D)}{\text{Vol}B^n_{k}(\frac{r}{4})} = C(r, k, D, n), \end{align*} yielding a universal upper bound on the number of elements in the minimal $\frac{r}{2}$-net. Because each pair of points in $\{p_1, p_2, \dots, p_m\}$ are at least a distance $r$ apart from each other, there can be at most one of these points per open $\frac{r}{2}$-ball. Thus the bound on the number of elements in our minimal $\frac{r}{2}$-net is also a bound on $m$. In particular we have $m \le N \le C(r, k,D, v) $. \end{proof} \begin{prop}\label{components} There is a universal upper bound on the number of connected components of the singular set of an orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$. \end{prop} \begin{proof} Let $O\in\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$. The desired upper bound is the sum of upper bounds $B_C$ on the number of cone points in $O$, $B_R$ on the number of reflector crowns in $O$, and $B_M$ on the number of mirror circles in $O$. We derive each of these bounds and confirm that each depends only on $k, D$ and $v$. The proof of the existence of the upper bound $B_C$ is the same as the proof of a similar bound in Proposition 8.3 of \cite{St}. The bound is achieved by showing that if $p$ and $q$ are any two cone points in an orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$, then since the isotropy groups of $p$ and $q$ are cyclic, it must be the case that there exists $\alpha \in (0, \frac{\pi}{2})$ such that $\frak{d}_{pq}(\tfrac{\pi}{2} + \alpha) = S_pO \ \text{and} \ \frak{d}_{qp}(\tfrac{\pi}{2} + \alpha) = S_qO$. By Lemma~\ref{technicallemma1} we conclude that $p$ and $q$ are at least a distance $r$ apart from each other. An application of Lemma~\ref{technicallemma2} implies that there is a universal constant $B_C$ bounding the number of cone points in an orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$. Essentially the same argument can be used to obtain the bound $B_R$ on the number of reflector crowns. In this case, we know that there must be at least one dihedral point per crown so we show there is a universal bound, $B_D$, on the number of dihedral points in an orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$. We first note that the isotropy group $\Gamma$ of any dihedral point contains a cyclic subgroup and then follow the argument for the cone point case. Obtaining the bound $B_M$ on the number of mirror circles in $O$ is a bit more subtle. To begin, list the mirror circles in $O$ as $S_1, S_2, \dots, S_m$. For $1 \le j \le m$, let $q_j$ be a point on mirror circle $S_j$. Mirror circles in $O$ are non-intersecting since any point of intersection would fail to have one of the three singular structures mentioned above. Thus the set $Q = \{q_1, q_2, \dots, q_m\}$ contains $m$ distinct points. As in the other two cases, we will show that any two points in $Q$ are a distance greater than or equal to $r$ apart and apply Lemma \ref{technicallemma2} to obtain the required bound. Suppose $S_i$ and $S_j$ are distinct mirror circles in $O$. The distance between these circles, $d(S_i, S_j)$, is the infimum of the distance function (see Section~\ref{globalstructures}) restricted to $S_i \times S_j$. We will show that there is an $r>0$ for which $d(S_i, S_j)>r$, and in so doing conclude that points in set $Q$ are pairwise greater than or equal to $r$ apart. Begin by observing that because $S_i \times S_j$ is compact, we can take $(p_i,p_j) \in S_i \times S_j$ an ordered pair for which $d(p_i,p_j)=d(S_i, S_j)$. Also, because $S_i$ and $S_j$ are nonintersecting closed sets, $d(S_i, S_j)>0$. Let $\gamma$ be the segment of length $d(p_i,p_j)$ running from $p_i$ to $p_j$. We claim that the segment $\gamma$ is perpendicular to mirror circle $S_i$ at $p_i$ and to mirror circle $S_j$ at $p_j$. Take a distinguished coordinate chart $(\widetilde{U},\Gamma_U, \pi_U)$ for neighborhood $U$ about $p_i$ such that $\widetilde{U}$ is an open neighborhood of the origin in $\mathbb{R}^2$, $\pi_U(0) = p_i$, and $\Gamma_U = \{id, \rho\}$ where $id$ is the identity map on $\widetilde{U}$ and $\rho$ is reflection of $\widetilde{U}$ across the $y$-axis. By Proposition 15 in \cite{Bz}, there exists a point $z$ in $U$ that lies on $\gamma$ but does not lie on the mirror edge of $U$. This means $\pi_U^{-1}(z) = \{\tilde{z}_1, \tilde{z}_2\}$ with $\tilde{z}_1$ and $\tilde{z}_2$ distinct in $\widetilde{U}$ and $\rho(\tilde{z}_1) = \tilde{z}_2$. Let $\tilde{\gamma}_1$ be the lift of $\gamma$ that connects $0$ and $\tilde{z}_1$. Because $\gamma$ minimizes the distance from $S_i$ to $S_j$, segment $\tilde{\gamma}_1$ achieves the minimal distance from $\tilde{z}_1$ to the $y$-axis in $\widetilde{U}$. Observing that the $y$-axis is a submanifold of $\widetilde{U}$, we see that $\tilde{\gamma}_1$ is orthogonal to the $y$-axis at $0$. We conclude that $\gamma$ is perpendicular to mirror circle $S_i$ at $p_i$. The argument for $p_j$ on $S_j$ is similar. Now $\frak{d}_{p_i p_j}\subset S_{p_i}O$ contains the initial tangent vector to $\gamma$. Because $\gamma$ is perpendicular to the mirror circle $S_i$ at $p_i$, this initial vector lifts to two antipodal vectors in $S_{0}\widetilde{U}$. Thus no matter the value of $\alpha>0$ from Lemma~\ref{technicallemma1}, the fact that $\alpha$ is nonzero implies $\frak{d}_{p_i p_j}(\frac{\pi}{2}+\alpha)=S_{p_i}O$. Similarly $\frak{d}_{p_j p_i}(\frac{\pi}{2}+\alpha)=S_{p_j}O$. Having satisfied the hypotheses of Lemma~\ref{technicallemma1}, we obtain $r>0$ such that $d(S_i,S_j)=d(p_i,p_j) \ge r$. We have shown that points in set $Q$ are pairwise greater than or equal to $r$ apart. Lemma~\ref{technicallemma2} provides universal bound $B_M$ on the number of points in $Q$, and thus on the number of mirror circles in $O$. \end{proof} Note that the universal constant $B_D$ obtained in the proof above is the one required to prove the following proposition. \begin{proposition}\label{dihedralpoints} The number of dihedral points in an 2-orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ is universally bounded above. \end{proposition} \vspace{15mm} \section{Controls on underlying space topology}\label{topology} Using Perelman's Stability Theorem for Alexandrov spaces, we observe that the underlying space of an orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(n)$ has one of only a finite number of homeomorphism types. We begin by recalling Perelman's theorem, denoting the Gromov-Hausdorff metric by $d_{GH}$. Perelman \cite{P} originally proved this result in 1991, but it remained in preprint form. Kapovitch \cite{K} ultimately wrote the result for wider distribution. \begin{theorem}\label{stability} Suppose $X$ is an $n$-dimensional Alexandrov space with curvature bounded below by $k$. Then there exists $\varepsilon = \varepsilon(X)>0$ such that if $Y$ is an $n$-dimensional Alexandrov space and $d_{GH}(X,Y)<\varepsilon$, then $Y$ is homeomorphic to $X$. \end{theorem} This theorem applies in our context because an orbifold with sectional curvature bounded below by $k$ is an example of an Alexandrov space with curvature bounded below by $k.$ We obtain our underlying space topological finiteness result with the following lemma and a compactness argument. \begin{lemma}\label{precompact} The set $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(n)$ is precompact relative to the Gromov-Hausdorff metric. Limit points of this set are Alexandrov spaces of dimension $n$ with the same lower bound on curvature and upper bound on diameter. \end{lemma} \begin{proof} The precompactness result follows from Gromov's Compactness Theorem (Theorem 10.7.2 in \cite{BuBuI}) and relies on the universal upper diameter bound $D$ and the universal constant given in Lemma~\ref{technicallemma2}. Theorem 10.7.2 in \cite{BuBuI} implies that the limit points are Alexandrov spaces with curvatures bounded below by $k$ and diameters bounded above by $D$. By Corollary 10.10.11 in \cite{BuBuI}, the lower volume bound on $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(n)$ prevents collapsing, so the dimension of any limit space is $n$. \end{proof} \begin{prop}\label{ustoptype} The underlying space of an orbifold in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(n)$ has one of only a finite number of homeomorphism types. \end{prop} \begin{proof} Arguing by contradiction, we suppose that $\{O_i\}$ is an infinite sequence of orbifolds in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(n)$ each having an underlying space of a distinct homeomorphism type. Lemma~\ref{precompact} implies that this sequence has a $d_{GH}$-convergent subsequence, and that the limit space $X$ is an $n$-dimensional Alexandrov space with curvature bounded below by $k$. If we choose $\varepsilon = \varepsilon(X)$ as in Theorem~\ref{stability}, there exists $N$ such that $d_{GH}(O_i, X)<\varepsilon$ for all $i>N$. But then each $O_i$, with $i>N$, must be homeomorphic to $X$. This is a contradiction. \end{proof} \section{Finiteness of orbifold diffeomorphism types}\label{finiteoflddiffeo} We use the finiteness results from Propositions \ref{components}, \ref{dihedralpoints} and \ref{ustoptype}, as well as a result from \cite{St}, to prove orbifold diffeomorphism finiteness for orbifolds in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$. Once this result is established, a brief argument shows that orbifolds in $\mathcal{S}(2,k)$ have only finitely many orbifold diffeomorphism types. For ease of exposition, we break $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ into four disjoint subsets following Theorem 13.3.6 in \cite{T}. In particular we write \[\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2) = \mathcal{B} \sqcup \mathcal{E} \sqcup \mathcal{P} \sqcup \mathcal{H}\] where $\mathcal{B}$ are the bad orbifolds, $\mathcal{E}$ the elliptic orbifolds, $\mathcal{P}$ the parabolic orbifolds, and $\mathcal{H}$ the hyperbolic orbifolds. Our argument begins with a lemma from which orbifold diffeomorphism finiteness for non-hyperbolic 2-orbifolds follows immediately. \begin{lemma}\label{isotropy} Orbifolds in $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ contain points of only finitely many possible isotropy types. \end{lemma} \begin{proof} This follows directly from Main Theorem 1 in \cite{St} and Remark~\ref{spectrumremark}. \end{proof} \begin{prop}\label{okdvtprop1} The subset $\mathcal{B}\sqcup \mathcal{E} \sqcup \mathcal{P} \subset \mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ contains orbifolds of only finitely many orbifold diffeomorphism types. \end{prop} \begin{proof} Suppose $\mathcal{B}\sqcup \mathcal{E} \sqcup \mathcal{P}$ contains orbifolds of infinitely many orbifold diffeomorphism types. By Thurston's classification of bad, elliptic and parabolic 2-orbifolds, this implies orbifolds in this collection contain points of arbitrarily large order isotropy type. This contradicts Lemma~\ref{isotropy}. \end{proof} We next consider the case of the hyperbolic orbifolds $\mathcal{H} \subset \mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$. In what follows, we say that two reflector crowns \emph{have the same type} if they have the same number of dihedral points and if, when listed in order, the isotropy types of the dihedral points in the first reflector crown match the isotropy types of the dihedral points in the second reflector crown, up to a cyclic permutation. \begin{prop}\label{okdvtprop2} The subset $\mathcal{H} \subset \mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ contains orbifolds of only finitely many orbifold diffeomorpism types. \end{prop} \begin{proof} Partition $\mathcal{H}$ so that within each partition element orbifolds have the same number and type of cone points, number of reflector circles, number and type of reflector crowns, and underlying space homeomorphism type. Together Propositions~\ref{components}, \ref{dihedralpoints}, \ref{ustoptype}, and Lemma~\ref{isotropy} imply that this partition has a finite number of elements. We prove that $\mathcal{H}$ contains orbifolds of only finitely many orbifold diffeomorphism types by showing that pairs of orbifolds within a partition element must be orbifold diffeomorphic. To begin, let $O_1$ and $O_2$ be orbifolds in the same element of the partition on $\mathcal{H}$. Because $O_1$ and $O_2$ are smooth, Thurston's orbifold classification implies that each of these orbifolds is orbifold diffeomorphic to a quotient of the hyperbolic plane by the properly discontinuous action of a group of isometries. Denote these quotient structures by $O_1 = \mathbb{H}^2/\Gamma_1$ and $O_2 = \mathbb{H}^2/\Gamma_2$. In \cite{M} it is shown that because $O_1$ and $O_2$ have underlying spaces with the same genus and orientability, and they have matched singular data, there exists a group isomorphism $\varphi:\Gamma_1\to\Gamma_2$. Because $\Gamma_1$ and $\Gamma_2$ are cocompact, an application of Theorem 8.16 in \cite{Ka} yields a $\varphi$-equivariant quasi-M\"obius homeomorphism $f$ from the boundary circle of $\mathbb{H}^2$ at infinity to itself. Let $\tilde f:\mathbb{H}^2 \rightarrow \mathbb{H}^2$ denote the Douady-Earle extension of $f$ (see Section 8.4 in \cite{Ka}). The properties of the Douady-Earle extension imply that $\tilde f$ is a $\varphi$-equivariant diffeomorphism. Because the diffeomorphism $\tilde f$ is $\varphi$-equivariant, it induces a homeomorphism $h$ of the underlying spaces of $O_1$ and $O_2$. Using the global orbifold charts on $O_1$ and $O_2$ provided by their quotient structures, the map $\tilde f$ is precisely what is needed to conclude $h$ is a diffeomorphism of orbifolds. Therefore $O_1$ and $O_2$ are orbifold diffeomorphic. \end{proof} We are now in a position to prove the two Main Theorems. \medskip \noindent \bf{Main Theorem 2:} \it For $D>0$, $v >0$ and $k$ fixed real numbers, let $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ denote the set of Riemannian 2-orbifolds with sectional curvature uniformly bounded below by $k$, diameter bounded above by $D$, and volume bounded below by $v$. The collection $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ contains orbifolds of only finitely many orbifold diffeomorphism types. \rm \begin{proof} Apply Propositions~\ref{okdvtprop1} and \ref{okdvtprop2}. \end{proof} \medskip \noindent \bf{Main Theorem 1:} \it For a fixed real number $k$, let $\mathcal{S}(2,k)$ denote the set of isospectral Riemannian 2-orbifolds with sectional curvature uniformly bounded below by $k$. The collection $\mathcal{S}(2,k)$ contains orbifolds of only finitely many orbifold diffeomorphism types. \rm \begin{proof} By the orbifold version of Weyl's asymptotic formula \cite{F}, we know that all orbifolds in $\mathcal{S}(2,k)$ must have the same volume $v$. In addition Proposition 7.4 in \cite{St} states that a collection of isospectral orbifolds, satisfying a uniform lower bound $k(n-1)$ on Ricci curvature, has a corresponding upper bound $D$ on diameter. Thus $\mathcal{S}(2,k)$ is a subcollection of $\mathcal{O}_{k, \cdot, v}^{\cdot, D, \cdot}(2)$ and we apply Main Theorem 2. \end{proof}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} How to engineer systems to perform complex and structured behavior in physical environments has been a long-standing goal of robotics and of artificial intelligence for many decades~\cite{Brooks1986,Prescott1999,Merel2019}. This interest was rewarded with early progress in the design of optimal feedback controllers for a well-defined action in a simple dynamical system, but significant limitations were encountered for controlling more complex systems over a wide range of actions~\cite{Todorov2002}. With the advances in recurrent neural network (RNN) training, this question has been revisited -- notably within the field of deep reinforcement learning~\cite{Heess2017} -- and increased computational power now permits the control of more complex systems. However, significant obstacles remain. First, it remains unclear how to design hierarchically organized modules to interact efficiently to control structured behavior. Second, in the setting of acquiring knowledge in a sequential fashion, ANNs suffer from the phenomenon of catastrophic forgetting, where the incorporation of new information may result in the loss of what has been previously learned. Third, ANNs frequently struggle to apply their knowledge flexibly and thus rapidly improvise to novel situations~\cite{Merel2019,Sodhani2020}. Here, we investigate these questions by designing modular network architectures that can avoid catastrophic forgetting, and, in the setting of a simple sequence generation task, we explore network training techniques that permit networks to generalize. Interestingly, we find that an analytically tractable model inspired by the motor thalamocortical architecture provides a robust solution in our task as well as offers insights that we show can improve the performance of gradient-trained networks. \section{Task and architecture design\label{seq:TasksModels}} Our goal is to study the ability of RNNs to capture two important attributes of successful hierarchical motor control: (\emph i) the ability to acquire a library of complex motor motifs and then learn new motifs without interfering with the execution of previously acquired ones, and (\emph{ii}) the ability to flexibly string motifs into a sequence of arbitrary order without having necessarily rehearsed all of the transitions composing the sequence (Fig.~\ref{fig:TaskArchit}a). Thus, we will train RNNs with different architectures to learn a set of motifs without interference and test them to generate both learned and novel sequences. Our target motifs are chosen to challenge the expressivity of continuous-dynamics networks. Specifically, they have discrete jumps at random times but are constant between these jumps (Fig.~\ref{fig:TaskArchit}a and see appendix for the full list of motifs used in this paper). To output both jumps and constant periods, networks must generate high and low frequency oscillations, as is clearly illustrated by the Fourier series decomposition of a square wave or by examining the fit of one of our motifs with different numbers of exponentially modulated sines (such functions being the natural basis functions of linear recurrent networks; Fig.~\ref{fig:TaskArchit}b). Furthermore, training RNNs to generate motifs such as these is difficult because of instability issues when gradients are propagated through many recurrent steps~\cite{Pascanu2012,Bengio1994}. \begin{figure}[t!] \centering \includegraphics[width=5in]{./comb_Fig12_v3.pdf} \caption{\label{fig:TaskArchit} \textbf{Task and candidate networks.} \textbf{a}) Flexible and extendable motif sequencing. Our goals are to execute time-varying motifs from a library in sequences of arbitrary order without training all possible transitions, and to learn motifs without interfering with the existing library for incorporation into novel sequences. \textbf{b}) Example motif fit with an increasing number of complex exponentials (the eigenmodes of linear dynamical systems). \textbf{c-f}) Top row: Additive architecture; bottom row: multiplicative architecture. \textbf{c}) RNNs that may succeed in the task shown in \textbf{a} because they segregate parameters into motif-specific sets (schematized in colors) while benefiting from fixed shared parameters (schematized in black). \textbf{d-f}) Hyperparameter optimization when all transitions are trained. \textbf{d}) Minimum root mean square error over training, depending on the gain hyperparameters $g^{\textrm{ad}}$, $g^{\textrm{mu}}$, and $g_0^{\textrm{ptb}}$ (defined in section~\ref{seq:TasksModels}). Dots are individual networks, the line is the average. In red, for reference, we show the mean minimum root mean square error in the control architecture (averaged over five individually trained networks for each $g_0^{\textrm{cn}}$). \textbf{e}) Example learning curves. \textbf{f}) Example training trials (2 of 90 motif pairs that compose a minibatch, sampling all transitions). } \end{figure} To minimize interference between motifs, we only consider network architectures that separate the trainable parameters into motif-specific sets (Fig.~\ref{fig:TaskArchit}c). Each architecture employs a shared linear readout and recurrent network (table~\ref{table_params}), but differs in the use of the motif-specific parameters. In the `additive' architecture (Fig.~\ref{fig:TaskArchit}c, top), each motif $\mu$ is produced in response to a learned input vector $\mathbf{b}_{\mu}$, leading to the following dynamics for the activities $\mathbf{x}$: \begin{equation} \tau \; \dot{\mathbf{x}} = - \mathbf{x} + g^{\textrm{ad}} \, \mathbf{J} \tanh \left( \mathbf{x} \right) + \mathbf{b}_{\mu}, \end{equation} where the gain $g^{\textrm{ad}}$ is a hyperparameter and $\mathbf{J}$ is the connectivity matrix -- with iid elements taken from a centered Gaussian with standard deviation (std) $1/\sqrt{N}$ (previous work has shown that if $g^{\textrm{ad}}>1$ this leads to a rich dynamical regime appropriate for complex computations~\cite{Sompolinsky1988,Sussillo2009}). In the `multiplicative' architecture (Fig.~\ref{fig:TaskArchit}c, bottom; inspired by both previous machine learning literature~\cite{ICML2011Sutskever} and the motor thalamocortical architecture~\cite{Logiaco2019}), each motif $\mu$ is produced in response to both a learned input vector $\mathbf{b}_{\mu}$ and a learned rank-one perturbation of the connectivity $\mathbf{u}_{\mu} \mathbf{v}^{\intercal}_{\mu}$. The latter is equivalent to a loop through an instantaneous `unit' receiving input from the recurrent network through the weights $\mathbf{v}_{\mu}$ and feeding back through the weights $\mathbf{u}_{\mu}$. The dynamics are then: \begin{equation} \tau \; \dot{\mathbf{x}} = - \mathbf{x} + \, \left( g^{\textrm{mu}} \, \mathbf{J} + \mathbf{u}_{\mu} \mathbf{v}^{\intercal}_{\mu} \right) \tanh \left( \mathbf{x} \right) + \mathbf{b}_{\mu}, \end{equation} where $g^{\textrm{mu}}$ and $\mathbf{J}$ are defined as for the additive network, and $\mathbf{u}_{\mu}$ and $\mathbf{v}_{\mu}$ are each learned and initialized iid from a centered Gaussian with std $g_0^{\textrm{ptb}}/\sqrt{N}$ (i.e. expected norm $g_0^{\textrm{ptb}}$). Finally, we also consider a `control' network that does not separate all trainable parameters into motif-specific sets. Our aim was to assess to what extent lifting this constraint can result in performance gains. This control network was inspired by work showing that -- if hyperparameters are properly optimized -- performance appears to be relatively independent of the architecture and only dependent on the number of parameters that are directly trained~\cite{Collins2017}. Therefore, for our control network, we trained all input, recurrent, and output weights with the number of neurons adjusted such that this network has as many tunable parameters as the additive and multiplicative architectures for a fixed number of motifs (10 motifs; see table~\ref{table_params}). The dynamics of this control network are: $ \tau \; \dot{\mathbf{x}} = - \mathbf{x} + \mathbf{J} \tanh \left( \mathbf{x} \right) + \mathbf{b}_{\mu}, $ where $\mathbf{J}$ is initialized with iid elements taken from a centered Gaussian with std $g_0^{\textrm{cn}}/\sqrt{N}$. In all networks, the output $y$ is produced through a linear combination of the rates $\mathbf{r}=\tanh\mathbf{x}$: $y=\mathbf w^\intercal\mathbf r$. The elements of $\mathbf w$ are sampled iid from a centered Gaussian distribution with std $1/\sqrt{N}$ and are either \emph{(i)} fixed (additive and multiplicative networks) or \emph{(ii)} optimized during training (control network). Given the $\tanh$ nonlinearity, such a readout vector can produce network outputs of maximal positive or negative magnitude equal to $\sqrt{N}$, which is sufficient for generating all of our motifs. \begin{table} \caption{Number of neurons and number of parameters in the different architectures} \label{table_params} \centering \begin{tabular}{llll} \toprule & Additive & Multiplicative & Control \\ \midrule \# of recurrent units $N$ & 300 & 100 & 50 \\ \midrule \# of learned parameters for 10 motifs & 3000 & 3000 & 3050 \\ \midrule \# of motif-specific parameters per motif & 300 & 300 (input: 100, loop: 200) & 50 \\ \bottomrule \end{tabular} \end{table} \section{Gradient-trained RNNs trade off flexibility and robustness}\label{sec:gradientRNNresults} \subsection{Optimizing learning using a training set with all possible transitions} We trained these RNNs using gradient descent (more specifically, ADAM~\cite{Kingma2015}) with a time discretization of $dt=0.1 \tau$ and $\tau=1$ such that each motif is roughly a thousand timesteps (appendix). Our objective function was the mean square error between desired and actual output. After testing various parameters of ADAM, we identified the following as yielding successful training in our setting: $\textrm{learning rate}=10^{-4}$, $\beta_1=\beta_2=0.5$, and $\epsilon=10^{-8}$. In order to verify the basic ability of our networks to perform motif sequencing, as well as to have a reliable performance criterion with which to choose appropriate hyperparameter values (i.e., $g^{\textrm{ad}}$, $g^{\textrm{mu}}$, $g_0^{\textrm{ptb}}$, and $g_0^{\textrm{cn}}$), we first trained our networks on all possible transitions between motifs: 90 trials consisting of a sequence of at least two motifs (optionally followed by the start of the first motif such that all trials were of equal duration). The sequences were initialized with network activities sampled from the standard normal distribution. Example trials are shown in Fig.~\ref{fig:TaskArchit}f. We trained the networks over 50,000 minibatches, after which the derivative of the loss function was near zero (Fig.~\ref{fig:TaskArchit}e). We repeated training many times to optimize $g^{\textrm{ad}}$, $g^{\textrm{mu}}$, $g_0^{\textrm{ptb}}$, and $g_0^{\textrm{cn}}$. Similarly to previous reports~\cite{Collins2017}, we find that after sweeping through these hyperparameters, the peak performance for each architecture appears to be approximately equivalent (Fig.~\ref{fig:TaskArchit}d). Interestingly, the multiplicative architecture -- inspired by the thalamocortical model~\cite{Logiaco2019} that we will evaluate in the next section -- appears to be less sensitive to choices of the gain parameters. For this multiplicative architecture, theoretical arguments suggest that the loop weights $\mathbf u_\mu$ and $\mathbf v_\mu$ might need to be large to have a major impact on the recurrent network dynamics~\cite{Schuessler2020,Tao2011,Logiaco2019}. Therefore, we tried initializing $g_0^{\textrm{ptb}}$ to large values, but this appeared to hurt rather than help performance (Fig.~\ref{fig:TaskArchit}d, bottom right). Of note, while the learning curves for our additive RNNs usually appeared relatively smooth, the learning curves for our multiplicative RNNs often included multiple quasi-plateaus followed by more rapid cost function decreases, suggesting the presence of slow points or local minima in the stochastic cost function (Fig.~\ref{fig:TaskArchit}e). \subsection{Training without transitions yields transition failures upon testing} The results in the prior section employed a training set with all pairs of transitions represented. This approach requires that, when adding a new motif to a network's behavioral library, the transitions to and from all previously acquired motifs have to be learned. This ultimately scales quadratically with the number of motifs and is thus prohibitive as a solution for extensible and flexible motor sequencing. As a possible solution, here we propose training each motif in isolation but with initial network activities selected to approximate the distribution of activities at the ends of all other motifs. This solution attempts to leverage the ability of RNNs to be trained to process samples from a known distribution and to then be able to generalize when presented with samples from a similar distribution (e.g.~\cite{VezhnevetsFeudal2017}). In our scenario, if we can approximate the distribution of end-of-motif activities well enough and if training succeeds, then all transitions will work with no transition-specific training. Unfortunately, this distribution is unknown and is training-dependent. However, since we are using networks with large $N$, Gaussian weights, and a $\tanh$ nonlinearity, a Gaussian should well-approximate the marginal statistics of this unknown distribution~\cite{Landau2018,Rivkind2017}. Indeed, we observe that a standard normal is a good choice (Fig.~\ref{fig:ANN_lim_robutsness}a,f,e). Hence, here our training set consists of single motifs starting with standard normal iid entries for $\mathbf{x}$. We set our hyperparameters to the optima observed in Fig.~\ref{fig:TaskArchit}d ($g^{\textrm{ad}}=1.4$, $g^{\textrm{mu}}=1.4$ and $g_0^{\textrm{ptb}}=1.5$) and otherwise trained as above. For both network types, training was successful (Fig.~\ref{fig:ANN_lim_robutsness}a and f). \begin{figure}[t!] \centering \includegraphics[width=5in]{./Fig3_Gene_Fail_v2.pdf} \caption{\label{fig:ANN_lim_robutsness} \textbf{Transition failures in gradient-trained RNNs}. \textbf{a-e}) Additive architecture; \textbf{f-i}) multiplicative architecture. \textbf{a,f}) Top: Example training trials with single motifs starting from iid standard Gaussian initial $\mathbf x$ values. Bottom: marginal distribution of $\mathbf x$ values, comparing the initial conditions during training with examples of end-of-motif distributions. \textbf{b,g}) Example test trials. \textbf{c,h}) Comparing motif production when starting with iid standard Gaussian initial $\mathbf x$ values (left) as opposed to when preceded by other motifs during sequences (right). \textbf{d,i}) Change of root mean square error, separately for the 10 motifs (color coded as in the other panels), between the training condition (starting from standard Gaussian iid initial $\mathbf x$ values) and the sequencing condition. For each motif, this corresponds to the root mean square error on the left \emph{vs} right column of panels \textbf{c} and \textbf{h}. The p-values are for a two-sided Wilcoxon signed-rank test. \textbf{e}) For the additive architecture, training single motifs starting from Gaussian iid $\mathbf x$ values with zero mean but standard deviation of 1.2. Conventions as in panels \textbf{a} and \textbf{d}.} \end{figure} We then asked our networks to generate sequences (examples are shown in Fig.~\ref{fig:ANN_lim_robutsness}b and g) and found that even though many transitions are successful, there are cases where the performance of a given motif is substantially degraded throughout its entire duration when preceded by a particular other motif. This is a transition failure. Interestingly, the shape of the marginal distribution of the elements of $\mathbf x$ at the end of a motif was not a good predictor of whether transitions from this motif would lead to worse performance. For instance, in the case of the additive network, the transition from motif 1 to motif 2 leads to poor performance on motif 2 (Fig.~\ref{fig:ANN_lim_robutsness}b, top), even though the distribution of $\mathbf x$ at the end of motif 1 appears extremely similar to a standard Gaussian (Fig.~\ref{fig:ANN_lim_robutsness}a, bottom). Conversely the distribution of $\mathbf x$ at the end of motif 5 is wider than the standard Gaussian but it does not lead to a bad transition to motif 2 (Fig.~\ref{fig:ANN_lim_robutsness}b, second row). In Fig.~\ref{fig:ANN_lim_robutsness}c and h, we further illustrate performance for two example motifs when transitioning from all other other motifs as opposed to when starting from multiple samples of a standard Gaussian as during training. The statistics for the root mean square errors over these random \emph{vs.} end-of-other-motif initial conditions are shown in Fig.~\ref{fig:ANN_lim_robutsness}d,i. For all the 10 motifs that were learned, the root mean square error increased when transitioning, a significant result according to a two-sided Wilcoxon signed-rank test (p=0.002). Figs.~\ref{fig:ANN_lim_robutsness}a,f shows that the std of the marginal distribution of the values of $\mathbf x$ slightly exceeds 1 for some motifs. To ensure that the transition failures we observed when initializing with the standard normal were not due to this, we retrained our additive network with initial $\mathbf x$ values sampled from a Gaussian with std of 1.2 (Fig.~\ref{fig:ANN_lim_robutsness}e). We observe that the end-of-motif $\mathbf x$ distributions still have standard deviations around 1 for all motifs (Fig.~\ref{fig:ANN_lim_robutsness}e top). Despite training with a wider distribution, performance errors after transitioning still occurred with motif production even more impaired compared to training with a standard normal (Fig.~\ref{fig:ANN_lim_robutsness}e bottom and Fig.~\ref{fig:thalamocort}f). The failure to robustly transition between motifs when training on single motifs that are randomly initialized is a particular instance of an out-of-distribution generalization limitation. Not only is the true marginal distribution of each element of $\mathbf x$ at the end of a given motif not exactly matched to the Gaussian distribution we use during training, but additionally these elements can be highly correlated with each other due to correlated inputs and the recurrent structure of the network. This problem is very hard to solve if we do not want to train all possible transition pairs since these correlations are instrumental for performance and we do not know them in advance before training. As we are unable to implement robust and flexible sequencing with classical techniques used to train RNNs, we now investigate a different potential solution that leverages insights from the motor thalamocortical circuit. These insights constrain not only the network architecture, but also the specific dynamical regimes of the network during different parts of a motif sequence. \section{Flexible, robust and extensible sequencing in a thalamocortical model}\label{sec:TCmodel} The thalamocortical model we investigate follows prior work described by Logiaco and colleagues~\cite{Logiaco2019}, but improves on this to ensure that the transitions between motifs are smooth. \begin{figure}[t!] \centering \includegraphics[width=5in]{./Fig4_thalamocort_resc_portrait_ArXiv_v6.pdf} \caption{\label{fig:thalamocort} \textbf{Robust transitions in thalamocortical model}. \textbf{a}) Adjusting a motif-specific loop through the thalamic unit $\textbf{t}_2$ (left), leading to the control of both eigenvalues (middle) and eigenvectors of the dynamics $\mathbf{x}(t)$ such that the readout robustly follows motif 2 (right). \textbf{b}) Thalamic module used for all motif transitions (left). After optimization, the thalamic module sets the eigenvalues of the dynamics to be more negative (middle) which contributes to a fast decrease of the distance to steady-state $|\delta\mathbf x|$ (right). \textbf{c}) Example sequences. \textbf{d}) As in Fig.~\ref{fig:ANN_lim_robutsness}c,h, for the thalamocortical model. \textbf{e}) Change of root mean square error for each motif between random initialization and sequencing conditions (not significant as per a Wilcoxon signed rank test). \textbf{f}) As in \textbf e, but for the additive (left) and multiplicative (right) networks when augmented with a thalamic transition module for the first $5\tau$ of each motif. \textbf{g}) For the different network architectures, the average root mean square error in both the random initialization and sequencing conditions.} \end{figure} \subsection{Model construction and parameter adjustment} The model parameters can be adjusted through analytical and semi-analytical techniques which do not require stochastic gradient descent on the simulated dynamics, a major advantage compared to standard training of ANNs. Details are described elsewhere~\cite{Logiaco2019} and we only provide a brief overview here. The model consists of a recurrent cortical module with connectivity $\mathbf{J^\mathrm{cc}}$ and activities $\mathbf{x}$ whose projection through the readout weights $\mathbf{w}$ constitutes the output. This cortical module interacts with a non-recurrent thalamic module through instantaneous loops consisting of corticothalamic and thalamocortical weights. We model the basal ganglia, which provides inhibitory input into thalamus, as selectively disinhibiting specific thalamic loops in order to cause execution of the associated motif. Thus, the entire model is a switching linear system. \textbf{Motif execution:} During motif $\mu$, a single thalamic loop is disinhibited leading to the dynamics: \begin{align} \tau \dot{\mathbf x} & = \tilde{\mathbf J}_\mu \mathbf x, \quad\mbox{where}\quad \tilde{\mathbf J}_\mu\equiv g \left(\mathbf J^\mathrm{cc} - \mathbf{I} \right) + \mathbf{u}_\mu \mathbf{v}_\mu^\intercal, \label{eq:InitGeneRateDyn} \end{align} with motif-specific loop vectors $\mathbf u_\mu$ and $\mathbf v_\mu^\intercal$. We now consider how these dynamics can approximate a desired output $y_\mu$, knowing that in general a good approximation for $y_\mu$ can be reached through a linear combination of a small number $K$ of complex exponentials: $y_\mu(t)\approx\hat{y}_\mu(t)=\sum^K_{k=1} [\hat{\boldsymbol\alpha}_\mu]_{k} e^{[\hat{\boldsymbol\lambda}_\mu]_kt}$ (Fig.~\ref{fig:TaskArchit}b,~\cite{Logiaco2019}). The cortical readout can exactly match $\hat{y}_\mu$ if the eigenvalues of $\tilde{\mathbf J}_\mu$ contain the entries of the vector $\hat{\boldsymbol\lambda}_\mu$ and if the initial network activities $\mathbf x^{\mathrm{init}}_\mu$ are set correctly. We accomplish the former (Fig.~\ref{fig:thalamocort}a) by setting $\mathbf v_\mu=\mathbf L^\intercal\operatorname{diag}(\mathbf{Lu_\mu})^{-1}\mathbf Q^+ \mathbf{1},$ where $\mathbf{L}$ is the left eigenvector matrix of $\mathbf{J^\mathrm{cc}}$, and $Q_{k,j}=1/([\hat{\boldsymbol\lambda}_\mu]_k-\lambda_{j})$ where $\lambda_{j}$ is an eigenvalue of $g\left( \mathbf{J^\mathrm{cc}} - \mathbf{I} \right)$. Next, we set the initial activities at the beginning of motif $\mu$ to $\mathbf{x}_\mu^\textrm{init}= \tilde{\mathbf R}_{\mu} \, \operatorname{diag}(\tilde{\mathbf R}_{\mu}^{\intercal} \mathbf{w})^{-1} \boldsymbol{\alpha}_{\mu}$ where $\tilde{\mathbf R}_{\mu}$ contains right eigenvectors of $\tilde{\mathbf J}_\mu$ (with the first $K$ columns corresponding to the eigenvalues in $\hat{\boldsymbol\lambda}_\mu)$, and $[\boldsymbol{\alpha}_{\mu}]_{k\le K} = [\hat{\boldsymbol{\alpha}}_{\mu}]_k$ and $[\boldsymbol{\alpha}_{\mu}]_{k>K} =0$. The preceding two steps do not specify $\mathbf{u}_\mu$, and with random $\mathbf{u}_\mu$ the readout will be highly sensitive to noise in $\mathbf x^{\mathrm{init}}_\mu$ (pink trace in Fig.~\ref{fig:thalamocort}a right; see \cite{Logiaco2019}). However, if $\mathbf{u}_\mu$ is set to minimize the analytically-computed expected readout deviation due to noise in $\mathbf x^{\mathrm{init}}_\mu$, then robust readout is possible (cyan trace in Fig.~\ref{fig:thalamocort}a right; minimization of the cost $C\left( \mathbf{u} \right)$ in~\cite{Logiaco2019}). \smallskip \textbf{Motif transitions:} To successfully transition to motif $\mu$, it is sufficient to implement a mechanism by which $\mathbf x$ approaches $\mathbf x^{\mathrm{init}}_\mu$, which will be the case if the dynamics during a so-called "preparatory period" has $\mathbf x^{\mathrm{init}}_\mu$ as its steady-state. Additionally, it is desirable that the transition dynamics are fast and that they do not cause large transients values on the readout while relaxing to steady-state~\cite{Kaufman2014}. To achieve this, we employ a specific thalamic subpopulation of size $P$ which is disinhibited during all motif transitions, as well as a constant input $\mathbf b_\mu$ specific to the upcoming motif $\mu$ (which here, unlike for the ANNs above, is only present during the transition periods), leading to the dynamics: \begin{align} \tau \dot{\mathbf x}= \mathbf J_\textrm{prep}\mathbf x+\mathbf b_\mu, \quad\mbox{where}\quad \mathbf J_\textrm{prep} \equiv g \left( \mathbf{J^\mathrm{cc}} - \mathbf{I} \right) + \mathbf U_{\textrm{prep}} \mathbf V_{\textrm{prep}}^\intercal, \end{align} with $N\times P$ preparatory loop weights $\mathbf U_{\textrm{prep}}$ and $\mathbf V_{\textrm{prep}}$. With these dynamics, the activity at steady-state will match $\mathbf{x}^{\textrm{init}}_\mu$ if $\mathbf b_\mu= -\mathbf J_\textrm{prep} \mathbf{x}^{\textrm{init}}_\mu$ (Fig.~\ref{fig:thalamocort}b). Note that the difference $\delta\mathbf x$ between the cortical activities and their steady state decays at a rate that is independent of $\mathbf{b}_\mu$ and therefore of the upcoming motif: $\tau \dot{\delta\mathbf x} = \mathbf J_\textrm{prep}\delta\mathbf x$ for all $\mu$. This allows us to optimize the same weights $\mathbf U_{\textrm{prep}}$ and $\mathbf V_\textrm{prep}$ to favor rapid and smooth transitions between all pairs of motifs. Following ref.~\cite{Logiaco2019}, we achieve fast transitions by minimizing the time-integral of the expected square norm of $\delta\mathbf x$, with rates $\delta\mathbf x_0$ at the beginning of the transition period sampled iid. We also augment our cost function with the time-integral of the expected squared derivative of the readout to ensure smooth transitions. Our total cost function is therefore: \begin{align} C(\mathbf U_\textrm{prep},\mathbf V_\textrm{prep}) &= E_{ \delta\mathbf x_0 } \left[ \int_0^\infty\!\! dt \left\| \delta\mathbf x \right\|^{2} \right] + \beta \, N \, E_{ \delta\mathbf x_0 } \left[ \int_0^\infty\!\! dt \left(\frac{d}{dt} \mathbf{w}^\intercal \delta\mathbf{x} \right)^{2} \right] \\ &\propto \Tr{\left( \mathbf{R}_\textrm{prep} \left( \left(\mathbf{L}_\textrm{prep} \, \mathbf{L}^{\intercal}_\textrm{prep} \right) \odot \boldsymbol{\Lambda} \right) \mathbf{R}^{\intercal}_\textrm{prep} \right)} + \beta \, N \; \mathbf{w}^\intercal \mathbf{R}_\textrm{prep} \left( \left(\mathbf{L}_\textrm{prep} \, \mathbf{L}^{\intercal}_\textrm{prep} \right) \odot \boldsymbol{\Gamma} \right) \mathbf{R}^{\intercal}_\textrm{prep} \nonumber \end{align} where $\mathbf{R}_\textrm{prep}$ and $\mathbf{L}_\textrm{prep}$ are the right and left eigenvectors of $\mathbf J_\textrm{prep}$, and its eigenvalues $\boldsymbol\lambda^{\textrm{prep}}$ are used to compute $\Lambda_{ij}=-1 / (\lambda^{\textrm{prep}}_i + \lambda^{\textrm{prep}}_j)$ and $\Gamma_{ij}=\lambda^{\textrm{prep}}_i\lambda^{\textrm{prep}}_j\Lambda_{ij}$. Finally, $N$ is the number of cortical units and $\beta$ is a hyperparameter which trades off the relative importance of transition speed and readout smoothness. Notice that this cost is not impacted by the shape of the initial rates' distribution. \subsection{Simulation of the model confirms flexible, robust and extensible sequencing} We simulated the model on the same task as the gradient-trained RNNs. The motif-specific parameters scale as in the multiplicative architecture (table~\ref{table_params}), but the thalamocortical model does have a few more hyperparameters. To make sure that these did not induce an inability to compare between approaches, we reduced the cortical size to $N=99$. After exploring a few values, we set $g=0.5\tau$, $K=10$, $P=50$, $\beta=1/20$, and the motif transition duration to $5 \tau$. The readout weights $\mathbf{w}$ and recurrent weights $\mathbf{J^\mathrm{cc}}$ were sampled from a centered Gaussian distribution with std $1/\sqrt{N}$. The approximations $\hat{y}_\mu$ were fit to the target motifs $y_\mu$ under the constraints that: $\hat y_\mu(0)=0$; the elements of $\hat{\boldsymbol\lambda}_\mu$ had negative real part and were at least epsilon apart from each other; and the magnitudes of the elements of $\hat{\boldsymbol\alpha}_\mu$ were not exceedingly large. The resulting $\hat{\boldsymbol\lambda}_\mu$ and $\hat{\boldsymbol\alpha}_\mu$ were then used to optimize $\mathbf u_\mu$, $\mathbf v_\mu$, and $\mathbf b_\mu$ as described above. We generated sequences from the thalamocortical model after initializing with iid standard Gaussian samples for the elements of $\mathbf x$ (this choice of a unit standard deviation indeed leads to readout values within the same range as the target motifs, Fig.~\ref{fig:thalamocort}c). Importantly, motifs were produced with low error either when starting from random initial conditions or when transitioning from another motif as part of a sequence (Fig.~\ref{fig:thalamocort}d,e). Of note, setting $K$ to 10 was a choice made to match the performance of the thalamocortical model to that of the gradient-trained ANNs in Fig.~\ref{fig:ANN_lim_robutsness} (see comparison in Fig.~\ref{fig:thalamocort}g). Larger $K$ would improve performance of this model. \subsection{A thalamic-like transition module rescues transitioning in ANNs trained with SGD}\label{sec:prepwithSGD} Due to the transition module, our thalamocortical model is asymptotically guaranteed to converge to the correct initial condition for the upcoming motif and tends to do so quickly if the eigenvalues are all large and negative (as in Fig.~\ref{fig:thalamocort}b). Hence, inspired by the success of this approach in guarding against transition failures, we revisited our additive and multiplicative networks and augmented their recurrent connectivity with the perturbation $\mathbf U_\textrm{prep}\mathbf V_{\textrm{prep}}^\intercal$ (with $P=50$), mimicking the preparatory dynamics of the thamalocortical model. We trained these models in two steps. First, with fixed $\mathbf J$ as described above and starting with standard normal iid entries for $\mathbf x$, but with no network input, we trained $\mathbf U_\textrm{prep}$ and $\mathbf V_{\textrm{prep}}$ with the cost function $\sum_t|\mathbf r(t)|^2$. This results in preparatory loop weights that drive the networks quickly to zero in $<10\tau$, similar to the linear case (see appendix and compare with Fig.~\ref{fig:thalamocort}b). Then we trained both networks on our 10 motifs (with no transitions) and modified training only as follows: for the first $5\tau$, we add our learned $\mathbf U_\textrm{prep}\mathbf V_{\textrm{prep}}^\intercal$ to the recurrent connectivity, and, emulating the thalamocortical model for the multiplicative network only, we use $\mathbf b_\mu$ and $\mathbf u_\mu\mathbf v_\mu^\intercal$ only during the preparatory and post-preparatory periods respectively. Both networks now learn better than before and, when instructed to generate sequences, show no transition failures (Figs.~\ref{fig:thalamocort}f,g). \section{Discussion} We trained RNNs with stochastic gradient descent on a flexible motif sequencing task and used random noise to leverage the generalization and interpolation capabilities of artificial networks~\cite{VezhnevetsFeudal2017,MordatchNIPS2015,MerelNeuralProbaMotorPrims2019}. These RNNs succeeded at learning randomly initialized individual motifs, but struggled to string motifs together in a sequence. This fundamentally boils down to a limitation of ANNs performance during out-of-distribution generalization, and is reminiscent of action transition issues in state-of-the-art networks during flexible movement control tasks~\cite{MerelHierarchVisuoMotor2019,Liu2012}. This issue could be circumvented by manually resetting the state of the network before each motif, however this would create undesirable discontinuities in the readout; alternatively, modules that are specific for each transition may be added, but this inefficient solution prevents 0-shot sequence improvisation, limiting flexibility. Using these tricks would become even more problematic in general settings where additional network modules process contextual inputs to flexibly select motifs with more levels of control hierarchy. In contrast, these difficulties can be overcome by a network whose structure and dynamics are inspired by motor neuroscience: despite training motifs individually, this model can implement robust transitions while maintaining similar within-motif expressivity as the gradient-trained RNNs. Our work outlines the need to constrain or design both the architecture and the dynamics of recurrent networks in order to achieve maximal performance, and calls for broadening the applications of the thalamocortical insights. For instance, the thalamocortical model could be extended with more modules and its dynamics could be smoothed by introducing and removing the connectivity perturbations in a gradual fashion, hence introducing more nonlinearities and functionalities while conserving some analytical tractability and performance guarantees. Alternatively, gradient-trained ANNs can be combined with a biologically-inspired motif transition module for robust sequencing, as we illustrate above and in more detail in the appendix. Therefore, our results open the door for improved recurrent network performance in real-world applications. \section*{Broader Impact} Our work lays theoretical foundations whose ultimate applications would involve flexible and extensible learning of complex continuous outputs. This is notably required to design robots that can be deployed in complex environments and learn from experience (for instance by observing another agent that performs a task~\cite{Argall2009}). Ultimately, these robotics advances could be used in the manufacturing industry or domestically to free people from the necessity to perform actions themselves in order to obtain desired outcomes, such as producing goods or storing groceries. If this succeeds, many jobs performed by humans today could be performed by robots tomorrow. In order for such a change to benefit everyone rather than the robots' owners only, societal changes would need to occur such that the product of the robots' work would be fairly returned back to the whole society. Universal basic income may be a way to implement this redistribution. Moreover, psychosocial adjustments may be needed to adapt to the loss of some symbolic guidelines followed by many, such as the ideology promoting good character and hard work through promising monetary rewards in return and, more generally, the association of social status with income. \section*{Acknowledgements} We thank Larry Abbott and Christopher J. Cueva for useful conversations. This research was supported by NIH BRAIN award (U19 NS104649), NSF/NIH Collaborate Research in Computational Neuroscience award (R01 NS105349), NIH Director's Early Independence award (DP5 OD019897), and NSF NeuroNex award (DBI-1707398), as well as the Leon Levy foundation, the Gatsby Charitable Foundation, and the Swartz Foundation.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction and statement of results}\label{S:intro} Let $\mathbb{D}$ denote the open unit disc in the complex plane $\C$ centered at $0$. Given $n\in\mathbb{Z}_{+}$ the $n^2$-dimensional {\em spectral unit ball} is the set $\Omega_n:=\{A\in M_{n}(\C)\,:\,\sigma(A)\subset\mathbb{D}\}$, where $M_n(\C)$ denotes the set of all $n\times n$ complex matrices and $\sigma$ denotes the spectrum of a matrix. The interpolation problem referred to in the title of this article is the following problem: \begin{itemize} \item[$(*)$] {Given $M$ distinct points $\zeta_1,\dots,\zeta_M\in \mathbb{D}$ and matrices $W_1,\dots, W_M\in\Omega_n$, $n\geq 2$, find conditions on the data $\{(\zeta_j,\,W_j):1\leq j\leq M\}$ such that there exists a holomorphic map $F:\mathbb{D}\longrightarrow\Omega_n$ satisfying $F(\zeta_j)=W_j, \ j=1,\dots,M$.} \end{itemize} When such a function $F$ exists, we shall say that $F$ is an {\em interpolant} of the data. \smallskip One of the important steps towards understanding the problem $(*)$ was an operator-theoretic approach due to Bercovici, Foias and Tannenbaum. Using a spectral version of the commutant-lifting theorem, the authors in \cite{HariCiprianAllen:asclt91}\,---\,under the restriction that $\sup_{\zeta\in\mathbb{D}}\rho(F(\zeta))<1$, where $\rho$ denotes the spectral radius\,---\,provided a characterization for the existence of an interpolant. This characterization involves a search for $M$ appropriate matrices in $GL_n(\C)$. \smallskip Another influential idea was introduced by Agler and Young in \cite{aglerYoung:cldC2si99}. They observed that in the case $W_1,\dots,W_M$ are all non-derogatory, then $(*)$ is equivalent to an interpolation problem from $\mathbb{D}$ to the $n$-dimensional {\em symmetrized polydisc} $G_n$, $n\geq 2$. This is a bounded domain in $\C^n$ (see \cite{costara:osNPp05} for the definition of $G_n$). Its relevance to $(*)$ is that, for ``generic'' matricial data $(W_1,\dots,W_M)$, the problem $(*)$ descends to a region of much lower dimension with many pleasant properties. This idea has further been developed in \cite{aglerYoung:2psNPp00}, in the papers \cite{costara:22sNPp05} and \cite{costara:osNPp05} by Costara, and in Ogle's thesis \cite{ogle:thesis99}. A matrix $A\in M_n(\C)$ is said to be \emph{non-derogatory} if it admits a cyclic vector. It is a fact that $A$ being non-derogatory is equivalent to $A$ being similar to the companion matrix of its characteristic polynomial (see \cite[p.~195]{hornJohn:matanal85}, for instance). Recall: given a monic polynomial of degree $k$ of the form $p(t)=t^k+\sum_{j=1}^ka_j\,t^{k-j}$, where $a_j\in\C$, the \emph{companion matrix} of $p$ is the matrix $\mathsf{C}_p\in M_{k}(\C)$ given by \[ \mathsf{C}_p:= \begin{bmatrix} \ 0 & {} & {} & -a_k \ \\ \ 1 & 0 & {} & -a_{k-1} \ \\ \ {} & \ddots & \ddots & \vdots \ \\ \ \text{\LARGE{0}} & & 1 & -a_{1} \ \end{bmatrix}_{k\times k}. \] By way of the $G_n$-interpolation problem, Costara \cite{costara:osNPp05} and Ogle \cite{ogle:thesis99} arrived independently at a necessary condition for the existence of an interpolant for the problem $(*)$ when the data $(W_1,\dots,W_M)$ are non-derogatory. \smallskip Bharali in \cite{gb:itplSpUb07} observed that when $n\geq 3$, the necessary condition given in \cite{costara:osNPp05, ogle:thesis99} is not sufficient. He also established\,---\,for the case $M=2$\,---\,a new necessary condition for the existence of an interpolant. Result~\ref{Res:gbintS} below is this necessary condition. It is reminiscent of the inequality in the classical Schwarz lemma; here $\mathcal{M}_{\mathbb{D}}(z_1,z_2)$ is the {\em M{\"o}bius distance} between $z_1$ and $z_2$, which is defined as: \[ \mathcal{M}_{\mathbb{D}}(z_1,z_2) \ := \ \hyper{z_1}{z_2} \quad\forall z_1,z_2\in\mathbb{D}. \] \begin{result}[Bharali, \cite{gb:itplSpUb07}]\label{Res:gbintS} Let $F\in\mathcal{O}(\mathbb{D},\,\Omega_n)$, $n\geq 2$, and let $\zeta_1,\zeta_2\in\mathbb{D}$. Write $W_j=F(\zeta_j)$, and if $\lambda\in\sigma(W_j)$, then let $m(\lambda)$ denote the multiplicity of $\lambda$ as a zero of the minimal polynomial of $W_j$. Then: \begin{equation}\label{E:SchwarzIneq} \max\left\{\max_{\mu\in\sigma(W_2)}\prod_{\lambda\in\sigma(W_1)}\mathcal{M}_{\mathbb{D}}(\mu,\lambda)^{m(\lambda)}, \ \max_{\lambda\in\sigma(W_1)}\prod_{\mu\in\sigma(W_2)}\mathcal{M}_{\mathbb{D}}(\lambda,\mu)^{m(\mu)}\right\} \ \leq \ \hyper{\zeta_1}{\zeta_2}. \end{equation} \end{result} The above theorem gives a necessary condition for the two-point interpolation problem without any restriction on the matrices, in contrast to the necessary condition in \cite{costara:osNPp05, ogle:thesis99}. In the same article, Bharali also shows that for each $n\geq 3$, there exists a data-set for which \eqref{E:SchwarzIneq} implies that it cannot admit an interpolant whereas the condition in \cite{costara:osNPp05, ogle:thesis99} is inconclusive. \smallskip The ideas behind Result~\ref{Res:gbintS} strongly influence a part of this work. One of the key tools introduced in \cite{gb:itplSpUb07} that lead to Result~\ref{Res:gbintS} are the following maps: \begin{definition} Given $A\in M_n(\C)$, let $\minpo{A}$ denote its minimal polynomial and write: \[ \minpo{A}(t)=\sum_{\lambda\in\sigma(A)}(t-\lambda)^{m(\lambda)}. \] The finite Blaschke product induced by $\minpo{A}$ if $A\in\Omega_n$: \begin{equation} B_{A}(t):=\prod_{\lambda\in\sigma(A)\subset\mathbb{D}}{\intf{(}{)}{t-\lambda}{1-\overline{\lambda}t}^{m(\lambda)}}. \label{E:mbpcmp} \end{equation} will be called the {\em minimal Blaschke product corresponding to $A$}. \end{definition} \noindent $B_{A}$ induces, via the holomorphic functional calculus (which we will discuss in Section~\ref{S:holo_fc}), a holomorphic self-map of $\Omega_n$ that maps $A$ to $0\in M_n(\C)$. This sets up a form of the Schur algorithm on $\Omega_n$, and yields an easy-to-check necessary condition for the existence of an interpolant for the data in $(*)$, for the case $M=3$. The existence of these maps $B_A$ is extremely useful, since the automorphism group of $\Omega_n$ does not act transitively on $\Omega_n$ (see \cite{ransfordWhite:hsmsub91}), $n\geq 2$ (whence the {\em classical} Schur algorithm is not even meaningful). \smallskip In \cite{baribeauKamara:rSlsnpI14}, Baribeau and Kamara take a new look at the ideas in \cite{gb:itplSpUb07}. This they combine with an inequality\,---\,which may be viewed as a Schwarz lemma for {\em algebroid multifunctions} of the unit disc (see \cite{nokraneRansford:Slam01} for a definition)\,---\,due to Nokrane and Ransford \cite[Theorem~1.1]{nokraneRansford:Slam01}. Before we present their result we need the following: given $F\in\mathcal{O}(\mathbb{D},\,\Omega_n)$ and $\zeta_1\in\mathbb{D}$, if we denote by $B_1$ the minimal Blaschke product corresponding to $F(\zeta_1)$ then Theorem~1.3 in \cite{baribeauKamara:rSlsnpI14} states, essentially, that for every $\zeta\in\mathbb{D}$ we have \[ \sigma\big(B_1(F(\zeta))/\psi_1(\zeta)\big)= S\cup\sigma(F_1(\zeta)), \] where $S\subset\partial{\mathbb{D}}$ is a finite (possibly empty) set independent of $\zeta$, $F_1\in\mathcal{O}(\mathbb{D},\,\Omega_{\nu})$, and \begin{equation}\label{E:nu_BK} \nu = \max\nolimits_{\zeta\in \mathbb{D}}\left|\sigma\big(B_1(F(\zeta))/\psi_1(\zeta)\big)\cap\mathbb{D}\right|. \end{equation} Here and in what follows, for $\zeta_j\in\mathbb{D}$, $j=1,2,3$, $\psi_j$ will denote the automorphism $\psi_j(\zeta):=(\zeta-\zeta_j)(1-\overline{\zeta}_j\zeta)^{-1}$, $\zeta\in\mathbb{D}$, of $\mathbb{D}$. We are now in a position to state: \begin{result}[paraphrasing {\cite[Corollary~3.1]{baribeauKamara:rSlsnpI14}}]\label{Res:inequality_baribeauKamara} Let $\zeta_1,\zeta_2,\zeta_3$ be distinct points in $\mathbb{D}$. Let $F\in\mathcal{O}(\mathbb{D},\,\Omega_n)$, $n\geq 2$. Denote by $B_1$ the minimal Blaschke product corresponding to $F(\zeta_1)$, and suppose that $\sigma\big({B_1(F(\zeta))}/{\psi_1(\zeta)}\big)\not\subset\partial{\mathbb{D}}$ for every $\zeta\in\mathbb{D}$. Let $\nu$ be the number given by \eqref{E:nu_BK}. Then we have: \begin{equation}\label{E:inequality_baribeauKamara} \mathcal{H}^{\mathcal{M}_{\mathbb{D}}}\left(\sigma\intf{(}{)}{B_1(W_{2})}{\psi_1(\zeta_{2})}\cap\mathbb{D},\;\; \sigma\intf{(}{)}{B_1(W_{3})}{\psi_1(\zeta_{3})}\cap\mathbb{D}\right) \leq{\mathcal{M}_{\mathbb{D}}(\zeta_{2},\,\zeta_{3})}^{1/{\nu}} \end{equation} where $W_j=F(\zeta_j)$, $j=2,\,3$. \end{result} \noindent{Here, $\mathcal{H}^{\mathcal{M}_{\mathbb{D}}}$ denotes the Hausdorff distance induced by the M{\"o}bius distance (see \cite[p.~279]{munk:topo_74} for the definition of Hausdorff distance) on the class of bounded subsets of $\mathbb{D}$.} \smallskip Now we are ready to present the first result of this article (in what follows, $B_j$ will denote the minimal Blaschke product\,---\,as well as its extension to $\Omega_n$\,---\,associated to the matrix $W_j$, $j=1,2,3$): \begin{theorem}\label{T:3pt_nec} Let $\zeta_1,\zeta_2,\zeta_3\in\mathbb{D}$ be distinct points and let $W_1,W_2,W_3\in\Omega_n$, $n\geq 2$. Let $m(j,\,\lambda)$ denote the multiplicity of $\lambda$ as a zero of the minimal polynomial of $W_j$, $j\in\{1,2,3\}$. Given $j,k\in\{1,2,3\}$ such that $j\not=k$, and $\nu\in \mathbb{D}$, we write: \[ q(\nu,j,k):=\max\left\{\intf{[}{]}{m(j,\,\lambda)-1} {\mathsf{ord}_{\lambda}{B'_k}+1}+1:\,\lambda\in \sigma(W_j)\cap B_k^{-1}\{\nu\} \right\}. \] Finally, for each $k\in\{1,2,3\}$ let \[ G(k):=\max\,(\{1,2,3\}\setminus\{k\}),\,\,\text{and}\,\,\, L(k):=\min\,(\{1,2,3\}\setminus\{k\}). \] If there exists a map $F\in\mathcal{O}(\mathbb{D},\,\Omega_n)$ such that $F(\zeta_j)\,=\,W_j$, $j\in\{1,2,3\}$, then for each $k\in\{1,2,3\}$, we have: \begin{itemize} \item either $\sigma\left(B_k(W_{G(k)})\right)\subset D\left(0,\,|\,\psi_{k}(\zeta_{G(k)})\,|\right)$, $\sigma\left(B_k(W_{L(k)})\right)\subset D\left(0,\,|\,\psi_{k}(\zeta_{L(k)})\,|\right)$ and \begin{align*} {}&\max\left\{\sub{\mu\in\sigma\left(B_k(W_{L(k)})\right)}{\max} \prod_{\nu\in\sigma\left(B_k(W_{G(k)})\right)} {\mathcal{M}_{\mathbb{D}}\left(\intf{}{}{\mu}{\psi_k(\zeta_{L(k)})},\,\intf{}{}{\nu}{\psi_k(\zeta_{G(k)})} \right)}^{q(\nu,\,G(k),\,k)},\right.\\ &\left.\sub{\mu\in\sigma\left(B_k(W_{G(k)})\right)}{\max}\prod_{\nu\in\sigma\left(B_k(W_{L(k)})\right)} {\mathcal{M}_{\mathbb{D}}\left(\intf{}{}{\mu}{\psi_k(\zeta_{G(k)})},\,\intf{}{}{\nu}{\psi_k(\zeta_{L(k)})} \right)}^{q(\nu,\,L(k),\,k)}\right\}\leq\mathcal{M}_{\mathbb{D}}\left(\zeta_{L(k)},\,\zeta_{G(k)}\right), \end{align*} \item or there exists a $\theta_0\in\mathbb{R}$ such that \[ {B_k}^{-1}\{e^{i\theta_0}\psi_{k}(\zeta_{G(k)})\}\subseteq\sigma(W_{G(k)}) \ \text{and} \ {B_k}^{-1}\{e^{i\theta_0}\psi_{k}(\zeta_{L(k)})\}\subseteq\sigma(W_{L(k)}). \] \end{itemize} \end{theorem} \noindent Here, $[\boldsymbol{\cdot}]$ denotes the greatest-integer function. Given $a\in \C$ and a function $g$ that is holomorphic in a neighbourhood of $a$, $\mathsf{ord}_a{g}$ will denote the order of vanishing of $g$ at $a$ (with the understanding that $\mathsf{ord}_a{g} = 0$ if $g$ does not vanish at $a$). \smallskip \begin{remark}\label{Rmk:comparison} Theorem~\ref{T:3pt_nec}, unlike Result \ref{Res:inequality_baribeauKamara}, incorporates information about the Jordan structure of the matricial data. Thus, Theorem~\ref{T:3pt_nec} gives a more restrictive inequality than \eqref{E:inequality_baribeauKamara} if $\nu=n$. Moreover, in Section~\ref{S:3pt_nec_proof} we will present a class of $3$-point matricial data in $\mathbb{D}\times\Omega_n$, $n\geq 4$, for which the condition \eqref{E:inequality_baribeauKamara} and that in \cite{costara:osNPp05, ogle:thesis99} provide no information while Theorem~\ref{T:3pt_nec} implies that these data do not admit a $\mathcal{O}(\mathbb{D},\,\Omega_n)$-interpolant. \end{remark} The above discussion about the role of the Nokrane--Ransford result \cite[Theorem~1.1]{nokraneRansford:Slam01} establishes how holomorphic correspondences are naturally related to the problem $(*)$. This is why we also consider holomorphic correspondences in this paper. Indeed, the method that we employ to provide the proof of Theorem~\ref{T:3pt_nec} motivated our investigation into finding a Schwarz lemma for holomorphic correspondences, which are generalizations of algebroid multifunctions. Before we present our result, we need a few definitions: \begin{definition}\label{D:hol_corres} Given domains $D_i\subseteq\C^n$, $i=1,2$, {\em a holomorphic correspondence} from $D_1$ to $D_2$ is an analytic subvariety $\Gamma$ of $D_1\times D_2$ of dimension $n$ such that $\left.\pi_1\right|_\Gamma$ is surjective (where $\pi_1$ denotes the projection onto $D_1$). \end{definition} \noindent{A {\em proper holomorphic correspondence} $\Gamma$ from $D_1$ to $D_2$ is a holomorphic correspondence (as defined above) such that $\overline{\Gamma}\cap (D_1\times\partial{D_2})=\emptyset$. We refer the reader to Section~\ref{S:nott_comx_geom} for a discussion as to why holomorphic correspondences with the latter property are called proper holomorphic correspondences. A proper holomorphic correspondence $\Gamma$ from $D_1$ to $D_2$ also induces the following set-valued map: \begin{equation}\label{E:img_corr} F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(z):=\{w\in D_2\,:\,(z,w)\in\Gamma\} \;\; \forall z\in D_1. \end{equation} The {\em Carath\'{e}odory pseudo-distance}, denoted by $C_{\Omega}$, on a domain $\Omega$ in $\C$ is defined by: \begin{equation}\label{E:defn_carathdist} C_{\Omega}(p,\,q):=\sup\{\mathcal{M}_{\mathbb{D}}(f(p),\,f(q))\,:\,f\in\mathcal{O}(\Omega,\,\mathbb{D})\}. \end{equation} The reader will notice that we have defined $C_{\Omega}$ in terms of the M{\"o}bius distance rather than the hyperbolic distance on $\mathbb{D}$. This is done purposely because {\em most} conclusions in metric geometry that rely on $C_{\Omega}$ are essentially unchanged if $\mathcal{M}_{\mathbb{D}}$ is replaced by the hyperbolic distance on $\mathbb{D}$ in \eqref{E:defn_carathdist}, and because the M{\"o}bius distance arises naturally in the proof of our next theorem. We now present this theorem: \begin{theorem}\label{T:Schwarzlemma_corres_corol} Let $\Omega$ be a bounded domain in $\C$ and let $\Gamma$ be a proper holomorphic correspondence from $\mathbb{D}$ to $\Omega$. Then for every $\zeta_1,\zeta_2\in\mathbb{D}$ we have: \[ \mathcal{H}^{C_{\Omega}}\big(F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1),\,F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_2)\big)\leq {\mathcal{M}_{\mathbb{D}}\left(\zeta_1,\,\zeta_2\right)}^{{1}/{n}}, \] where $\mathcal{H}^{C_{\Omega}}$ denotes the Hausdorff distance induced by $C_{\Omega}$, and $n$ is the multiplicity of $\Gamma$. \end{theorem} \noindent{Here, the multiplicity $n$ is as given by Lemma~\ref{L:corres_equa_zeroset} (also see Remark~\ref{Rem:mult_corr}) below.} The inequality above reduces to the distance-decreasing property for the Carath\'{e}odory pseudo-distance if $\Gamma$ is merely the graph of a holomorphic map $F:\mathbb{D}\longrightarrow\Omega$. From this perspective, we can view Theorem~\ref{T:Schwarzlemma_corres_corol} as a Schwarz lemma for proper holomorphic correspondences. It turns out that algebroid multifunctions are precisely the proper holomorphic correspondences from $\mathbb{D}$ to itself (see Lemma~\ref{L:corres_equa_zeroset}). Hence, Theorem~\ref{T:Schwarzlemma_corres_corol} generalizes \cite[Theorem~1.1]{nokraneRansford:Slam01} by Nokrane--Ransford. \smallskip The above theorem is the consequence of a more precise inequality, which we present in Section~\ref{S:SchLem_holcorres} as Theorem~\ref{T:SchwLem_holcorres}. The proof of the latter theorem is closely related to the proof of Theorem~\ref{T:3pt_nec}. The proof of Theorem~\ref{T:3pt_nec} is presented in Section~\ref{S:3pt_nec_proof}, while the proof of Theorem~\ref{T:Schwarzlemma_corres_corol} is presented in Section~\ref{S:SchLem_holcorres}. \medskip \section{Some remarks on the holomorphic functional calculus}\label{S:holo_fc} A very essential part of our proofs below is the ability, given a domain $\Omega\subset\C$ and a matrix $A\in M_n(\C)$, to define $f(A)$ in a meaningful way for each $f\in\mathcal{O}(\Omega)$, provided $\sigma(A)\subset\Omega$. Most readers will be aware that this is what is known as the holomorphic functional calculus. We briefly recapitulate what the holomorphic functional calculus is so that we can make an observation about the boundary regularity of $\Omega$\,---\,where $\Omega$ is as in the statement of Theorems~{\ref{T:Schwarzlemma_corres_corol} and \ref{T:SchwLem_holcorres}}\,---\,which will be relevant to our proofs in Section~\ref{S:SchLem_holcorres}. \smallskip The discussion in this paragraph makes sense for any unital Banach algebra $\mathscr{A}$, where we denote the norm on $\mathscr{A}$ by $\|\boldsymbol{\cdot}\|$. Let $a\in\mathscr{A}$ and write \[ \mathsf{Hol}(a):=\text{the set of all functions holomorphic in some neighbourhood of $\sigma(a)$}. \] With the understanding that if $f,g\in\mathsf{Hol}(a)$, $f+g$ and $fg$ are defined and holomorphic on $\mathsf{dom}(f)\cap\mathsf{dom}(g)\supset\sigma(a)$, which endows $\mathsf{Hol}(a)$ with the structure of a unital $\C$-algebra, the {\em holomorphic functional calculus} is an assignment $\Theta_{a}:\,f\longmapsto f(a)$ with the following properties: \begin{enumerate} \item[(i)] $\Theta_{a}:\mathsf{Hol}(a)\longrightarrow\mathscr{A}$ is a $\C$-algebra homomorphism. \item[(ii)] $\Theta_{a}({\text{id}}_{\C})=a$. \item[(iii)] Let $\{f_{\nu}\}\subset\mathsf{Hol}(a)$ and suppose there is an open set $U\supset\sigma(a)$ such that $U\subseteq\mathsf{dom}(f_{\nu})$ for every $\nu\in\mathbb{N}$. Suppose $f\in\mathsf{Hol}(a)$ is such that $f_{\nu}\to f$ uniformly on compact subsets of $U$. Then $\|f_{\nu}(a)-f(a)\|\to 0$ as $\nu\to\infty$. \end{enumerate} It is a basic result of the spectral theory of Banach algebras that an assignment $\Theta_{a}$ with the above properties exists. \smallskip We now specialize to the Banach algebra $M_{n}(\C)$. Fix $A\in M_{n}(\C)$. Then, it is well known that (see \cite[Chapter~7, Section~1]{dunfordsch:Linopera88}) for any polynomial $p(z)=\sum_{j=0}^m\alpha_jz^j$ such that \begin{equation} p^{(j)}(\lambda)=f^{(j)}(\lambda) \;\; \forall j\,:\, 0\leq j\leq \nu(\lambda)-1 \ \text{and} \ \forall\lambda\in\sigma(A), \label{E:defeq_funccal} \end{equation} where \[ \nu(\lambda):=\min\Big\{k\in\mathbb{N}\,:\, \text{Ker}(\lambda\mathbb{I}-A)^{k+1}=\text{Ker}(\lambda\mathbb{I}-A)^{k}\Big\}, \ \lambda\in\sigma(A), \] the assignment \[ \Theta_{A}(f)=f(A):=\sum_{j=0}^m\alpha_j A^j \] has the properties (i), (ii) and (iii) above. Note that, for $\lambda\in\sigma(A)$, $\nu(\lambda)$ is the exponent of $(z-\lambda)$ in the minimal polynomial of $A$. Now, given a non-empty open set $\Omega\subset\C$ and $A\in M_n(\C)$ such that $\sigma(A)\subset\Omega$, one defines \[ f(A):=\Theta_{A}(f)\;\;\forall f\in\mathcal{O}(\Omega). \] By the foregoing discussion, we need to make {\em no assumptions} about the boundary of $\Omega$ in defining $f(A)$, $f\in\mathcal{O}(\Omega)$, such that the assignment $\mathcal{O}(\Omega)\ni f\mapsto f(A)$ ({\em provided} $\sigma(A)\subset\Omega$) behaves ``naturally''. We consider this point relevant to make because in treatments of the assignment $\mathcal{O}(\Omega)\ni f\mapsto f(a)$ in certain books, $a$ belonging to a {\em general} unital Banach algebra $\mathscr{A}$, this assignment is defined via a Cauchy integral and with certain conditions imposed on $\partial{\Omega}$ when $\Omega\varsubsetneq\C$. A rephrasing of the above point in a manner that is more precise and relevant to the proofs in Section~\ref{S:SchLem_holcorres} is as follows. \begin{remark}\label{Rem:matpara_func} Let $\Omega$ be a non-empty open set in $\C$ and let $S_{n}(\Omega):=\{A\in M_{n}(\C)\,:\,\sigma(A) \subset\Omega\}$, $n\geq 2$. Then for each $f\in\mathcal{O}(\Omega)$ and $A\in S_{n}(\Omega)$, we can define $f(A)$ such that $f(A)$\,---\,fixing $A\in S_{n}(\Omega)$ and writing $f(A):=\Theta_{A}(f)$\,---\,has the properties (i)--(iii) above (taking $\mathscr{A}=M_{n}(\C)$, $a=A$ and with $\mathcal{O}(\Omega)$ and $\Omega$ replacing $\mathsf{Hol}(a)$ and $U$, respectively) $\forall f\in \mathcal{O}(\Omega)$ \textbf{without} any conditions on $\partial{\Omega}$ or on whether $f\in\mathcal{O}(\Omega)$ extends to $\partial{\Omega}$. With $\Omega$ as above and $A\in S_{n}(\Omega)$, the assignment $\mathcal{O}(\Omega)\ni f\mapsto f(A)$ will \textbf{also} be called the holomorphic functional calculus in our discussions below. \end{remark} We end this section by stating the Spectral Mapping Theorem. When we invoke it in subsequent sections, it will be for the Banach algebra $\mathscr{A}=M_{n}(\C)$. \begin{result}[Spectral Mapping Theorem] Let $\mathscr{A}$ be a unital Banach algebra. Then for every $f\in\mathsf{Hol}(a)$ and $a\in\mathscr{A}$ we have \[ \sigma(f(a))=f(\sigma(a)). \] \end{result} \smallskip \section{Minimal polynomials under the holomorphic functional calculus}\label{S:minpo_holo_fc} In this section, we develop the key matricial tool needed in establishing Theorem~\ref{T:3pt_nec}, which is the computation of the minimal polynomial for $f(A)$, given $f\in\mathcal{O}(\mathbb{D})$ and $A\in\Omega_n$, $n\geq 2$. This is the content of Theorem~\ref{T:minpo_holo_func_anal}. We begin with a few lemmas which will help us to establish Theorem~\ref{T:minpo_holo_func_anal}. In what follows, given integers $p < q$, $\intgR{p}{q}$ will denote the set of integers $\{p, p+1,\dots, q\}$. Recall: $[\boldsymbol{\cdot}]$ denotes the greatest-integer function. Also, given $A\in M_{n}(\C)$, we will denote its minimal polynomial by $\minpo{A}$. \smallskip Let $n\geq 2$. Given $(\alpha_1,\alpha_2,\dots,\alpha_{n-1})\in\C^{n-1}$, we define $l(\alpha_1,\alpha_2,\dots,\alpha_{n-1})\in\intgR{1}{n}$ by \[ l(\alpha_1,\alpha_2,\dots,\alpha_{n-1})\,:=\, \begin{cases} n, &\text{if $\alpha_j = 0 \ \forall j\in\intgR{1}{n-1}$},\\ \min\{j\in \intgR{1}{n-1}\,:\,\alpha_j\neq 0\}, &\text{otherwise}. \end{cases} \] \begin{lemma}\label{L:minmo_lincomb_nilpo} Let $(\alpha_0,\alpha_1,\dots,\alpha_{n-1})\in\C^{n}$, $n\geq 2$. Let $A\,=\,\sum_{j=0}^{n-1}\alpha_jN^{j}$, where $N\in M_{n}(\C)$ is the nilpotent matrix of degree $n$ given by $(\boldsymbol{\delta}_{i+1,\,j})_{i,\,j\,=\,1}^n$, $\boldsymbol{\delta}_{\mu,\,\nu}$ being the Kronecker symbol. Then the minimal polynomial for $A$ is given by: \begin{equation}\label{E:minpoeq_lincomb_nilpo} \minpo{A}(t)\,=\,(t-\alpha_{0})^{[(n-1)/{l(\alpha_1,\alpha_2,\dots,\alpha_{n-1})}]+1}, \end{equation} where $l(\alpha_1,\alpha_2,\dots,\alpha_{n-1})$ is as defined above. \end{lemma} \begin{proof} The proof consists of two cases. \noindent{\bf Case 1.} $\alpha_j\,=\,0\,\,\forall j\in\intgR{1}{n-1}$. \noindent This implies that $l(\alpha_1,\alpha_2,\dots,\alpha_{n-1})\,=\,n$, and hence $[(n-1)/{l(\alpha_1,\alpha_2,\dots,\alpha_{n-1})}]\,=\,0$. The minimal polynomial in this case clearly is $(t-\alpha_0)$. This establishes \eqref{E:minpoeq_lincomb_nilpo} in this case. \smallskip \noindent{\bf Case 2.} {\em $\alpha_j\,\neq\,0$ for some $j\in\intgR{1}{n-1}$.} \noindent We write $l\,\equiv\,l(\alpha_1,\alpha_2,\dots,\alpha_{n-1})$. Then $A-\alpha_{0}I\,=\,\alpha_lN^l+\cdots+\alpha_{n-1}N^{n-1}$. Hence \[ (A-\alpha_{0}I)^{m}={\alpha_l}^{m}N^{lm}+(\text{terms in $N^k$ with $k>lm$}). \] We observe thus that the power of $(t-\alpha_0)$ in $\minpo{A}(t)$ must be the least integer $m$ such that $ml\geq n$. It is elementary to see that that integer is $[(n-1)/l]+1$. \end{proof} Let $a\in \C$ and let $g$ be a function that is holomorphic in a neighbourhood of $a$. Then, $\mathsf{ord}_{a}g$ will denote the order of vanishing of $g$ at $a$. Recall: this means that $\mathsf{ord}_{a}g$ is the least non-negative integer $j$ such that $g^{(j)}(a)\,\not=\,0$ (hence, $\mathsf{ord}_{a}g = 0$ if $g$ does not vanish at $a$). \begin{lemma}\label{L:minpo_Jordan} Let $\lambda\in\mathbb{D}$ and let $f\in\mathcal{O}(\mathbb{D})$ be a non-constant function. Let $J_{n}(\lambda)$ represent the $n\times n$ Jordan matrix associated to $\lambda$, $n\geq 2$, and $f(J_{n}(\lambda))$ be the matrix given by the holomorphic functional calculus. Then the minimal polynomial of $f(J_{n}(\lambda))$ is given by \[ \minpo{f(J_{n}(\lambda))}(t)\,=\,\left(t-f(\lambda)\right) ^{\intf[]{n-1}{\mathsf{ord}_{\lambda}{f'}+1}+1}. \] \end{lemma} \begin{proof} We begin by noting that, while \eqref{E:defeq_funccal} gives an expression for $f(J_{n}(\lambda))$ in terms of the exponent of $(t-\lambda)$ in $\minpo{J_{n}(\lambda)}$, our task is to determine the analogous exponent in $\minpo{f(J_n(\lambda))}$, for which \eqref{E:defeq_funccal} is not immediately helpful. \smallskip Let $R$ be such that $|\lambda|<R<1$. Then, the power series expansion of $f$ \[ f(z)\,:=\,\sum_{k\in\mathbb{N}}\intf{}{}{f^{(k)}(0)}{k!}{z^k}\,\, \text{converges absolutely at each}\,\,z\in D(0;\,R). \] Then by elementary properties (see Section~\ref{S:holo_fc}) of the holomorphic functional calculus we get \begin{equation}\label{E:powerseries_Jordan} f(J_{n}(\lambda))\,=\,\sum_{k\in\mathbb{N}}\intf{}{}{f^{(k)}(0)}{k!}(J_{n}(\lambda))^{k}. \end{equation} Note that $J_{n}(\lambda)=\lambda\mathbb{I}+N$, where $N$ is the nilpotent matrix as in Lemma~\ref{L:minmo_lincomb_nilpo}. We can use the binomial expansion to get \[ (J_{n}(\lambda))^{k}\,=\,(\lambda\mathbb{I}+N)^k\,=\,\sum_{j=0}^{p(k)} \facto{k}{j}{\lambda}^{k-j}N^j, \] where $p(k):=\min(k,\,n-1)$ and $k\in\mathbb{N}$. Hence \eqref{E:powerseries_Jordan} becomes \begin{equation}\label{E:doublesum_nilpo} f(J_{n}(\lambda))\,=\,\sum_{k\in\mathbb{N}}\intf{}{}{f^{(k)}(0)}{k!}\sum_{j=0}^{p(k)} \facto{k}{j}{\lambda}^{k-j}N^j. \end{equation} The coefficient of $N^j$, $0\leq j\leq n-1$, in \eqref{E:doublesum_nilpo} is \[ \sum_{k\geq j}\intf{}{}{f^{(k)}(0)}{k!}\facto{k}{j}{\lambda}^{k-j}\,=\, \sum_{k\geq j}\intf{}{}{f^{(k)}(0)}{(k-j)!\,{j!}}{\lambda}^{k-j}\,=\,\intf{}{}{f^{(j)}(\lambda)}{j!},\,\,j\in\mathbb{N}. \] Using the fact that $N^n=0$, we get \[ f(J_{n}(\lambda))\,=\,\sum_{j=0}^{n-1}\intf{}{}{f^{(j)}(\lambda)}{j!}N^{j}. \] From Lemma~\ref{L:minmo_lincomb_nilpo} we have $\minpo{f(J_{n}(\lambda))}(t)\,=\,(t-f(\lambda))^{m}$, where \[ m\,=\,\intf[]{n-1}{l(f'(\lambda),f''(\lambda),\dots,f^{(n-1)}(\lambda))}+1. \] If $\mathsf{ord}_{\lambda}{f'}\leq n-2$, then $l(f'(\lambda),f''(\lambda),\dots,f^{(n-1)}(\lambda))\,=\,\mathsf{ord}_{\lambda}{f'}+1$, else $\mathsf{ord}_{\lambda}{f'}+1$ and $l(f'(\lambda),f''(\lambda),\dots,f^{(n-1)}(\lambda))>(n-1)$. In both the cases we have: \[ \intf[]{n-1}{l(f'(\lambda),f''(\lambda),\dots,f^{(n-1)}(\lambda))}\,=\, \intf[]{n-1}{\mathsf{ord}_{\lambda}{f'}+1}. \] From the last two expressions, the lemma follows. \end{proof} \begin{lemma}\label{L:minmo_Jordanblocks} Let $\lambda\in\mathbb{D}$ and $f\in\mathcal{O}(\mathbb{D})$, $f$ non-constant. Let $n_1\leq n_2\leq\cdots\leq n_{q}$ be a sequence of positive integers. Let $J\,=\,\oplus_{i=1}^{q}J_{n_i}(\lambda)$, where $J_{n_i}(\lambda)$ represents the $n_i\times n_i$ Jordan block associated to $\lambda$. Then the minimal polynomial for $f(J)$ is given by: \[ \minpo{f(J)}(t)\,=\,\left(t-f(\lambda)\right) ^{\intf[]{n_{q}-1}{\mathsf{ord}_{\lambda}{f'}+1}+1}. \] \end{lemma} \begin{proof} Note that $f(J)\,=\,\oplus_{i=1}^{q}f(J_{n_i}(\lambda))$. If $n_i = 1$, for $i=1,\ldots,q$, then the following is obvious; else Lemma~\ref{L:minpo_Jordan} gives us \begin{equation}\label{E:minpo_summand} \minpo{f(J_{n_i}(\lambda))}(t)\,=\,\left(t-f(\lambda)\right) ^{\intf[]{n_i-1}{\mathsf{ord}_{\lambda}{f'}+1}+1}\,\,\forall i\in\intgR{1}{q}. \end{equation} For each $i$, we also have \begin{equation}\label{E:ineq_expo} \intf[]{n_i-1}{\mathsf{ord}_{\lambda}{f'}+1}+1\leq\, \intf[]{n_q-1}{\mathsf{ord}_{\lambda}{f'}+1}+1. \end{equation} Now the minimal polynomial for a matrix that is a finite direct sum of matrices is the least common multiple (in the ring $\C[t]$) of the minimal polynomials of the direct summands. This, together with \eqref{E:minpo_summand} and \eqref{E:ineq_expo}, establishes the lemma. \end{proof} \begin{theorem}\label{T:minpo_holo_func_anal} Let $A\in\Omega_n$, $n\geq 2$, and let $f\in\mathcal{O}(\mathbb{D})$ be a non-constant function. Suppose that the minimal polynomial for $A$ is given by \[ \minpo{A}(t)\,=\,\prod_{\lambda\in\sigma(A)}(t-\lambda)^{m(\lambda)}. \] Then the minimal polynomial for $f(A)$ is given by \[ \minpo{f(A)}(t)\,=\,\prod_{\nu\in f(\sigma(A))}(t-\nu)^{k(\nu)}, \] where, $k(\nu)=\max\left\{\intf{[}{]}{m(\lambda)-1} {\mathsf{ord}_{\lambda}{f'}+1}+1:\,\,\lambda \in \sigma(A)\cap f^{-1}\{\nu\}\right\}. $ \end{theorem} \begin{proof} By the Spectral Mapping Theorem the minimal polynomial for $f(A)$ is given by $\prod_{\nu\in f(\sigma(A))}(t-\nu)^{k(\nu)}$ for some $k(\nu)\in\mathbb{N}$. We must now show that $k(\nu)$ are as stated above. Let $\mathfrak{S}(\nu)$, for each $\nu\in f(\sigma(A))$, denote the set \[ \mathfrak{S}(\nu)\,:=\,\{\lambda\in\sigma(A)\,:\,f(\lambda)\,=\,\nu\}. \] Then $\{\mathfrak{S}(\nu)\,:\,\nu\in f(\sigma(A))\}$ gives a partition of $\sigma(A)$. \smallskip The Jordan canonical form tells us that $\exists S\in GL_{n}(\C)$ such that \begin{equation}\label{E:Jordanform_A} A\,=\,S\left[\oplus_{\nu\in f(\sigma(A))}\left[\oplus_{\lambda\in\mathfrak{S}(\nu)} \left[\oplus_{i=1}^{q(\lambda)}J_{n_{i}^\lambda}(\lambda)\right]\right]\right]S^{-1}, \end{equation} where $\left\{J_{n_{i}^\lambda}(\lambda)\,:\,i\in\intgR{1}{q(\lambda)}\right\}$ is the Jordan block-system associated to $\lambda$ such that $n_{1}^\lambda\leq n_{2}^\lambda\leq\cdots\leq n_{q(\lambda)}^\lambda$. Now from \eqref{E:Jordanform_A} and from the basic properties of the holomorphic functional calculus we get \begin{equation}\label{E:minpo_Jordanform_A} \minpo{f(A)}\,=\,{\boldsymbol{\mathsf{M}}}\left(\oplus_{\nu\in f(\sigma(A))}\left[\oplus_{\lambda\in\mathfrak{S}(\nu)} \left[f\left(\oplus_{i=1}^{q(\lambda)}J_{n_{i}^\lambda}(\lambda)\right)\right]\right]\right) \end{equation} (we will sometimes write $\minpo{B}$ as ${\boldsymbol{\mathsf{M}}}(B)$ for convenience). Notice that if $\nu_1\not=\nu_2\in f(\sigma(A))$, the matrices $\oplus_{\lambda\in\mathfrak{S}(\nu_j)} f\left(\oplus_{i=1}^{q(\lambda)}J_{n_{i}^\lambda}(\lambda)\right),\ j=1,2$, have $\{\nu_{1}\}$ and $\{\nu_{2}\}$, respectively, as their spectra. Hence the minimal polynomials of these are relatively prime to each other. This implies that (from \eqref{E:minpo_Jordanform_A} above): \begin{equation}\label{E:minpo_sum_Jordanblocks} \minpo{f(A)}\,=\,\prod_{\nu\in f(\sigma(A))}{\boldsymbol{\mathsf{M}}} \left(\left[\oplus_{\lambda\in\mathfrak{S}(\nu)} \left[f\left(\oplus_{i=1}^{q(\lambda)}J_{n_{i}^\lambda}(\lambda)\right)\right]\right]\right). \end{equation} The above is the consequence of the fact that the minimal polynomial of a direct sum of matrices is the least common multiple of the minimal polynomials of the individual matrices. This also implies that \begin{equation}\label{E:minpo_lcm} {\boldsymbol{\mathsf{M}}}\left(\left[\oplus_{\lambda\in\mathfrak{S}(\nu)} \left[f\left(\oplus_{i=1}^{q(\lambda)}J_{n_{i}^\lambda}(\lambda)\right)\right]\right]\right)\,=\, {\sf lcm}\left\{{\boldsymbol{\mathsf{M}}} \left(f\left(\oplus_{i=1}^{q(\lambda)}J_{n_{i}^\lambda}(\lambda)\right)\right)\,:\, \lambda\in\mathfrak{S}(\nu)\right\}. \end{equation} For a fixed $\lambda\in\mathfrak{S}(\nu)$, recall that $n_{1}^\lambda\leq n_{2}^\lambda\leq\cdots\leq n_{q(\lambda)}^\lambda$. Furthermore, $n_{q(\lambda)}^\lambda\,=\,m(\lambda)$, $m(\lambda)$ being the multiplicity of $\lambda$ in $\minpo{A}$. Putting all of this together with Lemma~\ref{L:minmo_Jordanblocks} we have: \begin{equation}\label{E:minpo_holo_jordan} {\boldsymbol{\mathsf{M}}}\left(f\left(\oplus_{i=1}^{q(\lambda)}J_{n_{i}^\lambda}(\lambda)\right)\right) (t)\,=\,\left(t-\nu\right)^{\intf{[}{]}{m(\lambda)-1}{\mathsf{ord}_{\lambda}{f'}+1}+1}\,\, \forall\lambda\in\mathfrak{S}(\nu). \end{equation} Now, \eqref{E:minpo_holo_jordan} and \eqref{E:minpo_lcm} together give us: \[ {\boldsymbol{\mathsf{M}}}\left(\left[\oplus_{\lambda\in\mathfrak{S}(\nu)} \left[f\left(\oplus_{i=1}^{q(\lambda)}J_{n_{i}^\lambda}(\lambda)\right)\right]\right]\right)\,=\, (t-\nu)^{k(\nu)}, \] where $k(\nu)$ is as stated in our theorem. The above, in view of \eqref{E:minpo_sum_Jordanblocks}, gives the result. \end{proof} \smallskip \section{Two fundamental lemmas}\label{S:princ_lemma} In this section, we state two fundamental and closely related lemmas. Lemma~\ref{L:fl2} serves as the link between the two main results of this paper. Both lemmas are simple, once we appeal to Vesentini's theorem. We begin by stating this result. \begin{result}[Vesentini, \cite{vst:shSprd68}]\label{Res:satz_vst} Let $\mathscr{A}$ be a complex, unital Banach algebra and let $\rho(x)$ denote the spectral radius of any element $x\in\mathscr{A}$. Let $f\in\mathcal{O}(\mathbb{D},\,\mathscr{A})$. Then, the function $\zeta\longmapsto\rho(f(\zeta))$ is subharmonic on $\mathbb{D}$. \end{result} \begin{lemma}\label{L:fl1} Let $\Phi\in\mathcal{O}(\mathbb{D},\,\overline{\Omega}_n)$ be such that there exists a $\theta_0\in\mathbb{R}$ and $\zeta_0\in\mathbb{D}$ satisfying $e^{i\theta_0}\in\sigma(\Phi(\zeta_0))$. Then $e^{i\theta_0}\in\sigma(\Phi(\zeta))$ for all $\zeta\in\mathbb{D}$. \end{lemma} \begin{proof} Define $\widetilde{\Phi}\in\mathcal{O}(\mathbb{D},\,M_n(\C))$ by \[ \widetilde{\Phi}(\zeta)\,=\,\Phi(\zeta)+e^{i\theta_0}\mathbb{I},\,\,\forall \zeta\in\mathbb{D}. \] By Result~\ref{Res:satz_vst}, $\rho\circ\widetilde{\Phi}$ is subharmonic on $\mathbb{D}$. Notice that for each $\zeta\in\mathbb{D}$ we have $\rho\circ\widetilde{\Phi}(\zeta)\leq 2$, and $\rho(\widetilde{\Phi}(\zeta_0))=2$. By the maximum principle for subharmonic functions it follows that $\rho\circ\widetilde{\Phi}\equiv 2$. As $\sigma(\widetilde{\Phi}(\zeta))=e^{i\theta_0}+\sigma(\Phi(\zeta))$ and $\sigma(\Phi(\zeta))\subseteq\overline{\mathbb{D}}$, this implies that $e^{i\theta_0}\in\sigma(\Phi(\zeta))\,\, \forall \zeta\in\mathbb{D}$. Hence the lemma. \end{proof} The next lemma is, essentially, a fragment of a proof in \cite[Section 3]{gb:itplSpUb07}. However, since it requires a non-trivial fact\,---\,i.e., the plurisubharmonicity of the spectral radius\,---\,we provide a proof. \begin{lemma}\label{L:fl2} Let $F\in\mathcal{O}(\mathbb{D},\,\Omega_n)$ be such that $F(0)=0$. Then, there exists $G\in\mathcal{O}(\mathbb{D},\,\overline{\Omega}_n)$ such that $F(\zeta)\,=\,\zeta\,G(\zeta)$ for all $\zeta\in\mathbb{D}$. \end{lemma} \begin{proof} As $F(0)=0$, there exists $G\in\mathcal{O}(\mathbb{D},\,M_n(\C))$ such that $F(\zeta)\,=\,\zeta\,G(\zeta)\,\,\forall\zeta\in\mathbb{D}$. Fix a $\zeta\in\mathbb{D}\setminus\{0\}$ and let $R\in (0,\,1)$ be such that $R>|\zeta|$. Then on the circle $|w|\,=\,R$ we have \begin{align} \rho(F(w))\,&=\,R\,\rho(G(w))\nonumber\\ \implies \rho(G(w))\,&=\,\intf{}{}{\rho(F(w))}{R}<\intf{}{}{1}{R}\,\,\forall w:\,|w|=R,\label{E:maxSp} \end{align} where $\rho$ denotes the spectral radius. We again appeal to Vesentini's Theorem. Subharmonicity of $\rho\circ G$, the maximum principle and \eqref{E:maxSp} give us: \[ \rho(G(\zeta))<\intf{}{}{1}{R}\,\,(\text{recall that $|\zeta|<R$}). \] By taking $R\nearrow 1$, and from the fact that $\zeta\in\mathbb{D}$ was arbitrary, we get $\rho(G(\zeta))\leq 1\,\,\forall\zeta\in\mathbb{D}$. This is equivalent to $G\in\mathcal{O}(\mathbb{D},\,\overline{\Omega}_n)$. \end{proof} \medskip \section{Some notations and results in basic complex geometry}\label{S:nott_comx_geom} This section is devoted to a couple of results in the geometry and function theory in the holomorphic setting that are relevant to our proof of Theorem~\ref{T:Schwarzlemma_corres_corol}. Our first result pertains to the structure of a holomorphic correspondence $\Gamma$ from $\mathbb{D}$ to $\Omega$ with the properties as in Theorem~\ref{T:Schwarzlemma_corres_corol}. For this, we need the following standard result (see \cite[Section~3.1]{chirka:comx_ana_set89}, for instance). \begin{result}\label{Res:crit_propproj} Let $\Omega_1\subsetneq X$ and $\Omega_2\subsetneq Y$ be open subsets in topological spaces $X$ and $Y$ respectively, with $\overline{\Omega}_2$ being compact. Let $A$ be a closed subset in $\Omega_1\times \Omega_2$. Then the restriction to $A$ of the projection $(x,\,y)\longmapsto x$ is proper if and only if $\overline{A}\cap(\Omega_1\times\partial{\Omega_2})=\emptyset$. \end{result} Owing to the above result, any holomorphic correspondence $\Gamma$ from $D_1$ to $D_2$, which are domains in $\C^{n}$, such that $\overline{\Gamma}\cap(D_1\times\partial{D_2})=\emptyset$ is called a \emph{proper holomorphic correspondence}. We can now state and prove the result that we need. This result is deducible, in essence, from \cite[Section~4.2]{chirka:comx_ana_set89}. However, since that discussion pertains to a much more general setting, we provide a proof in the set-up that we are interested in. \begin{lemma}\label{L:corres_equa_zeroset} Let $\Omega$ be a bounded domain in $\C$ and let $\Gamma$ be a proper holomorphic correspondence from $\mathbb{D}$ to $\Omega$. Then there exist an $n\in\mathbb{Z}_{+}$ and functions $a_1,\dots,a_n\in\mathcal{O}(\mathbb{D})$ such that \begin{equation} \Gamma=\Big\{(z,w)\in\mathbb{D}\times\Omega\,:\,w^n+\sum_{j=1}^n(-1)^ja_j(z)w^{n-j}=0\Big\}. \end{equation} \end{lemma} \begin{proof} Since $\left.\pi_1\right|_{\Gamma}$ is proper, it follows from the elementary theory of proper holomorphic maps that there exists a discrete set $\mathcal{A}\subset\mathbb{D}$ and an $n\in\mathbb{Z}_{+}$ such that $(\Gamma\setminus{\pi_1^{-1}(\mathcal{A})},\,\mathbb{D}\setminus\mathcal{A},\,\left.\pi_1\right|_{\Gamma})$ is an $n$-fold analytic covering. Thus, given any $p\in\mathbb{D}\setminus\mathcal{A}$, there exist an open neighborhood $V_p$ such that $p\in V_p\subset\mathbb{D}\setminus\mathcal{A}$, and $n$ holomorphic inverse branches of $\left.\pi_1\right|_{\Gamma}$ ; $(\pi_1)^{-1}_{1,\,p},\dots,(\pi_1)^{-1}_{n,\,p}\in\mathcal{O}(V_p,\,\Gamma)$; such that the images of $(\pi_1)^{-1}_{j,\,p}$, $j=1,\ldots,n$, are disjoint. \smallskip Let $\mathscr{S}_{j}$ denote the $j$-th elementary symmetric polynomial in $n$ symbols. Define: \[ a_j(z):=\mathscr{S}_{j}((\pi_1)^{-1}_{1,\,z}(z),\dots,(\pi_1)^{-1}_{n,\,z}(z)) \ \forall z\in\mathbb{D}\setminus\mathcal{A}, \ 1\leq j\leq n. \] Clearly, $a_j$ does not depend on the order in which $\{(\pi_1)^{-1}_{k,\,p}\}_{k\in\intgR{1}{n}}$ are labeled, whence it is well-defined. Now, fix a $p\in\mathbb{D}\setminus\mathcal{A}$. Provisionally, for $z\in V_p$, let us define: \[ (\pi_1)^{-1}_{j,\,z}:=\text{the holomorphic inverse branch of $\left.\pi_1\right|_{\Gamma}$ that maps $z$ to $(\pi_1)^{-1}_{j,\,p}(z),$} \] which is well-defined, because the images of $(\pi_1)^{-1}_{j,\,p}$, $j=1,\dots,n$, are disjoint. By the same reason $\{(\pi_1)^{-1}_{k,\,z}(z):1\leq k\leq n\}=\{(\pi_1)^{-1}_{j,\,p}(z):1\leq j\leq n\}$. From the last two assertions, we have \[ a_j(z)=\mathscr{S}_{j}((\pi_1)^{-1}_{1,\,p}(z),\dots,(\pi_1)^{-1}_{n,\,p}(z)) \ \forall z\in V_p. \] This tells us that $a_j$ is $\C$-differentiable at each $p\in\mathbb{D}\setminus\mathcal{A}$, whence $a_j\in\mathcal{O}(\mathbb{D}\setminus\mathcal{A})$. It is easy to see that, for each $q\in\mathcal{A}$ \begin{equation}\label{E:exiL_excep} \lim_{\mathbb{D}\setminus\mathcal{A}\,\ni z\to q}a_j(z)=\sum_{(i_1,\dots,i_j)\in\mathscr{I}_{n}(j)} \prod\nolimits_{w_{i_k}\in\bv{(\pi_1^{-1}\{q\}\cap\Gamma)},\,{1\leq k\leq j}} \ w_{i_k}, \end{equation} where $\mathscr{I}_n(j)$ denotes the set of all increasing $j$-tuples of $\intgR{1}{n}$, $j=1,\ldots,n$, and \begin{align*} \bv{(\pi_1^{-1}\{q\}\cap\Gamma)}:={}& \text{an enumeration of the {\bf list} of elements in $\pi_1^{-1}\{q\}\cap\Gamma$} \\ {}& \text{repeated according to intersection multiplicity.} \end{align*} From \eqref{E:exiL_excep} and Riemann's removable singularities theorem, we conclude that $a_j$ extends to a holomorphic function, $j\in\intgR{1}{n}$. Furthermore, by the definition of $\left.a_j\right|_{\mathbb{D}\setminus\mathcal{A}}$ and by \eqref{E:exiL_excep}, we conclude, from Vieta's formulas that, fixing a $z_0\in\mathbb{D}$: \begin{align*} {}& \text{the {\bf list}, repeated according to multiplicity, of the zeros of $w^n+\sum_{j=1}^n(-1)^ja_j(z_0)w^{n-j}$}\\ {}&=\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(z_0)}. \end{align*} As $z_0\in\mathbb{D}$ was arbitrary, the result follows. \end{proof} \begin{remark}\label{Rem:not_mltply} We have used a notation in our proof of Lemma~\ref{L:corres_equa_zeroset}\,---\,see \eqref{E:exiL_excep} and the clarifications that follow\,---\,that will be useful in later discussions/calculations. Namely: if $S$ is a non-empty set, and there is a multiplicity associated to each $s\in S$, then we shall use the notation $\bv{S}$ to denote the {\bf list} of elements of $S$ repeated according to their multiplicity. \end{remark} \begin{remark}\label{Rem:mult_corr} The positive integer $n$ that appears in the above lemma is known as \emph{the multiplicity of $\Gamma$}. In general, when we have a proper holomorphic correspondence $\Gamma$ from $\Omega_1$ to $\Omega_2$ with $\dim(\Gamma)=\dim(\Omega_1)$, then there exists an analytic variety $\mathcal{A}\subset\Omega_1$ with $\dim{\mathcal{A}}<\dim(\Gamma)$ such that the cardinality of $\pi_1^{-1}\{z\}\cap\Gamma=k$ for all $z\in\Omega\setminus\mathcal{A}$ (see \cite[Section~3.7]{chirka:comx_ana_set89}). This generalizes the notion of multiplicity to higher dimensions. \end{remark} We shall now look at an extremal problem associated to the Carath\'{e}odory pseudo-distance $C_{\Omega}$ on the domain $\Omega$ in $\C$. Recall the discussion in Section~\ref{S:intro} about the Carath\'{e}odory pseudo-distance, and the reasons that we prefer using the following definition: \begin{align} C_{\Omega}(p,\,q)&:=\sup\{\mathcal{M}_{\mathbb{D}}(f(p),\,f(q))\,:\,f\in\mathcal{O}(\Omega,\,\mathbb{D})\}\nonumber\\ &=\sup\{|\,f(q)\,|\,:\,f\in\mathcal{O}(\Omega,\,\mathbb{D})\,:\,f(p)=0\}.\label{E:alt_def_cara} \end{align} The equality in \eqref{E:alt_def_cara} is due to the fact that the automorphism group of $\mathbb{D}$ acts transitively on $\mathbb{D}$ and the M{\"o}bius distance is invariant under its action. Applying Montel's Theorem, it is easy to see that there exists a function $g\in\mathcal{O}(\Omega,\,\mathbb{D})$ such that $g(p)=0$ and $g(q)=C_{\Omega}(p,\,q)$. Such a function is called an \emph{extremal solution} for the extremal problem determined by \eqref{E:alt_def_cara}.\vspace{1mm} \smallskip We will always consider domains $\Omega$ in $\C$ for which $H^{\infty}(\Omega)$, the set of all bounded holomorphic functions in $\Omega$, separates points in $\Omega$. For such domains the Carath\'{e}odory pseudo-distance clearly is a distance. Moreover it turns out that for such domains there is a unique extremal solution (see the last two paragraphs in \cite{fsh:shwaLemInnfun69}). In Section~\ref{S:SchLem_holcorres} (since the domains considered there are bounded), we will always denote by $G_{\Omega}(p,\,q;\,\boldsymbol{\cdot}{•})$ the unique extremal solution determined by the extremal problem \eqref{E:alt_def_cara}. \medskip \section{The proof of Theorem~\ref{T:3pt_nec}}\label{S:3pt_nec_proof} This section is largely devoted to the proof of Theorem~\ref{T:3pt_nec}. Closely related to it is our example\,---\,referred to in Section~\ref{S:intro}\,---\,that compares the necessary condition given by Theorem~\ref{T:3pt_nec} with other necessary conditions for the existence of a $3$-point interpolant from $\mathbb{D}$ to $\Omega_n$. This example is presented after our proof. \subsection{The proof of Theorem~\ref{T:3pt_nec}}\label{SS:3pt_nec_proof} Let $F\in\mathcal{O}(\mathbb{D},\,\Omega_n)$ be such that $F(\zeta_j)=W_j$, $j\in\{1,2,3\}$. Fix $k\in\{1,2,3\}$ and write: \[ \widetilde{F_k}:=B_k\circ F\circ\psi_k^{-1}, \] where $B_k$, $\psi_k$ are as described before the statement of Theorem~\ref{T:3pt_nec}. Notice that $\widetilde{F_k}\in\mathcal{O}(\mathbb{D},\,\Omega_n)$ and satisfies $\widetilde{F_k}(\psi_k(\zeta_{L(k)}))=B_k(W_{L(k)}),\,\widetilde{F_k}(\psi_k(\zeta_{G(k)})) =B_k(W_{G(k)}),\,\widetilde{F_k}(0)=0$. By Lemma~\ref{L:fl2}, we get \begin{equation}\label{E:fact_aux_intp} \widetilde{F_k}(\zeta)=\zeta\,\widetilde{G_k}(\zeta)\,\,\forall\zeta\in\mathbb{D},\,\,\text{ for some $\widetilde{G_k}\in\mathcal{O}(\mathbb{D},\,\overline{\Omega}_n)$}. \end{equation} Two cases arise: \smallskip \noindent{\bf Case 1.} $\widetilde{G_k}(\mathbb{D})\subset\Omega_n$. \noindent In view of \eqref{E:fact_aux_intp}, we have \begin{equation}\label{E:2pt_redut_intp} \widetilde{G_k}\left(\psi_k(\zeta_{L(k)})\right)\,=\,W_{L(k),\,k},\,\, \widetilde{G_k}\left(\psi_k(\zeta_{G(k)})\right)\,=\,W_{G(k),\,k}, \end{equation} where $W_{L(k),\,k}:={B_k(W_{L(k)})}\big{/}{\psi_{k}(\zeta_{L(k)})}$ and $W_{G(k),\,k}:={B_k(W_{G(k)})}\big{/}{\psi_{k}(\zeta_{G(k)})}$. Now by Result~\ref{Res:gbintS}, a necessary condition for \eqref{E:2pt_redut_intp} is \begin{equation}\label{E:ineq_2pt_redut} \max\left\{\sub{\eta\in\sigma\left(W_{L(k),\,k}\right)}{\max}|\,b_{G(k),\,k}(\eta)\,|,\, \sub{\eta\in\sigma\left(W_{G(k),\,k}\right)}{\max}|\,b_{L(k),\,k}(\eta)\,|\right\} \leq\mathcal{M}_{\mathbb{D}}\left(\zeta_{G(k)},\,\zeta_{L(k)}\right), \end{equation} where $b_{L(k),\,k}$ and $b_{G(k),\,k}$ denote the minimal Blaschke product corresponding to the matrices $W_{L(k),\,k}\,,\,\,W_{G(k),\,k}$. Given the definitions of the latter matrices, we will need Theorem~\ref{T:minpo_holo_func_anal} to determine $b_{L(k),\,k}$, $b_{G(k),\,k}$. By this theorem, we have \begin{align} b_{L(k),\,k}(t)&=\prod_{\nu\in\sigma\left(B_k(W_{L(k)})\right)} \!\!\!{\intf{(}{)}{t-{\nu}/{\psi_k(\zeta_{L(k)})}}{1-\overline{{\nu}/{\psi_k(\zeta_{L(k)})}}t}}^{q(\nu,\,L(k),\,k)}\,\, \label{E:minpo_2pt_redut1}\\ b_{G(k),\,k}(t)&=\prod_{\nu\in\sigma\left(B_k(W_{G(k)})\right)} \!\!\!{\intf{(}{)}{t-{\nu}/{\psi_k(\zeta_{G(k)})}}{1-\overline{{\nu}/{\psi_k(\zeta_{G(k)})}}t}}^{q(\nu,\,G(k),\,k)},\label{E:minpo_2pt_redut2} \end{align} where $q(\nu,\,L(k),\,k)$ and $q(\nu,\,G(k),\,k)$ are as in the statement of Theorem~\ref{T:3pt_nec}. Now if $\eta\in\sigma(W_{L(k),\,k})$ or $\eta\in\sigma(W_{G(k),\,k})$, then $\eta=\mu/{\psi_k(\zeta_{L(k)})}$ for some $\mu\in\sigma(B_k(W_{L(k)}))$ or $\eta=\mu/{\psi_k(\zeta_{G(k)})}$ for some $\mu\in\sigma(B_k(W_{G(k)}))$, respectively, and conversely. This observation together with \eqref{E:minpo_2pt_redut2}, \eqref{E:minpo_2pt_redut1} and \eqref{E:ineq_2pt_redut} establishes the first part of our theorem. \smallskip \noindent{\bf Case 2.} $\widetilde{G_k}(\mathbb{D})\cap\partial\,\Omega_n\not=\emptyset$. \noindent Let $\zeta_0\in\mathbb{D}$ such that $e^{i\theta_0}\in\sigma(\widetilde{G_k}(\zeta_0))$ for some $\theta_0\in\mathbb{R}$. By Lemma~\ref{L:fl1}, we have $e^{i\theta_0}\in\sigma(\widetilde{G_k}(\zeta))$ for every $\zeta\in\mathbb{D}$. By \eqref{E:fact_aux_intp}, $e^{i\theta_0}\zeta\in\sigma(\widetilde{F_k}(\zeta))$. Let $\Phi\,\equiv\,F\circ\psi_k^{-1}$. Then $\Phi\in\mathcal{O}(\mathbb{D},\,\Omega_n)$ and we have: \[ e^{i\theta_0}\zeta\in\sigma(B_k\circ\Phi(\zeta))=B_k\{\sigma(\Phi(\zeta))\}\,\,\forall\zeta\in\mathbb{D}, \] where the last equality is an application of the Spectral Mapping Theorem . For each $\zeta\in\mathbb{D}$, let $\omega_{\zeta}\in\sigma(\Phi(\zeta))$ be such that $B_{k}(\omega_{\zeta})\,=\,e^{i\theta_0}\zeta$. Notice that if $\zeta_1\not=\zeta_2$ then $\omega_{\zeta_1}\not=\omega_{\zeta_2}$, whence $E\,:=\,\{\omega_{\zeta}\,:\,\zeta\in\mathbb{D}\}$ is an uncountable set. Notice that $\omega_{\zeta}$ satisfies: \[ B_k(\omega_\zeta)=e^{i\theta_0}\zeta\,\,\,\text{and}\,\,\, \mathrm{det}\left(\omega_{\zeta}\mathbb{I}-\Phi(\zeta)\right) =0\,\,\forall\zeta\in\mathbb{D}. \] This implies $\mathrm{det}\left(\omega_{\zeta}\mathbb{I}-\Phi(\zeta)\right)\,=\,0$ for every $\omega_{\zeta}\in E$. As $E$ is uncountable, it follows that $\mathrm{det}\left((\boldsymbol{\cdot})\mathbb{I}-\Phi\circ(e^{-i\theta_0}B_k)\right)\equiv 0$. As $B_k$ maps $\mathbb{D}$ onto itself, it follows that \begin{equation}\label{E:blasholcor} B_k^{-1}\{e^{i\theta_0}\zeta\}\subset\sigma(\Phi(\zeta))=\sigma\left(F\circ\psi_k^{-1}(\zeta)\right)\,\,\forall\zeta\in\mathbb{D}. \end{equation} Putting $\zeta=\psi_k(\zeta_{L(k)})$ and $\psi_k(\zeta_{G(k)})$ respectively in \eqref{E:blasholcor} we get $B_k^{-1}\{e^{i\theta_0}\psi_k(\zeta_{L(k)})\}\subset\sigma(F(\zeta_{L(k)})) =\sigma(W_{L(k)})$ and $B_k^{-1}\{e^{i\theta_0}\psi_k(\zeta_{G(k)})\}\subset\sigma(F(\zeta_{G(k)}))=\sigma(W_{G(k)})$.\qed \smallskip We now present our example that compares the condition given by Theorem~\ref{T:3pt_nec} with that of Costara and Ogle and Baribeau--Kamara as alluded to in Remark~\ref{Rmk:comparison}. \begin{example}\label{Exam:comparison} For each $n\geq 4$, there is a class of $3$-point data-sets for which Theorem~\ref{T:3pt_nec}\linebreak \pagebreak \noindent{implies that it cannot admit any $\mathcal{O}(\mathbb{D},\,\Omega_n)$-interpolant, whereas the conditions given by \cite{costara:osNPp05, ogle:thesis99} and by Result~\ref{Res:inequality_baribeauKamara} provide no information.} \end{example} \noindent Let $0$, $a$ and $b$ be distinct points in $\mathbb{D}$. For each $n\geq 4$, we will construct a class of $3$-point matricial data\,---\,of the form $\{0, \mathsf{A}, \mathsf{B}\}$ such that $0, \mathsf{A}, \mathsf{B}\in\Omega_n$, where $\mathsf{A}$ and $\mathsf{B}$ will depend on $n, a, b$\,---\,for which the aforementioned statement holds true. To this end, consider the matrices: \begin{align*} \mathsf{A}&:=\sum_{j=0}^{n-1}\alpha_j N^j,\ \text{where $\alpha_j\in\C$ is such that} \ \alpha_j=0 \ \forall j\in\intgR{0}{n-3} \ \text{and} \ \alpha_{n-2}\neq 0, \\ \mathsf{B}&:=\text{diag}[\beta_1,\dots,\beta_n]:=\ \text{the diagonal matrix with {\bf distinct} entries $\beta_i$} \end{align*} such that $\beta_1=0$ and such that \begin{itemize} \item $\beta_i^2\neq\beta_j^2\;\;\forall i\neq j$, \item $|\,\beta_i\,|<|\,b\,|\;\;\forall i\in\intgR{1}{n}$, and \item $|\,\beta_i\,|^2<\mathcal{M}_{\mathbb{D}}(a,\,b)\;\;\forall i\in\intgR{1}{n}$. \end{itemize} We shall see the relevance of these conditions presently. Here, $N$ is the nilpotent matrix introduced in Lemma~\ref{L:minmo_lincomb_nilpo}. \smallskip Notice that by Lemma~\ref{L:minmo_lincomb_nilpo} the minimal polynomial for $\mathsf{A}$ is given by $\minpo{\mathsf{A}}(t)=t^2$ while its characteristic polynomial is $t^n$. As $n\geq 4$, $\mathsf{A}$ is not a non-derogatory matrix. Thus, for the data $\{(0,0),(a,\mathsf{A}),(b,\mathsf{B})\}$ the result given by Costara and Ogle cannot be applied in this setting, and hence yields no information. \smallskip Let us compute the form that the condition \eqref{E:inequality_baribeauKamara} takes for these data by setting $(\zeta_1, W_1)=(0, 0)$, $(\zeta_2, W_2)=(a, \mathsf{A})$ and $(\zeta_3, W_3)=(b, \mathsf{B})$. Observe: $B_1(t)=t$ for every $t\in\mathbb{D}$. The Spectral Mapping Theorem then implies that $\sigma(B_1(W_j))=B_1(\sigma(W_j))=\sigma(W_j)$ for $j=2,3$. Hence we have: \[ \mathcal{H}^{\mathcal{M}_{\mathbb{D}}}\left(\sigma\intf{(}{)}{B_1(W_{2})}{\psi_1(\zeta_{2})}\cap\mathbb{D},\;\; \sigma\intf{(}{)}{B_1(W_{3})}{\psi_1(\zeta_{3})}\cap\mathbb{D}\right)=\max_{1\leq i\leq n}\intf{|}{|}{\,\beta_i\,}{\,b\,}. \] Since $\beta_i$'s are all distinct, $\nu=n$. So the condition \eqref{E:inequality_baribeauKamara} turns out to be: \begin{equation}\label{E:barbkamar_1} \max_{1\leq i\leq n}\intf{|}{|}{\,\beta_i\,}{\,b\,}\leq\mathcal{M}_{\mathbb{D}}(a,\,b)^{1/n}. \end{equation} Permuting the roles of $(\zeta_j,W_j)$, $j=1,2,3$, in Result~\ref{Res:inequality_baribeauKamara} provides two other conditions. In the case when $(\zeta_1, W_1)=(b, \mathsf{B})$ the condition \eqref{E:inequality_baribeauKamara} holds trivially, because its left-hand side reduces to $0$. On the other hand if $(\zeta_1, W_1)=(a, \mathsf{A})$, then under the restriction $\beta_i^2\neq\beta_j^2$, for $i\neq j$, and that $|\,\beta_i\,|^2<\mathcal{M}_{\mathbb{D}}(a,\,b)$ for every $i\in\intgR{1}{n}$, we leave it to the reader to check that this condition turns out to be: \begin{equation}\label{E:barbkamar_2} \max_{1\leq i\leq n}\intf{|}{|}{\,\beta_i^2\,}{\,\psi_a(b)\,}\leq |\,b\,|^{1/n}. \end{equation} Now let us compute the form that the necessary condition provided by Theorem~\ref{T:3pt_nec} takes for the above data in the case $k=1$, $L(k)=2$, $G(k)=3$. To this end, we observe first: \[ \sub{\mu\in\sigma\left(B_1(W_{2})\right)}{\max}\prod_{\nu\in\sigma\left(B_1(W_{3})\right)} {\mathcal{M}_{\mathbb{D}}\left(\intf{}{}{\mu}{\psi_1(\zeta_{2})},\,\intf{}{}{\nu}{\psi_1(\zeta_{3})}\right)}^{q(\nu,\,3,\,1)} = \prod_{i=1}^n{\intf{|}{|}{\,\beta_i\,}{\,b\,}}^{q(\beta_i,\,3,\,1)}, \] which is equal to zero since $q(\beta_i,\,3,\,1)=1$ for every $i\in\intgR{1}{n}$ and $\beta_1=0$. On the other hand: \[ \sub{\mu\in\sigma\left(B_1(W_{3})\right)}{\max}\prod_{\nu\in\sigma\left(B_1(W_{2})\right)} {\mathcal{M}_{\mathbb{D}}\left(\intf{}{}{\mu}{\psi_1(\zeta_{3})},\,\intf{}{}{\nu}{\psi_1(\zeta_{2})}\right)}^{q(\nu,\,2,\,1)} =\max\left\{{\intf{|}{|}{\,\beta_i\,}{\,b\,}}^{q(0,\,2,\,1)}:i\in\intgR{1}{n}\right\}. \] Notice $m(2,\,0)$\,---\,the multiplicity of $0$ as a zero of $\minpo{\mathsf{A}}$\,---\,is $2$ and $\mathsf{ord}_{0}{B'_1}=0$ whence $q(0,\,2,\,1)=2$. Hence, one of the conditions given by Theorem~\ref{T:3pt_nec} is: \begin{equation}\label{E:barbkamar_3} \max_{1\leq i\leq n}{\intf{|}{|}{\,\beta_i\,}{\,b\,}}^2\leq\mathcal{M}_{\mathbb{D}}(a,\,b). \end{equation} Notice that \begin{equation}\label{E:barbkamar1_2_geq_3} |\,b\,|\mathcal{M}_{\mathbb{D}}(a,\,b)^{1/2}<\min\left\{|\,b\,|{\mathcal{M}_{\mathbb{D}}(a,\,b)}^{1/n},\, |\,b\,|^{1/2n}{\mathcal{M}_{\mathbb{D}}(a,\,b)}^{1/2}\right\}\;\;\forall n\geq 3. \end{equation} Now, let us choose $\{\beta_1,\dots,\beta_n\}\subset\mathbb{D}$ such that, in addition to the conditions listed above, \[ |\,\beta_i\,|\leq \min\left\{|\,b\,|{\mathcal{M}_{\mathbb{D}}(a,\,b)}^{1/n},\, |\,b\,|^{1/2n}{\mathcal{M}_{\mathbb{D}}(a,\,b)}^{1/2}\right\}\;\;\forall i\in\intgR{2}{n}, \] and such that for some $i_0\in\intgR{2}{n}$, \[ |\,\beta_{i_0}\,|>|\,b\,|\mathcal{M}_{\mathbb{D}}(a,\,b)^{1/2}. \] This is possible owing to \eqref{E:barbkamar1_2_geq_3}. We now see that all forms of the condition arising from Result~\ref{Res:inequality_baribeauKamara} are satisfied by the given data-set, while \eqref{E:barbkamar_3} does not hold true. Hence, Theorem~\ref{T:3pt_nec} implies that there does not exist an $F\in\mathcal{O}(\mathbb{D},\,\Omega_n)$ such that $F(0)=0$, $F(a)=\mathsf{A}$ and $F(b)=\mathsf{B}$ while Result~\ref{Res:inequality_baribeauKamara} provides no information. \medskip \section{A Schwarz lemma for holomorphic correspondences}\label{S:SchLem_holcorres} This section is dedicated to the proof of Theorem~\ref{T:Schwarzlemma_corres_corol}. However, as hinted at in Section~\ref{S:intro}, there is a more precise inequality, from which Theorem~\ref{T:Schwarzlemma_corres_corol} follows. We begin, therefore, with the following: \begin{theorem}\label{T:SchwLem_holcorres} Let $\Omega$ be a bounded domain in $\C$ and let $\Gamma$ be a proper holomorphic correspondence from $\mathbb{D}$ to $\Omega$. Then, for every $\zeta_1,\zeta_2\in\mathbb{D}$ we have \[ \max\Big\{\sub{\mu\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_2)}}{\max}\prod_{\nu\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1)}}C_\Omega(\nu,\,\mu), \ \sub{\mu\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1)}}{\max}\prod_{\nu\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_2)}}C_\Omega(\nu,\,\mu)\Big\} \leq\mathcal{M}_{\mathbb{D}}(\zeta_1,\,\zeta_2), \] where $\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\boldsymbol{\cdot})}$ is as in Section~\ref{S:intro}, read together with Remark~\ref{Rem:not_mltply}. \end{theorem} We remind the reader that $C_{\Omega}$ here is as defined by \eqref{E:alt_def_cara}. The proof of the above theorem is an easy consequence of the following lemma. \begin{lemma}\label{L:flsch_hol_corres} Let $\Gamma$ be as in the statement of Theorem~\ref{T:SchwLem_holcorres}. Then \[ \sub{\mu\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta)}}{\max}\prod_{p\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(0)}}C_{\Omega}(p,\,\mu)\leq |\,\zeta\,| \ \forall\zeta\in\mathbb{D}. \] \end{lemma} \begin{proof} By Lemma~\ref{L:corres_equa_zeroset}, there exists a positive integer $n$ and functions $a_j\in\mathcal{O}(\mathbb{D})$, $j=1,\dots,n$, such that \[ \Gamma=\Big\{(z,w)\in\mathbb{D}\times\Omega\,:\, w^n+\sum_{j=1}^n(-1)^j a_j(z)w^{n-j}=0\Big\}. \] Note that by our definition of $\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\boldsymbol{\cdot})}$ \begin{align*} \bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\boldsymbol{\cdot})}=& \ \text{the {\bf list}, repeated according to multiplicity, of the zeros of}\\ & w^n+\sum_{j=1}^n(-1)^j a_j(\boldsymbol{\cdot})w^{n-j}. \end{align*} Let us now define an open set in $M_{n}(\C):S_{n}(\Omega):=\{A\in M_{n}(\C):\sigma(A)\subset \Omega\}$. Let $\Phi\in\mathcal{O}(\mathbb{D},\,M_{n}(\C))$ that is defined by \[ \Phi(z):=\text{companion matrix corresponding to the polynomial} \ w^n+\sum_{j=1}^n(-1)^j a_j(z)w^{n-j}. \] In the notation introduced by Remark~\ref{Rem:not_mltply}, $\bv{\sigma(A)}$ will denote the list of eigenvalues of $A$ repeated according to their multiplicity. In this notation, we have $\bv{\sigma(\Phi(z))}=\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(z)}\subset\Omega$. Hence $\Phi(\mathbb{D})\subset S_n(\Omega)$. Now we choose an arbitrary $z\in\Omega$ and fix it. Consider $B\in\mathcal{O}(\Omega)$ defined by \[ B:=\prod_{p\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(0)}}G_{\Omega}(p,\,z;\,\boldsymbol{\cdot}), \] where $G_{\Omega}(p,\,z;\,\boldsymbol{\cdot})$ denotes the Carath\'{e}odory extremal for points $p,z\in\Omega$, whose existence was discussed in Section~\ref{S:nott_comx_geom}. As $B$ is holomorphic in $\Omega$, it induces\,---\,via the holomorphic functional calculus\,---\, a map (which we continue to denote by $B$) from $S_{n}(\Omega)$ to $M_n(\C)$. The Spectral Mapping Theorem tells us that $\sigma(B(A))=B(\sigma(A))\subset \mathbb{D}$ for every $A\in S_{n}(\Omega)$. Hence $B(A)\subset\Omega_n$ for every $A\in S_{n}(\Omega)$. \smallskip \noindent{\bf Claim.} $B(\Phi(0))=0$. \noindent To see this we write: \[ B(w)=\big(\prod\nolimits_{p\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(0)}}(w-p)\big)g(w), \ w\in\Omega, \] where $g\in\mathcal{O}(\Omega)$. Hence, since\,---\,by the holomorphic functional calculus\,---\,the assignment $f\longmapsto f(\Phi(0)), \ f\in\mathcal{O}(\Omega)$, is multiplicative, as discussed in Section~\ref{S:holo_fc} (also see Remark~\ref{Rem:matpara_func}), we get \[ B(\Phi(0))=\big(\prod\nolimits_{p\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(0)}}(\Phi(0)-p\,\mathbb{I})\big)g(\Phi(0)). \] As $\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(0)}=\bv{\sigma(\Phi(0))}$, Cayley--Hamilton Theorem implies that the product term in the right hand side of the above equation is zero, whence the claim. \smallskip Consider the map $\Psi$ defined by: \[ \Psi(\zeta):=B\circ\Phi(\zeta), \ \zeta\in\mathbb{D}. \] It is a fact that $\Psi\in\mathcal{O}(\mathbb{D})$. Moreover from the above claim and the discussion just before it, we have $\Psi\in\mathcal{O}(\mathbb{D},\,\Omega_n)$ with $\Psi(0)=0$. By Lemma~\ref{L:fl2} there exists $\widetilde{\Psi}\in\mathcal{O}(\mathbb{D},\,\overline{\Omega}_{n})$ such that \begin{equation}\label{E:fact_Psi} \Psi(\zeta)=\zeta\widetilde{\Psi}(\zeta) \ \forall\zeta\in\mathbb{D}. \end{equation} From the definition of $\Psi$, \eqref{E:fact_Psi}, and from the Spectral Mapping Theorem, we get \begin{align*} B(\bv{\sigma(\Phi(\zeta))})&=\zeta\bv{\sigma(\widetilde{\Psi}(\zeta))}\\ \implies |\,B(\mu)\,|&\leq |\,\zeta\,| \ \forall\mu\in\bv{\sigma(\Phi(\zeta))}=\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta)}, \ \forall\zeta\in\mathbb{D}. \end{align*} This in turn implies that \[ \prod_{p\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(0)}}|\,G_{\Omega}(p,\,z;\,\mu)\,| \leq |\,\zeta\,| \ \forall\zeta\in\mathbb{D}, \ \forall\mu\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta)}. \] Since $z$ is arbitrary we can take $z=\mu$ for some $\mu\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta)}$. This with the observation that $G_{\Omega}(p,\,\mu;\,\mu)=C_{\Omega}(p,\,\mu)$ establishes the lemma. \end{proof} We are now ready to give: \begin{proof}[The proof of the Theorem~\ref{T:SchwLem_holcorres}] Fix $\zeta_1$, $\zeta_2\in\mathbb{D}$ and consider the correspondence $\widetilde{\Gamma}$ such that \[ \bv{{F_{\widetilde{\Gamma}}(\zeta)}}=\bv{{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\psi_{\zeta_1}^{-1}(\zeta))}}. \] It is easy to see that $\widetilde{\Gamma}$ is a proper holomorphic correspondence from $\mathbb{D}$ to $\Omega$. So from the lemma above and with the observation that $\bv{{F_{\widetilde{\Gamma}}(0)}}=\bv{{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1)}}$ we get \begin{equation}\label{E:ineq_1} \sub{\mu\in\bv{{F_{\widetilde{\Gamma}}(\zeta)}}}{\max}\prod_{p\in\bv{{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1)}}}C_{\Omega}(p,\,\mu) \leq |\,\zeta\,| \ \forall\zeta\in\mathbb{D}. \end{equation} Putting $\zeta=\psi_{\zeta_1}(\zeta_2)$ in \eqref{E:ineq_1} gives us \begin{equation}\label{E:ineq_2} \sub{\mu\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_2)}}{\max}\prod_{p\in\bv{{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1)}}}C_{\Omega}(p,\,\mu) \leq \mathcal{M}_{\mathbb{D}}(\zeta_1,\,\zeta_2). \end{equation} Interchanging the role of $\zeta_1$ and $\zeta_2$ in the above discussion, we get \begin{equation}\label{E:ineq_3} \sub{\mu\in\bv{{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1)}}}{\max}\prod_{p\in\bv{{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_2)}}}C_{\Omega}(p,\,\mu) \leq \mathcal{M}_{\mathbb{D}}(\zeta_1,\,\zeta_2). \end{equation} From \eqref{E:ineq_3} and \eqref{E:ineq_2} the result follows. \end{proof} Theorem~\ref{T:Schwarzlemma_corres_corol} is a corollary of Theorem~\ref{T:SchwLem_holcorres}. This is almost immediate; we probably just require a few words about the Hausdorff distance induced by $C_{\Omega}$. We refer the reader to \cite[p.~279]{munk:topo_74} for the definition of the Hausdorff distance\,---\,which is a distance on the set of non-empty closed, bounded subsets of a distance space $(X,\,d)$. In our case $(X,\,d)=(\Omega,\,C_{\Omega})$ and it is easy to check that \[ \mathcal{H}^{C_{\Omega}}\big(F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1),\,F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_2)\big)=\max\Big\{\max_{w\in F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1)} {\rm dist}_{C_{\Omega}}\big(w,\,F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_2)\big),\, \max_{w\in F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_2)}{\rm dist}_{C_{\Omega}}\big(w,\,F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1)\big)\Big\}, \] given $\zeta_1$, $\zeta_2\in\mathbb{D}$, and where, given $p\in\Omega$, $\emptyset\not= S\subset\Omega$, ${\rm dist}_{C_{\Omega}}(p,\, S):=\inf\nolimits_{q\in S}C_{\Omega}(p,\,q)$. Clearly, there exists $j,\,k\in\{1,2\}$ with $j\not=k$ such that \[ \mathcal{H}^{C_{\Omega}}\big(F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_1),\,F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_2)\big)^n\leq \sub{\mu\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_j)}}{\max}\prod_{\nu\in\bv{F_{\raisebox{-3pt}{$\scriptstyle{\!\Gamma}$}}(\zeta_k)}}C_\Omega(\nu,\,\mu), \] where $n$ is the multiplicity of $\Gamma$. Combining the above inequality with the inequality in Theorem~\ref{T:SchwLem_holcorres}, we establish Theorem~\ref{T:Schwarzlemma_corres_corol}. \medskip \section*{Acknowledgements} I wish to thank my thesis adviser Gautam Bharali for the many helpful discussions during the course of this work. I am especially grateful for his supporting me as a Research Fellow under his Swarnajayanti Fellowship (Grant No.~DST/SJF/MSA-02/2013-14).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The notions of consistent truncations and dualities are at first sight seemingly unrelated. The former singles out degrees of freedom in a physical theory which decouple from the rest. They are important because it is often easier to analyse a system, i.e. solve its equations of motion, if the number of degrees of freedom is reduced. Therefore, consistent truncations provide a crucial tool for finding solutions in (super)gravity, with a wide range of applications. More precisely, this idea can be summarised by the commuting diagram \begin{equation}\label{diag:contrunc} \tikz{ \node (fullaction) {full action $S$}; \node[at={(fullaction.east)},anchor=center,xshift=20em] (redaction) {truncated action $S_{\mathrm{red}}$}; \node[at={(fullaction.south)},anchor=center,yshift=-3em] (fieldeq) {field equations $\delta S = 0$}; \node[at={(redaction.south)},anchor=center,yshift=-3em] (redfieldeq) {truncated field equations $\delta S_{\mathrm{red}} = 0$.}; \draw[->] (fullaction.east) -- (redaction.west); \draw[->] (fullaction.south) -- (fieldeq.north); \draw[->] (redaction.south) -- (redfieldeq.north); \draw[->] (fieldeq.east) -- (redfieldeq.west); \draw[->] ($(fieldeq.south) + (0,-0.5)$) -- ($(redfieldeq.south) + (0,-0.5)$) node[midway,above] {truncation}; \draw[<-] ($(fieldeq.south) + (0,-0.6)$) -- ($(redfieldeq.south) + (0,-0.6)$) node[midway,below] {uplift}; } \end{equation} The truncation is said to be consistent if the two pathways to the truncated equations of motion yield the same result. Otherwise, the chosen truncation ansatz does not single out decoupled degrees of freedom. One should note that it is in general very difficult to find consistent truncations, because the standard Kaluza-Klein ansatz with massless gauge fields is in general inconsistent \cite{Duff:1984hn}. For a long time, only few exceptions have been known, including sphere reductions \cite{Cvetic:2000dm} and reductions on group manifolds \cite{Scherk:1979zr}. The second central concept for this paper is dualities. They are ubiquitous in physics, and find applications from models in condensed matter to high energy physics. The basic idea is that two seemingly very different models ultimately still share the same (quantum/classical) dynamics. Here, we are particularly interested in target-space dualities (T-dualities) of two-dimensional $\sigma$-models. They can be studied from two major perspectives: the worldsheet and the target space. The former is the surface on which the $\sigma$-model is defined as a field theory. In the classical limit, T-duality acts as a canonical transformation relating at least two different $\sigma$-models on different target spaces. Alternatively, one can consider the low-energy, effective target-space action that captures the dynamics of the strings described by the $\sigma$-model. Here, T-duality maps existing solutions of the field equations to new solutions. In this context it plays a role as a solution generating technique. It is known that a small subset of all T-dualities, namely abelian T-dualities, are preserved under quantum corrections on the worldsheet and the corresponding higher-derivative corrections in the target space effective action \cite{}. For the remaining, generalised, T-dualities, their fate under quantum corrections is not yet known. There are some preliminary results \cite{Hassler:2020tvz,Borsato:2020wwk,Codina:2020yma,Hassler:2020wnp} that suggest that they might also cover higher derivative corrections. Here, we are mostly concerned with the leading two-derivative effective action, and such higher-derivative corrections will just touch the discussion tangentially. Historically, one distinguishes between non-abelian T-duality \cite{delaOssa:1992vci}, Poisson-Lie T-duality \cite{Klimcik:1995ux,Klimcik:1995dy}, which might be supplemented by a Wess-Zumino-Witten term \cite{Klimcik:2001vg}, and dressing cosets \cite{Klimcik:1996np}. Our results will apply to all of these, and we shall call them {\it generalised T-dualities}. Note that we will not consider mirror symmetries, which relate different Calabi-Yau manifolds; they are related to abelian T-duality by the SYZ conjecture \cite{Strominger:1996it}. For completeness, let us quickly explain the two central mathematical objects mentioned here. A Poisson-Lie group is a Lie group $G$ equipped with a Poisson bracket that satisfies \begin{equation} \{f_1, f_2\}(g g') = \{f_1 \circ L_g, f_2 \circ L_g \}(g') + \{ f_1 \circ R_{g'}, f_2 \circ R_{g'} \}(g) \end{equation} where $L_h(g) = h g$ denotes the left-multiplication on $G$ and $R_h (g) = g h$ the right-multiplication, respectively. The classical Lie group $\rightarrow$ Lie algebra correspondence was extended by Drinfeld to Poisson-Lie group $\rightarrow$ Lie bialgebra, which contains in addition to the Lie algebra Lie($G$) also a dual Lie algebra Lie($\widetilde{G}$) corresponding to dual Lie group $\widetilde{G}$. Together, these two Lie groups form a double Lie group $\mathds{D}$ with the Lie algebra Lie($\mathds{D}$)=Lie($G$)$\oplus$Lie($\widetilde{G}$). Poisson-Lie groups are called dual, if they share the same $\mathds{D}$. For example, they can be related by exchanging $G$ and $\widetilde{G}$. Poisson-Lie T-duality got its name because it identifies $\sigma$-models whose target spaces are dual Poisson-Lie groups. In addition to the right- and left-action of $G$ on $G$, Poisson-Lie groups admit a so-called dressing action \cite{10.4310/jdg/1214444324}. It can be most easily seen on the level of the doubled Lie group $\mathds{D}$ where any element of the subgroup $F \subset \mathds{D}$ generates the dressing transformation $g\rightarrow g'$ on $G$ with \begin{equation} f g = g' \widetilde{h} \,, \qquad g,\,g'\in G\,, \quad \widetilde{h}\in\widetilde{G}\,, \quad \text{and} \quad f \in F\,. \end{equation} Here the group multiplication is the multiplication on $\mathds{D}$. The set of orbits of this action is called the dressing coset \cite{Klimcik:1996np}. In this paper, we will even work with a more general notion than dressing cosets, which are called generalised cosets \cite{Demulder:2019vvh}. They drop the requirement that the doubled Lie group $\mathds{D}$ has to originate from a Poisson-Lie group and can be understood as the lift of the concept of a coset in differential geometry to generalised geometry. At the end, the following dependencies arise: \begin{equation} \begin{array}{ccccccc} \text{ abelian } & \subset & \text{ non-abelian } & \subset & \text{ Poisson-Lie } & \subset & \text{ WZW-Poisson }\\ & & & & \raisebox{0.6em}{\rotatebox{270}{$\subset$}} & & \raisebox{0.6em}{\rotatebox{270}{$\subset$}}\\ & & & & \text{ dressing coset } & \subset & \text{ \underline{generalised coset}\,. } \end{array} \end{equation} At first glance these two concepts, consistent truncations and generalised T-duality, seem unrelated. However, in recent years it has become evident that a deeper understanding of each of these concepts can be achieved using similar tools, most prominently (exceptional) generalised geometry \cite{Coimbra:2011nw,Coimbra:2012af} and double/exceptional field theories. In particular the latter were initially developed \cite{Siegel:1993th,0904.4664,Hohm:2010pp} with abelian T-duality (and later, its extension to U-duality) in mind; before its utility for understanding consistent truncations was appreciated \cite{Lee:2014mla,Hohm:2014qga,Cassani:2019vcl}. While double field theory is in principle able to describe also non-geometric setups, we shall use it in its most conservative form, with the standard solution $\widetilde{\partial}^i = 0$ of the section condition, thus rendering it equivalent to generalised geometry. Later, it was shown that generalised T-dualities can also be conveniently studied in double field theory \cite{Hassler:2017yza,Demulder:2018lmj,Sakatani:2019jgu}. Moreover, they provided the first systematic construction of the generalised frame fields that underlie generalised Scherk-Schwarz reductions \cite{Dabholkar:2002sy,Aldazabal:2011nj,Geissbuhler:2011mx}, and which still dominate the landscape of consistent truncations. Motivated by this connection, the goal of this paper is to explore further the relation between generalised T-dualities and consistent truncations. It is already known that all generalised T-dualities except for dressing cosets give rise to consistent generalised Scherk-Schwarz reductions. Therefore, we shall take a closer look at generalised cosets, and show how they can be used to construct a very rich class of consistent truncations that go beyond generalised Scherk-Schwarz reductions. Our starting point will be the most general known class of consistent truncations \cite{Cassani:2019vcl} that arise from generalised geometry. In section~\ref{sec:ggconsistent}, we review the underlying construction. At the end of this section, we then show how the already mentioned generalised Scherk-Schwarz reductions are connected to Poisson-Lie T-duality. At this point, we already see that dualities can be understood naturally in the context of consistent truncations, by relating different truncation ans\"atze for a higher-dimensional theory that give rise to the same truncated theory: \begin{equation}\label{eqn:dualitiesdiagram} \begin{tikzpicture} \node[name=ansatz1] {ansatz 1}; \node[at={(ansatz1)},xshift=7em,name=ansatz2] {ansatz 2}; \node[at={(ansatz2)},xshift=7em,name=dots] {\dots}; \node[at={(dots)},xshift=7em,name=ansatzn] {ansatz $n$}; \draw[<->] (ansatz1.east) -- (ansatz2.west); \draw[<->] (ansatz2.east) -- (dots.west) node[midway,yshift=2em] {dualities}; \draw[<->] (dots.east) -- (ansatzn.west); \path (ansatz1.east) -- (ansatzn.west) node[midway,yshift=-4em,name=trunc] {truncated theory\,.}; \draw (ansatz1.south) -- ($(trunc.north)-(0.4,0)$); \draw (ansatz2.south) -- ($(trunc.north)-(0.2,0)$); \draw[loosely dotted] (dots.south) -- ($(trunc.north)+(0.2,0)$); \draw (ansatzn.south) -- ($(trunc.north)+(0.4,0)$); \node [at={($(trunc.west)-(4.5,0)$)},name=dminusn] {$d-n$}; \node [at={(dminusn)},yshift=4em,name=D] {$D$}; \node [at={(D)},yshift=2em] {dimension}; \draw (dminusn.north) -- (D.south); \draw[->] ($(dminusn)+(12.6,0)$) -- ($(D)+(12.6,0)$) node[midway,rotate=90,below] {uplift}; \draw[<-] ($(dminusn)+(12.4,0)$) -- ($(D)+(12.4,0)$) node[midway,rotate=90,above] {truncation}; \end{tikzpicture} \end{equation} \bigskip Inspired by this finding, we are led to pose the question: \begin{center} Is there a one-to-one correspondence between generalised T-dualities and consistent truncations in generalised geometry? \end{center} To answer it, section~\ref{sec:mega-vielbein} introduces a new construction for the truncation ans\"atze depicted in \eqref{eqn:dualitiesdiagram}. It employs a higher-dimensional, auxiliary space to geometrise the generalised structure group that underlies each consistent truncation. A similar technique was used by Pol\'a\v{c}ek and Siegel \cite{Polacek:2013nla} some time ago to find a natural construction of the generalised Riemann tensor in double field theory. It provides a structured construction of covariant torsion and curvature tensors which, as we shall show, are of central interest in consistent truncations also. With this tool at hand, we establish in section~\ref{sec:truncgenRicci} that all spaces that admit generalised T-dualities give rise to consistent truncations. Establishing the converse, namely that any consistent truncation can be constructed from a dressing coset, is more involved. On the one hand, we know that consistent truncations can be constructed on Sasaki-Einstein spaces \cite{Cassani:2019vcl} that are clearly not double cosets. However, this is not, of itself, a problem, because as the illustration \eqref{eqn:dualitiesdiagram} shows, there can exist different ans\"atze that result in the same truncation. To check if at least one of them originates from a dressing coset we have to find solutions for the Jacobi identity of the underlying Lie algebra in which certain components of the structure coefficients are fixed while others remain free. We set up the computation in section~\ref{sec:Jac}, but it is hard to find solutions because the Jacobi identities lead to many coupled quadratic equations. This problem gets easier the more components of the structure coefficients are fixed, because they do not then appear as unknowns in the quadratic equations. Fortunately, this is exactly what happens when consistent truncations are considered in connection with higher-derivative corrections. We explore this idea in section~\ref{sec:higherderiv}, and interpret it as a hint that there might indeed be a one-to-one connection between consistent truncations and generalised T-dualities. We plan to return to the problem of constructing a complete proof in future work. Finally, section~\ref{sec:vielbeins} is concerned with the explicit construction of truncation an\"atze for generalised cosets, the basis for any generalised T-duality. Here, we extend the results of previous work \cite{Demulder:2019vvh}, by presenting a completely systematic construction. This culminates with the observation~\ref{th:MAIN} that on any dressing coset $H \backslash \mathds{D} / F$ one can construct a consistent truncation with generalised structure group $F$. \section{Generalised geometry and consistent truncations}\label{sec:ggconsistent} \subsection{(Super)gravity and generalised geometry}\label{sec:gengeo} We are interested in consistent truncations of (super)gravity, such as arises as the low-energy effective action in string theory. For the sake of simplicity, we only consider the NS/NS sector, which is governed by the action \begin{equation}\label{eqn:SNSNS} S = \int \mathrm{d}^D x\, \sqrt{g} e^{-2\phi} \left( R + 4 \partial_i \phi \partial^i \phi - \frac1{12} H_{ijk} H^{ijk} \right)\,. \end{equation} We use the convention here that $g$ is the determinant of the metric $g_{ij}$, and $R$ the corresponding curvature scalar. Besides the metric, there is also the dilaton $\phi$ and the $B$-field $B_{ij}$ on the $D$-dimensional spacetime $M_D$ (also referred to as the target space). The action does not incorporate $B_{ij}$ directly, but instead its field strength $H_{ijk} = 3 \partial_{[i} B_{jk]}$. Conformal invariance of the string at the quantum level fixes $D=10$ for superstrings and $D=26$ for their bosonic counterpart. \eqref{eqn:SNSNS} possesses two local symmetries, namely diffeomorphisms that account for coordinate changes and $B$-field gauge transformations $B_{ij} \rightarrow B_{ij} + 2 \partial_{[i} \xi_{j]}$. The infinitesimal versions of these two symmetries can be written in a unified form in terms of the generalised Lie derivative \begin{equation}\label{eqn:genLie} \mathcal{L}_U V^I = U^J \partial_J V^I - (\partial_J U^I - \partial^I U_J) V^J\,. \end{equation} $U^I = \begin{pmatrix} u^i \quad& u_i\end{pmatrix}$, $U_I = \begin{pmatrix} u_i \quad& u^i \end{pmatrix}$ and $V^I$ are generalised vectors on the generalised tangent space $T M \oplus T^* M$. While the original tangent space $T M$ is always extended in this setup, the manifold $M_D$ might be extended (double/exceptional field theories) or not (generalised geometry). The two alternatives are related by the section condition, which singles out the non-constant, physical, directions on $M_D$. We always solve the section condition in the trivial way, namely $\partial_I = \begin{pmatrix} \partial_i & 0 \end{pmatrix}$. Thus we do not need to distinguish between the two different approaches, and we just use the doubling of the partial derivative index as a convenient book-keeping device. The form of \eqref{eqn:genLie} is fixed by the requirement that the natural pairing between a vector and a one-form, \begin{equation}\label{eqn:etametric} U^I V^J \eta_{IJ} = u^i v_i + u_i v^i\,, \quad \text{with} \quad \eta_{IJ} = \begin{pmatrix} 0 & \delta_i^j \\ \delta_j^i & 0 \end{pmatrix}\,, \end{equation} is preserved. The invariance of this pairing introduces an O($D$,$D$) structure. This structure allows the metric and $B$-field to be captured in terms of one unified object, the generalised metric \begin{equation}\label{eqn:genmetric} \mathcal{H}_{IJ} = \begin{pmatrix} g_{ij} - B_{ik} g^{kl} B_{lj} &\qquad -B_{ik} g^{kj} \\ g^{ik} B_{kj} & \qquad g^{ij} \end{pmatrix}\,. \end{equation} It has two defining properties: 1) it is symmetric and 2) it is an element of O($D$,$D$);\ $\mathcal{H}_{IK} \eta^{KL} \mathcal{H}_{LJ} = \eta_{IJ}$, where $\eta^{IJ}$ is the inverse of $\eta_{IJ}$. Moreover, its parameterisation in terms of $g_{ij}$ and $B_{ij}$ is chosen such that the infinitesimal action of diffeomorphisms and $B$-field gauge transformation is mediated by the generalised Lie derivative. The generalised metric introduces more structure than might be immediately obvious from the parameterisation \eqref{eqn:genmetric}. To reveal it, we re-express $\mathcal{H}_{IJ}$ in terms of the generalised frame field $E^A{}_I$, with the defining properties \begin{align} \eta_{IJ} &= E^A{}_I \eta_{AB} E^B{}_J\,, & \eta_{AB} &= \begin{pmatrix} 0 & \delta_a^b \\ \delta_b^a & 0 \end{pmatrix} \quad \text{and}\\\label{eqn:HHcanonical} \mathcal{H}_{IJ} &= E^A{}_I \mathcal{H}_{AB} E^B{}_J\,, & \mathcal{H}_{AB} &= \begin{pmatrix} \eta_{ab} & 0 \\ 0 & \eta^{ab} \end{pmatrix} \,, \end{align} where $\eta_{ab}$ is either the Lorentzian or Euclidean metric and $\eta^{ab}$ its inverse. These two relations still do not fix the generalised frame completely. One can perform coordinate dependent double Lorentz transformations, $E_A{}^I \rightarrow \Lambda_A{}^B E_B{}^I$, without changing the resulting $\eta_{IJ}$ and $\mathcal{H}_{IJ}$, assuming that $\Lambda_A{}^C \eta_{CD} \Lambda_B{}^D = \Lambda_A{}^C \mathcal{H}_{CD} \Lambda_B{}^D = 0$. They furnish the double Lorentz group $H_D$=O($D$-1,1)$\times$O(1,$D$-1) for Lorentzian or $H_D$=O($D$)$\times$O($D$) for Euclidean spacetimes. We can use these ingredients to rewrite the action \eqref{eqn:SNSNS}. There are in fact different ways to do this. For our purpose the so-called flux formulation is most suitable \cite{Hohm:2010xe,Geissbuhler:2011mx,Geissbuhler:2013uka}. To make the invariance under generalised diffeomorphisms (generated by the generalised Lie derivative) of the action manifest, the flux formulation introduces two generalised fluxes, namely \begin{equation}\label{eqn:defgenfluxes} \mathcal{L}_{E_A} E_B = F_{AB}{}^C E_C \quad \text{and} \quad \mathcal{L}_{E_A} e^{-2 d} = - F_A e^{-2 d} \,. \end{equation} Taking into account the definition of the generalised Lie derivative \eqref{eqn:genLie}, the first equation leads to \begin{equation}\label{eqn:FABC} F_{ABC} = 3 E_{[A}{}^I \partial_I E_B{}^J E_{C]J}\,, \end{equation} while the second relation requires more explanation because we encounter a new quantity, $d$, the generalised dilaton. It is defined by \begin{equation} e^{-2 d} = \sqrt{g} e^{- 2 \phi} \quad \text{or} \quad d = \phi - \frac14 \log g\,. \end{equation} Importantly, $e^{-2 d}$ does not transform as a scalar under the generalised Lie derivative, but rather, as a scalar density, resulting in \begin{equation} \mathcal{L}_U e^{- 2 d} = \partial_I ( U^I e^{-2 d} )\,. \end{equation} Therefore, the second generalised flux $F_A$ is given by \begin{equation}\label{eqn:FA} F_A = 2 E_A{}^I \partial_I d - \partial_I E_A{}^I\,. \end{equation} The two fluxes $F_{ABC}$ and $F_A$ repackage the information contained in the metric, dilaton and $B$-field. Hence, the action \eqref{eqn:SNSNS} can be alternatively written in terms of them as \cite{Geissbuhler:2011mx,Geissbuhler:2013uka} \begin{equation}\label{eqn:SDFT} S = \int \mathrm{d}^D x \, e^{-2 d} \mathcal{R} \end{equation} with the generalised Ricci scalar being given by \begin{equation}\label{eqn:genRicciScalar} \mathcal{R} = P^{AB} P^{CD} \left( \overline{P}{}^{EF} + \frac13 P^{EF} \right) F_{ACE} F_{BDF} + 2 P^{AB}( 2 D_A F_B - F_A F_B ) \,. \end{equation} We encounter two new objects here: First, the flat derivative \begin{equation} D_A = E_A{}^I \partial_I \end{equation} and second, the projectors \begin{equation} P^{AB} = \frac12 ( \eta^{AB} + \mathcal{H}^{AB} ) \quad \text{and} \quad \overline{P}{}^{AB} = \frac12 ( \eta^{AB} - \mathcal{H}^{AB} )\,. \end{equation} They are called projectors because of their properties \begin{equation} P_A{}^C P_C{}^B = P_A{}^B \,, \quad \overline{P}{}_A{}^C \overline{P}{}_C{}^B = \overline{P}{}_A{}^B \,, \quad \text{and} \quad P_A{}^C \overline{P}{}_C{}^B = \overline{P}{}_A{}^C P_C{}^B = 0 \,. \end{equation} At this point, we just have a rewriting of \eqref{eqn:SNSNS} in terms of quantities that appear ``naturally'' in generalised geometry or double field theory. \subsection{Systematics of consistent truncations} Next, we explain how this form of the action helps in identifying consistent truncations. The answer is given by the following theorem, which was established in \cite{Cassani:2019vcl}: \begin{theorem}\label{th:CT} Let $M_D$ be a $D$-dimensional manifold with a generalised $F$-structure defining a set of invariant tensors ${f^{(j)}}$ with $F \subset H_D$ and only constant, singlet intrinsic torsion. Then there is a consistent truncation of the action \eqref{eqn:SNSNS} on $M_D$ defined by expanding all bosonic fields in terms of invariant tensors. \end{theorem} \noindent A key observation to understand this theorem is that invariant tensors are covariantly constant with respect to an appropriate O($D$,$D$) covariant derivative $\nabla_A$, i.e. $\nabla_A f^{(j)} = 0$ for all $f^{(j)}$. This derivative acts as a selector for degrees of freedom that are retained in the truncation. An important feature of this derivative is that since it obeys the Leibniz rule, the product of any two invariant tensors is again covariantly constant, and thus will be part of the truncation. Usually, the derivative $\nabla_I$ is not the generalised Levi-Civita connection $\overline{\nabla}_I$ from which the generalised curvature scalar in the action \eqref{eqn:SDFT} is derived. Fortunately though, the two are related. To see how, we have to look closer at covariant derivatives in generalised geometry/double field theory. Different generalised covariant derivatives differ by their generalised torsion. Similarly to the situation in standard geometry, the torsion is defined by comparing the generalised Lie derivative written with partial or covariant derivatives as in \cite{Coimbra:2011nw}: \begin{equation}\label{eqn:defgentorsion} \left( \mathcal{L}^{\nabla}_U - \mathcal{L}^{\partial}_U \right) V^I = U^J V^K T_{JK}{}^I \,. \end{equation} To compute this quantity directly, we need an explicit definition of the covariant derivative: \begin{equation}\label{eqn:defnabla} \nabla_I E_A{}^J = \partial_I E_A{}^J - \Omega_{IA}{}^B E_B{}^J + \Gamma_{IK}{}^J E_A{}^K \,, \end{equation} involving the generalised spin connection $\Omega_{IA}{}^B E_B{}^J$ and the generalised affine connection $\Gamma_{IJ}{}^K$. The two are related by the vielbein postulate $\nabla_I E_A{}^J = 0$, leading to \begin{equation}\label{eqn:GammafromOmega} \Gamma_{IJK} = \partial_I E^A{}_J E_{AK} + \Omega_{IJK} \,. \end{equation} With these definitions, the generalised torsion evaluates to \begin{equation}\label{eqn:gentorsionTIJK} T_{IJK} = 3 \Gamma_{[IJK]} \,. \end{equation} Additionally, the generalised covariant derivative should at least satisfy two more constraints \cite{Coimbra:2011nw}: \begin{enumerate} \item Compatibility with the $\eta$-metric, $\nabla_I \eta_{JK} = 0$, implying \begin{equation} \Gamma_{IJK} = -\Gamma_{IKJ}\,. \end{equation} \item Compatibility with the generalised metric, $\nabla_I \mathcal{H}_{JK} =0$, implying \begin{equation} P_{(J}{}^M \overline{P}{}_{K)}{}^N \Gamma_{IMN} = \frac12 P_{(J}{}^M \overline{P}{}_{K)}{}^N \partial_I \mathcal{H}_{MN}\,. \end{equation} \item\label{item:IBP} (optional) Compatibility with integration by parts $\int \mathrm{d}^D x \, e^{-2 d} \nabla_I V^I = 0$, implying \begin{equation} \overline{\Gamma}^J{}_{JI} = 2 \partial_I d\,. \end{equation} \end{enumerate} Note that for the generalised Levi-Civita connection, point \ref{item:IBP} applies, but it does not have to hold for $\nabla_I$. In the definition of the generalised torsion \eqref{eqn:defgentorsion}, the generalised Lie derivative acts on a vector. We obtain a similar result for higher rank generalised tensors, where $T_{IJ}{}^K$ acts on each index individually. But there are also densities, like the generalised dilaton $d$. Hence, we introduce an additional torsion for them, too, \begin{equation} \left( \mathcal{L}^{\nabla}_U - \mathcal{L}^{\overline{\nabla}}_U \right) e^{-2 d} = U^I T_I e^{-2 d} \,, \end{equation} which gives rise to \begin{equation}\label{eqn:gentorsionTI} T_I = 2 \partial_I d - \Gamma^J{}_{JI}\,. \end{equation} As in conventional geometry, the generalised Levi-Civita connection, $\overline{\nabla}_I$, is defined to have vanishing torsion. Thus, we can deduce the relation between the generalised Christoffel symbols $\overline{\Gamma}_{IJK}$ of $\overline{\nabla}_I$ and $\Gamma_{IJK}$ of $\nabla_I$, \begin{equation}\label{eqn:defSigma} \begin{aligned} \Sigma_{IJK} &= \overline{\Gamma}_{IJK} - \Gamma_{IJK} \\ &= -\left( \frac13 P P P + \overline{P}{} P P + P \overline{P}{} \overline{P}{} + \frac13 \overline{P}{} \overline{P}{} \overline{P}{} \right){}_{IJK}{}^{LMN} \left( T_{LMN} + \frac6{D-1} \eta_{L[M}T_{N]} \right) \,. \end{aligned} \end{equation} Note that $\Sigma_{IJK}$ is not fully fixed by all constraints above. There are some undetermined components \cite{Coimbra:2011nw}, which we have set to zero here. However, it is known that these components do not contribute to the two-derivative effective action \eqref{eqn:SNSNS} or its field equations. Therefore, it is safe to neglect them. After this digression, we recognise that the action of the generalised Levi-Civita connection on the invariant tensors has to be of the form \begin{equation} \overline{\nabla}_I f^{(j)} = \Sigma_I \cdot f^{(j)} \,. \end{equation} According to theorem~\ref{th:CT}, the torsions $T_{IJK}$ and $T_I$ of $\nabla$, called the intrinsic torsion, have to be covariantly constant. From this observation and \eqref{eqn:defSigma}, it is obvious that \begin{equation} \nabla_I \Sigma_{JKL} = 0 \end{equation} holds. Therefore, any tensor expression which is a function $u$ of $f^{(j)}$ and their $\overline{\nabla}$-derivatives satisfies \begin{equation} \nabla_I u( f^{(j)}, \overline{\nabla} ) = 0 \end{equation} automatically. Still, this does not guarantee that the resulting tensor is contained in the set ${f^{(j)}}$ that forms the truncation. To overcome this problem, we note that all elements of this set are invariant under the action of the group $F$. Any function $u( f^{(j)}, \overline{\nabla} )$ will share this property as long as the intrinsic torsion is a singlet (i.e.~it is invariant) under the $F$ action. In this case $\Sigma_{IJK}$ is invariant because, due to $F \subset H_D$, the projectors in its definition are automatically invariant, too. Let us summarise the ingredients of the construction implied by theorem~\ref{th:CT} and fix some notation for later. For a consistent truncation we need: \begin{enumerate} \item The set of \underline{all} invariant tensors $\{ f^{(j)} \}$ which satisfy \begin{equation}\label{eqn:definvartensors} \nabla_I f^{(j)} = 0 \quad \text{and} \quad \nabla_\alpha f^{(j)} = 0 \,. \end{equation} Here $\nabla_\alpha$ denotes the infinitesimal action by generators $t_\alpha \in$ Lie($F$) of the structure group $F$. The reason why we use this particular notation, will become clear in the next section. \item Constant, singlet intrinsic torsion that implies \begin{equation}\label{eqn:defsinglettorsion} \nabla_I T_{JKL} = 0\,,\quad \nabla_I T_J = 0 \qquad \text{and} \qquad \nabla_\alpha T_{IJK} = 0\,, \quad \nabla_\alpha T_I = 0 \,. \end{equation} \end{enumerate} \subsection{The truncated theory}\label{sec:trunctheory} At this point, we can say more about the truncated theory. In general, the consistent truncation still has an infinite number of degrees of freedom. To accommodate them, the $D$-dimensional target space is split into two parts \begin{equation} M_D = M_{D-n} \times M_{n}\,, \end{equation} comprising an external manifold $M_{D-n}$ of dimension $D-n$ where no truncation is performed, and the internal manifold $M_n$ that hosts the invariant tensors discussed above. Accordingly, we split the coordinates, namely $x^\mu$ on $M_{D-n}$ and $y^i$ on $M_n$. On the external space, we do not require generalised geometry because no truncation takes place here. Therefore, we do not use doubled indices on $M_{D-n}$. Only on the internal space $M_n$, we have the O($n$,$n$)-invariant pairing $\eta_{IJ}$, a generalised metric $\mathcal{H}_{IJ}$, and all the other objects introduced above. In this setup the metric, $B$-field and dilaton on the full space split into five contributions: \begin{itemize} \item external metric $g_{\mu\nu}( x )$\,, \item external $B$-field $B_{\mu\nu}( x )$\,, \item dilaton $\phi(x, y)$\,, \item gauge connection $A_\mu{}^I(x, y)$\,, and \item scalar field $\mathcal{H}_{IJ}(x, y)$\,. \end{itemize} All shall be understood as fields in the truncated theory with their $y$-dependence totally fixed by the truncation ansatz. The action \eqref{eqn:SNSNS} can be rewritten in terms of these fields by using a Kaluza-Klein ansatz. This rewriting is cumbersome, but can be found in different papers, e.g. \cite{Aldazabal:2011nj,Geissbuhler:2011mx,Hohm:2013nja}. Here, we start from the result of \cite{Hohm:2013nja}, because it is fully general and does not commit to any specific ansatz on the internal manifold yet: \begin{equation}\label{eqn:SNSNSsplit} \begin{aligned} S = \int \mathrm{d}^{(D-n)} x\, \mathrm{d}^n y \sqrt{g} e^{-2 \phi} \Bigl( & \overline{R} + \mathcal{D}_\mu \phi \mathcal{D}^\mu \phi - \frac1{12} \mathcal{H}_{\mu\nu\rho} \mathcal{H}^{\mu\nu\rho} \\ & + \frac18 \mathcal{D}_\mu \mathcal{H}_{IJ} \mathcal{D}^\mu \mathcal{H}^{IJ} - \frac14 \mathcal{H}_{IJ} \mathcal{F}_{\mu\nu}{}^{I} \mathcal{F}^{\mu\nu J} - V \Bigr)\,. \end{aligned} \end{equation} In the following, we will refine this action by using all the insights from the previous subsection. But before doing so, let us discuss the new objects that appear in \eqref{eqn:SNSNSsplit}. $\overline{R}$ is the curvature scalar for the external metric. Furthermore, the truncated theory is a gauge theory coupled to (super)gravity, normally referred to as a gauged (super)gravity. It comes with the gauge covariant derivative \begin{equation} \mathcal{D}_\mu = \partial_\mu - \mathcal{L}_{A_\mu} \end{equation} that incorporates the connection one-form $A_\mu{}^I$. Like in Yang-Mills theory, one has to add a kinetic term containing the corresponding 2-form field strength tensor \begin{equation} \mathcal{F}_{\mu\nu}{}^I = 2 \partial_{[\mu} A_{\nu]}{}^I - [ A_\mu, A_\nu ]_{\mathrm{C}}^I \end{equation} to the action. Instead of a Lie bracket, it employs the Courant bracket \begin{equation} [ U, V ]_{\mathrm{C}} = \frac12 (\mathcal{L}_U V - \mathcal{L}_V U)\,. \end{equation} This is because we have not yet performed the truncation. Later, we will see that this bracket turns into a Lie bracket after the truncation, as is required for a gauge theory. Moreover, the 3-form $H_{ijk}$ in the original action has to be complemented by a Chern-Simons term giving rise to \begin{equation} \mathcal{H}_{\mu\nu\rho} = 3 \partial_{[\mu} B_{\nu\rho]} + 3 A_{[\mu}{}^I \partial_{\nu} A_{\rho]I} - A_{[\mu| I |} [ A_{\nu}, A_{\rho]} ]_{\mathrm{C}}^I\,. \end{equation} Finally, we have the scalar potential \begin{equation} V = - \mathcal{R}\,. \end{equation} It is expressed in terms of the generalised Ricci scalar \eqref{eqn:genRicciScalar} \textit{on the internal manifold $M_n$}. For completeness, we note that gauge transformations of the connection 1-form $A_\mu{}^I$ are mediated by \begin{equation} \delta_\Lambda A_\mu{}^I = \partial_\mu \Lambda^I + \mathcal{L}_\Lambda A_\mu{}^I \,, \end{equation} For all the covariant fields, the generalised Lie derivative mediates gauge transformation, namely \begin{equation} \delta_\Lambda = \mathcal{L}_\Lambda \,. \end{equation} At this stage, there are still an infinite number of scalars $\mathcal{H}_{IJ}$ and vectors $A_\mu{}^I$. But, as suggested above, we should expand them in terms of tensors that are invariant under the structure group in order to obtain a consistent truncation. Let us start with the generalised metric \begin{equation} \mathcal{H}_{IJ}(x, y) = \widetilde{E}^A{}_I(y) h_{AB}(x) \widetilde{E}^B{}_J(y)\,, \end{equation} and adopt the notation that quantities which depend only on $y$ are decorated with a tilde. In contrast to section~\ref{sec:gengeo}, the generalised metric $h_{AB}$ here is not restricted to the diagonal form given in \eqref{eqn:HHcanonical}. Rather, it is an element of the coset O($n$,$n$)/$H_n$ and it thereby captures the scalar moduli space of the truncated theory. However, this choice has to be refined because it is not automatically invariant under the action of the structure group $F$. Thus, we restrict $h_{AB}$ to the coset \begin{equation} h_{AB}(x) \in \frac{C_{\mathrm{O}(n,n)}(F)}{C_{H_n}(F)} \end{equation} where $C_G(H)$ denotes all element of the Lie group $G$ that commute with all element of $H$. Similarly, the vectors of the truncated theory are formed from all $n_{\mathrm{v}}$ $F$-invariant vectors $\widetilde{K}_{\dot\alpha}{}^I (y)$, with $\dot\alpha = 1, \dots, n_{\mathrm{v}}$, \begin{equation} A_\mu{}^I(x, y) = A_\mu{}^{\dot\alpha} (x) \widetilde{K}_{\dot\alpha}{}^I (y) \,. \end{equation} Thanks to the analysis in the last section, it is not hard to see how these vectors act on other invariant tensors through the generalised Lie derivative.\footnote{The trick here is to write the generalised Lie derivative in terms of the covariant derivative $\nabla_A$ and keep in mind that it annihilates all $F$-invariant tensors on the internal manifold. Thus, the only non-vanishing contribution comes from the torsion tensor.} In particular, we find \begin{align} \mathcal{L}_{\widetilde{K}_{\dot\alpha}} \widetilde{K}_{\dot\beta}{}^I &= (\widetilde{T}_{\dot\alpha})_J{}^I \widetilde{K}_{\dot\beta}{}^J \\ \mathcal{L}_{\widetilde{K}_{\dot\beta}} \mathcal{H}_{IJ} &= 2 (\widetilde{T}_{\dot\alpha})_{(I}{}^K \mathcal{H}_{J)K} \end{align} with \begin{equation} (\widetilde{T}_{\dot\alpha})_I{}^J = - \widetilde{K}_{\dot\alpha}{}^K \widetilde{T}_{KI}{}^J\,. \end{equation} Here, $\widetilde{T}_{IJK}$ is the generalised torsion from \eqref{eqn:gentorsionTIJK}. We only decorated it with a tilde to emphasis that it depends only on the internal coordinates $y^i$. One can interpret $(\widetilde{T}_{\dot\alpha})_I{}^J$ as $2n \times 2n$-matrices. They are elements of the Lie algebra Lie[$C_{\mathrm{O}(n,n)}(F)$] and generate the gauge group $G\subset C_{\mathrm{O}(n,n)}(F)$ \cite{Cassani:2019vcl}. The corresponding Lie algebra has the structure \textit{constants} \begin{equation} [ \widetilde{T}_{\dot\alpha}, \widetilde{T}_{\dot\beta} ] = f_{\dot\alpha\dot\beta}{}^{\dot\gamma} \widetilde{T}_{\dot\gamma}\,. \end{equation} This result is quite remarkable, because it tells us that the torsion $T_{IJK}$ controls how the gauge group is embedded in the global symmetry group of the truncated theory. It plays the role of the embedding tensor, which is known from truncations that preserve maximal or half-maximal supersymmetry (see \cite{Samtleben:2008pe,Trigiante:2016mnt} for reviews). Eventually, we want to get rid of the internal manifold's coordinate dependence. To this end, we note that $K_{\dot\alpha}{}^A = \widetilde{K}_{\dot\alpha}{}^I \widetilde{E}^A{}_I$ in flat indices is constant. The same of course also holds for the torsion $T_{ABC}$ and $T_A$. Thus, we define the constant tensors \begin{align} \eta_{\dot\alpha\dot\beta} &= K_{\dot\alpha}{}^A K_{\dot\beta}{}^B \eta_{AB}\,, \\ h_{\dot\alpha\dot\beta} &= K_{\dot\alpha}{}^A K_{\dot\beta}{}^B h_{AB}(x)\,, \quad \text{and} \\ (T_{\dot\alpha})_A{}^B &= \widetilde{K}_{\dot\alpha}{}^I T_{IJ}{}^K \widetilde{E}_A{}^J \widetilde{E}^B{}_K \,. \end{align} Note that $\eta_{\dot\alpha\dot\beta}$ is invariant under the action of the gauge group, and we use it to raise and lower dotted Greek indices. Moreover, we have to deal with the dilaton. It decomposes into \begin{equation} \phi(x, y) = \overline{\phi}(x) + \widetilde{d}(y)\,. \end{equation} With these definitions in place, we can eventually restrict the action \eqref{eqn:SNSNSsplit} to $M_{D-n}$, \begin{equation} \begin{aligned} S = V_{\mathrm{int}} \int d^{(D-n)} x \, \sqrt{g} e^{-2\overline{\phi}} \Bigl(& \overline{R} + 4 \mathcal{D}_\mu \overline{\phi} \mathcal{D}^\mu \overline{\phi} - \frac1{12} \mathcal{H}_{\mu\nu\rho} \mathcal{H}^{\mu\nu\rho} \\ & \frac18 \mathcal{D}_\mu h_{AB} \mathcal{D}^\mu h^{AB} - \frac14 h_{\dot\alpha\dot\beta} \mathcal{F}_{\mu\nu}{}^{\dot\alpha} \mathcal{F}^{\mu\nu\dot\beta} - V \Bigr)\,. \end{aligned} \end{equation} This reduced action employs the covariant derivative and field strengths \begin{align} \mathcal{D}_\mu \overline{\phi} &= \partial_\mu \overline{\phi} - \frac12 A_\mu\,, \\ \mathcal{D}_\mu h_{AB} &= \partial_\mu h_{AB} - 2 A_\mu{}^{\dot\alpha} (T_{\dot\alpha})_{(A}{}^C h_{B)C}\,, \\ \mathcal{F}_{\mu\nu}{}^{\dot\alpha} &= 2 \partial_{[\mu} A_{\nu]}{}^{\dot\alpha} - f_{\dot\beta\dot\gamma}{}^{\dot\alpha} A_\mu{}^{\dot\beta} A_\nu{}^{\dot\gamma}\,, \qquad \text{and}\\ \mathcal{H}_{\mu\nu\rho} &= 3 \partial_{[\mu} B_{\nu\rho]} + 3 A_{[\mu}{}^{\dot\alpha} \partial_\nu A_{\rho]\dot\alpha} - f_{\dot\alpha\dot\beta\dot\gamma} A_\mu{}^{\dot\alpha} A_\nu{}^{\dot\beta} A_\rho{}^{\dot\gamma}\,, \end{align} and the gauge transformations \begin{align} \delta_\Lambda A_\mu{}^{\dot\alpha} &= \partial_\mu A_\mu{}^{\dot\alpha} + \Lambda^{\dot\beta} A_\mu{}^{\dot\gamma} f_{\dot\beta\dot\gamma}{}^{\dot\alpha}\,, \\ \delta_\Lambda \overline{\phi} &= \frac12 \Lambda^{\dot\alpha} T_{\dot\alpha}\,, \\ \delta_\Lambda h_{AB} &= 2 \Lambda^{\dot\alpha} (T_{\dot\alpha})_{(A}{}^C h_{B)C}\,. \end{align} Two new quantities, \begin{equation} A_\mu (x) = A_\mu{}^{\dot\alpha} (x) T_{\dot\alpha}, \quad \text{and} \quad T_{\dot\alpha} = K_{\dot\alpha}{}^A T_A\,, \end{equation} have appeared here. As intended, neither depends on the internal coordinates $y$ because the torsion $T_A$ is constant by theorem~\ref{th:CT}. There is only one imprint of the manifold $M_d$ left, namely its generalised volume \begin{equation} V_{\mathrm{int}} = \int \mathrm{d}^n y \, e^{-2 \widetilde{d}}\,, \end{equation} which appears as an overall prefactor. Before, we turn to an example, let us summarise the salient features of all truncated theories that arise from theorem~\ref{th:CT} in generalised geometry: \begin{itemize} \item They are gauged (super)gravities in dimensions $D < 10$ for the superstring or $D < 26$ for the bosonic string. \item Their field content and gauge group are completely fixed by the constant, singlet intrinsic torsion $T_{ABC}$ and $T_A$. \item The only part of the action that is not so easily fixed is the scalar potential $V$. It requires detailed knowledge about the geometry of the internal manifold and its dependence on the scalar moduli fields. \end{itemize} However, the scalar potential is central for most applications. That is the reason why a particular subclass of consistent truncations, called generalised Scherk-Schwarz reductions \cite{Aldazabal:2011nj,Geissbuhler:2011mx}, currently dominates most applications. \subsection{Generalised Scherk-Schwarz reductions and Poisson-Lie T-duality}\label{sec:gSSandPLTD} Remarkably, these reductions are directly related to the second central pillar of this paper, dualities. In this subsection, we explain how. \subsubsection*{Consistent truncation perspective} Generalised Scherk-Schwarz reductions implement a special case of theorem~\ref{th:CT} with trivial structure group $F$. Therefore, any tensor in flat indices which is annihilated by the covariant derivative $\nabla_I$ forms part of the truncation. In particular, we do not have to deal with $\nabla_\alpha$, introduced in \eqref{eqn:definvartensors} and \eqref{eqn:defsinglettorsion}. Moreover, the spin connection $\Omega_{IA}{}^B$ in \eqref{eqn:defnabla} vanishes. This situation is similar to a group manifold in standard geometry, where one can always introduce a flat derivative without curvature. The only difference is that in generalised geometry instead of the Riemann tensor, the generalised Riemann tensor \cite{Hohm:2011si} \begin{equation} \mathcal{R}_{IJKL} = 2 \partial_{[I} \Gamma_{J]KL} + 2 \Gamma_{[I|ML} \Gamma_{|J]K}{}^M + \frac12 \Gamma_{MIJ} \Gamma^M{}_{KL} + (IJ) \leftrightarrow (KL)\,, \end{equation} or in flat indices after using \eqref{eqn:GammafromOmega} \begin{equation} \begin{aligned} \mathcal{R}_{ABCD} &= 2 E_{[A}{}^I \partial_I \Omega_{B]CD} + 2 \Omega_{[A|C}{}^E \Omega_{B]D}{}_E + \frac12 \Omega_{EAB} \Omega^E{}_{CD} \\ & - F_{AB}{}^E \Omega_{ECD} + (AB) \leftrightarrow (CD)\,, \end{aligned} \end{equation} has to vanish for trivial $F$. Taking into account the vanishing spin connection, we find the generalised affine connection \begin{equation} \Gamma_{IJK} = \partial_I E^A{}_J E_{AK} \end{equation} by using \eqref{eqn:GammafromOmega}. From it, we next compute the generalised torsions from \eqref{eqn:gentorsionTIJK}, \begin{equation} T_{ABC} = - F_{ABC}\,, \end{equation} and from \eqref{eqn:gentorsionTI}, \begin{equation} T_A = F_A\,. \end{equation} Hence, the conditions for consistent truncations become \begin{equation}\label{eqn:Fconst} F_{ABC} = \text{const.} \qquad \text{and} \qquad F_A = \text{const.}\,, \end{equation} because the covariant derivative $\nabla_A$ acting on quantities with just flat indices reduces to $D_A = E_A{}^I \partial_I$. At this point, it is instructive to look at the Bianchi identities for $\nabla_A$. They reduce to \cite{Geissbuhler:2013uka} \begin{align}\label{eqn:bianchiFABC} D_{[A} F_{BCD]} - \frac34 F_{[AB}{}^E F_{CD]E} &= 0 \,, \quad \text{and} \\\label{eqn:bianchiFA} D_{[A} F_{B]} + \frac12 D^C F_{CAB} - \frac12 F^C F_{CAB} &= 0 \,. \end{align} Because of \eqref{eqn:Fconst}, all terms with derivatives $D_A$ drop out. Consequentially, \eqref{eqn:bianchiFABC} becomes the Jacobi identity of a Lie algebra, Lie($\mathds{D}$). Assume that this Lie algebra has the generators $t_A$, satisfying \begin{equation} [t_A, t_B] = F_{AB}{}^C t_C \,. \end{equation} Furthermore, \eqref{eqn:bianchiFA} states that $t_{\mathrm{F}} = F^A t_A$ has to be in the center of Lie($\mathds{D}$), saying that the generator $t_{\mathrm{F}}$ commutes with all other elements of the Lie algebra. Another effect of a trivial generalised structure group is that we can identify the index $\dot\alpha$, enumerating invariant constant vectors, with the O(D,D) index $A$, resulting in $K_A{}^B = \delta_A^B$. Hence, the Lie group $\mathds{D}$ has a natural interpretation as the gauge group of the truncated gauged (super)gravity. In the same vein, we identify the scalar manifold as $O(n,n)/H_n$. Finally, one can directly read off the scalar potential\footnote{We drop the contribution \begin{equation} F_A F^A - \frac16 F_{ABC} F^{ABC} \end{equation} because it vanishes under the section condition, we always impose in this paper.} \begin{equation} V = F_{ACE} F_{BDF} \left( \frac1{12} h^{AB} h^{CD} h^{EF} - \frac14 h^{AB} \eta^{CD} \eta^{EF} \right) + F_A F_B h^{AB} \end{equation} from \eqref{eqn:genRicciScalar} by dropping all terms with a derivative and expanding the projectors $P^{AB} = \frac12 ( \eta^{AB} + h^{AB} )$ and $\overline{P}{}^{AB} = \frac12 ( \eta^{AB} - h^{AB} )$. Summarising, the truncation ansatz for a consistent truncation with a trivial structure group is built from following data: \begin{enumerate} \item A doubled Lie group $\mathds{D}$ (=gauge group), generated by $t_A$ \begin{enumerate} \item with the structure coefficients $F_{AB}{}^C$ and \item\label{item:FA} and an elements in the center $t_{\mathrm{F}} = F^A t_A$ ($t_{\mathrm{F}} = 0$ always works). \end{enumerate} \item $\mathds{D}$ has to be a subgroup of O($n$,$n$). Otherwise, its adjoint action would not leave $\eta_{AB}$ invariant and consequently, $F_{ABC} = F_{AB}{}^D \eta_{DC}$ would only be antisymmetric with respect to the first two indices $A$ and $B$, but it has to be totally antisymmetric. Therefore, $F_{AB}{}^C$ actually describes how $\mathds{D}$ is embedded into O($n$,$n$), and is called the embedding tensor. \item A constant generalised metric $h_{AB}$ on the internal manifold, to construct the projectors $P^{AB}$ and $\overline{P}{}^{AB}$. \end{enumerate} \subsubsection*{Generalised T-duality perspective} Intriguingly, exactly the same data are needed to describe a Poisson-Lie symmetric target space in the $\mathcal{E}$-model formalism \cite{Klimcik:1995dy,Klimcik:2015gba}. More precisely, one needs the following ingredients to construct an $\mathcal{E}$-model: \begin{enumerate} \item A doubled Lie group $\mathds{D}$, generated by $t_A$ \begin{enumerate} \item with the structure coefficients $F_{AB}{}^C$. \item there is no item~(b). \end{enumerate} \item A non-degenerate pairing $\langle t_A, t_B \rangle = \eta_{AB}$, that is invariant under the adjoint action of $\mathds{D}$. \item An $\mathcal{E}$-operator $\mathcal{E}: \mathrm{Lie}(\mathds{D}) \rightarrow \mathrm{Lie}(\mathds{D})$, which squares to the identity. In the language we use here, this is just the generalised metric $\mathcal{E}_A{}^B = \mathcal{H}_A{}^B$. \end{enumerate} The underlying classical $\sigma$-model \begin{equation}\label{eqn:sigmamodel} S_\Sigma = \frac{1}{4\pi\alpha'} \int_\Sigma \left(g_{ij} \mathrm{d} x^i \wedge \star\mathrm{d} x^j + B_{ij} \mathrm{d} x^i \wedge \mathrm{d} x^j \right) \end{equation} does not incorporate the dilaton, therefore item \ref{item:FA} is not contained in this list. To define the $\mathcal{E}$-model, one first transitions to the Hamiltonian formalism with the Hamiltonian\cite{Tseytlin:1990nb} \begin{equation} H = \frac{1}{4\pi\alpha'} \int \mathrm{d} \sigma \mathcal{J}^M \mathcal{H}_{MN} \mathcal{J}^N \,. \end{equation} In addition to the generalised metric in \eqref{eqn:genmetric}, we find the generalised currents \begin{equation} \mathcal{J}_M = \begin{pmatrix} p_m \quad& \partial_\sigma x^m \end{pmatrix} \end{equation} defined by using the embedding coordinates $x^m$ and their canonical momenta \begin{equation} p_m = g_{mn} \partial_\tau x^n + B_{mn} \partial_\sigma x^n \,. \end{equation} Taking into account canonical, equal-time Poisson brackets $\{x_m(\sigma), p^n(\sigma')\}=\delta_m^n \delta(\sigma-\sigma')$, one obtains \begin{equation} \{ \mathcal{J}^M(\sigma), \mathcal{J}^N(\sigma') \} = 2\pi\alpha' \delta'(\sigma-\sigma') \eta^{MN} \end{equation} which introduces $\eta^{MN}$ from the worldsheet perspective. The $\mathcal{E}$-models arises after dressing these currents with the generalised frame $E^A{}_I$ to obtain the Kac-Moody algebra \begin{equation} \{ \mathcal{J}^A(\sigma), \mathcal{J}^B(\sigma') \} = \delta(\sigma-\sigma') F^{AB}{}_C \mathcal{J}^C(\sigma) + \delta'(\sigma-\sigma') \eta^{AB} \end{equation} with \begin{equation} \mathcal{J}^A = \frac{1}{\sqrt{2\pi\alpha'}} E^A{}_M \mathcal{J}^M\,. \end{equation} As in our previous discussion, it is crucial here that $F_{ABC}$, $\eta_{AB}$ and $\mathcal{H}_{AB}$ are all constant, resulting in the $\mathcal{E}$-model Hamiltonian \begin{equation} H = \frac12 \int \mathrm{d} \sigma \mathcal{J}^A \mathcal{H}_{AB} \mathcal{J}^B \end{equation} that is quadratic in the generalised currents and therefore results in the simple equations of motion \begin{equation} \mathrm{d} \mathcal{J} + \frac12 [ \mathcal{J}, \mathcal{J} ] = 0 \end{equation} with the Lie($\mathds{D}$)-valued, worldsheet one-forms \begin{equation} \mathcal{J} = t_A \left( \mathcal{E}^A{}_B \mathcal{J}^B \mathrm{d}\tau + \mathcal{J}^A \mathrm{d}\sigma \right)\,. \end{equation} Poisson-Lie T-duality relates different choices for the metric $g_{ij}$ and the $B$-field $B_{ij}$ in \eqref{eqn:sigmamodel} by a canonical transformation. How this exactly works is best seen in the Hamiltonian formalism where the canonical transformation leaves the Poisson brackets and the Hamiltonian invariant, but changes the composition of the currents $\mathcal{J}^M$ in terms of the fundamental field $x^n$. Consequently, there is in general not just one choice of the generalised frame $E^A{}_I$ that results in some fixed structure constants $F_{AB}{}^C$, but multiple ones. There is an important lesson to be learned from this new perspective: The truncation ansatz, which is fixed by the same generalised frame $E_A{}^I$ as the currents $\mathcal{J}^A$ in the $\mathcal{E}$-model, is in general not unique. Instead one can find for every maximally isotropic subgroup $H_i$ of $\mathds{D}$ a generalised frame on the coset $M^{(i)}=H_i \backslash \mathds{D}$ that results in the same generalised fluxes $F_{ABC}$ \cite{Hassler:2017yza,Demulder:2018lmj,Borsato:2021vfy}. All of them are connected by Poisson-Lie T-duality. We will discuss the details of the construction of generalised frames in section~\ref{sec:vielbeins}. It is known that gauged $\mathcal{E}$-models give rise to an even broader notion of T-duality \cite{Klimcik:2019kkf}, called dressing cosets \cite{Klimcik:1996np}. There are hints that they are also closely related to consistent truncations \cite{Demulder:2019vvh}. In the rest of this paper, we follow these hints and eventually show that generalised cosets provide a very large class of new consistent truncations for which the scalar potential can be computed. \section{The Pol\'a\v{c}ek-Siegel{} construction} In the last section, we have identified the generalised structure group $F$ and the singlet, intrinsic torsion as the fundamental building blocks of consistent truncations. We will now present a construction of the associated generalised frames $E_A{}^I$ and spin connection $\Omega_{AB}{}^C$, which treats them as first class citizens. The basic idea for our approach first came up in the paper \cite{Polacek:2013nla}, and we therefore refer to it as the Pol\'a\v{c}ek-Siegel{} construction. In its original form, it was restricted to the case $F=H_n$, which is not of much use for the application to consistent truncations. Fortunately, one of the authors extended the discussion to general $F$'s in ref.~\cite{Butter:2022gbc}. We shall review the construction in the following, and adapt it to our conventions before applying it to truncations. \subsection{Generalised frame on the mega-space}\label{sec:mega-vielbein} First, we define generators $t_\alpha\in$ Lie($F$) that generate the generalised structure group and that are governed by the commutators \begin{equation}\label{eqn:deffalphabetagamma} [t_\alpha, t_\beta] = f_{\alpha\beta}{}^\gamma t_\gamma\,. \end{equation} Next, we introduce the auxiliary coordinates $z^\mu$ to parameterise group elements $f(z^\mu)\in F$. In combination with the coordinates $y^i$ on the internal manifold $M_n$, they describe what we call mega-space. It is important to keep in mind that the mega-space is not physical. It is, rather, a useful book-keeping device, as will become clear by the end of this section. In order to make contact with the discussion in the last section, we have to fix at least two quantities on the mega-space, namely the generalised frame and the $\eta$-metric. For the former, we will use the parameterisation \begin{equation}\label{eqn:mega-vielbein} \widehat{E}_{\widehat{A}}{}^{\widehat{I}} = \widetilde{M}_{\widehat{A}}{}^{\widehat{B}} \begin{pmatrix} \delta_\beta{}^\gamma & 0 & 0 \\ -\Omega^\gamma{}_B & E_B{}^J & 0 \\ \rho^{\beta\gamma} - \frac12 \Omega^\beta{}_K \Omega^{\gamma K} & \Omega^{\beta J} & \delta^\beta{}_\gamma \end{pmatrix} \begin{pmatrix} \widehat{\vt}_\gamma{}^\mu & 0 & 0 \\ 0 & \delta_J^I & 0 \\ 0 & 0 & \widetilde{v}^\gamma{}_\mu \end{pmatrix}\,. \end{equation} Before motivating it, let us take a moment to explain the new conventions we encounter here: Indices come in two kinds: flat $\widehat{A}$, $\widehat{B}$, \dots and curved $\widehat{I}$, $\widehat{J}$, \dots. The latter split as $x^{\widehat{I}} = \begin{pmatrix} z^\mu & y^i & \widetilde{y}_i & \widetilde{z}_\mu \end{pmatrix}$, $x_{\widehat{I}} = \begin{pmatrix} \widetilde{z}_\mu & \widetilde{y}_i & y^i & z^\mu \end{pmatrix}$. Note that the section condition is still trivially solved, because nothing depends on the coordinates $\widetilde{y}_i$ and $\widetilde{z}_\mu$ and we use the partial derivative $\partial_{\widehat{I}} = \begin{pmatrix} \partial_\mu & \partial_i & 0 & 0 \end{pmatrix}$. Moreover, the O($n+\dim F$,$n+\dim F$) invariant metric $\eta_{\widehat{I}\widehat{J}}$ is given by \begin{equation} \eta_{\widehat{I}\widehat{J}} = \begin{pmatrix} 0 & 0 & \delta_\mu^\nu \\ 0 & \eta_{IJ} & 0 \\ \delta_\nu^\mu & 0 & 0 \end{pmatrix} \end{equation} where $\eta_{IJ}$ is already known from \eqref{eqn:etametric}. Flat indices behave in the same way. In particular, we encounter the flat $\eta$-metric \begin{equation} \eta_{\widehat{A}\widehat{B}} = \begin{pmatrix} 0 & 0 & \delta_\alpha^\beta \\ 0 & \eta_{AB} & 0 \\ \delta_\beta^\alpha & 0 & 0 \end{pmatrix}\,. \end{equation} With the index conventions established, we can say more about the form of \eqref{eqn:mega-vielbein}. At first glance, it has three distinguishing features: \begin{enumerate} \item All physically relevant quantities are contained in the middle matrix. \item This matrix is of lower triangular form. \item It is dressed from left and right by two matrices which only depend on the auxiliary coordinates $z^\mu$. \end{enumerate} Note that we use here tilde quantities which depend on the auxiliary coordinates $z^\mu$. $\widetilde{M}_{\widehat{A}}{}^{\widehat{B}}$ does not appear explicitly in either \cite{Polacek:2013nla} or \cite{Butter:2022gbc}, but later it will be very helpful for understanding how the mega-generalised frame field relates to the covariant derivative $\nabla_A$. This matrix mediates the adjoint action of $F$ on the mega-space. Therefore, it has two defining properties: \begin{equation}\label{eqn:MtODD} \widetilde{M}_{\widehat{A}}{}^{\widehat{C}} \widetilde{M}_{\widehat{B}}{}^{\widehat{D}} \eta_{\widehat{C}\widehat{D}} = \eta_{\widehat{A}\widehat{B}} \end{equation} and \begin{equation}\label{eqn:Mtaction} \partial_\mu \widetilde{M}_{\widehat{A}}{}^{\widehat{B}} = \widetilde{v}^\alpha{}_\mu \widetilde{M}_{\widehat{A}}{}^{\widehat{C}} f_{\alpha \widehat{C}}{}^{\widehat{B}}\,. \end{equation} Here $\widetilde{v}^\alpha{}_\mu$ denotes the components of the right-invariant one-forms $\mathrm{d} f f^{-1} = t_\alpha \widetilde{v}^\alpha{}_\mu \mathrm{d} z^\mu$ and $\widehat{\vt}_\alpha{}^\mu$ are the dual vector fields, defined by $\widehat{\vt}_\alpha{}^\mu \widehat{v}^\beta{}_\mu = \delta_\alpha^\beta$. The infinitesimal action of $F$ is specified by the constants $f_{\alpha\widehat{B}\widehat{C}}$. Due to \eqref{eqn:MtODD}, they have to satisfy $f_{\alpha\widehat{B}\widehat{C}} = - f_{\alpha\widehat{C}\widehat{B}}$. Furthermore, \eqref{eqn:Mtaction} should not spoil the lower triangular form of the middle matrix in \eqref{eqn:mega-vielbein} and therefore we find \begin{equation} f_{\alpha\beta}{}^C = 0 \,, \qquad \text{and} \qquad f_{\alpha\beta\gamma} = 0 \,. \end{equation} Owing to these constraints, the full mega-generalised frame field is lower triangular. This observation motivates the parameterisation of the two right-most matrices in \eqref{eqn:mega-vielbein}. Together, they implement the most general lower triangular matrix that leaves the $\eta$-metric invariant. Thereby, we encounter three fields on the physical space $M_n$: \begin{itemize} \item The generalised frame $E_A{}^I$ that we already discussed in the last section. \item An unconstrained field $\Omega^\alpha{}_B$ which, as we will see later, is directly related to the spin connection $\Omega_{ABC}$ of $\nabla_A$. \item An antisymmetric tensor $\rho^{\alpha\beta} = - \rho^{\beta\alpha}$. It is the most interesting result of the Pol\'a\v{c}ek-Siegel{} construction because it has no analog in standard geometry. Therefore, in honor of these two gentlemen, we call it the Pol\'a\v{c}ek-Siegel{} field. \end{itemize} Finally, we come to the question: How is this setup related to the covariant derivative $\nabla_A$? The relation becomes manifest when we split all tensors into a physical and an auxiliary part as, for example, in \begin{equation}\label{eqn:Mtsplitting} \widehat{V}_{\widehat{A}} = \widetilde{M}_{\widehat{A}}{}^{\widehat{B}} V_{\widehat{B}} \,. \end{equation} To indicate that $\widehat{V}_{\widehat{A}}$ depends on $z^\mu$ and $y^i$ while $V_{\widehat{A}}$ depends only on $y^i$, we decorate the former with a hat. This is also the reason that the generalised frame field on the mega-space is called $\widehat{E}_{\widehat{A}}{}^{\widehat{I}}$ instead of just $E_{\widehat{A}}{}^{\widehat{I}}$. In the same vein, we introduce the flat derivative \begin{equation} \widehat{D}_{\widehat{A}} = \widehat{E}_{\widehat{A}}{}^{\widehat{I}} \partial_{\widehat{I}} = \widetilde{M}_{\widehat{A}}{}^{\widehat{B}} D_{\widehat{A}}\,, \qquad \text{with} \qquad D_{\widehat{A}} = E_{\widehat{A}}{}^{\widehat{I}} \partial_{\widehat{I}}\,. \end{equation} A natural question at this point is: Can get rid of the auxiliary coordinates $z^\mu$ completely? This would be the case if these coordinates always appeared only in the form of \eqref{eqn:Mtsplitting}. There is only one place where this could go wrong, namely for flat derivatives. But fortunately, for them we find \begin{equation}\label{eqn:Dhtonabla'1} \widehat{D}_{\widehat{A}} \widehat{V}_{\widehat{B}} = \widetilde{M}_{\widehat{A}}{}^{\widehat{C}} \widetilde{M}_{\widehat{B}}{}^{\widehat{D}} \left( D_{\widehat{C}} V_{\widehat{D}} + E_{\widehat{C}}{}^\alpha f_{\alpha \widehat{D}}{}^{\widehat{E}} V_{\widehat{E}} \right) \quad \text{with} \quad E_{\widehat{A}}{}^{\beta} = \begin{pmatrix} \delta_\alpha^\beta \\ - \Omega^\beta{}_A \\ \rho^{\alpha\beta} - \frac12 \Omega^{\alpha}{}_I \Omega^{\beta I} \end{pmatrix}\,. \end{equation} Hence, we conclude that the $z^\mu$-dependence of any quantity with flat indices arises only from the twist of each index with $\widetilde{M}_{\widehat{A}}{}^{\widehat{B}}$. Looking more closely at \eqref{eqn:Dhtonabla'1}, one can interpret the two terms in the brackets on the right hand side of the first equation as a covariant derivative. Thus, we write \begin{equation}\label{eqn:Dhtonabla'2} \widehat{D}_{\widehat{A}} \widehat{V}_{\widehat{B}} = \widetilde{M}_{\widehat{A}}{}^{\widehat{C}} \widetilde{M}_{\widehat{B}}{}^{\widehat{D}} \nabla'_{\widehat{C}} V_{\widehat{D}} \end{equation} for flat derivatives on the mega-space, and we see that it can alternatively be interpreted as a covariant derivative on $M_n$. Comparing \eqref{eqn:Dhtonabla'1} and \eqref{eqn:Dhtonabla'2}, we find \begin{align}\label{eqn:nabla'A} \nabla'_A &= E_A{}^I \partial_I - \Omega^\beta{}_A \nabla'_\beta \,, \\ \nabla'^\alpha &= \Omega^{\alpha B} D_B + \left( \rho^{\alpha\beta} - \frac12 \Omega^{\alpha}{}_I \Omega^{\beta I} \right) \nabla'_\beta\,. \end{align} and furthermore \begin{align} \nabla'_\alpha V_\beta &= f_{\alpha\beta}{}^\gamma V_\gamma \,, \\ \nabla'_\alpha V_B &= f_{\alpha B}{}^C V_C + f_{\alpha B}{}^\gamma V_\gamma \,,\\ \nabla'_\alpha V^\beta &= - f_{\alpha\gamma}{}^\beta V^\gamma - f_{\alpha C}{}^\beta V^C + f_\alpha{}^{\beta\gamma} V_\gamma\,. \end{align} Most important for our purpose is to relate $\nabla'_{\widehat{A}}$ to the covariant derivative $\nabla_A$ and $\nabla_\alpha$ that is required for constructing consistent truncations. More precisely, we want to identify \begin{equation}\label{eqn:nablaAtonabla'A} \nabla_A V_B = \nabla'_A V_B\,, \end{equation} or equally \begin{equation} \Omega_{IBC} = \Omega^\delta{}_I f_{\delta BC}\,. \end{equation} Hence, we impose \begin{equation} f_{\alpha B}{}^\gamma = 0\,. \end{equation} This condition can be relaxed in the context of generalised T-dualities\footnote{We thank Yuho Sakatani for pointing out this possibility to us.}. However, for the consistent truncations we study here, it arises naturally. Thus, we shall keep the $\cancel{f_{\alpha B}{}^\gamma}$-terms in intermediate results and only remove them in the final expression on $M_n$. Finally, we also need to know how the generalised structure group $F$ acts on the physical generalised tangent space $T M_n \oplus T^* M_n$. Remember, the corresponding infinitesimal action is mediated by $\nabla_\alpha$, introduced in \eqref{eqn:definvartensors}. Thus, it is natural to relate \begin{equation}\label{eqn:nablaalphatonabla'alpha} \nabla_\alpha V_B := f_{\alpha B}{}^C V_C = \nabla'_\alpha V_B \,, \end{equation} too. This also explains the initially arbitrary looking notation for this operation. On the mega-space it has exactly the same origin as the covariant derivative $\nabla_A$. \subsection{Torsion and curvature}\label{sec:torsionandcurvature} We have seen that the Pol\'a\v{c}ek-Siegel{} construction transforms flat derivatives $\widehat{D}_{\widehat{A}}$ on the mega-space into covariant derivatives on the physical space $M_n$. From a conceptual point of view, one might say that it geometrises a generalised connection by adding auxiliary coordinates $z^\mu$. Therefore, analysing the properties of the derivatives $\widehat{D}_{\widehat{A}}$ becomes the main objective for this subsection. Fortunately, we already learned in section \eqref{sec:gSSandPLTD} that they are exclusively controlled by the torsions (see \eqref{eqn:FABC} and \eqref{eqn:FA}) \begin{align}\label{eqn:fhABC} \widehat{f}_{\widehat{A}\widehat{B}\widehat{C}} &= \widehat{D}_{[\widehat{A}} \widehat{E}_{\widehat{B}}{}^{\widehat{I}} \widehat{E}_{\widehat{C}]\widehat{I}} \,, \quad \text{and} \\ \widehat{f}_{\widehat{A}} &= 2 \widehat{D}_{\widehat{A}} \dh - \partial_{\widehat{I}} \widehat{E}_{\widehat{A}}{}^{\widehat{I}} \,. \end{align} Here, we switched from capital $F$'s to $f$'s, because we want to reserve the $F$ for the physical space $M_n$. At the moment this might seem arbitrary, but it will become obvious shortly. Next, we will evaluate \eqref{eqn:fhABC}. As already discussed, it is convenient to strip off $\widetilde{M}_{\widehat{A}}{}^{\widehat{B}}$ from generalised tensors, as we did in \eqref{eqn:Mtsplitting} while going from $\widehat{V}_{\widehat{A}}$ to $V_{\widehat{A}}$. The generalised frame field in \eqref{eqn:mega-vielbein} is no exception. Therefore, we split it according to \begin{equation} \widehat{E}_{\widehat{A}}{}^{\widehat{I}} = \widetilde{M}_{\widehat{A}}{}^{\widehat{B}} \overline{E}_{\widehat{B}}{}^{\widehat{C}} \mathcal{V}_{\widehat{C}}{}^{\widehat{I}} \qquad \text{with} \qquad \mathcal{V}_{\widehat{A}}{}^{\widehat{I}} = \begin{pmatrix} \widehat{\vt}_\alpha{}^\mu & 0 & 0 \\ 0 & E_A{}^I & 0 \\ 0 & 0 & \widetilde{v}^\alpha{}_\mu \end{pmatrix}\,, \end{equation} and first compute \begin{equation} 3 \mathcal{V}_{[\widehat{A}}{}^{\widehat{I}} \partial_{\widehat{I}} \mathcal{V}_{\widehat{B}}{}^{\widehat{J}} \mathcal{V}_{\widehat{C}]\widehat{J}} = \begin{cases} f_{\alpha\beta}{}^{\gamma} \text{ and cyclic} \\ F_{ABC}\,. \end{cases} \end{equation} The latter is then used to obtain \begin{equation} f_{\widehat{A}\widehat{B}\widehat{C}} = 3 \overline{E}_{\widehat{A}}{}^{\widehat{D}} \overline{E}_{\widehat{B}}{}^{\widehat{E}} \overline{E}_{\widehat{C}}{}^{\widehat{F}} \mathcal{V}_{[\widehat{D}}{}^{\widehat{I}} \partial_{\widehat{I}} \mathcal{V}_{\widehat{E}}{}^{\widehat{J}} \mathcal{V}_{\widehat{F}]\widehat{J}} + 3 \nabla'_{[\widehat{A}} \overline{E}_{\widehat{B}}{}^{\widehat{I}} \overline{E}_{\widehat{C}]\widehat{I}}\,. \end{equation} Note that $\nabla'_A$ here just acts on the flat index $\widehat{B}$ of the generalised frame field $\overline{E}_{\widehat{B}}{}^{\widehat{I}}$. There is no affine connection fixed by a vielbein postulate. Before we can turn to $\widehat{f}_{\widehat{A}}$, we have to specify how the generalised dilaton on the mega-space depends on the auxiliary coordinates $z^\mu$. It turns out that the right choice is \begin{equation} \dh(y, z) = d(y) - \frac12 \log \det \widetilde{e}(z) \end{equation} with \begin{equation} t_\alpha \widetilde{e}^\alpha{}_\mu(z) \mathrm{d} z^\mu = f^{-1} \mathrm{d} f\,. \end{equation} Only for this choice does the $z^\mu$-dependence of $\widehat{f}_{\widehat{A}}$ completely factor into a $\widetilde{M}_{\widehat{A}}{}^{\widehat{B}}$ twist, as we required in \eqref{eqn:Mtsplitting}. After removing this twist, we find \begin{equation}\label{eqn:fhA} f_{\widehat{A}} = 2 \overline{E}_{\widehat{A}}{}^B D_B d - \partial_I E_{\widehat{A}}{}^I - f_{\alpha\widehat{A}}{}^{\widehat{B}} \overline{E}_{\widehat{B}}{}^\alpha\,. \end{equation} It is very instructive to compute the individual components of both $\widehat{f}_{\widehat{A}\widehat{B}\widehat{C}}$ and $\widehat{f}_{\widehat{A}}$. But before doing so, we need a way to keep track of all contributions. To this end, we introduce the $\epsilon$-dimension, which is defined in the following way: Assume we scale the generators of the generalised structure group $F$ according to $t_\alpha \rightarrow \epsilon^{-1} \, t_\alpha$. In this case, the structure coefficients $f_{\alpha\beta}{}^\gamma$ introduced in \eqref{eqn:deffalphabetagamma} scale as $f_{\alpha\beta}{}^\gamma \rightarrow \epsilon^{-1} f_{\alpha\beta}{}^\gamma$. To find out how other tensors scale, assign $-1$ to each lowered Greek index, $+1$ to each raised Greek index and $0$ to each Latin index. Summing over all indices of the tensor then gives its $\epsilon$-dimension. The motivation for this particular scaling comes from an alternative approach in the literature to the construction of dressing cosets \cite{Sfetsos:1999zm,Sakatani:2021skx}. It considers an $\mathcal{E}$-model on the mega-space where the generalised structure group $F$ describes a global symmetry. After scaling the generators $t_\alpha$ as shown above, and sending $\epsilon$ to $0$, the $\mathcal{E}$-model degenerates and the global symmetry becomes a gauge symmetry. This limit is subtle, but several examples suggest that the relevant quantities on the dressing coset are those which are invariant under the scaling or, equally, have vanishing $\epsilon$-dimension. To find all independent components of $\widehat{f}_{\widehat{A}\widehat{B}\widehat{C}}$, recall that it is by construction totally antisymmetric. For each of its components, we can therefore order the indices by their $\epsilon$-dimension (of course still keeping track of the sign). The results for the ten independent classes of components are then given by \begin{equation}\label{eqn:decompfhABC} \begin{tabular}{r|ll|l} $\epsilon$-dim. & & & \\ $-3$ & $f_{\alpha\beta\gamma} = 0$ & & \\ $-2$ & $f_{\alpha\beta C} = 0$ & & \\ $-1$ & $f_{\alpha\beta}{}^\gamma$ & $f_{\alpha AB}$ & \\ $0$ & $f_{\alpha B}{}^\gamma = 0$ & & $f_{ABC}$ \\ $+1$ & $f_{\alpha}{}^{\beta\gamma}$ & & $f_{AB}{}^\gamma$ \\ $+2$ & & & $f_A{}^{\beta\gamma}$ \\ $+3$ & & & $f^{\alpha\beta\gamma}$\,. \end{tabular} \end{equation} All the components in the first column are just the parts of $f_{\alpha \widehat{B}}{}^{\widehat{C}}$ that describe the $F$-action on the mega-space (see \eqref{eqn:Mtaction}). They are constant, and constrained by the Jacobi identity \begin{equation}\label{eqn:closureFaction} 2 f_{[\alpha|\widehat{C}}{}^{\widehat{E}} f_{|\beta]\widehat{E}}{}^{\widehat{D}} = - f_{\alpha\beta}{}^\gamma f_{\gamma\widehat{C}}{}^{\widehat{D}}\,, \end{equation} which arises from $\mathrm{d}^2 M_{\widehat{A}}{}^{\widehat{B}}=0$ and \eqref{eqn:Mtaction}, because infinitesimal $F$-transformations have to close into a group. Non-constant contributions only arise from the components in the second column. In the following, we compute them one by one. First, we look at \begin{equation} f_{ABC} = F_{ABC} - 3 \Omega^\delta{}_{[A} f_{\delta BC]} = F_{ABC} - 3 \Omega_{[ABC]} \,. \end{equation} Comparing this result with \eqref{eqn:GammafromOmega} and \eqref{eqn:gentorsionTIJK}, we find the remarkable identification \begin{equation} f_{ABC} = - T_{ABC}\,. \end{equation} On the other hand, \eqref{eqn:defsinglettorsion} requires that $T_{BCD}$ is annihilated by $\nabla_A$ and $\nabla_\alpha$ for theorem~\ref{th:CT} to apply. This condition can now be written as \begin{align}\label{eqn:nablaAfBCD} \nabla_A f_{BCD} = \nabla'_A f_{BCD} &= 0 \quad \text{and} \\ \nabla_\alpha f_{BCD} = \nabla'_{\alpha} f_{BCD} &=0 \end{align} by using \eqref{eqn:nablaAtonabla'A} and \eqref{eqn:nablaalphatonabla'alpha}. Due to \eqref{eqn:nabla'A}, these two equations automatically imply \begin{equation}\label{eqn:fABCconst} f_{ABC} = \text{const.} \end{equation} for all consistent truncations resulting from theorem~\ref{th:CT}. Next in line is \begin{equation}\label{eqn:fABgamma} f_{AB}{}^\gamma = - 2 D_{[A} \Omega^\gamma{}_{B]} + f_{\alpha\beta}{}^\gamma \Omega^\alpha{}_A \Omega^\beta{}_B - \frac12 f_{\alpha A B} \Omega^\alpha{}_C \Omega^{\gamma C} + \cancel{2 f_{\alpha[A}{}^\gamma \Omega^\alpha{}_{B]}} - f_{\alpha AB}\rho^{\alpha\gamma} + F_{AB}{}^C \Omega^\gamma{}_C \,. \end{equation} To find an interpretation for this quantity, we first get rid of the Greek index $\gamma$ by contracting with $f_{\gamma CD}$, resulting in the new quantity \begin{equation}\label{eqn:deffABCD} f_{ABCD} := f_{AB}{}^\gamma f_{\gamma CD}\,. \end{equation} In the same vein, we introduce \begin{equation} r_{ABCD} := \rho^{\alpha\beta} f_{\alpha AB} f_{\beta CD}\,, \end{equation} which is actually the original form of the Pol\'a\v{c}ek-Siegel-field introduced in \cite{ }. With these two new quantities, \eqref{eqn:fABgamma} can be rewritten exclusively in terms of Latin indices as \begin{equation} f_{ABCD} = - 2 D_{[A} \Omega_{B]CD} - 2 \Omega_{[A|C}{}^E \Omega_{B]DE} - \frac12 \Omega_{EAB} \Omega^E{}_{CD} + F_{AB}{}^E \Omega_{ECD} - r_{ABCD}\,. \end{equation} From \eqref{eqn:deffABCD}, it follows that $f_{ABCD}$ is antisymmetric with respect to its first two and last two indices. Thus, it can be decomposed into the three contributions \begin{equation}\label{eqn:decompfABCD} f_{ABCD} = \ydiagram{1,1} \otimes \ydiagram{1,1} = \ydiagram{1,1,1,1} \oplus \ydiagram{2,2} \oplus \ydiagram{2,1,1} \,. \end{equation} The first two diagrams on the right hand side contain the parts of $f_{ABCD}$ that are symmetric under the pairwise exchange of $AB$ and $CD$. For them, $r_{ABCD}$ drops out and we find \begin{equation}\label{eqn:ftoRR} f_{ABCD} + f_{CDAB} = - \mathcal{R}_{ABCD}\,. \end{equation} At this point, the power of the Pol\'a\v{c}ek-Siegel{} construction becomes fully apparent: We recover the generalised curvature on the physical space $M_n$ purely from a torsion component on the mega-space! Additionally, the Bianchi identity components \begin{equation}\label{eqn:BIfhAhBhCh} \widehat{D}_{[A} \widehat{f}_{BCD]} - \frac34 \widehat{f}_{[AB}{}^{\widehat{E}} \widehat{f}_{CD]\widehat{E}} = 0 \end{equation} on the mega-space imply the algebraic Bianchi identity of $\mathcal{R}_{ABCD}$, namely \begin{equation}\label{eqn:algBIgenRiem} \mathcal{R}_{[ABCD]} = f_{[AB}{}^E f_{CD]E} - \frac43 \nabla_{[A} f_{BCD]}\,. \end{equation} Note that $\nabla_A f_{BCD} = 0$ for consistent truncations, rendering the identity completely algebraic. Finally, there is the hook in \eqref{eqn:decompfABCD}, which is antisymmetric under pairwise exchange of $AB$ and $CD$ and contains the Pol\'a\v{c}ek-Siegel-field. As we will show in the next subsection, it will not contribute to the two-derivative action or its equations of motion. The same holds for the remaining components $f_A{}^{\beta\gamma}$ and $f^{\alpha\beta\gamma}$. Hence, we postpone deriving detailed expressions for them until section~\ref{sec:higherderiv}. We still have to deal with the one-index torsion \eqref{eqn:fhA}. In analogy with \eqref{eqn:decompfhABC}, we decompose it into \begin{equation}\label{eqn:complistfAh} \begin{tabular}{r|l|l} $\epsilon$-dim. & & \\ $-1$ & $f_\alpha = f_{\alpha\beta}{}^\beta$ & \\ $0$ & & $f_A$ \\ $1$ & & $f^\alpha$\,. \end{tabular} \end{equation} Again, we need to compute the two contributions, $f_A$ and $f^\alpha$, in the right-hand column. For the first, we obtain \begin{equation} f_A = F_A - \Omega^B{}_{BA} - \cancel{f_{\beta A}{}^\beta} \end{equation} by expanding \eqref{eqn:fhA}. Similar to $f_{ABC}$, we recover the one-index torsion for the last section and identify \begin{equation} T_A = f_A \end{equation} Therefore, \begin{equation} \nabla_A T_B = \nabla'_A f_B = 0 \qquad \text{and} \qquad \nabla_\alpha T_B = \nabla'_\alpha f_B = 0 \end{equation} hold according to \eqref{eqn:defsinglettorsion} for all consistent truncations that are governed by theorem~\ref{th:CT}. Just as in \eqref{eqn:fABCconst}, we therefore deduce \begin{equation}\label{eqn:fAconst} f_A = \text{const.} \end{equation} Finally, there is \begin{equation} f^\alpha = - D_B \Omega^\alpha{}_B - \cancel{\Omega^\beta{}_C f_\beta{}^{C \alpha}} - \rho^{\beta\gamma} f_{\beta\gamma}{}^\alpha + \Omega^{\alpha B} F_B + f_\beta{}^{\beta\alpha}\,, \end{equation} which we rewrite as \begin{align} f_{AB} :&= \left( f^\alpha - f_\gamma{}^{\gamma\alpha} \right) f_{\alpha AB} \\ &=- D^C \Omega_{CAB} + F^C \Omega_{CAB} + 2 r^C{}_{ABC} \,. \end{align} However, it is not a new independent quantity. Instead it can be reconstructed from already encountered $f$'s by using the Bianchi identity \begin{equation} \widehat{D}_{[A} \widehat{f}_{B]} + \frac12 \widehat{D}^{\widehat{C}} \widehat{f}_{\widehat{C} AB} - \frac12 \widehat{f}^{\widehat{C}} \widehat{f}_{\widehat{C} AB} = 0 \end{equation} on the mega-space, resulting in \begin{equation}\label{eqn:fABfromrest} f_{AB} = 2 \nabla_{[A} f_{B]} + \nabla^C f_{CAB} - f^C f_{CAB} - 2 f^C{}_{[AB]C} - f_\alpha f^{\alpha}{}_{AB} \,. \end{equation} \subsection{Truncation of the generalised Ricci scalar/tensor and dualities}\label{sec:truncgenRicci} We now come to the core objective of this paper, and reveal the relation between consistent truncations and dualities. To do so, we revisit the generalised Ricci scalar in \eqref{eqn:genRicciScalar}. Currently, it is written in terms of the flat derivative $D_A$. But we rather want to write it in terms of the covariant derivatives $\nabla_A$ that are relevant for consistent truncations. While $F_A$ and $F_{ABC}$ are directly related to the torsion of $D_A$, they lack this kind of geometric interpretation for $\nabla_A$. Therefore, we will ultimately replace them with $f_A$, $f_{ABC}$, and $f_{ABCD}$. In general, one might worry that after performing these substitutions, some naked connection terms would remain. By naked, we mean any generalised spin connection $\Omega_{AB}{}^C$ which is not part of a corresponding covariant derivative $\nabla_A$. However, the generalised Ricci scalar transforms covariantly under the action of the generalised structure group $F$. Therefore, one should in the end be able to rewrite it just in terms of manifestly covariant quantities, and these are exactly all the $f$'s introduced above (for more details see section~\ref{sec:gaugetransformations}) and their covariant derivatives. Indeed, all terms including a naked connection vanish and one finds \begin{equation} \mathcal{R} = P^{AB} P^{CD} \left[ \left( \overline{P}{}^{EF} + \frac13 P^{EF} \right) f_{ACE} f_{BDF} + 2 f_{ACDB} \right] + 2 P^{AB} ( 2 \nabla_A f_B - f_A f_B )\,. \end{equation} Note that the Pol\'a\v{c}ek-Siegel-field gets projected out from $f_{ABCD}$ and we can also write \begin{equation}\label{eqn:defRnabla} 2 P^{AB} P^{CD} f_{ACBD} = - \mathcal{R}_{ACBD} P^{AB} P^{CD} := \mathcal{R}^\nabla \end{equation} by using \eqref{eqn:ftoRR}. Hence, can rewrite $\mathcal{R}$ purely in terms of the torsions and scalar curvature of $\nabla_A$ as \begin{equation}\label{eqn:RRfornablaA} \mathcal{R} = \mathcal{R}^\nabla + P^{AB} P^{CD} \left( \overline{P}{}^{EF} + \frac13 P^{EF} \right) T_{ACE} T_{BDF} + 2 P^{AB} ( 2 \nabla_A T_B - T_A T_B )\,. \end{equation} As shown in diagram~\eqref{diag:contrunc}, it is crucial for any consistent truncation to preserve the relation between the action and its equations of motion. Therefore, we compute the latter by varying the action \eqref{eqn:SDFT} and dropping all boundary terms, to obtain \begin{equation}\label{eqn:eomG} \delta S = \int \mathrm{d}^D x\, e^{-2 d} \left( - 2 \mathcal{R} \delta d + \mathcal{G}_{AB} \delta E^{AB} \right) = 0\,, \end{equation} with \cite{} \begin{equation}\label{eqn:GAB} \begin{aligned} \mathcal{G}_{AB} = 4 P_{[A}{}^C \overline{P}{}_{B]}{}^D \Bigl( &F_{CEG} F_{DFH} P^{EF} \overline{P}{}^{GH} + F_{CDE} F_F P^{EF} + \\ & D_D F_C - D_E F_{CDF} P^{EF} \Bigr) \end{aligned} \end{equation} and \begin{equation} \delta E_{AB} := (\delta E_A{}^I ) E_{BI} \,. \end{equation} Note that $\mathcal{G}_{AB}$ is not yet the generalised Ricci tensor $\mathcal{R}_{IJ}$ that arises if we vary with respect to the generalised metric $\mathcal{H}_{IJ}$ \cite{ } \begin{equation}\label{eqn:eomR} \delta S = \int \mathrm{d}^D x\, e^{-2 d} \left( - 2 \mathcal{R} \delta d + \mathcal{R}_{IJ} \delta \mathcal{H}^{IJ} \right) = 0\,. \end{equation} Both $\mathcal{G}_{AB}$ and $\mathcal{R}_{IJ}$ are commonly used in the literature. They can be related by identifying \begin{equation} \delta \mathcal{H}^{IJ} = 4 \delta E_{KL} P^{K(I} \overline{P}{}^{J)L} \qquad \text{and} \qquad \delta E^{AB} = \delta \mathcal{H}_{CD} P^{C[A} \overline{P}{}^{B]D} \end{equation} first, and afterwards comparing \eqref{eqn:eomG} with \eqref{eqn:eomR} to eventually obtain \begin{equation} \mathcal{R}_{IJ} = \mathcal{G}^{LK} P_{L(I} \overline{P}{}_{J)K} \,. \end{equation} We follow the same strategy as we previously did for for $\mathcal{R}$, and rewrite $\mathcal{G}_{AB}$ in terms of $\nabla_A$, $f_A$, $f_{ABC}$ and $f_{ABCD}$. Again, we find that all terms with naked connections cancel. The result \begin{equation} \begin{gathered} \mathcal{G}_{AB} = 4 P_{[A}{}^C \overline{P}{}_{B]}{}^D \Bigl( f_{CEG} f_{DFH} P^{EF} \overline{P}{}^{GH} + f_{CDE} f_F P^{EF} \\ + \nabla_D f_C - \nabla_E f_{CDF} P^{EF} - f_{EDCF} P^{EF} \Bigr) \end{gathered} \end{equation} again looks similar to \eqref{eqn:GAB}, with only one new contribution from $f_{ABCD}$. If we look at the definition \eqref{eqn:deffABCD}, we note that the last two indices of $f_{ABCD}$ originate from $f_{\alpha CD}$ which captures the infinitesimal action of the generalised structure group $F$ on the generalised tangent space. Because $F$ is a subgroup of the double Lorentz group $H_D$, we find \begin{equation} f_{\alpha CD} P^C{}_A \overline{P}{}^D{}_B = f_{\alpha CD} \overline{P}{}^C{}_A P^D{}_B = 0 \,. \end{equation} Exploiting this property, we rewrite \begin{equation}\label{eqn:defGnablaAB} - 4 P_{[A}{}^C \overline{P}{}_{B]}{}^D f_{EDCF} P^{EF} = 2 P_{[A}{}^C \overline{P}{}_{B]}{}^D \mathcal{R}_{ECDF} P^{EF} = \mathcal{G}^\nabla_{AB} \end{equation} and see that only the box diagram in the decomposition \eqref{eqn:decompfABCD} of $f_{ABCD}$ contributes to the generalised Ricci tensor. Like for $\mathcal{R}$ in \eqref{eqn:RRfornablaA}, we can write it exclusively in terms of $\nabla_A$'s torsions and curvature: \begin{equation}\label{eqn:RRABfornablaA} \begin{aligned} \mathcal{G}_{AB} = \mathcal{G}^\nabla_{AB} + 4 P_{[A}{}^C \overline{P}{}_{B]}{}^D \Bigl( & T_{CEG} T_{DFH} P^{EF} \overline{P}{}^{GH} - T_{CDE} T_F P^{EF} + \\ & \nabla_D T_C + \nabla_E T_{CDF} P^{EF} \Bigr)\,. \end{aligned} \end{equation} For any consistent truncation governed by theorem~\ref{th:CT}, $\mathcal{R}$, $\mathcal{R}_{AB}$ and $\mathcal{G}_{AB}$ have to be constant and singlets. Written in terms of equations this imposes \begin{equation} \nabla_{A/\alpha} \mathcal{R} = 0 \qquad \text{and} \qquad \nabla_{A/\alpha} \mathcal{G}_{BC} = 0 \,. \end{equation} Because the intrinsic torsions $f_A$ and $f_{ABC}$ share this property, the two new quantities $\mathcal{R}^\nabla$ and $\mathcal{G}^\nabla_{AB}$ defined in \eqref{eqn:defGnablaAB} and \eqref{eqn:defRnabla} have to satisfy \begin{equation}\label{eqn:constraints2deriv} \nabla_{A/\alpha} \mathcal{R}^\nabla = 0 \qquad \text{and} \qquad \nabla_{A/\alpha} \mathcal{G}^\nabla_{AB} = 0\,, \end{equation} too. This fixes some of the components of $F_{\widehat{A}\widehat{B}\widehat{C}}$, which we need for the Pol\'a\v{c}ek-Siegel{} construction, to be constants, but clearly not all of them. However, assuming that we have fixed all physically relevant data (at least at the leading two derivative level), we should rather ask: Is there a way to choose the remaining components of $f_{\widehat{A}\widehat{B}\widehat{C}}$ such that they are compatible with the ones we already fixed? From a physical point of view this corresponds to obtaining a truncation ansatz which reproduces a target truncated theory. In general, there can be multiple solutions to this problem. Each of them provides a different metric and $B$-field on the internal space, but eventually they all reproduce the same truncated theory. We encountered this behaviour, which is depicted in \eqref{eqn:dualitiesdiagram}, already, for the motivating example of generalised Scherk-Schwarz reductions and their relation to Poisson-Lie T-duality. But a main difference is that we now have two possible mechanisms for generating dualities instead of one. First, we will see that there is the possibility of finding different generalised frames on the mega-space that realise the same generalised fluxes $f_{\widehat{A}\widehat{B}\widehat{C}}$. We already encountered this mechanism in section~\ref{sec:gSSandPLTD}; it provides the foundation of Poisson-Lie T-duality. Moreover, for a non-trivial generalised structure group, we have a second option since only some components of $f_{\widehat{A}\widehat{B}\widehat{C}}$ are fixed. The remaining ones are only constrained by the Bianchi identities on the mega-space. \subsection{Jacobi identities}\label{sec:Jac} Exploring the space of all these dual backgrounds is beyond the scope of the present paper. We rather want to restrict our discussion to a specific family of backgrounds with constant $f_{\widehat{A}\widehat{B}\widehat{C}}$, like for generalised Scherk-Schwarz reductions. In section~\ref{sec:vielbeins}, we show that all of them are realised in terms of generalised cosets. Thus, they 1) form the foundation of all generalised T-dualities currently known, and 2) provide explicit constructions of the full truncation ans\"atze. Hence, the question from above has to be refined: Can we find for any given truncated theory at least one ansatz with constant $f_{\widehat{A}\widehat{B}\widehat{C}}$ that uplifts the theory to $D$ dimensions? To answer this question, one has to analyse the Bianchi identities \begin{align}\label{eqn:bianchiFhABC} \widehat{f}_{[\widehat{A}\widehat{B}}{}^{\widehat{E}} \widehat{f}_{\widehat{C}\widehat{D}]\widehat{E}} &= 0 \quad \text{and} \\\label{eqn:bianchiFhA} \widehat{f}^{\widehat{C}} \widehat{f}_{\widehat{C}\widehat{A}\widehat{B}} &= 0\,, \end{align} on the mega-space, which arise after dropping the flat derivative $\widehat{D}_{\widehat{A}}$ because we are assuming that $f_{\widehat{A}\widehat{B}\widehat{C}}$ and $f_{\widehat{A}}$ are constant. It turns out that this problem is still hard, because it entails solving systems of coupled quadratic equations. Therefore, we postpone a thorough analysis to future work. But even without a full solution, already the structure of the various components of \eqref{eqn:bianchiFhABC} and \eqref{eqn:bianchiFhA} is interesting. We recognise \eqref{eqn:bianchiFhABC} as the Jacobi identity for a Lie algebra. It has 15 independent contributions which can be organised into four categories: \begin{enumerate} \item \underline{Closure of the $F$-action on the mega-space} (6 constraints)\\ These components arise from the requirement that the action of the generalised structure group on the mega-space closes. They are governed by \eqref{eqn:closureFaction}. Only three of them are non-trivial. \begin{enumerate} \item $\epsilon$-dimension -2 implements the Jacobi identity of Lie($F$), \begin{equation}\label{eqn:jacLieF} 3 f_{[\alpha\beta}{}^\epsilon f_{\gamma]\epsilon}{}^\delta = 0\,. \end{equation} \item $\epsilon$-dimension -2 also requires that $f_{\alpha B}{}^C$ generates this action on the generalised tangent space $T M_n \oplus T^* M_n$, \begin{equation} 2 f_{[\alpha|C}{}^E f_{|\beta]E}{}^D = - f_{\alpha\beta}{}^\epsilon f_{\epsilon C}{}^D \,. \end{equation} \item $\epsilon$-dimension 0 imposes the condition \begin{equation}\label{eqn:cocycle_1} f_{\alpha \beta}{}^\epsilon f_\epsilon{}^{\gamma \delta} - 4 f_{[\alpha| \epsilon}{}^{[\gamma|} f_{|\beta]}{}^{\epsilon |\delta]} = 0~. \end{equation} A simple solution is $f_{\alpha}{}^{\beta \gamma} = -2 \,\mathbf{r}^{\delta [\beta} f_{\delta \alpha}{}^{\gamma]}$ for antisymmetric $\mathbf{r}^{\alpha\beta}$. This condition (and the existence of possibly other non-trivial solutions) can be understood using the language of Lie algebra cohomology (see e.g. appendix A of \cite{Hassler:2016srl}, whose conventions we follow here). One introduces one-forms $\theta^\alpha$ (essentially the left-invariant one-forms on $F$) as well as scalar 0-forms $e_A$ valued in some representation -- here we are concerned with $\Lambda^2 {\rm Lie}(F)$, so we denote $e_{\alpha\beta} = t_\alpha \wedge t_\beta$, where the wedge product merely emphasizes that the result is antisymmetrized in $\alpha\beta$. Then $f_{\alpha}{}^{\beta \gamma}$ is a one-form $\varphi$ valued in $\Lambda^2 {\rm Lie}(F)$, i.e. \begin{equation} \varphi = \frac12 \theta^\gamma\, f_\gamma{}^{\alpha\beta} \, t_\alpha\wedge t_\beta ~. \end{equation} The condition \eqref{eqn:cocycle_1} amounts to closure of this one-form, where we take the exterior derivative to obey \begin{equation} \mathrm{d} \theta^\alpha = -\frac12 \theta^\beta \wedge \, \theta^\gamma f_{\beta\gamma}{}^{\alpha} \qquad \text{and} \qquad \mathrm{d} t_\alpha = \theta^\beta f_{\beta \alpha}{}^\gamma t_\gamma \,. \end{equation} Nilpotence of the exterior derivative is guaranteed by \eqref{eqn:jacLieF}. Then one indeed finds \begin{equation}\label{eqn:cocycle_2} \mathrm{d} \varphi = \frac14 \theta^\alpha \wedge \theta^\beta \left( -f_{\alpha\beta}{}^\epsilon f_\epsilon{}^{\gamma\delta}{} + 4 f_{\alpha\epsilon}{}^\gamma f_{\beta}{}^{\epsilon\delta} \right) t_\gamma \wedge t_\delta = 0 \end{equation} so $\varphi$ must be closed (i.e. a cocycle). An immediate solution is to take $\varphi$ to be exact (i.e. a coboundary), \begin{equation}\label{eqn:varphifromr} \varphi = \mathrm{d} \mathbf{r}~, \qquad \mathbf{r} = \frac12 r^{\alpha\beta} t_\alpha \wedge t_\beta \, \quad \implies \quad f_{\alpha}{}^{\beta \gamma} = -2 \,\mathbf{r}^{\delta [\beta} f_{\delta \alpha}{}^{\gamma]}~. \end{equation} But in general, there may also be cocycles that are not coboundaries and therefore cannot be written as $\varphi = \mathrm{d} \mathbf{r}$. These are governed by the Lie algebra cohomology $H^1({\rm Lie}(F), \Lambda^2 {\rm Lie}(F))$. \end{enumerate} \item \underline{Invariance under the $F$-action} (4 constraints)\\ Next, we find four constraints describing how the components in the last column of \eqref{eqn:decompfhABC} transform under the infinitesimal action of the generalised structure group $F$. \begin{enumerate} \item $\epsilon$-dimension $-1$ requires that the torsion $f_{ABC}$ is invariant, namely \begin{equation} 3 f_{\alpha [B}{}^E f_{CD]E} = 0\,. \end{equation} An alternative way to write this equation is by using $\nabla_\alpha$: \begin{equation} \nabla_\alpha f_{BCD} = 0\,. \end{equation} \item $\epsilon$-dimension $0$ gives rise to \begin{equation} \nabla_\alpha f_{BCDE} - f_{\alpha}{}^{\beta\gamma} f_{\beta BC} f_{\gamma DE} = 0\,. \end{equation} By using \eqref{eqn:varphifromr}, we can further simplify this equation to \begin{equation}\label{eqn:invarf4} \nabla_\alpha \left( f_{BCDE} + \mathbf{r}_{BCDE} \right) = 0 \end{equation} with \begin{equation} \mathbf{r}_{ABCD} := \mathbf{r}^{\alpha\beta} f_{\alpha AB} f_{\beta CD} \,. \end{equation} This is quite interesting, because it tells us that $f_{ABCD}$ does not need to be fully invariant under $F$-action. Instead the invariant quantity is rather the sum of $f_{ABCD}$ and the newly introduced $\mathbf{r}_{ABCD}$. Note that $\mathbf{r}_{ABCD}$ is antisymmetric under the exchange (AB)$\leftrightarrow$(CD). This implies that the generalised Riemann tensor $\mathcal{R}_{ABCD}$ defined by \eqref{eqn:ftoRR} is always invariant, as \begin{equation} \nabla_\alpha \mathcal{R}_{ABCD} = 0 \end{equation} has to hold. \item $\epsilon$-dimension $1$ requires \begin{equation} \nabla_\alpha f_{ABCDE} = 0 \end{equation} with \begin{equation} f_{ABCDE}: = f_A{}^{\alpha\beta} f_{\alpha BC} f_{\beta DE} \,. \end{equation} \item $\epsilon$-dimension $2$ relates $f_{\alpha}{}^{\beta\gamma}$ with $f^{\alpha\beta\gamma}$, \begin{equation} 3 f_\alpha{}^{\epsilon[\beta} f_{\epsilon}{}^{\gamma\delta]} + 3 f_{\alpha\epsilon}{}^{[\beta} f^{\gamma\delta]\epsilon} = 0\,. \end{equation} With \eqref{eqn:varphifromr}, this expression simplifies to \begin{equation}\label{eqn:invarf6} \nabla_\alpha ( f_{BCDEFG} + \mathbf{r}_{BCDEFG} ) = 0 \end{equation} with \begin{equation} \begin{aligned} \mathbf{r}_{ABCDEF} &=: \mathbf{r}^{\alpha\beta\gamma} f_{\alpha AB} f_{\beta CD} f_{\gamma EF}\,,\\ f_{ABCDEF} &=: f^{\alpha\beta\gamma} f_{\alpha AB} f_{\beta CD} f_{\gamma EF}\,, \end{aligned} \end{equation} and the classical Yang-Baxter tensor \begin{equation} \mathbf{r}^{\alpha\beta\gamma} = \mathbf{r}^{\delta\alpha} \mathbf{r}^{\beta\epsilon} f_{\delta\epsilon}{}^\gamma + \mathbf{r}^{\delta\gamma} f_{\delta}{}^{\alpha\beta}\,. \end{equation} We use this terminology because $\mathbf{r}^{\alpha\beta\gamma}$ describes the left hand side of the classical Yang-Baxter equation. It is manifestly antisymmetric with respect to its first two indices, $\alpha$ and $\beta$. After substituting $f_\alpha{}^{\beta\gamma} = - 2 \mathbf{r}^{\delta[\beta} f_{\delta\alpha}{}^{\gamma]}$ one can show that it is actually totally antisymmetric. \end{enumerate} \item\label{item:bianchi} \underline{Bianchi identities} (3 constraints)\\ Next, we find Bianchi identities for the three curvatures $f_{ABCD}$, $f_{ABCDE}$ and $f_{ABCDEF}$: \begin{enumerate} \item $\epsilon$-dimension 0 gives the Bianchi identity for $f_{ABCD}$ in \eqref{eqn:algBIgenRiem}, which we already discussed in section~\ref{sec:torsionandcurvature}. \item $\epsilon$-dimension 1 implies \begin{equation}\label{eqn:BIfABCDE} f_{[ABC]DE} = f_{[AB|}{}^F f_{F|C]DE}\,. \end{equation} To better understand the implications of this equation, we decompose \begin{equation}\label{eqn:decompfhABCDE} f_{ABCDE} = \ydiagram{2,1,1,1} \oplus \ydiagram{2,2,1} \oplus \ydiagram{3,1,1}\,. \end{equation} The antisymmetrisation on the right hand side of \eqref{eqn:BIfABCDE} projects out the last contribution. If we also take into account the left hand side, we find \begin{equation} \ydiagram{2,1,1,1} \oplus \ydiagram{2,2,1} = \ydiagram{2,1,1,1} \oplus \ydiagram{2,2,1} \oplus \overbrace{\ydiagram{1,1,1,1,1}}^{=0}\,, \end{equation} showing that the totally antisymmetric contribution from the left hand side has to vanish. As result, we obtain the first quadratic constraint, namely \begin{equation} f_{[AB|}{}^F f_{F|CDE]} = 0 \,. \end{equation} \item $\epsilon$-dimension 2 results finally in \begin{equation} \begin{aligned} f_{ABCDEF} = - & \frac12 f_{AB}{}^G f_{GCDEF} + f_{G[A|CD} f^G{}_{|B]EF} + f_{G[A|CD} f^G{}_{|B]EF} + \\ & 2 f_{AB[C}{}^Gf_{|G|D]EF} - (CD)\leftrightarrow (EF)\,. \end{aligned} \end{equation} The right hand side is by definition totally antisymmetric under pairwise exchange of $(AB)$, $(CD)$ and $(EF)$ but the left hand side is not. Therefore we find additional quadratic constraints. \end{enumerate} \item \underline{Additional quadratic constraints} (2 constraints)\\ There are two more constraints that do not contain any linear contribution. Together with the quadratic constraints we just encountered in point~\ref{item:bianchi}, they represent the real challenge in identifying admissible curvatures that result in a dressing coset. There is not much more we can do about them at the moment, and we therefore just list them here: \begin{enumerate} \item $\epsilon$-dimension 3 gives rise to \begin{equation} -\frac{1}{2}f_{ABIJ}f^B{}_{CDEF}-2 f_{ACD[I}{}^G f_{|G|J]EF} + \asym(IJ,CD,EF) = 0\,, \end{equation} where $\asym(IJ,CD,EF)$ denotes all permutations of the index pairs $(IJ)$, $(CD)$, $(EF)$ in the given expression weighted by $-1$ for odd permutations. \item $\epsilon$-dimension 4 finally results in \begin{equation} f_{IJCD[K}{}^G f_{|G|L]EF}+\frac{1}{8}f_{ACDKL}f^A{}_{IJEF} + \asym(CD,IJ,EF,KL) = 0\,. \end{equation} \end{enumerate} \end{enumerate} Next, we decompose the second constraint \eqref{eqn:bianchiFhA}. It has six contributions which again can be ordered according to their $\epsilon$-dimension: \begin{itemize} \item $\epsilon$-dimension $-2$ requires \begin{equation} f_\gamma f^\gamma{}_{\alpha\beta} = 0\,, \end{equation} which is automatically satisfied for $f_\gamma = f_{\gamma\delta}{}^\delta$. \item $\epsilon$-dimension $-1$ imposes \begin{equation}\label{eqn:fCsinglet} \nabla_\alpha f_B = 0\,, \end{equation} saying that the one-index torsion is a singlet under $F$-action. This is required by theorem~\ref{th:CT}. \item $\epsilon$-dimension $0$ gives rise to the two constraints \begin{align}\label{eqn:fABfromrestconst} f_{AB} &= -f_{AB}{}^C f_C -f_\gamma f^{\gamma}{}_{AB} + f_\gamma{}^{\gamma\delta} f_{\delta AB}\\ \text{and} \qquad f_{\gamma\alpha}{}^\beta f^\gamma &= -f_\gamma f_\alpha{}^{\beta \gamma}\,. \end{align} The first equation is used to fix $f_{AB}$. (Recall that we did exactly the same in \eqref{eqn:fABfromrest} for the more general case of non-constant $f$'s.) In the second equation, we contract the free index $\beta$ with $f_{\beta BC}$ to obtain \begin{equation} \nabla_\alpha (f_{BC}+\mathbf{r}_{BC}) = 0 \quad \text{where}\quad \mathbf{r}_{AB}:= r^{\alpha\beta} f_{\beta\alpha}{}^\gamma f_{\gamma AB} = 2\,\mathbf{r}^C{}_{ABC}\,. \end{equation} Satisfying this equation requires that the combination $f_{AB} + \mathbf{r}_{AB}$ to be a singlet. This result is not very surprising, because we observed the same phenomena already in \eqref{eqn:invarf4} and \eqref{eqn:invarf6}. \end{itemize} There are two further constraints that do not admit such nice interpretations. In general, we note that the higher the $\epsilon$-dimensions becomes, the more complicated it gets. The only advantage here is that both are linear in the components of $f_{\widehat{A}}$. Therefore, they can be always solved easily: \begin{itemize} \item $\epsilon$-dimension $1$ results in \begin{equation} f^A f_{ABCD}-f_\alpha f_B{}^{\alpha\beta }f_{\beta CD} = 0\,, \end{equation} while \item $\epsilon$-dimension $2$ requires \begin{equation} f^C f_{CABDE}+f^\gamma f_\gamma{}^{\alpha \beta}f_{\alpha AB}f_{\beta DE}+f_\gamma f^{\gamma \alpha \beta}f_{\alpha AB}f_{\beta DE}=0\,. \end{equation} \end{itemize} Concluding, the most problematic are the quadratic constraints that originate from \eqref{eqn:bianchiFhABC}. There is currently no obvious way to find general solutions. Instead one has to treat them on a case by case basis. \subsection{Higher derivative curvature}\label{sec:higherderiv} In subsection~\ref{sec:truncgenRicci}, one might have gained the impression that the components $f_A{}^{\beta\gamma}$ and $f^{\alpha\beta\gamma}$ are irrelevant for the action or its equations of motion. In fact, this is only true at the leading two-derivative level. If we want to consider higher-derivative corrections, there is no obvious reason why they should not contribute. To make this point clear, we first compute \begin{equation} \begin{aligned} f_A{}^{\beta\gamma} = &D_A \rho^{\beta\gamma} + D_A \Omega^{[\beta}{}_D \Omega^{\gamma] D} - 2 D_D \Omega^{[\beta}{}_A \Omega^{\gamma] D} - \Omega^\delta{}_A f_\delta{}^{\beta\gamma} + F_{ADE} \Omega^{\beta D} \Omega^{\gamma E} - \\ &\Omega^\delta{}_A f_{\delta\epsilon}{}^{[\beta} \Omega^{\gamma]F} \Omega^{\epsilon}{}_F - 2\Omega^\delta{}_A f_{\delta\epsilon}{}^{[\beta} \rho^{\gamma]\epsilon} + \cancel{2 f_{\delta A}{}^{[\beta} \rho^{\gamma]\delta}} + \cancel{ f_{\delta A}{}^{[\beta} \Omega^{\gamma] E} \Omega^\delta{}_E } \,. \end{aligned} \end{equation} As before, we rewrite this quantity in terms of indices on $T M \oplus T^* M$ only, as \begin{equation} \begin{aligned} f_{ABCDE} &= \frac{1}{2} D_A r_{BCDE} +2 \Omega_{A}{}^F{}_{[B}r_{C]FDE}-2 \Omega_{A}{}^F{}_{[B}f_{C]FDE}-D_F \Omega_{ABC} \Omega^F{}_{DE}\\ &+\frac{1}{2}D_A \Omega_{FED}\Omega^F{}_{BC}-\Omega_{AI[E}\Omega^{FI}{}_{D]}\Omega_{FBC}-\frac{1}{2}F_{FAG}\Omega^F{}_{BC}\Omega^G{}_{DE}\\ &-(BC)\leftrightarrow (DE) \end{aligned} \end{equation} There is one more component left, $f^{\alpha\beta\gamma}$, which we have to compute. In analogy with $f^\alpha$, it is not independent of the ones we computed already. Rather, it arises from the Bianchi identity \begin{equation} 2 \nabla^{[\alpha}f^{\beta]}{}_{CD}+2\nabla_{[C}f_{D]}{}^{\alpha\beta}-f^{\alpha \beta \widehat{E}}f_{CD\widehat{E}}+2 f^{\widehat{E} [\alpha}{}_{[C}f^{\beta]}{}_{D]\widehat{E}}=0 \end{equation} which eventually gives rise to \begin{equation} \begin{aligned} f_{ABCDEF} = & r_A{}^G{}_{EF}r_{GBCD}+2r_{G[D|EF|}f_{|AB|C]}{}^G-\frac{1}{2} D_G r_{EFCD}\Omega^G{}_{AB}- f_{G[D|EF|}\Omega_{|I|C]}{}^G \Omega^I{}_{AB}\\ &+r_{G[D|EF|}\Omega^I{}_{C]}{}^G \Omega_{IAB}+\frac{1}{4}\Omega_{IG[D}\Omega^I{}_{|EF|}\Omega_{|K|C]}{}^G \Omega^K{}_{AB}+\frac{1}{6}F^{GHI}\Omega_{HEF}\Omega_{IAB}\Omega_{GCD}\\ &+\frac{1}{2} D_I \Omega_{GEF}\Omega^G{}_{AB}\Omega^I{}_{CD} + \asym(AB,CD,EF) \end{aligned} \end{equation} To count the number of derivatives in each component of $f_{\widehat{A}\widehat{B}\widehat{C}}$, we define that either $D_A$ or $\Omega_{ABC}$ count as one, because they both contribute equally to the covariant derivative $\nabla_A$ defined in \eqref{eqn:defnabla}. Hence, from the Pol\'a\v{c}ek-Siegel{} construction, we find the following independent quantities: \begin{equation} \begin{tabular}{l|c|c|c} derivatives & 1 & 2 & 3 \\ \hline quantity & $f_A$\,, $f_{ABC}$ & $f_{ABCD}$ & $f_{ABCDE}$ \\ \end{tabular} \end{equation} At the two-derivative level, we have seen that the action and its equations of motion incorporate only the one-derivative torsions $f_A=T_A$ and $f_{ABC}=-T_{ABC}$, and the projectors of the two derivative curvature $f_{ABCD}$. Starting with the leading $\alpha'$-corrections at four derivatives, the situation becomes more complicated because the form of the action \cite{Marques:2015vua} is only fixed up to field redefinitions. If it is possible to find a field basis in which the generalised structure group $F$ still acts by $\nabla_\alpha$, then the action and its field equations should also incorporate $f_{ABCDE}$. Moreover, the singlet condition that is required by theorem~\ref{th:CT} has to be imposed at each order of $\alpha'$ separately. This will likely result in more constraints in addition to \eqref{eqn:constraints2deriv}. Thus, on a qualitative level, we note that consistent truncations with higher derivative corrections should require that more and more components of the mega-space generalised fluxes $f_{\widehat{A}\widehat{B}\widehat{C}}$ are constant. Taking the Bianchi identities into account, this might even at some order prohibit any non-constant contributions. In this case, there would be a one-to-one correspondence between consistent truncations, \`a la theorem~\ref{th:CT}, and generalised cosets. We leave the exploration of this idea to future work. \subsection{Gauge transformations}\label{sec:gaugetransformations} We observed in section~\ref{sec:truncgenRicci} that, by what appeared to be a miracle, all naked connections in the rewriting of the action and its equations of motion vanish and just the various $f$'s remain. Of course this is not actually a miracle, but rather the consequence of a gauge symmetry. To see how this symmetry emerges, we define its infinitesimal action on the mega-space by \begin{equation} \delta_{\widehat{\lambda}} \widehat{E}_{\widehat{A}\widehat{B}} = (\mathcal{L}_{\widehat{\lambda}} \widehat{E}_{\widehat{A}}{}^{\widehat{I}} ) \widehat{E}_{\widehat{B}\widehat{I}} \,. \end{equation} A short calculation, using the definition of the generalised Lie derivative \eqref{eqn:genLie}, gives rise to \begin{equation}\label{eqn:deltaEh} \delta_{\widehat{\lambda}} \widehat{E}_{\widehat{A}\widehat{B}} = - 2 \widehat{D}_{[\widehat{A}} \widehat{\lambda}_{\widehat{B}]} + \widehat{\lambda}^{\widehat{C}} \widehat{f}_{\widehat{C}\widehat{A}\widehat{B}}\,. \end{equation} Essential for our purpose is that all transformations generated by $\widehat{\lambda}^{\widehat{A}}$ do not change the lower-triangular form of $\widehat{E}_{\widehat{A}}{}^{\widehat{I}}$ given in \eqref{eqn:mega-vielbein}. To see why, we first note that \begin{equation} \lambda^{\widehat{A}} = \widetilde{M}_{\widehat{B}}{}^{\widehat{A}} \widehat{\lambda}^{\widehat{B}} \end{equation} should only depend of the internal coordinates $y$ but not on the auxiliary coordinates $z$. In this case, \eqref{eqn:deltaEh} implies \begin{equation}\label{eqn:deltaE} \delta_\lambda E_{\widehat{A}\widehat{B}} = - 2 \nabla'{}_{[\widehat{A}} \lambda_{\widehat{B}]} + \lambda^{\widehat{C}} f_{\widehat{C}\widehat{A}\widehat{B}}\,. \end{equation} On the other hand, we can also compute $\delta_\lambda E_{\widehat{A}\widehat{B}}$ directly from the variation of \eqref{eqn:mega-vielbein}, giving rise to \begin{equation} \delta_\lambda E_{\widehat{A}\widehat{B}} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \delta_\lambda E_{AB} & \quad - \delta_\lambda \Omega^\beta{}_A - \Omega^\beta{}_C \delta_\lambda E^C{}_B \quad\\ 0 & \quad \delta_\lambda \Omega^\alpha{}_B + \Omega^\alpha{}_C \delta_\lambda E^C{}_B \quad & \delta_\lambda \rho^{\alpha\beta} + 2 \Omega^{[\alpha}{}_C \delta_\lambda \Omega^{\beta]C} + \Omega^\alpha{}_C \delta_\lambda E^{CD} \Omega^\beta{}_D \end{pmatrix}\,. \end{equation} We see two important things here: First, the transformation of $E_A{}^I$, $\Omega^\alpha{}_B$ and $\rho^{\alpha\beta}$ can easily be extracted from this result, and second, $\delta E_{\alpha\widehat{B}} = \delta E_{\widehat{B}\alpha} = 0$. The latter is not automatically guaranteed by \eqref{eqn:deltaE}. It requires the restriction \begin{equation}\label{eqn:lambdacomponents} \lambda_{\widehat{A}} = \begin{pmatrix} \lambda^\alpha & \lambda_A & 0 \end{pmatrix}\,. \end{equation} But the second parameter $\lambda_A$ only mediates a generalised diffeomorphism on the physical space $M_d$. For the following discussion, we therefore concern ourselves only with $\lambda^\alpha$, which mediates gauge transformations, and compute \begin{align} \delta_\lambda E_{AB} &= \lambda^\gamma f_{\gamma AB} \,, \\ \delta_\lambda E^{\alpha}{}_B &= D_B \lambda^\alpha + \Omega^\gamma{}_B f_{\gamma\delta}{}^\alpha \lambda^\delta - \cancel{\lambda^\gamma f_{\gamma B}{}^{\alpha}} \,, \quad \text{and} \\ \delta_\lambda E^{\alpha\beta} &= 2 D_C \lambda^{[\alpha} \Omega^{\beta]C} - 2 \rho^{\gamma[\alpha} f_{\gamma\delta}{}^{\beta]} \lambda^\delta - \Omega^{\gamma I} \Omega^{[\alpha}{}_I f_{\gamma\delta}{}^{\beta]} \lambda^\delta + \lambda^\gamma f_{\gamma}{}^{\alpha\beta}\,, \end{align} eventually extracting \begin{align} \delta_\lambda \Omega^\alpha{}_B &= D_B \lambda^\alpha-\Omega^\alpha{}_C \lambda^C{}_B + \Omega^\gamma{}_B f_{\gamma \delta}{}^\alpha \lambda^\delta\,,\\ \delta_\lambda \rho^{\alpha\beta} &=4 D_C \lambda^{[\alpha}\Omega^{\beta]C}-2\rho^{\gamma[\alpha}f_{\gamma\delta}{}^{\beta]}\lambda^\delta -3\Omega^{[\alpha}{}_{E}f_{\gamma\delta}{}^{\beta]}\Omega^{\gamma E} \lambda^{\delta}+3\Omega^{[\alpha}{}_{C}\Omega^{\beta]}_{D}\lambda^{D C}+\lambda^\gamma f_\gamma{}^{\alpha \beta}\,. \end{align} As in earlier calculations, it is useful to write the results just with indices for the generalised tangent space on the internal manifold $T M \oplus T^* M$. To do so, we introduce \begin{equation} \lambda_{AB} := \lambda^\gamma f_{\gamma AB} \end{equation} and obtain \begin{align} \delta_\lambda E_{AB} &= \lambda_{AB}\,,\\ \delta_\lambda \Omega_{ABC} &= D_A\lambda_{BC}-\lambda_A{}^D \Omega_{DBC}-2\lambda_{D[C}\Omega_{|A|B]}{}^D \,, \\ \delta_\lambda r_{ABCD} &= 2 D_{E}\lambda_{AB}\Omega^E{}_{CD}-2D_{E}\lambda_{CD}\Omega^E{}_{AB}+ 2 \lambda^E{}_{[D}r_{C]EAB}-2\lambda^E{}_{[B} r_{A]ECD} \nonumber\\ & + 3 \Omega_{FAB}\lambda^E{}_{[D}\Omega^F{}_{C]E}-3 \lambda_{E[B}\Omega^F{}_{A]E}\Omega_{FCD}+3\Omega_{EAB}\Omega_{FCD}\lambda^{EF} \nonumber\\ & + 2 \lambda_{[A}{}^G f_{|G|B]CD}+2\lambda_{[C}{}^G f_{|ABG|D]}\,. \end{align} Here, we recognise the double Lorentz transformation rules for the frame and the spin connection $\Omega_{ABC}$. Additionally, there is a new transformation rule for the Pol\'a\v{c}ek-Siegel{} field $r_{ABCD}$. To understand how it arises, we look at the transformation of the generalised fluxes $\widehat{f}_{\widehat{A}\widehat{B}\widehat{C}}$ on the mega-space. Under generalised diffeomorphisms they transform as scalars, \begin{equation} \delta_{\widehat{\lambda}} \widehat{f}_{\widehat{A}\widehat{B}\widehat{C}} = \widehat{\lambda}^{\widehat{D}} \widehat{D}_{\widehat{D}} \widehat{f}_{\widehat{A}\widehat{B}\widehat{C}}\,, \end{equation} or, equally, \begin{equation}\label{eqn:deltalambdafAhBhCh} \delta_\lambda f_{\widehat{A}\widehat{B}\widehat{C}} = \lambda^\delta \nabla'{}_{\delta} f_{\widehat{A}\widehat{B}\widehat{C}}\,. \end{equation} A similar equation, \begin{equation} \delta_\lambda f_{\widehat{A}} = \lambda^\beta \nabla'{}_{\beta} f_{\widehat{A}}\,, \end{equation} holds for the one-index generalised flux $\widehat{f}_{\widehat{A}}$, too. These two equations become even more useful if we exchange the derivative $\nabla'{}_\alpha$ for $\nabla_\alpha$, which captures the covariant action of the generalised structure group on $T^M \oplus T^*M$. Various quantities we already encountered do not transform covariantly. Examples are the connections $\Omega_{ABC}$ and $r_{ABCD}$. To quantify the deviation from a covariant transformation, it is convenient to introduce the operator \begin{equation} \Delta_\lambda = \delta_\lambda - \lambda^\alpha \nabla_\alpha \,. \end{equation} Any quantity it annihilates is covariant. In general, there are two perspectives one can take on the Pol\'a\v{c}ek-Siegel{} construction. We started by looking at the extended, mega-space with $\widehat{f}_{\widehat{A}\widehat{B}\widehat{C}}$ and $\widehat{E}_{\widehat{A}}{}^{\widehat{I}}$. On the other hand, it also has to be possible to extract the same information just from the internal space $M$. Indeed this is when we consider the following data on $M$: \begin{enumerate} \item A generalised structure group $F$ which acts faithfully on the generalised tangent space $T M \oplus T^* M$. This will fix $f_{\alpha B C}$, $f_{\alpha\beta}{}^\gamma$ and set $f_{\alpha B}{}^\gamma = 0$. \item All dynamic contributions to the generalised fluxes on the mega-space. They are listed in the last columns of \eqref{eqn:decompfhABC} and \eqref{eqn:complistfAh} which are completely encoded in the torsions and curvatures \begin{center} $f_A$, $f_{ABC}$, $f_{ABCD}$ \quad and \quad $f_{ABCDE}$\,. \end{center} \item Everything else is fixed by the Bianchi identity on the mega-space. \end{enumerate} Taking into account \eqref{eqn:deltalambdafAhBhCh}, we find that all torsions and curvatures mentioned transform covariantly, except for \begin{equation} \Delta_\lambda f_{ABCD} = \lambda^\alpha f_{\alpha}{}^{\beta\gamma} f_{\beta AB} f_{\gamma CD}\,. \end{equation} This is the power of the Pol\'a\v{c}ek-Siegel{} formalism. It provides a systematic method for finding covariant tensors. Moreover, we know that both the generalised Ricci scalar $\mathcal{R}$ and the generalised Ricci tensor $\mathcal{R}_{AB}$ transform covariantly under double Lorentz transformations. Hence, when we rewrite them in terms of $f$'s in section~\ref{sec:truncgenRicci}, they must remain covariant. Therefore, all the non-covariant naked connections $\Omega_{ABC}$ must in the end be formed from covariant $f$'s or covariant derivatives of them. We also see that already at the two derivative level the antisymmetric part (with respect to $(AB) \leftrightarrow (CD)$) of $f_{ABCD}$ is not covariant. Thus, it cannot appear in the two derivative action or its field equations. \section{Construction of the mega-generalised frame}\label{sec:vielbeins} Currently all known generalised T-dualities can be captured by dressing cosets. In section~\ref{sec:truncgenRicci}, we concluded that these cosets result in consistent truncations where the Pol\'a\v{c}ek-Siegel{} construction has constant generalised fluxes $\widehat{f}_{\widehat{A}\widehat{B}\widehat{C}}$ on the mega-space. The objective now is to show how we can construct the corresponding mega-generalised frames that make the truncation ansatz discussed in section~\ref{sec:trunctheory} fully explicit. As we have seen in section~\ref{sec:gSSandPLTD}, these frames are not only very valuable for consistent truncations, but they also are an important tool for studying generalised T-dualities on the worldsheet, and for constructing the canonical transformations of the underlying $\sigma$-models. \subsection{Generalised frames on group manifolds}\label{sec:genframegroup} Our starting point is the constant generalised fluxes on the mega-space, and we are looking for a generalised frame $\widehat{E}_{\widehat{A}}{}^{\widehat{I}}$ that satisfies \begin{equation}\label{eqn:fhconst} \widehat{f}_{\widehat{A}\widehat{B}\widehat{C}} = \widehat{D}_{[\widehat{A}} \widehat{E}_{\widehat{B}}{}^{\widehat{I}} \widehat{E}_{\widehat{C}]\widehat{I}} = \text{const.} \end{equation} As outlined in section~\ref{sec:Jac}, the corresponding Bianchi identity becomes the Jacobi identity \eqref{eqn:bianchiFhABC} of a Lie algebra. We denote it as Lie($\mathds{D}$) and interpret the generalised fluxes $\widehat{f}_{\widehat{A}\widehat{B}\widehat{C}}$ as its structure coefficients. An important consequence is that the adjoint action of any element of $\mathds{D}$ will leave $\widehat{f}_{ABC}$ invariant. Therefore, we can identify $\widehat{f}_{\widehat{A}\widehat{B}\widehat{C}} = f_{\widehat{A}\widehat{B}\widehat{C}}$. It is known that one can construct $\widehat{E}_{\widehat{A}}{}^{\widehat{I}}$ systematically \cite{Hassler:2017yza,Demulder:2018lmj,Hassler:2019wvn,Borsato:2021vfy}, based on the following data: \begin{enumerate} \item\label{prop1} A doubled Lie group $\mathds{D}$, which is generated by the generators $t_{\widehat{A}}$ with the structure coefficients \begin{equation} [ t_{\widehat{A}}, t_{\widehat{B}} ] = f_{\widehat{A}\widehat{B}}{}^{\widehat{C}} t_{\widehat{C}}\,. \end{equation} In the following, we label them by $t_{\widehat{A}} = \begin{pmatrix} t_{\widehat{a}} & t^{\widehat{a}} \end{pmatrix}$, where $\widehat{A} = 1, \dots, 2 \widehat{D}$ and $\widehat{a}=1, \dots, \widehat{D}$. \item A non-degenerate, pairing, $\langle t_{\widehat{A}}, t_{\widehat{B}} \rangle = \eta_{\widehat{A}\widehat{B}} = \begin{pmatrix} 0 & \delta_{\hat a}{}^{\hat b} \\ \delta^{\hat a}{}_{\hat b} & 0 \end{pmatrix}$, that is invariant under the adjoint action of $\mathds{D}$. \item\label{prop3} A maximally isotropic subgroup $H \subset \mathds{D}$, generated by $t^{\hat a}$, with $\langle t^{\widehat{a}} , t^{\widehat{b}} \rangle = 0$. \end{enumerate} Explicitly, the generalised frame is defined on the coset $M = H \backslash \mathds{D}$ in terms of \begin{equation}\label{eqn:megagenframe} \widehat{E}_{\widehat{A}}{}^{\widehat{I}} = M_{\widehat{A}}{}^{\widehat{B}} \begin{pmatrix} \widehat{v}_{\widehat{b}}{}^{\widehat{i}} & \widehat{v}_{\widehat{b}}{}^{\widehat{j}} B_{\widehat{j}\widehat{i}} \\ 0 & v^{\widehat{b}}{}_{\widehat{i}} \end{pmatrix}\,. \end{equation} In studying its properties, it is convenient to use the differential forms $v^{\widehat{a}} = v^{\widehat{a}}{}_{\widehat{i}} \mathrm{d} x^{\widehat{i}}$, $A_{\widehat{a}} = A_{\widehat{a}\widehat{i}} \mathrm{d} x^{\widehat{i}}$, $B=\frac12 B_{\widehat{i}\widehat{j}} \mathrm{d} x^{\widehat{i}}\wedge x^{\widehat{j}}$ which are defined by \begin{align} \label{eqn:megagenframe.dm} \mathrm{d} m m^{-1} & = t_{\widehat{a}} v^{\widehat{a}} + t^{\widehat{a}} A_{\widehat{a}}\,, \qquad m \in M \\ \label{eqn:megagenframe.B} B &= \frac12 v^{\widehat{a}} \wedge A_{\widehat{a}} + B_{\mathrm{WZW}}\,, \\ \label{eqn:megagenframe.WZW} \mathrm{d} B_{\mathrm{WZW}} = H_{\mathrm{WZW}} &= -\frac1{12} \langle \mathrm{d} m m^{-1}, [ \mathrm{d} m m^{-1}, \mathrm{d} m m^{-1} ] \rangle\,. \end{align} In general, $H_{\mathrm{WZW}}$ is closed but not exact. If this is the case, $B_{\mathrm{WZW}}$ can only be defined in a local patch and the patches have to be connected by appropriate gauge transformations. Moreover, we need the adjoint action \begin{equation} m t_{\widehat{A}} m^{-1} = M_{\widehat{A}}{}^{\widehat{B}} t_{\widehat{B}}\,, \end{equation} and the dual vector field $\widehat{v}_{\widehat{a}}{}^{\widehat{i}}$, defined by $\widehat{v}_{\widehat{a}}{}^{\widehat{i}} v^{\widehat{b}}{}_{\widehat{i}} = \delta_{\widehat{a}}{}^{\widehat{b}}$, to complete the list of ingredients that enter \eqref{eqn:megagenframe}. At this point, we finally fully review how Poisson-Lie T-duality is realised by generalised Scherk-Schwarz reductions. Looking at the diagram in \eqref{eqn:dualitiesdiagram}, we want to preserve the truncated theory, and, most importantly, with it the generalised torsions $T_A$ and $T_{ABC}$. With them the generalised Ricci scalar $\mathcal{R}$ and tensor $\mathcal{R}_{AB}$ are also preserved, because there is no curvature $\mathcal{R}_{ABCD}$; all that counts for generalised Scherk-Schwarz reductions are the torsions. To preserve them, we are looking for different generalised frames $E^{(i)}_A{}^I$ which still produce the same constant generalised fluxes $F_{ABC}$. We used the construction presented above (after removing all hats) to achieve this goal. The generalised fluxes depend only on the double Lie group $\mathds{D}$, but not on the choice of the maximally isotropic subalgebra. On the other hand, the constructed generalised frame $E_A{}^I$ crucially depends on the subalgebra used in the construction. For any maximally isotropic subalgebra $H_i$, we obtain a new generalised frame field $E^{(i)}_A{}^I$ that still gives rise to the same generalised fluxes $F_{ABC}$. On the string worldsheet, the same mechanism was used to define Poisson-Lie T-duality and to show that it is a canonical transformation between two $\sigma$-models with dual target spaces. We are not completely done yet, because there is still the one-index generalised flux $\widehat{f}_{\widehat{A}}$. As it is constant, its Bianchi identity on the mega-space simplifies to \eqref{eqn:bianchiFhA}. As a consequence the adjoint action of any element in $\mathds{D}$ will leave $\widehat{f}_{\widehat{A}}$ invariant and we identify $\widehat{f}_{\widehat{A}} = f_{\widehat{A}}$. In analogy with \eqref{eqn:fhconst}, we have now to find a $\dh$ such that \begin{equation} \widehat{f}_{\widehat{A}} = 2 \widehat{D}_{\widehat{A}} \dh - \partial_{\widehat{I}} \widehat{E}_{\widehat{A}}{}^{\widehat{I}} = \text{const.} \end{equation} holds. The component \begin{equation}\label{eqn:fixedcomp} f^{\widehat{a}} = f_{\widehat{b}}{}^{\widehat{b}\widehat{a}} \end{equation} is automatically constant and from $f_{\widehat{a}}$, we obtain the differential equation \begin{align} \mathrm{d} \overline{d} &= \frac12 v^{\widehat{a}} \left( f_{\widehat{a}} - f_{\widehat{a}\widehat{b}}{}^{\widehat{b}} + \iota_{\widehat{v}_{\widehat{b}}} A_{\widehat{c}} f_{\widehat{a}}{}^{\widehat{b}\widehat{c}} \right) - \frac12 \iota_{\widehat{v}_{\widehat{a}}} \mathrm{d} v^{\widehat{a}} \nonumber \\ &= \frac12 \left( v^{\widehat{a}} f_{\widehat{a}} + A_{\widehat{a}} f^{\widehat{a}} \right) = \frac12 \langle \mathrm{d} m m^{-1}, t_{\widehat{A}} \rangle f^{\widehat{A}} \label{eqn:ddilaton} \end{align} with \begin{equation}\label{eqn:dconstfAh} \dh = \overline{d} - \frac12 \log \det v \end{equation} that fixes $\dh$ up to a constant. The integrability condition for $\mathrm{d}^2 \overline{d} = 0$ for this equation follows immediately from the Bianchi identity because \begin{equation} \mathrm{d}^2 \overline{d} = \frac14 \langle [\mathrm{d} m m^{-1}, \mathrm{d} m m^{-1}], t_{\widehat{A}} \rangle f^{\widehat{A}} = \frac14 v^{\widehat{A}} \wedge v^{\widehat{B}} \, f_{\widehat{A}\widehat{B}}{}^{\widehat{C}} f_{\widehat{C}} = 0\,, \end{equation} where $v^{\widehat{A}}$ denotes $\mathrm{d} m m^{-1} = t_{\widehat{A}} v^{\widehat{A}}$\,. This observation has been already made in \cite{Borsato:2021vfy}. Also one should note that according to \eqref{eqn:fixedcomp} the generalised fluxes $\widehat{f}_{\widehat{A}}$ and $\widehat{f}_{\widehat{A}\widehat{B}\widehat{C}}$ are not completely independent. One half of the former is completely fixed by the latter. It is not possible to break this connection in supergravity. But there is also the framework of generalised supergravity \cite{ Arutyunov:2015mqj}, where this additional constraint besides the Bianchi identities is not required \cite{Borsato:2021vfy}. \subsubsection*{$H$-shift of the coset representative and $B$-field gauge transformations} The coset representative we used in the last section is defined only up to the action of $H$ from the left. There, one might ask what happens to the generalised frame if we shift $m \rightarrow m' = h m$, with $h\in H$. The adjoint action in \eqref{eqn:megagenframe} transforms as \begin{align} M'_{\widehat{A}}{}^{\widehat{B}} &= M_{\widehat{A}}{}^{\widehat{C}} \Lambda_{\widehat{C}}{}^{\widehat{B}}~, \qquad h T_{\widehat{A}} h^{-1} =: \Lambda_{\widehat{A}}{}^{\widehat{B}} T_{\widehat{B}}~. \end{align} The fact that $H$ is an isotropic subgroup guarantees that $\Lambda^{\widehat{a} \widehat{b}}$ vanishes. To evaluate the remaining quantities in \eqref{eqn:megagenframe}, we compute first \begin{align} \mathrm{d} m' m'^{-1} &= h \,\mathrm{d} m m^{-1} \,h^{-1} +\mathrm{d} h h^{-1}~. \end{align} Defining $\mathrm{d} h h^{-1} = \omega_{\widehat{a}} T^{\widehat{a}}$, one can show that the one forms $v^{\widehat{a}}$ and $A_{\widehat{a}}$ shift as \begin{align} v'^{\widehat{a}} = v^{\widehat{b}} \Lambda_{\widehat{b}}{}^{\widehat{a}}~, \qquad A'_{\widehat{a}} = v^{\widehat{b}} \Lambda_{\widehat{b} \widehat{a}} + A_{\widehat{b}} \Lambda^{\widehat{b}}{}_{\widehat{a}} + \omega_{\widehat{a}}~. \end{align} For the WZW contribution to the $B$-field, we find \begin{align} B'_{\rm WZW} \cong B_{\rm WZW} - \frac{1}{2} v^{\widehat{a}} \Lambda_{\widehat{a}}{}^{\widehat{b}} \omega_{\widehat{b}} \end{align} where $\cong$ denotes equality up to an undetermined exact term. This exact term cannot be determined explicitly: the ansatz \eqref{eqn:megagenframe} involves only a locally defined $B_{\rm WZW}$ via \eqref{eqn:megagenframe.WZW}. This leads to \begin{align} B' &\cong B + \frac{1}{2} v^{\widehat{a}} \wedge v^{\widehat{b}} \Lambda_{\widehat{a}}{}^{\widehat{c}} \Lambda_{\widehat{b} \widehat{c}}~. \end{align} The combination $\Lambda_{\widehat{a}}{}^{\widehat{c}} \Lambda_{\widehat{b} \widehat{c}}$ is antisymmetric in $\widehat{a} \widehat{b}$ from the orthogonality condition on $\Lambda$. The difference in these two $B$-fields is not closed, but this is not crucial, because the field denoted $B$ here is not \emph{precisely} the physical $B$-field; the latter is encoded in the generalized metric \eqref{eqn:genmetric} and $M_A{}^B$ contributes non-trivially. When the shift in $M$ to $M'$ is accounted for, one finds indeed that it cancels the transformations not only of $B$ but also of $\widehat{v}$, leading to \begin{equation} \widehat{E}'_{\widehat{A}}{}^{\widehat{I}} = M'_{\widehat{A}}{}^{\widehat{B}} \begin{pmatrix} \widehat{v}'_{\widehat{b}}{}^{\widehat{i}} & \widehat{v}'_{\widehat{b}}{}^{\widehat{j}} B'_{\widehat{j}\widehat{i}} \\ 0 & v'^{\widehat{b}}{}_{\widehat{i}} \end{pmatrix} \cong M_{\widehat{A}}{}^{\widehat{B}} \begin{pmatrix} \widehat{v}_{\widehat{b}}{}^{\widehat{i}} & \widehat{v}_{\widehat{b}}{}^{\widehat{j}} B_{\widehat{j}\widehat{i}} \\ 0 & v^{\widehat{b}}{}_{\widehat{i}} \end{pmatrix} = \widehat{E}_{\widehat{A}}{}^{\widehat{I}} \end{equation} where $\cong$ denotes equality up to an exact shift in the $B$-field. Hence, changing the coset representative $m$ just amounts to making a $B$-field gauge transformation. This is nice, because it implies that different possible choices for $m$ are all related to each other, justifying the identification of the space as a coset. As one might expect, the generalised dilaton is unchanged (up to a constant) since it is determined by integrating the expression for $f_{\widehat{a}}$. From \eqref{eqn:ddilaton}, one finds $\mathrm{d} \overline{d}' = \mathrm{d} \overline{d} + \frac{1}{2} \omega_{\widehat{a}} f^{\widehat{a}}$ where the shift $\omega_{\widehat{a}} f^{\widehat{a}}$ is closed. In fact, it is exact since \begin{align} \mathrm{d} \log \det \Lambda_{\widehat{a}}{}^{\widehat{b}} = f_{\widehat{b}}{}^{\widehat{b} \widehat{a}} \omega_{\widehat{a}} = \omega_{\widehat{a}} f^{\widehat{a}} \end{align} and this relation implies that \begin{align} \dh' &= \dh + \textrm{const}~. \end{align} \subsection{Pol\'a\v{c}ek-Siegel{} form of the mega frame} Next, we have to check if the generalised frame we just constructed can be brought into the Pol\'a\v{c}ek-Siegel{} form \eqref{eqn:mega-vielbein} introduced in section~\ref{sec:mega-vielbein}. Because this form is tightly constrained by O($\widehat{D}$, $\widehat{D}$), we have only to check the last column, namely \begin{equation}\label{eqn:columntocheck} \widehat{E}_{\widehat{A}}{}_\mu = \widetilde{M}_{\widehat{A}}{}^{\widehat{B}} \begin{pmatrix} 0 \\ 0 \\ \widetilde{v}^\beta{}_\mu \end{pmatrix}\,. \end{equation} Considering the insight gained in \cite{Demulder:2019vvh}, we expect that this condition can only be satisfied if we consider a dressing coset. Therefore, we first decompose the coset representative $m$ as \begin{equation} m = n f \,, \quad \text{with} \quad n \in H \backslash \mathds{D} / F \quad \text{and} \quad f \in F\,. \end{equation} Now, $n$ is a representative of a double coset, which is called a dressing coset in the context of generalised T-duality \cite{Klimcik:1996np}. In particular, $F$ has to be an isotropic subgroup, but not necessarily maximally isotropic. The coordinates that we choose to parameterise $n$ are called $y^i$, while for $f$, we use $z^\mu$. As before, we adapt all constituents of the generalised frame in \eqref{eqn:megagenframe} to this new decomposition, starting with \begin{align} M_{\widehat{A}}{}^{\widehat{B}} &= \widetilde{M}_{\widehat{A}}{}^{\widehat{C}} \overline{M}_{\widehat{C}}{}^{\widehat{B}}\,, \qquad\qquad \text{with} && \widetilde{M}_{\widehat{A}}{}^{\widehat{B}} t_{\widehat{B}} = f t_{\widehat{A}} f^{-1} \\ &&& \overline{M}_{\widehat{A}}{}^{\widehat{B}} t_{\widehat{B}} = n t_{\widehat{A}} n^{-1} \,. \end{align} We also find \begin{align}\label{eqn:Bfielddressingcoset} B &= \frac12 \left( v^{\widehat{a}} \wedge A_{\widehat{a}} - \langle \mathrm{d} f f^{-1}, n^{-1} \mathrm{d} n \rangle \right) + \overline{B}_{\mathrm{WZW}} \quad \text{with} \\ \overline{H}_{WZW} &= \mathrm{d} \overline{B}_{\mathrm{WZW}} = - \frac1{12} \langle \mathrm{d} n n^{-1}, [\mathrm{d} n n^{-1}, \mathrm{d} n n^{-1}] \rangle\,, \end{align} which allows us to compute \begin{align}\label{eqn:ivthB} \iota_{\widehat{\vt}_{\alpha}} B &= - \langle n t_\alpha n^{-1} , t_{\widehat{a}} \rangle v^{\widehat{a}} \,, \\ \iota_{\widehat{\vt}_{\alpha}} \iota_{\widehat{\vt}_{\beta}} B &= 0\,, \end{align} by taking into account that $F$ is isotropic. (Recall that $\widehat{\vt}_{\alpha}$ is the dual vector field to the Maurer-Cartan form $t_\alpha \widetilde{v}^\alpha{}_{\mu} \mathrm{d} x^\mu = \mathrm{d} f f^{-1}$, which we introduced in section~\ref{sec:mega-vielbein}.) Equation \eqref{eqn:columntocheck} can now be equivalently written as \begin{equation} \iota_{\widehat{\vt}_{\beta}} \langle n t_{\widehat{A}} n^{-1}, t^{\widehat{c}} \iota_{\widehat{v}_{\widehat{c}}} B + t_{\widehat{c}} v^{\widehat{c}} \rangle = \langle t_{\widehat{A}}, t_\beta \rangle\,, \end{equation} which is better suited to be checked. Using, \eqref{eqn:ivthB}, $v^{\widehat{a}} = \langle t^{\widehat{a}}, \mathrm{d} n n^{-1} + n \mathrm{d} f f^{-1} n^{-1} \rangle$, $\iota_{\widehat{\vt}_{\alpha}} \mathrm{d} f f^{-1} = t_{\alpha}$, and $\iota_{\widehat{v}_{\widehat{a}}} v^{\widehat{b}} = \delta_{\widehat{a}}^{\widehat{b}}$, one can easily show that this equation indeed holds. Therefore, just using the appropriate $B$-field given in \eqref{eqn:Bfielddressingcoset}, the generalised frame field on the mega-space \eqref{eqn:megagenframe} takes the Pol\'a\v{c}ek-Siegel{} form. We also have to check if the generalised dilaton $\dh$, which we fixed in \eqref{eqn:dconstfAh}, is compatible with the ansatz \eqref{eqn:fhA} on the mega-space. To this end, we first write \begin{equation} v^{\widehat{a}}{}_{\widehat{i}} = \underbrace{\begin{pmatrix} \eta^{\widehat{a}}{}_\beta \quad& \overline{M}^{\widehat{a}}{}_b \end{pmatrix}}_{\displaystyle :=\mathrm{m}} \begin{pmatrix} \widetilde{v}^\beta{}_\mu & 0 \\ 0 & \overline{v}^b{}_i \end{pmatrix}\,, \end{equation} with the one-forms $\overline{v}^a$ which are defined by $t_a \overline{v}^a{}_i \mathrm{d} y^i = \mathrm{d} n n^{-1}$. After plugging this expression into \eqref{eqn:dconstfAh}, we obtain \begin{equation} \dh = \overline{d} - \frac12 \log \det \mathrm{m}(y) - \frac12 \log \det v(y) - \frac12 \log \det \widetilde{v}(z)\,. \end{equation} Comparing this equation with \eqref{eqn:fhA}, we have to identity \begin{equation} d(y) = \overline{d} - \frac12 \log \det \mathrm{m}(y) - \frac12 \log \det v(y) - \frac12 \log \det \widetilde{\mathrm{m}}(z) \end{equation} with $\widetilde{\mathrm{m}}_\alpha{}^\beta = \widetilde{M}_\alpha{}^\beta$. However, this only works if the final result, $d(y)$, does not depend on the auxiliary coordinates $z$. To check if this is indeed the case, we compute \begin{equation} \iota_{\widehat{\vt}_\alpha} \mathrm{d} d = \iota_{\widehat{\vt}_\alpha} \mathrm{d} \overline{d} - f_{\alpha\beta}{}^\beta = f_\alpha - f_{\alpha\beta}{}^\beta = 0\,. \end{equation} Fortunately, this is exactly the constraint we already imposed in \eqref{eqn:complistfAh}. Hence, we conclude that also the dilaton is of the right Pol\'a\v{c}ek-Siegel{} form. This completes the proof of the central result of this paper: \begin{theorem}\label{th:MAIN} For every every dressing coset $H \backslash \mathds{D} / F$, where $\mathds{D}$ and $H$ satisfy the properties~\ref{prop1}-\ref{prop3} in section~\ref{sec:genframegroup}, and $F$ is an isotropic subgroup of $\mathds{D}$, there exists a consistent truncation, governed by theorem~\ref{th:CT}, with generalised structure group $F$. \end{theorem} \subsection{$F$-shift and gauge transformations} We already figured out in section \ref{sec:genframegroup} what happens to the coset representative under $H$-shifts from the right. To extend this discussion to generalised cosets, we now study the infinitesimal $F$-action from the left on the representative $n \in H \backslash \mathds{D} / F$. To this end, we shift $\delta n = n \delta h$ with $\delta h = \lambda^\alpha t_\alpha := \lambda$. Under this shift, we first find \begin{equation} \delta M_{\widehat{A}}{}^{\widehat{B}} = \lambda^\alpha \widetilde{M}_{\widehat{A}}{}^{\widehat{C}} \widehat{f}_{\alpha\widehat{C}}{}^{\widehat{D}} \overline{M}_{\widehat{D}}{}^{\widehat{B}} \end{equation} and after a bit more of calculation \begin{align} \delta (\mathrm{d} m m^{-1}) &= t_{\widehat{a}} \delta v^{\widehat{a}} + t^{\widehat{a}} \delta A_{\widehat{a}} = n ( \mathrm{d} \lambda +[ \lambda, \mathrm{d} f f^{-1} ] ) n^{-1}\,, \\ \delta B_{\mathrm{WZW}} &= - \frac12 \langle \delta (\mathrm{d} m m^{-1}), \mathrm{d} n n^{-1} \rangle \,, \quad \text{and} \\ \delta B &= v^{\widehat{a}} \wedge \delta A_{\widehat{a}} \,. \end{align} We also need the transformation of the one-form $v^{\widehat{a}}$ and its dual vector fields $\widehat{v}_{\widehat{a}}$. It is convenient not to treat them separately, but instead combine them into \begin{equation} V_{\widehat{A}}{}^{\widehat{I}} = \begin{pmatrix} \widehat{v}_{\widehat{a}}{}^{\widehat{i}} & 0 \\ 0 & v^{\widehat{a}}{}_{\widehat{i}} \end{pmatrix} \end{equation} and compute \begin{equation} \delta V_{\widehat{A}\widehat{B}} = \overline{M}_{\widehat{A}}{}^{\widehat{C}} \delta V_{\widehat{C}}{}^{\widehat{I}} V^{\widehat{D}}{}_{\widehat{I}} \overline{M}_{\widehat{B}\widehat{D}} = - 2 \iota_{E_{[\widehat{A}}} \delta v^{\widehat{c}} \overline{M}_{\widehat{B}]\widehat{c}} \,. \end{equation} In the same vein, we introduce \begin{equation} \delta B_{\widehat{A}\widehat{B}} = \overline{M}_{\widehat{A}}{}^{\widehat{c}} \overline{M}_{\widehat{B}}{}^{\dh} \iota_{\widehat{v}_{\dh}} \iota_{\widehat{v}_{\widehat{c}}} \delta B = -2 \iota_{E_{[\widehat{A}}} \delta A_{\widehat{c}} \overline{M}_{\widehat{B}]}{}^{\widehat{c}} \,. \end{equation} These two quantities allow us to write the shift of the mega generalised frame in the compact form \begin{equation} \delta \widehat{E}_{\widehat{A}\widehat{B}} := \delta \widehat{E}_{\widehat{A}}{}^{\widehat{I}} \widehat{E}_{\widehat{B}\widehat{I}} = \widetilde{M}_{\widehat{A}}{}^{\widehat{C}} \widetilde{M}_{\widehat{B}}{}^{\widehat{D}} \left( -2 \nabla'{}_{[\widehat{C}} \lambda_{\widehat{D}]} + \lambda_{\widehat{C}\widehat{D}} \right) \end{equation} with \begin{equation} \lambda_{\widehat{A}} = \begin{pmatrix} 0 & 0 & \lambda^\alpha \end{pmatrix} \qquad \text{and} \qquad \lambda_{\widehat{A}\widehat{B}} = \lambda^\gamma f_{\gamma\widehat{A}\widehat{B}}\,. \end{equation} Comparing these equations with \eqref{eqn:deltaE} and \eqref{eqn:lambdacomponents} in section~\ref{sec:gaugetransformations}, we notice that they are exactly same gauge transformations we discussed in section~\ref{sec:gaugetransformations}. Therefore, we know that they leave the constant generalised fluxes $f_{\widehat{A}\widehat{B}\widehat{C}}$ invariant. \section{Conclusion and outlook} The results in this paper reveal a new, deep connection between consistent truncations and dualities. In particular, we established in theorem~\ref{th:MAIN} that generalised cosets, which underlie the most general formulation of T-duality currently known (excluding mirror symmetry), automatically give rise to a large family of consistent truncations. The relation between them is not immediately obvious, and we therefore developed a new geometrical approach that makes the generalised structure group of the consistent truncation manifest by introducing an auxiliary space. We showed that the relation represented by the solid arrow in the diagram \begin{equation}\label{eqn:arrows} \begin{tikzpicture} \node[name=TD] {generalised T-dualities}; \node[at={(TD.east)},anchor=west,xshift=5em,name=trunc] {consistent truncations}; \draw[->] ($(TD.east)+(0,0.1)$) -- ($(trunc.west)+(0,0.1)$); \draw[dashed,->] ($(trunc.west)-(0,0.1)$) -- ($(TD.east)-(0,0.1)$); \end{tikzpicture} \end{equation} holds. This means that all currently known backgrounds which give rise to generalised T-dualities also produce consistent truncations. We did not yet manage to determine the fate of the other direction, represented by the dashed arrow. There are two interesting alternatives that our analysis currently hints at: \begin{enumerate} \item Our new approach to generalised cosets suggests that their description in generalised geometry automatically leads to curvatures with more than two derivatives. This brings into question what happens for consistent truncations in (super)gravity beyond the leading, two-derivative level. To the best of our knowledge there is currently not much known. The results from section~\ref{sec:higherderiv} indicate that there might be new constraints that complement those already known from theorem~\ref{th:CT}. At the moment it seems that we know more consistent truncations than generalised T-dualities. However, these new constraints could level the field and even in the end result in a one-to-one correspondence. \item Alternatively, there may also be some new generalised T-dualities, waiting to be found. The relation between the former and consistent truncations, sketched in diagram \eqref{eqn:dualitiesdiagram}, could then be a useful tool for searching for new dualities by studying existing examples of consistent truncations beyond generalised cosets. \end{enumerate} Besides these conceptual questions, there are also important applications for our results. They originate from both sides of~\eqref{eqn:arrows}. \begin{itemize} \item \underline{Generalised T-dualities:} One particularly active sub-branch of this field is concerned with the construction and analysis of integrable deformations. Although it is still not completely understood why, there is a very close relation between generalised T-dualities and integrable string theories. The latter are among the primary means of exploring new concepts in theoretical physics, because they provide a superior level of computational control in comparison to models which are not integrable. All results we derived here apply mostly to bosonic strings or the NS/NS sectors of superstrings. This is sufficient for answering conceptual questions, but for concrete applications, such as integrable Green-Schwarz strings required for probing the AdS/CFT correspondence, the full R/R sector is needed also, together with supersymmetry. We shall address this problem in a forthcoming paper \cite{supergrouppaper}, by extending the results of the current paper to supergroups, in a supersymmetric version of double field theory \cite{Butter:2022gbc} proposed by one of the authors. \item \underline{Consistent truncations:} The last years have seen significant progress in constructing and understanding consistent truncations. They are mostly centered around exceptional generalised geometry and exceptional field theory, and have applications reaching from the AdS/CFT correspondence to supporting or disproving swampland conjectures. Exceptional field theory goes beyond the string and incorporates membranes too. At the same time, T-duality enhances to U-duality, which takes into account S-duality as well. Another advantage is that R/R fluxes are automatically implemented. However, one should not think that they arise in the same ways as in the Green-Schwarz string discussed above. While, for the latter, supersymmetry and its fermionic degrees of freedom result in R/R fluxes, in exceptional geometry/field theory U-duality is the driving force. In particular, it relates strings and D-branes, which source R/R fluxes. One important consequence is that exceptional field theory still requires an explicit splitting of the spacetime, which makes it perfectly suited for studying consistent truncations in supergravity. Therefore, one should also try to extend the results we have presented here for O($n$,$n$) generalised geometry to the corresponding exceptional groups $E_{n+1(n+1)}$. If possible, and there are no obvious conceptual problems, this would open up a route to many new consistent truncations, with a wide range of applications. On the other hand, it will also shed new light on extending generalised T-duality to U-duality, which has been already initiated \cite{Sakatani:2019zrs,Malek:2019xrf} based on the results in double field theory. \end{itemize} We hope that there will be more insights into all of these points in the future, strengthening the connection between consistent truncations and dualities even further. \section*{Acknowledgements} We would like to thank Riccardo Borsato, Sybille Driesen, Gabriel Larios, Gr\'egoire Josse, and Yuho Sakatani for helpful discussions. Parts of this work were finished while FH was visiting the group of Riccardo Borsato at the University of Santiago de Compostela. FH is very grateful for the hospitality he received during this time and for all the discussions from which this work benefited significantly. The work of FH is supported by the SONATA BIS grant 2021/42/E/ST2/00304 from the National Science Centre (NCN), Polen. CNP is supported in part by DOE grant DE-FG02-13ER42020.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In the present paper we are interested in the spectral properties of Schrödinger operators with Aharonov--Bohm vector potential (see e.g. \cite{AB,MOR,AdamiTeta1998}), acting on functions $u \, : \, \mathbb{R}^2 \to \mathbb{C}$, i.e. \begin{equation} \label{eq:operator} (i\nabla + A_a^\alpha)^2 u := -\Delta u + 2 i A_a^\alpha \cdot \nabla u + |A_a^\alpha|^2 u, \end{equation} where the vector potential is singular at the point $a$ and takes the form \begin{equation} A_a^\alpha (x_1, x_2) = \alpha \left( - \frac{x_2 - a_2}{(x_1 - a_1)^2 + (x_2 - a_2)^2}, \frac{x_1 - a_1}{(x_1 - a_1)^2 + (x_2 - a_2)^2} \right). \end{equation} We address here its eigenvalues in the unit disk in the special case when circulation $\alpha=\tfrac12$. In order to pose the problem, we address here the general functional setting. If $\Omega \subset \mathbb{R}^2$ is open, bounded and simply connected, for $a \in \Omega$, we define the functional space $H^{1,a}_0(\Omega,\mathbb{C})$ as the completion of $C^\infty_c (\Omega \setminus \{a\}, \mathbb{C})$ with respect to the norm \[ \| u \|_{H^{1,a}(\Omega,\mathbb{C})} := \left( \| \nabla u \|_{L^2(\Omega,\mathbb{C}^2)}^2 + \| u \|_{L^2(\Omega,\mathbb{C})}^2 + \left\| \frac{u}{|x-a|} \right\|_{L^2(\Omega,\mathbb{C})}^2 \right)^{1/2}. \] When the circulation of the vector potential is not an integer, i.e.\ $\alpha \in \mathbb{R} \setminus \mathbb{Z}$, the latter norm is equivalent to the norm \begin{equation*} \| u \|_{H^{1,a}(\Omega,\mathbb{C})} = \left( \left\| ( i \nabla + A_a^\alpha ) u \right\|^2_{ L^2(\Omega,\mathbb{C}) } + \| u \|^2_{ L^2(\Omega,\mathbb{C}) } \right)^{\!\!1/2}, \end{equation*} by the Hardy type inequality proved in \cite{LaptevWeidl1999} (see also \cite{Balinsky} and \cite[Lemma 3.1 and Remark 3.2]{FelliFerreroTerracini2011}) \[ \int_{ D_r(a) } | (i\nabla + A_a^\alpha) u |^2 \,dx \geq \Big( \min_{j \in \mathbb{Z}} |j - \alpha| \Big)^2 \int_{ D_r(a) } \frac{ |u(x)|^2 }{|x-a|^2} \, dx, \] which holds for all $r > 0$, $a \in \mathbb{R}^2$ and $u \in H^{1,a}(D_r(a),\mathbb{C})$. Here we denote as $D_r(a)$ the disk of center $a$ and radius $r$. By a Poincaré type inequality, see e.g.\ \cite[A.3]{AbatangeloFelliNorisNys2016}, we can consider the equivalent norm on $H^{1,a}_0(\Omega, \mathbb{C})$ \[ \| u \|_{H^{1,a}_0(\Omega,\mathbb{C})} := \left( \| (i \nabla + A_a^\alpha) u \|_{L^2(\Omega,\mathbb{C}^2)}^2 \right)^{1/2}. \] We set the eigenvalue problem \begin{equation}\label{eq:eige1} \begin{cases} (i\nabla + A_a^\alpha)^2 \varphi = \lambda \varphi &\text{in }\Omega\\ \varphi=0 &\text{on }\partial \Omega, \end{cases} \end{equation} in a weak sense, that is $\lambda\in\mathbb{C}$ is an eigenvalue of problem \eqref{eq:eige1} if there exists $u\in H^{1,a}_{0}(\Omega,\mathbb{C})\setminus\{0\}$ (called eigenfunction) such that \[ \int_\Omega (i\nabla + A_a^\alpha) u \cdot \overline{(i \nabla + A_a^\alpha) v} \, dx = \lambda \int_\Omega u \overline{v} \,dx \quad \text{ for all } v \in H^{1,a}_{0}(\Omega,\mathbb{C}). \] From classical spectral theory, for every $(a,\alpha) \in \Omega \times \mathbb{R}$, the eigenvalue problem \eqref{eq:eige1} admits a diverging sequence of real and positive eigenvalues $\{\lambda_k{(a,\alpha)}\}_{k\geq 1}$ with finite multiplicity. These eigenvalues also have a variational characterization given by \begin{equation} \label{eq:var-char} \lambda_k(a, \alpha) = \min \Big\{ \sup_{u \in W_k \setminus\{0\}} \frac{ \int_{\Omega} |(i \nabla + A_a^\alpha) u|^2 }{\int_{\Omega} |u|^2 } \, : \, W_k \text{ is a linear $k$-dim subspace of } H^{1,a}_0(\Omega, \mathbb{C}), \Big\}. \end{equation} The paper \cite{AbatangeloNys2017} started the study of multiple eigenvalues of this operator with respect both to the position of the pole $a\in\Omega$ and the circulation $\alpha\in(0,1)$. It shows that multiple eigenvalues in general occur, even if under suitable assumptions they are very rare locally with respect to the two parameters. Here we just mention that these assumptions rely on the local behavior of the corresponding eigenfunctions. Moreover, to the best of our knowledge, no result is available about this rareness globally with respect to the two parameters, yet (on this general theme the interested reader can see \cite{Teytel1999}). As already mentioned, in this paper we consider the eigenvalue problem when $\Omega$ is the unit disk $D:=\{ (x_1,x_2)\in\mathbb{R}^2:\ {x_1}^2 + {x_2}^2 <1\}$ and the circulation $\alpha=\tfrac12$, i.e. the problem \begin{equation}\label{eq:eige} \begin{cases} (i\nabla + A_a)^2 \varphi = \lambda \varphi &\text{in }D\\ \varphi=0 &\text{on }\partial D. \end{cases} \end{equation} Throughout the paper we will erase the index $\alpha$, since it is fixed $\alpha=\tfrac12$. Because of this choice, in view of the correspondance between the magnetic problem and a real Laplacian problem on a double covering manifold (see \cite{HHOO1999,NT2010}), the operator \eqref{eq:operator} behaves as a \emph{real} operator. As a consequence, the nodal set of the eigenfunctions of operator \eqref{eq:operator} (i.e.\ the set of points where they vanish) is made of curves and not of isolated points as we could expect for complex valued functions. More specifically, the magnetic eigenfunctions always have an \emph{odd} number of nodal lines ending at the singular point $a$, and therefore at least one. In particular, we are going to focus our attention on the first eigenvalue to problem \eqref{eq:eige} and to study its multiplicity as the pole is moving from the origin around the disk. One can prove that this situation fulfills the assumptions of \cite[Theorem 1.6]{AbatangeloNys2017}, so that we know that the origin is {\em locally} the only point where the first eigenvalue is double. The main result of the paper is then the following \begin{theorem}\label{t:main} Let $\lambda(a)$ be the first eigenvalue of Problem \eqref{eq:eige}, i.e. $\lambda(a) := \lambda_1(a,\tfrac12)$. It is simple if and only if $a\neq0$. \end{theorem} We recall that the necessary condition is still known (see \cite{BNHHO09}). The new result is in fact the sufficient condition. The proof relies essentially in two steps. Firstly, we observe that eigenvalue functions are radial functions. Thanks to the local analytic regularity of eigenvalues with respect to analytic perturbations of the problem, the double eigenvalue for $a=0$ immediately splits in two locally analytic branches, which a priori can be the same. We will show that in fact they are really different by means of their Taylor expansion's first terms. The first derivatives of the two branches at the origin can be computed in terms of the corresponding eigenfunctions' asymptotic expansions in the spirit of \cite{AbatangeloNys2017}. This is the content of Section \ref{sec:split}. From a technical point of view, the disk gives us chances to compute eigenfunctions explicitly. This can be done by reducing problem \eqref{eq:eige} to a suitable weighted Laplace eigenvalue problem on the double covering and thanks to a certain spectral equivalence between Problem \ref{eq:eige} and suitable Laplace eigenvalue problems with mixed boundary conditions (see Section \ref{sec:explicit}). This is enough to prove that the first derivatives of the two aforementioned analytic branches computed at the origin are different, in particular with opposite sign, thus concluding Section \ref{sec:split}. The proof is concluded in Section \ref{sec:monotone} thanks to the continuity and monotonicity of the two branches up to the boundary of the domain. \subsection{Motivations} The interest in Aharonov-Bohm operators with half-integer circulation $\alpha=\tfrac12$ is motivated by the fact that nodal domains of their eigenfunctions are strongly related to spectral minimal partitions of the Dirichlet Laplacian, i.e. partitions of the domain minimizing the largest of the first eigenvalues on the components, in the special case when they present points of odd multiplicity (see \cite{BNHHO09}). We refer to papers \cite{BNH11, BNL14, H10, HHOO1999, HHO10, HHO13, HHOT09, HHOT10', HHOT10} for details on the deep relation between behavior of eigenfunctions, their nodal domains, and spectral minimal partitions. Related to this, the investigation carried out in \cite{AbatangeloFelli2015,AbatangeloFelli2016,AbatangeloFelliLena2016,BonnaillieNorisNysTerracini2014,Lena2015,NorisNysTerracini2015} highlighted a strong connection between nodal properties of eigenfunctions and asymptotic expansion of the function which maps the position of the pole $a$ in the domain to eigenvalues of the operator $(i\nabla +A_a)^2$ (see also \cite[Section 3]{Abatangelo2016} for a brief overview). The interest in the case of disk comes from the seminal papers \cite{HHO10} and \cite{BonnaillieHelffer2013}, where the so-called {\em Mercedes Star Conjecture} is introduced and discussed . Roughly speaking, the conjecture evokes that the spectral minimal 3-partition for the disk is in fact the {\em Mercedes Star} partition (see \cite[Figure 1]{BonnaillieHelffer2013}). For what concerns us, the disk gives us the opportunity to begin to tackle the interesting question about how rare multiple eigenvalues are with respect to the position of the pole globally in the domain. This is a first contribution to carry on the analysis started in \cite{AbatangeloNys2017}. On the other hand, the present paper is not dealing directly with the aforementioned conjecture, but it presents arguments which may be useful towards it. Finally, Theorem \ref{t:main} validates numerical simulations presented in \cite[Figure 1]{BonnaillieHelffer2013} for the first eigenvalue. \section{Explicit eigenfunctions and eigenvalues}\label{sec:explicit} The aim of this section is exploiting the symmetry of the disk in order to deduce peculiar features of eigenvalues to Problem \eqref{eq:eige}. Firstly, we recall that the map $a\mapsto \lambda_k(a)$ is a radial function for any $k\in\mathbb{N}\setminus \{0\}$. \subsection{Eigenfunctions in the double covering}\label{s:eigedoublecovering} In the papers \cite[Lemma 3.3]{HHOO1999} and \cite[Section 3]{NT2010} it is shown that in case of half-integer circulation the considered operator is equivalent to the standard Laplacian in the double covering. We then briefly recall some basic facts about Aharonov--Bohm operators. For any $a \in \mathbb{R}^2$, we define $\theta_a \, : \, \mathbb{R}^2 \setminus \{ a\} \to [0,2\pi)$ the polar angle centered at $a$ such that \begin{equation} \label{eq:polar-angle-a} \theta_a(a + r( \cos t, \sin t)) = t, \quad \text{ for } t \in [0, 2 \pi). \end{equation} Thus, it results (see \cite{FelliFerreroTerracini2011,HHOO1999,AbatangeloNys2017} for deeper explanations) that $2 A_a$ is gauge equivalent to $0$, as $ 2 A_a = - i e^{- i \theta_a} \nabla e^{i \theta_a} = \nabla \theta_a. $ We introduce the following antilinear and antiunitary operator \[ K_a u = e^{i \theta_a} \overline{u}. \] which depends on the position of the pole $a\in \Omega$ through the angle $\theta_a$. It results that $(i \nabla + A_a)^2$ and $K_a$ commute. The restriction of the scalar product to $L^2_{K_a}(\Omega):= \{ u \in L^2(\Omega,\mathbb{C}) \, : \, K_a u = u \}$ gives it the structure of a real Hilbert space and commutation implies that eigenspaces are stable under the action of $K_a$. Then we can find a basis of $L^2_{K_a} (\Omega)$ formed by $K_a$-real eigenfunctions of $(i \nabla + A_a)^2$. Being allowed to consider $K_a$-real eigenfunctions of $(i \nabla + A_a)^2$ allows to reduce the analysis to the real operator $(i \nabla + A_a)^2_{L^2_{K_a}(\Omega)}$ in the real space $L^2_{K_a}(\Omega)$. \begin{definition}(\cite[Lemma 2.3]{BonnaillieNorisNysTerracini2014}, \cite[Lemma 3.3]{HHOO1999})\label{d:doublecov} Let $\Omega\subset \mathbb{R}^2$ be an open simply connected and bounded set. Let $a\in \Omega$ be the pole of the operator. The {\em double covering} of $\Omega$ is the set \[ \tilde \Omega :=\{ y\in\mathbb{C}:\ y^2+a \in \Omega \}. \] \end{definition} \begin{lemma}(\cite[Lemma 2.3]{BonnaillieNorisNysTerracini2014})\label{l:bnnt} Let $\theta $ denote the angle of the polar coordinates in $\mathbb{R}^2$. If $\varphi$ is a $K_0$-real eigenfunction of the problem \eqref{eq:eige} for $a=0$, then the function \[ \psi(y) := e^{-i\theta(y)} \varphi(y^2) \text{ defined in }\tilde D \] is real valued and it is a solution to the problem \begin{equation}\label{eq:eigedouble} \begin{cases} -\Delta \psi = 4\lambda |y|^2\, \psi &\text{in }\tilde D\\ \psi=0 &\text{on }\partial \tilde D. \end{cases} \end{equation} \end{lemma} The second basic special feature of the disk is stated in the following \begin{lemma}\label{l:doubledisk} When $a=0$, the double covering of the unit disk $D$ can be identified with the twofold unit disk $D$. \end{lemma} \begin{proof} By Definition \ref{d:doublecov}, the double covering of the unit disk $D$ is \[ \Omega_0:=\{ y\in\mathbb{C}:\ y^2\in D \}. \] If we identify $\mathbb{C}$ with $\mathbb{R}^2$ in the standard way and consider the polar coordinates $(x_1,x_2)= \rho(\cos \theta,\sin\theta)$ we need that \[ (y_1,y_2)= \rho^2(\cos 2\theta,\sin2\theta)\in D. \] Then, observing that $y_1={x_1}^2 - {x_2}^2$ and $y_2=2x_1x_2$, a simple computation shows that ${y_1}^2 + {y_2}^2 = ({x_1}^2 + {x_2}^2)^2 <1$. \end{proof} Thanks to Lemma \ref{l:doubledisk}, we are in position to have an explicit expression of eigenfunctions to Problem \eqref{eq:eige} by means of Bessel and trigonometric functions. \begin{lemma}\label{l:doubleeigenf} If $\lambda_0$ is an eigenvalue of the problem \eqref{eq:eigedouble}, then it is double and its eigenfunctions take the form \[ \psi(\rho\cos\theta,\rho\sin\theta)= A\,J_{n/2}(\sqrt{\lambda_0}\rho^2) \cos(n\theta)\ +\ B\,J_{n/2}(\sqrt{\lambda_0}\rho^2)\sin(n\theta), \quad y=(\rho\cos\theta,\rho\sin\theta)\in \tilde D \] with $A,B\in\mathbb{R}$ and for some $n\in\mathbb{N}\setminus\{0\}$. Coming back to the original problem \eqref{eq:eige} on the original domain $D$, $\lambda_0$ is a double eigenvalue of the problem \eqref{eq:eige} and its eigenfunctions take the form \begin{equation}\label{eq:coeff2} \varphi(r\cos t,r\sin t)= e^{i\frac t2} J_{n/2}(\sqrt{\lambda_0}r) \left(A\,\cos\big(n\frac t2\big) + B\,\sin\big(n\frac t2\big)\right) \quad x=(r\cos t,r\sin t)\in D. \end{equation} \end{lemma} \begin{proof} Standard separation of variables $\psi(\rho\cos \theta,\rho\sin \theta)= u(\rho)v(\theta)$ leads to \begin{align*} v(\theta) = C \text{ or } v(\theta) = A\cos (n\theta) + B\sin(n\theta) \text{ for }n\in\mathbb{N} \end{align*} being $A,B,C\in\mathbb{R}$. The radial part produces a Bessel-type equation which reads \[ {\rho^2} \frac{d^2 u}{d\rho^2} + {\rho}\frac{d u}{d\rho} + (4\lambda_0 \rho^4 - n^2)u(\rho)=0 \] whose solutions are given by the so-called modified Bessel functions $ J_{n/2} (\sqrt{\lambda_0}\rho^2)$ or $ J_{-n/2} (\sqrt{\lambda_0}\rho^2)$ (for the modified Bessel functions, see the book by Watson \cite{Watson}). From the results in \cite{FelliFerreroTerracini2011,HHOO1999} we know that the eigenfunction is regular at the origin, so its radial part will be given in terms of the only $J_{n/2}$. Imposing the boundary conditions at $\rho=1$, we find $J_{n/2}(\sqrt{\lambda_0})=0$, which means that \[ \lambda_0 = {\alpha_{n/2,k}}^2 \quad \text{for some }k\in \mathbb{N}, \] where $\{\alpha_{n/2,k}\}_{k\in\mathbb{N}}$ denote the sequence of zeros of the Bessel function $J_{n/2}$. This concludes the first part of the statement. By virtue of Lemma \ref{l:bnnt} the rest of the statement follows. \end{proof} Note that the case of the disk is covered by the paper \cite{BNHHO09}: the fact that every eigenvalue is double was already provided by \cite[Proposition 5.3]{BNHHO09} in a more general context. Nevertheless, this is not the main point we are interested in. We recall that there is a connection between the zeros of the Bessel functions (to this aim we refer to \cite[Chapter XV]{Watson}): in particular, the positive zeros of the Bessel function $J_{\frac n2}$ are interlaced with those of the Bessel function $J_{\frac{n+1}2}$ and by Porter's Theorem the positive zeros of $J_{\frac n2}$ are interlaced with those of the Bessel function $J_{\frac{n+2}2}$. Then, denoting $z_{\frac n2,k}$ the $k$-th zero of the Bessel function $J_{\frac n2}$, we have \[ 0 < z_{\frac12,1} < z_{\frac32,1} < z_{\frac52,1} < z_{\frac12,2} < z_{\frac72,1} < \ldots \] \begin{remark}\label{r:ms} The first case is then $(n,k)=(1,1)$ and it corresponds to the double first eigenvalue for the Aharonov--Bohm operator with half-integer circulation and pole at the origin. The second case is $n=3$ and $k=1$, which produces the double third eigenvalue. \end{remark} \subsection{Isospectrality and consequences on eigenvalues} We introduce two auxiliary problems. Let us denote $D^+:= \{ (x_1,x_2)\in D:\ x_2>0 \}$. \begin{definition}(\cite{Lena2015})\label{d:auxiliary} The two problems \begin{align}\label{eq:dnnd} \begin{cases} -\Delta u = \lambda u &\text{in }D^+\\ u=0 &\text{on }\partial D^+ \setminus (t,1]\times\{0\}\\ \frac{\partial u}{\partial \nu} =0 &\text{on }(t,1]\times\{0\} \end{cases} \quad \begin{cases} -\Delta u = \lambda u &\text{in }D^+\\ u=0 &\text{on }\partial D^+ \setminus [-1,t)\times\{0\} \\ \frac{\partial u}{\partial \nu} =0 &\text{on }[-1,t)\times\{0\} \end{cases} \end{align} are called {\em Dirichlet--Neumann} and {\em Neumann--Dirichlet} eigenvalue problem for the Laplacian in the upper half-disk, respectively. \end{definition} We recall the following result proved in \cite{BNHHO09} (see also \cite[Proposition 5.3]{Lena2015}). \begin{lemma}(\cite{BNHHO09})\label{l:isospectrality} Let $a=(t,0)$ for $t\in[0,1]$. The set of the eigenvalues of Problem \eqref{eq:eige} $\{\lambda_j(t)\}_{j\geq1}$ is the union (counted with multiplicity) of the sequences $\{\lambda_j^{DN}(t)\}_{j\geq1}$ and $\{\lambda_j^{ND}(t)\}_{j\geq1}$, being $\{\lambda_j^{DN}(t)\}_{j\geq1}$ and $\{\lambda_j^{ND}(t)\}_{j\geq1}$ the set of the eigenvalues of the Dirichlet--Neumann and Neumann--Dirichlet problems \eqref{eq:dnnd} respectively. \end{lemma} By virtue of the latter Lemma \ref{l:isospectrality} and the continuity result stated in \cite{Lena2015} for Aharonov--Bohm eigenvalues (see also \cite[Section 10]{DaugeHelffer1993}), the following result holds true. \begin{lemma}(\cite{Lena2015}, \cite{DaugeHelffer1993})\label{l:continuity} Fix $k\in\mathbb{N}\setminus\{0\}$ and denote $\lambda_k^{DN}(t)$ ($\lambda_k^{ND}(t)$) the $k$-th eigenvalue of the Dirichlet--Neumann problem in \eqref{eq:dnnd} (Neumann--Dirichlet problem, respectively). Then the maps \[ t \mapsto \lambda^{DN}_k(t) \quad t \mapsto \lambda^{ND}_k(t) \qquad \text{are continuous in }(-1,1). \] \end{lemma} We observe that in this case the standard Courant--Fisher characterization of eigenvalues establishes \begin{equation}\label{eq:courantfisher} \lambda_k^{DN}(t) = \min_{\stackrel{E\subset\mathcal H_t \text{ subspace}}{\mathrm{dim}E=k}}\ \max_{u\in E \setminus\{0\}}\frac{\int_{\Omega} |\nabla u|^2}{\int_{\Omega} u^2}, \end{equation} where \[ \mathcal H_t :=\left\{ u\in H^1(\Omega):\ u=0 \text{ on }\partial \Omega\setminus (t,1)\times \{0\} \text{ and }\frac{\partial u}{\partial\nu}=0 \text{ on } (t,1)\times \{0\} \right\}, \] analogously for $\lambda_k^{ND}(t)$. \begin{remark}\label{r:monotonicity} By \eqref{eq:courantfisher}, if $-1<t_1 \leq t_2<1$ then $\mathcal H_{t_2}\subseteq \mathcal H_{t_1}$ and then $\lambda_j^{DN}(t_2)\geq\lambda_j^{DN}(t_1)$ for any $j\geq1$, i.e. the function $t\mapsto \lambda_j^{DN}(t)$ is monotone non-decreasing for any $j\geq1$. As well, the function $t\mapsto \lambda_j^{ND}(t)$ is monotone non-increasing for any $j\geq1$. In the case of the disk, one can even see it by noting that $\lambda_j^{DN}(t) = \lambda_j^{ND}(-t)$ because of the symmetry of the disk. \end{remark} Another consequence of Lemma \ref{l:isospectrality} is the following result. \begin{lemma}\label{l:t=1} Let us consider the problems in \eqref{eq:dnnd}. For $t=1$ we have $\lambda_1^{DN}(1) = \lambda_2^{ND}(1)$. \end{lemma} We note the latter result can be proved by direct computation, in terms of Bessel-type functions, as in the proof of Lemma \ref{l:doubleeigenf}. Now, if $a=(t,0)$ let us denote $\lambda_j(t)$ the $j$-th eigenvalue of the problem \eqref{eq:eige}. By Lemma \ref{l:isospectrality}, symmetry of the disk and Remark \ref{r:monotonicity} (non-increasing monotonicity of the map $t\mapsto \lambda_1^{ND}(t)$), we have \begin{equation}\label{eq:lambda1} \lambda_1(t) = \min\left \{ \lambda_1^{DN}(t),\ \lambda_1^{ND}(t) \right\} = \min\left \{ \lambda_1^{ND}(-t),\ \lambda_1^{ND}(t) \right\} = \lambda_1^{ND}(t) \quad \text{for any }t\in [0,1). \end{equation} We have as well \begin{equation}\label{eq:lambda2} \lambda_2(t) = \min\left \{ \lambda_1^{DN}(t),\ \lambda_2^{ND}(t) \right\} = \lambda_1^{DN}(t) \quad \text{for any }t\in [0,1), \end{equation} where the last equivalence follows from Lemma \ref{l:continuity}, Remark \ref{r:monotonicity} and Lemma \ref{l:t=1}, recalling that $\lambda_2^{ND}(0)=\lambda_2^{DN}(0)>\lambda_1^{DN}(0)=\lambda_1^{ND}(0)$. Indeed, if by contradiction there exists $\bar t \in(0,1)$ such that $\lambda_2^{ND}(\bar t)< \lambda_1^{DN}(\bar t)$, then Remark \ref{r:monotonicity} implies $\lambda_2^{ND}(1)\leq \lambda_2^{ND}(\bar t)< \lambda_1^{DN}(\bar t)\leq \lambda_1^{DN}(1)$ which denies Lemma \ref{l:t=1}. \section{Immediate splitting of the eigenvalue}\label{sec:split} The aim of this section is to show that as the pole is moved, then the double eigenvalue split and produce two locally {\em different} analytic branches of eigenvalues. The first one is stricly monotone decreasing whereas the second one is stricly monotone increasing in a small neighborhood of the origin, with respect to the distance of the pole from the origin. In order to do this, we are going to exploit the results achieved in Section \ref{sec:explicit}. In addition, by rotational symmetry, we will restrict ourselves to the case when the pole is moving along $x_1$-axis. \subsection{ Analytic perturbation with respect to the pole}\label{sec:analytic} As already pointed out in the Introduction (see also \cite[Section 2]{AbatangeloNys2017}, \cite{Lena2015}), as the pole moves not only the operator changes, but also this produces different variational settings: functional spaces depend on the position of the pole. In order to study the moving pole's effect on eigenvalues, first of all we need to define a family of diffeomorphisms which allow us to set the eigenvalue problem on a fixed domain, in the spirit of \cite{AbatangeloNys2017,Lena2015}. We consider a particular case of the local perturbation introduced in \cite[Subsection 5.1]{AbatangeloNys2017}. Let $\xi \in C^{\infty}_c(\mathbb{R}^2)$ be a cut-off function such that \begin{equation}\label{eq:xi} 0 \leq \xi \leq 1,\quad \xi \equiv 1 \text{ on } D_{1/4}(0), \quad \xi \equiv 0 \text{ on } \mathbb{R}^2\setminus D_{1/2}(0), \quad |\nabla \xi| \leq 16 \text{ on }\mathbb{R}^2. \end{equation} For $a \in D_{1/4}(0)$, we define the local transformation $\Phi_a \in C^\infty (\mathbb{R}^2, \mathbb{R}^2)$ by \begin{equation}\label{eq:Phi_a} \Phi_a (x) = x + a \xi(x). \end{equation} Notice that $\Phi_a(0) = a$ and that $\Phi_a'$ is a perturbation of the identity \[ \Phi_a' = I + a \otimes \nabla\xi = \begin{pmatrix} 1 + a_1 \frac{\partial \xi}{\partial x_1} & a_1 \frac{\partial \xi}{\partial x_2} \\ a_2 \frac{\partial \xi}{\partial x_1} & 1 + a_2 \frac{\partial \xi}{\partial x_2} \end{pmatrix}, \] so that \begin{equation}\label{eq:det} J_a(x):=\det(\Phi_a')= 1 + a_1 \frac{\partial \xi}{\partial x_1} + a_2 \frac{\partial \xi}{\partial x_2} = 1 + a \cdot \nabla \xi. \end{equation} Let $R = 1/128$. Then, if $a \in D_R(0)$, $\Phi_a$ is invertible, its inverse $\Phi_a^{-1}$ is also $C^\infty(\mathbb{R}^2, \mathbb{R}^2)$, see e.g. \cite[Lemma 1]{Micheletti1972}. Then, as in \cite[Section 7]{AbatangeloNys2017}, we define $\gamma_a \, : \, L^2(\Omega,\mathbb{C}) \to L^2(\Omega,\mathbb{C})$ by \begin{equation}\label{eq:gamma_a} \gamma_a (u) = \sqrt{J_a} (u \circ \Phi_a), \end{equation} where $J_a$ is defined in \eqref{eq:det}. Such a transformation $\gamma_a$ defines an isomorphism preserving the scalar product in $L^2(\Omega,\mathbb{C})$. Moreover, since $\Phi_a$ and $\sqrt{J_a}$ are $C^\infty$, $\gamma_a$ defines an algebraic and topological isomorphism of $H^{1,a}_0(\Omega,\mathbb{C})$ in $H^{1,0}_0(\Omega,\mathbb{C})$ and inversely with $\gamma_a^{-1}$, see \cite[Lemma 2]{Micheletti1972}. We notice that $\gamma_a^{-1}$ writes \[ \gamma_a^{-1} (u) = \left(\sqrt{J_a \circ \Phi_a^{-1}} \right)^{-1} ( u \circ \Phi_a^{-1}). \] With a little abuse of notation we define the application $\gamma_a \, : \, ( H^{1,a}_0(\Omega,\mathbb{C}) )^\star \to (H^{1,0}_0(\Omega,\mathbb{C}))^\star$ in such a way that \begin{equation} \label{eq:gamma-a-dual} \phantom{a}_{( H^{1,0}_0(\Omega,\mathbb{C}) )^\star }\langle \gamma_a(f), v \rangle_{H^{1,0}_0(\Omega,\mathbb{C})} = \phantom{a}_{( H^{1,0}_0(\Omega,\mathbb{C}) )^\star }\langle f, \gamma_a^{-1}(v) \rangle_{H^{1,a}_0(\Omega,\mathbb{C})}, \end{equation} for any $f \in ( H^{1,a}_0(\Omega,\mathbb{C}) )^\star$, and inversely for $\gamma_a^{-1} \, : \, ( H^{1,0}_0(\Omega,\mathbb{C}) )^\star \to (H^{1,a}_0(\Omega,\mathbb{C}))^\star$. We define the new operator $G_{a} \, : \, H^{1,0}_0(\Omega,\mathbb{C}) \to (H^{1,0}_0(\Omega,\mathbb{C}))^\star$ by the following relation \begin{equation} \label{eq:operator-G-a} G_{a} \circ \gamma_a = \gamma_a \circ (i \nabla + A_a)^2, \end{equation} being $\gamma_a$ defined in \eqref{eq:gamma_a} and \eqref{eq:gamma-a-dual}. By \cite[Lemma 3]{Micheletti1972} the domain of definition of $G_{a}$ is given by $\gamma_a(H^{1,a}_0(\Omega, \mathbb{C}))$, it coincides with $H^{1,0}_0(\Omega,\mathbb{C})$. Moreover, $G_{a}$ and $(i \nabla + A_a^\alpha)^2$ are \emph{spectrally equivalent}, in particular they have the same eigenvalues with the same multiplicity and the map $a \mapsto G_{a}$ is $C^\infty (D_R(0), BL( H^{1,0}_0(\Omega,\mathbb{C}), (H^{1,0}_0(\Omega,\mathbb{C}))^\star )$. Now, let us consider the special case $a=(a_1,0)$, which means moving the pole just along the $x_1$-axis. For simplicity, in the following we denote \[ t:=a_1 \quad \text{and} \quad G_t:=G_{(a_1,0)}. \] Then, following the same argument in \cite[Section 4]{Lena2015}, the family $t\mapsto G_{t}$ is an {\em analytic family of type (B) in the sense of Kato} with respect to the variable $t$. In order to prove it, by definition (see \cite[Chapter 7, Section 4]{Kato}) we need to show that the quadratic form $\mathfrak g_t$ associated to $G_t$, defined as \[ \mathfrak g_t(u)=\phantom{a}_{( H^{1,0}_0(\Omega) )^\star }\langle G_t u,u \rangle_{H^{1,0}_0(\Omega)}, \] is an {\em analytic family of type (a) in the sense of Kato}, i.e. it fulfills the following two conditions: \begin{itemize} \item[(i)] the form domain is independent of $t$; \item[(ii)] the form $\mathfrak g_t(u)$ is analytic with respect to the parameter $t$ for any $u$ in the form domain. \end{itemize} The first assertion follows from \eqref{eq:operator-G-a} (see \cite[Section 7.1]{AbatangeloNys2017}), whereas the second one follows from \cite[Lemmas 5.1,5.2,7.1]{AbatangeloNys2017} possibly shrinking the interval $(-R,R)$ where the parameter $t$ is varying. The Kato-Rellich perturbation theory gives some information in the case when the considered eigenvalue is not simple. Let $\lambda_0$ be any double eigenvalue of $G_0$. Then there exist a family of $2$ linearly independent $L^2(\Omega)$-normalized eigenfunctions $\{u_j (t)\}_{j=1,2}$ relative to the associated eigenvalue $\mu_j(t)$ for $j=1,2$ which depend analytically on the parameter $t$ and such that for $j=1,2$ $\mu_j (0) = \lambda_0$ and $\mu_j (t)$ is an eigenvalue of the operator $G_t$. We recall that $G_t$ has the same eigenvalues with the same multiplicity as operator $(i\nabla+A_{(t,0)})^2$. Note that the $2$ functions $t \mapsto \mu_1(t),\ t\mapsto \mu_2 (t)$ are not {\em a priori} necessarily distinct. The Feynman-Hellmann formula (see \cite[Chapter VII, Section 3]{Kato}) then tells us that \begin{equation}\label{eq:FHformula} \mu_j'(0) = \phantom{a}_{(H^{1,0}_0(\Omega,\mathbb{C}))^\star} \left\langle G'(0)[t]\,u_j(0), u_j(0) \right \rangle_{H^{1,0}_0(\Omega,\mathbb{C})}. \end{equation} \subsection{Computing the derivative at $0$ of the two branches} The aim of this subsection is showing that the two (\textit{a priori} not necessarily different) analytic branches $t\mapsto \mu_j(t)$, $j=1,2$, have a different derivative at $t=0$. In order to do this, we refer to the paper \cite{AbatangeloNys2017}. In particular, for $j=1,2$ \eqref{eq:FHformula} together with \cite[Lemma 8.2, Lemma 8.6]{AbatangeloNys2017} yield \begin{equation}\label{eq:FHformula2} \mu_j'(0) = \phantom{a}_{(H^{1,0}_0(\Omega,\mathbb{C}))^\star} \left\langle G'(0)[t]\,u_j(0), u_j(0) \right \rangle_{H^{1,0}_0(\Omega,\mathbb{C})} = \frac\pi2 ({A_j}^2 - {B_j}^2) \end{equation} where $A_j,B_j\in\mathbb{R}$ are the coefficients in the expansion \eqref{eq:coeff2}. What is left is detecting $u_j(0)$ for $j=1,2$. To this aim, we are going to exploit the symmetry property of the domain with respect to the $x_1$-axis. We refer to \cite{BNHHO09} and define the antiunitary antilinear operator $\Sigma:\ L^2 (D)\to L^2 (D)$ \[ \Sigma u := \bar u \circ \sigma, \] being $\sigma(x_1,x_2)=(x_1,-x_2)$. We have that $\Sigma$ and $(i\nabla +A_0)^2$ commute (see \cite[Section 5]{BNHHO09}), as well as $\Sigma $ and $K_0$. This means $L^2_{K_0}$ is stable under the action of $\Sigma$. Thus, if we write \begin{equation*}L^2_{K,\Sigma}(\Omega):= L^2_K(\Omega)\cap \mbox{ker}(\Sigma-Id) \qquad L^2_{K,a\Sigma}(\Omega):= L^2_K(\Omega)\cap \mbox{ker}(\Sigma+Id),\end{equation*} then we have the orthogonal decomposition \begin{equation}\label{eq:orthogonal} L^2_K(\Omega)=L^2_{K,\Sigma}(\Omega)\oplus L^2_{K,a\Sigma}(\Omega). \end{equation} We can therefore define the operators $(i\nabla +A_0)^2_{\Sigma}$ and $(i\nabla +A_0)^2_{a\Sigma}$, restrictions of $(i\nabla +A_0)^2$ to $L^2_{K,\Sigma}(\Omega)$ and $L^2_{K,a\Sigma}(\Omega)$ respectively. The spectrum of $(i\nabla +A_0)^2$ is the union (counted with multiplicities) of the spectra of $(i\nabla +A_0)^2_{\Sigma}$ and $(i\nabla +A_0)^2_{a\Sigma}$. Lemma \ref{l:isospectrality} is then completed by the following result. \begin{lemma}(\cite[Propositions 5.7 and 5.8]{BNHHO09})\label{l:isospectrality_eigenf} If $u$ is a $K_0$-real $\Sigma$-invariant eigenfunction of $(i\nabla +A_0)^2$ then the restriction to $D^+$ of $e^{-\tfrac i2 \theta_0}u$ is a real eigenfunction of the Dirichlet--Neumann problem in \eqref{eq:dnnd}. If $u$ is a $K_0$-real $a\Sigma$-invariant eigenfunction of $(i\nabla +A_0)^2$ then the restriction to $D^+$ of $e^{-\tfrac i2 \theta_0}u$ is a real eigenfunction of the Neumann--Dirichlet problem in \eqref{eq:dnnd}. Conversely, if $v$ is an eigenfunction of the Dirichlet--Neumann problem in $D^+$, if $\tilde v$ is the even extension of $u$ in $D$, the function $e^{\tfrac i2 \theta_0}\tilde v$ is a ($K_0$-real) $\Sigma$-invariant eigenfunction of $(i\nabla +A_0)^2$. If $v$ is an eigenfunction of the Neumann--Dirichlet problem in $D^+$, if $\tilde v$ is the odd extension of $u$ in $D$, the function $e^{\tfrac i2 \theta_0}\tilde v$ is a ($K_0$-real) $a\Sigma$-invariant eigenfunction of $(i\nabla +A_0)^2$. \end{lemma} In view of \eqref{eq:gamma_a} and \eqref{eq:operator-G-a} we have that $u_1(0)$ and $u_2(0)$ are two $K_0$-real linearly independent eigenfunctions of $(i\nabla+A_0)^2$. Therefore via \eqref{eq:orthogonal}, Lemma \ref{l:isospectrality_eigenf} and Lemma \ref{l:doubleeigenf} $u_1(0)$ is $a\Sigma$-invariant whereas $u_2(0)$ is $\Sigma$-invariant. From Lemma \ref{l:doubleeigenf}, Remark \ref{r:ms} and the asymptotic expansion of the Bessel functions (see e.g. \cite[Chapter 3]{Watson}) there exist $A,B\in\mathbb{R}\setminus \{0\}$ such that \begin{align} u_1(r\cos t,r\sin t)= e^{i \frac{t}{2}} r^{1/2} \,B \sin \frac{t}{2} + O(r^{\tfrac32}) \quad \text{ as } r \to 0^+ \\ u_2(r (\cos t, \sin t)) = e^{i \frac{t}{2}} r^{1/2} \,A \cos \frac{t}{2} + O(r^{\tfrac32}) \quad \text{ as } r \to 0^+. \end{align} Equations \eqref{eq:FHformula} and \eqref{eq:FHformula2} immediately give \begin{align} &\mu_1'(0) = - \frac\pi2 \,{B}^2 \ <\ 0,\\ &\mu_2'(0) = \frac\pi2 \,{A}^2 \ >\ 0, \end{align} thus concluding the first step towards our main result. \section{Conclusion}\label{sec:monotone} We are now in position to conclude the proof of our main result. \begin{proof}[Proof of Theorem \ref{t:main}] Thanks to rotational invariance of eigenvalues, it is sufficient to prove that if $a=(t,0)$ and $\lambda_1(t)$ is the first eigenvalue of the problem \eqref{eq:eige}, which is double for $t=0$, then $\lambda_1(t)$ is simple for any $t\in(0,1)$. By the results of Section \ref{sec:split}, there exists $\delta>0$ such that the two analytic eigenbranches $\mu_1(t)$ and $\mu_2(t)$ are different for $t\in(-\delta,\delta)$, since \begin{equation}\label{eq:derivate} \mu_1'(0)<0 \quad \text{whereas} \quad \mu_2'(0)>0. \end{equation} Moreover, we have that \begin{equation}\label{eq:rami} \lambda_1(t) = \begin{cases} \mu_2(t) &\text{ for }t\in(-\delta,0]\\ \mu_1(t) &\text{ for }t\in[0,\delta), \end{cases} \end{equation} since $\mu_j(t)$ are eigenvalues of the operator $G_t$ which is spectral equivalent to $(i\nabla +A_a)^2$ with $a=(t,0)$. In order to prove that it is simple for $t\in(0,1)$, it will be sufficient to prove that $\lambda_1(t)<\lambda_2(t)$ for $t\in(0,1)$. This is guaranteed by \eqref{eq:rami}, \eqref{eq:derivate},\eqref{eq:lambda1}, \eqref{eq:lambda2} and Remark \ref{r:monotonicity}. This concludes the proof of Theorem \ref{t:main}. \end{proof} \begin{figure}[h] \begin{center} \psset{unit=0.5cm} \begin{pspicture}(-10,0)(10,10) \psaxes*[labels=none,ticks=none]{->}(0,0)(-10,0)(10,10) \uput[270](10,0){$t$} \uput[270](0,5){$\lambda_1^{ND}(0)=\lambda_1^{DN}(0)$} \uput[0](3.5,6){$\lambda_{1}^{DN}(t)$} \uput[0](-6.5,6){$\lambda_{1}^{ND}(t)$} \uput[0](4.3,1.5){$\lambda_{1}^{ND}(t)$} \uput[0](-5.3,1.5){$\lambda_{1}^{DN}(t)$} \psline[linewidth=.5pt,linestyle=dashed](8,0)(8,7) \psline[linewidth=.5pt,linestyle=dashed](-8,0)(-8,7) \psline[linewidth=.5pt,linestyle=dashed](-8,7)(8,7) \psdot[dotstyle=*](0,4.4) \uput[270](8,0){$1$} \uput[270](-8,0){$-1$} \uput[270](0,0){$0$} \pscurve(-8,0.8)(-7,1)(6,6.9)(7,7) \pscurve(-7,7)(-6,6.9)(7,1)(8,0.8) \pscurve(0,9)(0.8,9)(6,7)(7,7) \psline(7,7)(8,7) \pscurve(-7,7)(-6,7)(-0.8,9)(0,9) \psline(-7,7)(-8,7) \psdot[dotstyle=*](0,9) \uput[90](0,8.6){$\lambda_2^{ND}(0)=\lambda_2^{DN}(0)$} \uput[0](3.5,8){$\lambda_{2}^{ND}(t)$} \uput[0](-7.5,8){$\lambda_{2}^{DN}(t)$} \end{pspicture} \end{center} \caption{The double first Aharonov--Bohm eigenvalue $\lambda_1(0)$ splits in two different branches of simple eigenvalues up to the boundary.}\label{fig:1} \end{figure} \section*{Acknowledgements} The author would like to thank dr. Manon Nys for many discussions, as well as the anonymous referee of the paper \cite{AbatangeloNys2017} who excited our interest about the theme. The author is partially supported by the project ERC Advanced Grant 2013 n. 339958: ``Complex Patterns for Strongly Interacting Dynamical Systems -- COMPAT'', by the PRIN2015 grant ``Variational methods, with applications to problems in mathematical physics and geometry'' and by the 2017-GNAMPA project ``Stabilità e analisi spettrale per problemi alle derivate parziali''.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A \emph{partition} $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_{\ell})$ of $n$ is a non-increasing positive integer sequence whose sum of the parts $\lambda_i$ is $n$. We denote that $\lambda_i \in \lambda$ and visualize a partition $\lambda$ with the \emph{Young diagram} $D(\lambda)$. For a partition $\lambda$, $\lambda'$ is called the \emph{conjugate} of $\lambda$ if $D(\lambda')$ is the reflection of $D(\lambda)$ across the main diagonal, and $\lambda$ is called \emph{self-conjugate} if $\lambda=\lambda'$. An $(i,j)$-box of $D(\lambda)$ is the box at the $i$th row from the top and the $j$th column from the left. The \emph{hook length} of an $(i,j)$-box, denoted by $h_{i,j}(\lambda)$, is the total number of boxes on the right and the below of the $(i,j)$-box and itself, and the \emph{hook set} $\mathcal{H}(\lambda)$ of $\lambda$ is the set of hook lengths of $\lambda$. We say that a partition $\lambda$ is an \emph{$s$-core} if $ks\notin\mathcal{H}(\lambda)$ for all $k \in \mathbb{N}$ and is an \emph{$(s_1, s_2, \dots, s_p)$-core} if it is an $s_i$-core for all $i=1,2,\dots,p$. Figure \ref{fig:ex} illustrates the Young diagram of a partition and a hook length. \begin{figure}[ht!] \centering \small{ $D(\lambda)=$~\begin{ytableau} ~&~&~&~&~&~&~ \\ ~&~&~&~&~&~ \\ ~&~&~ \\ ~&~ \end{ytableau} \qquad \qquad \begin{ytableau} ~&*(gray!50)9&*(gray!50)&*(gray!50)&*(gray!50)&*(gray!50)&*(gray!50) \\ ~&*(gray!50)&~&~&~&~ \\ ~&*(gray!50)&~ \\ ~&*(gray!50) \end{ytableau}} \caption{The Young diagram of the partition $\lambda=(7,6,3,2)$ and a hook length $h_{1,2}(\lambda)=9$.} \label{fig:ex} \end{figure} There have been active research on the number of simultaneous core partitions and self-conjugate simultaneous core partitions since Anderson \cite{Anderson} counted the number of $(s,t)$-core partitions for coprime $s$ and $t$. For more information, see \cite{AL,FMS,Wang} for example. In this paper, we investigate the three different types of core partitions, which are called bar-core partitions, core shifted Young diagrams, and doubled distinct core partitions. Researchers have been studied them independently but they are inevitably related to each other. We first give the definitions of the three objects that we only deal with under the condition that the partition is \emph{strict}, which means that each part is all distinct. For a strict partition $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_{\ell})$, an element of the set \[ \{\lambda_i+\lambda_{i+1}, \lambda_i+\lambda_{i+2}, \dots, \lambda_i+\lambda_{\ell} \} \cup \left( \{ \lambda_{i}, \lambda_{i}-1, \dots, 1 \} \setminus \{\lambda_{i}-\lambda_{i+1}, \dots, \lambda_{i}-\lambda_{\ell}\} \right) \] is called a \emph{bar length} in the $i$th row. A strict partition $\lambda$ is called an \emph{$\overline{s}$-core} (\emph{$s$-bar-core}) if $s$ is not a bar length in any row in $\lambda$. For example, the sets of bar lengths in every row of $\lambda=(7,6,3,2)$ are $\{13,10,9,7,6,3,2\}$, $\{9,8,6,5,2,1\}$, $\{5,3,2\}$, and $\{2,1\}$. Thus, $\lambda$ is an $\overline{s}$-core partition for $s=4,11,12$, or $s\geq 14$. The \emph{shifted Young diagram} $S(\lambda)$ of a strict partition $\lambda$ is obtained from $D(\lambda)$ by shifting the $i$th row to the right by $i-1$ boxes for each $i$. The \emph{shifted hook length} $h^*_{i,j}(\lambda)$ of an $(i,j)$-box in $S(\lambda)$ is the number of boxes on its right, below and itself, and the boxes on the $(j+1)$st row if exists. For example, the left diagram in Figure \ref{fig:bar} shows the shifted Young diagram of the partition $(7,6,3,2)$ with the shifted hook lengths. The shifted hook set $\mathcal{H}^*(\lambda)$ is the set of shifted hook lengths in $S(\lambda)$. A shifted Young diagram $S(\lambda)$ is called an \emph{$s$-core shifted Young diagram}, shortly $s$-CSYD, if none of the shifted hook lengths of $S(\lambda)$ are divisible by $s$. Sometimes we say that ``$\lambda$ is an $s$-CSYD'' instead of ``$S(\lambda)$ is an $s$-CSYD''. Given a strict partition $\lambda=(\lambda_1, \lambda_2, \ldots, \lambda_{\ell})$, the \emph{doubled distinct partition} of $\lambda$, denoted by $\lambda \lambda$, is a partition whose Young diagram $D(\lambda \lambda)$ is defined by adding $\lambda_i$ boxes to the $(i-1)$st column of $S(\lambda)$. In other words, the Frobenius symbol of the doubled distinct partition $\lambda\la$ is given by \[ \begin{pmatrix} \lambda_1 & \lambda_2 & \cdots &\lambda_{\ell}\\ \lambda_1 -1 & \lambda_2 -1 & \cdots & \lambda_{\ell} -1 \end{pmatrix}. \] The doubled distinct partition $\lambda\la$ is called a \emph{doubled distinct $s$-core} if none of the hook lengths are divisible by $s$. Note that the hook set of $D(\lambda\la)$ that is located on the right of the main diagonal is the same as $\mathcal{H}^*(\lambda)$. Indeed, the hook lengths on the $(\ell+1)$st column of $D(\lambda\la)$ are the parts of $\lambda$ and the deletion of this column from $D(\lambda\la)$ gives a self-conjugate partition. See Figure \ref{fig:bar} for example. \begin{figure}[ht!] {\small $S(\lambda)=~$\begin{ytableau} 13&10&9&7&6&3&2 \\ \none&9&8&6&5&2&1 \\ \none&\none&5&3&2 \\ \none&\none&\none&2&1 \\ \end{ytableau} \qquad \qquad $D(\lambda\la)=~$\begin{ytableau} *(gray!60)14&13&10&9&*(gray!20)7&6&3&2 \\ 13&*(gray!60)12&9&8&*(gray!20)6&5&2&1 \\ 10&9&*(gray!60)6&5&*(gray!20)3&2 \\ 9&8&5&*(gray!60)4&*(gray!20)2&1 \\ 6&5&2&1 \\ 3&2 \\ 2&1 \end{ytableau}} \caption{The shifted Young diagram $S(\lambda)$ with the shifted hook lengths and the doubled distinct partition $\lambda\la$ with the hook lengths for the strict partition $\lambda=(7,6,3,2)$.}\label{fig:bar} \end{figure} We extend the definition of simultaneous core partitions to bar-core partitions and CSYDs. We use the following notations for the variety sets of core partitions, \begin{align*} \mathcal{SC}_{(s_1, s_2, \dots, s_p)} &: \text{~the set of self-conjugate $(s_1, s_2, \dots, s_p)$-cores},\\ \mathcal{BC}_{(s_1, s_2, \dots, s_p)} &: \text{~the set of $(\overline{s_1}, \overline{s_2},\dots, \overline{s_p})$-cores},\\ \mathcal{CS}_{(s_1, s_2, \dots, s_p)} &: \text{~the set of $(s_1, s_2, \dots, s_p)$-CSYDs},\\ \mathcal{DD}_{(s_1, s_2, \dots, s_p)} &: \text{~the set of doubled distinct $(s_1, s_2, \dots, s_p)$-cores}. \end{align*} There are a couple of results on counting the number of simultaneous core partitions of the three objects, bar-cores, CSYDs, and doubled distinct cores. Bessenrodt and Olsson \cite{BO} adopted the Yin-Yang diagram to count the number of $(\ols{s\phantom{t}},\overline{t})$-core partitions for odd numbers $s$ and $t$, Wang and Yang \cite{WY} counted the same object when $s$ and $t$ are in different parity, and Ding \cite{Ding} counted the number of $(s,s+1)$-CSYDs (as far as the authors know these are the only counting results on the three objects known until now). Our main goal is to fill out all the possible results we could get on $(s,t)$-cores and $(s,s+d,s+2d)$-cores for the three objects by constructing some bijections. Additionally, we hire a well-known object so called self-conjugate core partitions to enumerate the number of such core partitions. For instance, bar-core partitions and self-conjugate core partitions are related to each other; Yang \cite[Theorem 1.1]{Yang} constructed a bijection between the set of self-conjugate $s$-cores and that of $\overline{s}$-cores for odd $s$; Gramain, Nath, and Sellers \cite[Theorem 4.12]{GNS} gave a bijection between self-conjugate $(s,t)$-core partitions and $(\ols{s\phantom{t}},\overline{t})$-core partitions, where both $s$ and $t$ are coprime and odd. The following theorems are the main results in this paper. \begin{thm}\label{thm:main1} For coprime positive integers $s$ and $t$, the number of doubled distinct $(s,t)$-core partitions is \[ |\mathcal{DD}_{(s,t)}|=\binom{\lfloor (s-1)/2 \rfloor + \lfloor (t-1)/2 \rfloor}{\lfloor (s-1)/2 \rfloor}, \] and the number of $(s,t)$-CSYDs is \[ |\mathcal{CS}_{(s,t)}|=\binom{\floor*{(s-1)/2} + \floor*{t/2} -1}{\floor*{(s-1)/2}} +\binom{\floor*{s/2} + \floor*{(t-1)/2}-1}{\floor*{(t-1)/2}}. \] \end{thm} \begin{thm}\label{thm:unifying} Let $s$ and $d$ be coprime positive integers. \begin{enumerate} \item[(a)] For odd $s$ and even $d$, \begin{align*} |\mathcal{BC}_{(s,s+d,s+2d)}|&=|\mathcal{CS}_{(s,s+d,s+2d)}|=|\mathcal{DD}_{(s,s+d,s+2d)}|\\ &=\sum_{i=0}^{(s-1)/2}\binom{(s+d-3)/2}{\lfloor i/2 \rfloor}\binom{(s+d-1)/2-\lfloor i/2 \rfloor}{(s-1)/2-i}. \end{align*} \item[(b)] For odd numbers $s$ and $d$, \begin{align*} &|\mathcal{BC}_{(s,s+d,s+2d)}|=|\mathcal{CS}_{(s,s+d,s+2d)}|\\ &~~=\sum_{i=0}^{(s-1)/2}\binom{(d-1)/2+i}{\lfloor i/2 \rfloor}\left( \binom{(s+d-2)/2}{(d-1)/2+i} + \binom{(s+d-4)/2}{(d-1)/2+i}\right). \end{align*} \item[(c)] For even $s$ and odd $d$, \begin{align*} |\mathcal{BC}_{(s,s+d,s+2d)}|=&\sum_{i=0}^{s/2} \binom{(s+d-1)/2}{\lfloor i/2 \rfloor, \lfloor (d+i)/2\rfloor, s/2 -i}, \\ |\mathcal{CS}_{(s,s+d,s+2d)}|=&\sum_{i=0}^{(s-2)/2}\binom{(s+d-3)/2}{\lfloor i/2 \rfloor}\binom{(s+d-3)/2-\lfloor i/2 \rfloor}{(s-2)/2-i}\\ &+\sum_{i=0}^{(s-2)/2}\binom{(s+d-5)/2}{\lfloor i/2 \rfloor}\binom{(s+d-1)/2-\lfloor i/2 \rfloor}{(s-2)/2-i}. \end{align*} \item[(d)] For odd $d$, \[ |\mathcal{DD}_{(s,s+d,s+2d)}|=\sum_{i=0}^{ \lfloor(s-1)/2\rfloor} \binom{\lfloor (s+d-2)/2\rfloor }{\lfloor i/2 \rfloor, \lfloor (d+i)/2\rfloor, \lfloor(s-1)/2\rfloor -i}. \] \end{enumerate} \end{thm} This paper is organized as follows: In Section \ref{sec:2}, we obtain useful propositions involving the three objects which are used frequently throughout this paper. Restricted those objects by the size of partitions, we get the generating functions of $\overline{s}$-cores and $s$-CSYDs for even $s$. Section \ref{sec:double} includes connections between the sets of $NE$ lattice paths and the three objects with the condition being $(s,t)$-cores. We consider the Yin-Yang diagrams to find the number of doubled distinct $(s,t)$-core partitions and the number of $(s,t)$-CSYDs by constructing each bijection to a certain set of $NE$ lattice paths. In Section \ref{sec:triple}, we describe the relations between free Motzkin paths and the three objects under the condition of being $(s,s+d,s+2d)$-cores by using the $(\overline{s+d},d)$-abacus diagram, the $(\overline{s+d},d)$-abacus function, and their properties. From the bijections we set up, we count the number of each $(s,s+d,s+2d)$-core partitions as a result of the number of corresponding free Motzkin paths. \section{Properties and generating functions}\label{sec:2} We begin this section by showing a property which follows straightly from the definition of the bar lengths and the shifted hook lengths. \begin{lem}\label{lem:barhook} Let $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_{\ell})$ be a strict partition. The set of bar lengths in the $i$th row of $\lambda$ is equal to the set of the shifted hook lengths in the $i$th row of $S(\lambda)$. \end{lem} \begin{proof} Let $\mu \coloneqq (\lambda_1 - \ell +1, \lambda_2 -\ell +2, \dots, \lambda_{\ell})$. By the definition of the shifted hook lengths, we have \[ h_{i,j}^*(\lambda)=\begin{cases} \lambda_i+\lambda_{j+1} & \text{ if }~ i \le j \le \ell-1,\\ h_{i, j-\ell+1}(\mu) & \text{ if }~ \ell \le j \le \lambda_i. \end{cases} \] We show that the statement is true for the first row. Assume, on the contrary, that $h_{1,j}^*(\lambda)=h_{1, j-\ell+1}(\mu)=\lambda_1-\lambda_k=h_{1,1}(\mu)-h_{k,1}(\mu)$ for some $k$. Then, by the definition of hook lengths, \[ \mu_1+\mu_{j-\ell+1}'-(j-\ell+1) = (\mu_1+\mu_1'-1)-(\mu_k+\mu_1' -k), \] which implies that $\mu_k+\mu_{j-\ell+1}'-(k+j-\ell)=h_{k, j-\ell+1}(\mu)=0$. Since the hook lengths are always nonzero, we get a contradiction. Similarly, this argument works for the $i$th row in general. \end{proof} \subsection{Characterizations} In the theory of core partitions, a partition $\lambda$ is an $s$-core if $s\notin \mathcal{H}(\lambda)$ or, equivalently, if $ms\notin\mathcal{H}(\lambda)$ for all $m$. In \cite[p. 31]{MY}, Morris and Yaseen gave a corollary that $\lambda$ is an $\overline{s}$-core if and only if none of the bar lengths in the rows of $\lambda$ are divisible by $s$. However, Olsson \cite[p. 27]{Olsson-book} pointed out that this corollary is not true when $s$ is even. In Figure \ref{fig:bar}, one can see that $\lambda=(7,6,3,2)$ is a $\overline{4}$-core partition, but $h^*_{2,3}(\lambda)=8$. Later, Wang and Yang \cite{WY} gave a characterization of $\overline{s}$-core partitions. \begin{prop}\cite{WY}\label{prop:bar} For a strict partition $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_{\ell})$, $\lambda$ is an $\overline{s}$-core if and only if all the following hold: \begin{enumerate} \item[(a)] $s \notin \lambda$. \item[(b)] If $\lambda_i \in \lambda$ with $\lambda_i>s$, then $\lambda_i -s \in \lambda$. \item[(c)] If $\lambda_i, \lambda_j \in \lambda$, then $\lambda_i+\lambda_j \not\equiv 0 \pmod{s}$ except when $s$ is even and $\lambda_i,\lambda_j \equiv s/2 \pmod{s}$. \end{enumerate} \end{prop} We extend this characterization to doubled distinct $s$-core partitions and $s$-CSYDs. \begin{prop}\label{prop:dd} For a strict partition $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_{\ell})$, $\lambda\la$ is a doubled distinct $s$-core partition if and only if all the following hold: \begin{enumerate} \item[(a)] $\lambda$ is an $\overline{s}$-core. \item[(b)] $s/2 \notin \lambda$ for even $s$. \end{enumerate} \end{prop} \begin{proof} It is known by Lemma \ref{lem:barhook} and the definition of $\lambda\la$ that $$\mathcal{H}(\lambda\la)=\mathcal{H}^*(\lambda) \cup \{h_{i,i}(\lambda\la)=2\lambda_i \mid i=1,2,\dots,\ell \}.$$ Therefore, for an $\overline{s}$-core partition $\lambda$ and even $s$, $s/2 \in \lambda$ if and only if $s \in \mathcal{H}(\lambda\la)$, meaning that $\lambda\la$ is not a doubled distinct $s$-core. \end{proof} \begin{prop}\label{prop:CSYD} For a strict partition $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_{\ell})$, $S(\lambda)$ is an $s$-CSYD if and only if all the following hold: \begin{enumerate} \item[(a)] $\lambda$ is an $\overline{s}$-core. \item[(b)] $3s/2 \notin \lambda$ for even $s$. \end{enumerate} \end{prop} \begin{proof} Assume first that $S(\lambda)$ is an $s$-CSYD. By Lemma \ref{lem:barhook}, $\lambda$ is an $\overline{s}$-core. If $3s/2 \in \lambda$, then $s/2 \in \lambda$ by Proposition \ref{prop:bar} (b). This implies that there is a bar length of $2s$ in $\lambda$, which means that $S(\lambda)$ is not an $s$-CSYD. Conversely, suppose that two conditions (a) and (b) hold. If $\lambda$ is an $\overline{s}$-core but $S(\lambda)$ is not an $s$-CSYD, then there is a box $(i,j)$ in $S(\lambda)$ such that $h^*_{i,j}(\lambda)=sk$ for some $k\geq 2$. It follows from the definition of the bar lengths that there exist $\lambda_i,\lambda_j \in \lambda$ satisfying $\lambda_i+\lambda_j=sk$. Also, by Proposition~\ref{prop:bar}~(c), we deduce that $s$ is even and $\lambda_i,\lambda_j \equiv s/2 \pmod s$. Hence, when $\lambda_i > \lambda_j$, we can write $\lambda_i = (2m+1)s/2$ for some $m\geq 1$, and therefore $3s/2 \in \lambda$ by Proposition~\ref{prop:bar}~(b). It leads to a contradiction. \end{proof} \begin{rem} \label{rmk:oddoddodd} From the characterizations we observe that, for coprime odd integers $s_1,s_2,\dots,s_p$, we have \[ \mathcal{BC}_{(s_1, s_2, \dots, s_p)}=\mathcal{CS}_{(s_1, s_2, \dots, s_p)}=\mathcal{DD}_{(s_1, s_2, \dots, s_p)}. \] \end{rem} \subsection{Generating functions} In this subsection, we consider the generating functions of the following numbers, \begin{align*} sc_s(n) &: \text{~the number of self-conjugate $s$-core partitions of $n$},\\ bc_s(n) &: \text{~the number of $\overline{s}$-core partitions of $n$},\\ cs_s(n) &: \text{~the number of $s$-CSYDs of $n$},\\ dd_s(n) &: \text{~the number of doubled distinct $s$-core partitions of $n$}. \end{align*} Garvan, Kim, and Stanton \cite{GKS} obtained the generating functions of the numbers $sc_s(n)$ and $dd_s(n)$ by using the concept of the core and the quotient of a partition. As usual, we use the well-known $q$-product notation $$(a;q)_n=\prod\limits_{i=0}^{n-1}(1-aq^i) \quad \text{and} \quad (a;q)_{\infty}=\lim\limits_{n \to \infty} (a;q)_n \quad \text{for} ~ |q|<1.$$ \begin{prop}\cite[Equations (7.1a), (7.1b), (8.1a), and (8.1b)]{GKS}\label{prop:gf_GKS} For a positive integer $s$, we have \begin{align*} \sum_{n=0}^{\infty}sc_s(n)q^n&=\begin{dcases*} \frac{(-q;q^2)_\infty(q^{2s};q^{2s})^{(s-1)/2}_\infty}{(-q^s;q^{2s})_\infty} & \text{if $s$ is odd},\\ (-q;q^2)_\infty(q^{2s};q^{2s})^{s/2}_\infty & \text{if $s$ is even,} \end{dcases*}\\[2ex] \sum_{n=0}^{\infty}dd_s(n)q^n&=\begin{dcases*} \frac{(-q^2;q^2)_\infty(q^{2s};q^{2s})^{(s-1)/2}_\infty}{(-q^{2s};q^{2s})_\infty} & \text{if $s$ is odd},\\ \frac{(-q^2;q^2)_\infty(q^{2s};q^{2s})^{(s-2)/2}_\infty}{(-q^{s};q^{s})_\infty} & \text{if $s$ is even}. \end{dcases*} \end{align*} \end{prop} The generating function of the numbers $bc_s(n)$ for odd $s$ was found by Olsson \cite{Olsson-book}. Note that for odd $s$, it is clear that $bc_s(n)=cs_s(n)$ as a partition $\lambda$ is an $\overline{s}$-core if and only if it is an $s$-CSYD by Propositions \ref{prop:bar} and \ref{prop:CSYD}. \begin{prop}\cite[Proposition (9.9)]{Olsson-book} \label{prop:gf_O} For an odd integer $s$, we have \[ \sum_{n=0}^{\infty}bc_{s}(n)q^n=\sum_{n=0}^{\infty}cs_{s}(n)q^n=\frac{(-q;q)_\infty(q^{s};q^{s})^{(s-1)/2}_\infty}{(-q^s;q^s)_\infty}. \] \end{prop} From Propositions \ref{prop:gf_GKS} and \ref{prop:gf_O}, we also see that $dd_s(2n)=bc_{s}(n)$ when $s$ is odd. We now give generating functions of the numbers $bc_{s}(n)$ and $cs_s(n)$ for even $s$ by using Propositions \ref{prop:bar}, \ref{prop:dd}, and \ref{prop:CSYD}. \begin{prop}\label{prop:bargen} For an even integer $s$, we have \[ \sum_{n=0}^{\infty}bc_{s}(n)q^n=\frac{(-q;q)_\infty(q^{s};q^{s})^{(s-2)/2}_\infty}{(-q^{s/2};q^{s/2})_\infty}\sum_{n\geq 0} q^{sn^2/2}. \] \end{prop} \begin{proof} Let $s$ be a fixed even integer. From Propositions \ref{prop:bar} and \ref{prop:dd} we first see that the number of $\overline{s}$-core partitions $\lambda$ of $n$ for which $s/2\notin \lambda$ is equal to $dd_s(2n)$. We also notice that for a positive integer $i$, the number of $\overline{s}$-core partitions $\lambda$ of $n$ for which $(2i-1)s/2\in \lambda$ and $(2i+1)s/2\notin \lambda$ is equal to $dd_s(2n-i^2s)$ since $(2i-1)s/2\in \lambda$ implies $(2i-3)s/2, (2i-5)s/2, \dots, s/2 \in \lambda$ by Proposition \ref{prop:bar} (b). Therefore, we have \[ bc_s(n)=dd_s(2n)+dd_s(2n-s)+dd_s(2n-4s)+\cdots=\sum_{i\geq0} dd_s(2n-i^2s), \] which completes the proof from Proposition \ref{prop:gf_GKS}. \end{proof} \begin{prop} For an even integer $s$, we have \[ \sum_{n=0}^{\infty}cs_s(n)q^n=\frac{(-q;q)_\infty(q^{s};q^{s})^{(s-2)/2}_\infty}{(-q^s;q^{s/2})_\infty}. \] \end{prop} \begin{proof} Similar to the proof of Proposition \ref{prop:bargen}, $cs_s(n)=dd_s(2n)+dd_s(2n-s)$ for even $s$ by Propositions \ref{prop:dd} and \ref{prop:CSYD}. \end{proof} \section{Enumeration on $(s,t)$-cores} \label{sec:double} A \emph{north-east ($NE$) lattice path} from $(0,0)$ to $(s,t)$ is a lattice path which consists of steps $N=(0,1)$ and $E=(1,0)$. Let $\mathcal{NE}(s,t)$ denote the set of all $NE$ lattice paths from $(0,0)$ to $(s,t)$. In this section, we give $NE$ lattice path interpretations for $(\ols{s\phantom{t}},\overline{t})$-core related partitions and count such paths. Combining the results on self-conjugate $(s,t)$-core partitions and $(\ols{s\phantom{t}},\overline{t})$-core partitions which are independently proved by Ford, Mai, and Sze \cite[Theorem 1]{FMS}, Bessenrodt and Olsson \cite[Theorem 3.2]{BO}, and Wang and Yang \cite[Theorem 1.3]{WY}, we get the following theorem. \begin{thm}\cite{FMS,BO,WY}\label{thm:selfbar} For coprime positive integers $s$ and $t$, \[ |\mathcal{BC}_{(s,t)}|=|\mathcal{SC}_{(s,t)}|=\binom{\lfloor s/2 \rfloor + \lfloor t/2 \rfloor}{\lfloor s/2 \rfloor}. \] \end{thm} Also, Ding \cite{Ding} examined the Hasse diagram of the poset structure of an $(s,s+1)$-CSYD to count them. \begin{thm}\cite[Theorem 3.5]{Ding}\label{thm:Ding} For any positive integer $s\geq 2$, \[ |\mathcal{CS}_{(s,s+1)}|=\binom{s-1}{\floor*{(s-1)/2}}+\binom{s-2}{\floor*{(s-1)/2}}. \] \end{thm} From now on, we count doubled distinct $(s,t)$-cores and $(s,t)$-CSYDs. When $s$ and $t$ are both odd, the numbers of such partitions are already known by Remark \ref{rmk:oddoddodd}. We focus on the case when $s$ is even and $t$ is odd. For $(\ols{s\phantom{t}},\overline{t})$-cores with coprime odd integers $s$ and $t$ such that $1<s<t$, Bessenrodt and Olsson \cite{BO} defined the Yin-Yang diagram as an array $A(s,t)=\{A_{i,j}\}$, where \[ A_{i,j}\coloneqq-\frac{s+1}{2}t+js+it \qquad \text{ for } 1 \le i \le \frac{s-1}{2} \text{ and } 1 \le j \le \frac{t-1}{2}. \] The location of $A_{i,j}$ is at the intersection of the $i$th row from the top and the $j$th column from the left. For fixed $s$ and $t$, they showed that the set of parts consisting of all possible $(\ols{s\phantom{t}},\overline{t})$-core partitions is equal to the set of absolute values of $A_{i,j}$ in $A(s,t)$. They also gave a bijection $\phi$ between $\mathcal{BC}_{(s,t)}$ and the set $\mathcal{NE}((t-1)/2, (s-1)/2)$ in the Yin-Yang diagram from the lower-left corner to the upper-right corner. For an $NE$ lattice path $P$ in the Yin-Yang diagram $A(s,t)$, let $M(P)$ denote the set consisting of positive entries above $P$ and the absolute values of negative entries below $P$. According to the bijection $\phi$, if $\lambda$ is an $(\ols{s\phantom{t}},\overline{t})$-core partition and $P=\phi(\lambda)$ is the corresponding path in $A(s,t)$, then $M(P)$ is equal to the set of parts in $\lambda$. For $(\ols{s\phantom{t}},\overline{t})$-cores with coprime even $s$ and odd $t$, Wang and Yang \cite{WY} defined the Yin-Yang diagram to be an array $B(s,t)$, where \[ B_{i,j}\coloneqq-\frac{s+2}{2}t+js+it \qquad \text{ for } 1 \le i \le \frac{s}{2} \text{ and } 1 \le j \le \frac{t-1}{2}, \] and gave a bijection $\psi$ between the sets $\mathcal{BC}_{(s,t)}$ and $\mathcal{NE}((t-1)/2, s/2)$ in $B(s,t)$ from the lower-left corner to the upper-right corner. Again, the map $\psi$ sends an $(\ols{s\phantom{t}},\overline{t})$-core $\lambda$ to the path $Q=\psi(\lambda)$ in $B(s,t)$, where $M(Q)$ is equal to the set of parts in $\lambda$. See Figure \ref{fig:YinYang} for example. \begin{figure}[ht!] \centering \begin{tikzpicture}[scale=.5] \node at (0,0){ \begin{tabular}{ c c c c c c } -43 & -34 & -25 & -16 & -7 & 2\\ -30 & -21 & -12 & -3 & 6 & 15\\ -17 & -8 & 1 & 10 & 19 & 28\\ -4 & 5 & 14 & 23 & 32 & 41 \end{tabular}}; \node at (0,-3) {$A(9,13)$}; \end{tikzpicture} \qquad \quad \begin{tikzpicture}[scale=.5] \filldraw[color=gray!40] (-5.3,-2) rectangle (-3.5, -1) (-1.7,0) rectangle (1.9, 1) (3.7,1) rectangle (5.5, 2) ; \foreach \i in {0,1,2,3,4} \draw[dotted] (-5.3,-2+\i)--(5.5,-2+\i); \foreach \i in {0,1,2,3,4,5,6} \draw[dotted] (-5.3+1.8*\i,-2)--(-5.3+1.8*\i,2); \draw[thick] (-5.3,-2)--(-5.3,-1)--(-1.7,-1)--(-1.7,1)--(5.5,1)--(5.5,2); \node at (0,0){ \begin{tabular}{ c c c c c c } -43 & -34 & -25 & -16 & -7 & 2\\ -30 & -21 & -12 & -3 & 6 & 15\\ -17 & -8 & 1 & 10 & 19 & 28\\ -4 & 5 & 14 & 23 & 32 & 41 \end{tabular}}; \node at (0,-3) {$P=NEENNEEEEN$}; \end{tikzpicture}\\[2ex] \begin{tikzpicture}[scale=.5] \node at (0,0){ \begin{tabular}{ c c c c c c c} -44 & -36 & -28 & -20 & -12 & -4 \\ -31 & -23 & -15 & -7 & 1 & 9 \\ -18 & -10 & -2 & 6 & 14 & 22\\ -5 & 3 & 11 & 19 & 27 & 35 \end{tabular}}; \node at (0,-3) {$B(8,13)$}; \end{tikzpicture} \qquad \quad \begin{tikzpicture}[scale=.5] \filldraw[color=gray!40] (-5.3,-2) rectangle (-3.5, -1) (-1.7,-1) rectangle (0.1,0) (-1.7,0) rectangle (1.9, 1) ; \foreach \i in {0,1,2,3,4} \draw[dotted] (-5.3,-2+\i)--(5.5,-2+\i); \foreach \i in {0,1,2,3,4,5,6} \draw[dotted] (-5.3+1.8*\i,-2)--(-5.3+1.8*\i,2); \draw[thick] (-5.3,-2)--(-5.3,-1)--(-1.7,-1)--(-1.7,1)--(5.5,1)--(5.5,2); \node at (0,0){ \begin{tabular}{ c c c c c c c} -44 & -36 & -28 & -20 & -12 & -4 \\ -31 & -23 & -15 & -7 & 1 & 9 \\ -18 & -10 & -2 & 6 & 14 & 22\\ -5 & 3 & 11 & 19 & 27 & 35 \end{tabular}}; \node at (0,-3) {$Q=NEENNEEEEN$}; \end{tikzpicture} \caption{The Yin-Yang diagrams $A(9,13)$ and $B(8,13)$, and the paths $P=\phi((12,4,3,2))$ and $Q=\psi((15,7,5,2))$.}\label{fig:YinYang} \end{figure} Now we give path interpretations for doubled distinct $(s,t)$-cores and $(s,t)$-CSYDs for even $s$ and odd $t$ by using this Yin-Yang diagram $B(s,t)$ together with Propositions~\ref{prop:dd} and \ref{prop:CSYD}. \begin{thm}\label{thm:dd2} For even $s$ and odd $t$ that are coprime, there is a bijection between the sets $\mathcal{DD}_{(s,t)}$ and $\mathcal{NE}((t-1)/2,(s-2)/2)$. In addition, \[ |\mathcal{DD}_{(s,t)}|=\binom{(s-2)/2 + (t-1)/2}{(s-2)/2}. \] \end{thm} \begin{proof} Recall the bijection $\psi$ between the sets $\mathcal{BC}_{(s,t)}$ and $\mathcal{NE}((t-1)/2, s/2)$ in the Yin-Yang diagram $B(s,t)$ from the lower-left corner to the upper-right corner. To find the desired bijection, we restrict the domain of $\psi$ under the set $\mathcal{DD}_{(s,t)}$. By Proposition~\ref{prop:dd}~(b) and the fact that $B_{1,(t-1)/2}=-s/2$, we see that $Q=\psi(\lambda)$ corresponds to a partition $\lambda$ such that $\lambda\la$ is a doubled distinct $(s,t)$-core if and only if $Q$ is a path in the set $\mathcal{NE}((t-1)/2, s/2)$ in the Yin-Yang diagram $B(s,t)$ that ends with a north step $N$, equivalently $\mathcal{NE}((t-1)/2, (s-2)/2)$. Hence, the number of doubled distinct $(s,t)$-core partitions is given by $|\mathcal{NE}((t-1)/2, (s-2)/2)|$. \end{proof} \begin{thm}\label{thm:CSYD2} For even $s$ and odd $t$ that are coprime, there is a bijection between the sets $\mathcal{CS}_{(s,t)}$ and \[ \mathcal{NE}((t-1)/2,(s-2)/2)\cup \mathcal{NE}( (t-3)/2,(s-2)/2). \] In addition, \[ |\mathcal{CS}_{(s,t)}|=\binom{(s-2)/2 + (t-1)/2}{(s-2)/2}+\binom{(s-2)/2 + (t-3)/2}{(s-2)/2}. \] \end{thm} \begin{proof} It follows from Propositions~\ref{prop:bar} and \ref{prop:CSYD} that $\lambda$ is an $(s,t)$-CSYD if and only if $\lambda$ is an $(\ols{s\phantom{t}},\overline{t})$-core partitions and $3s/2 \notin \lambda$. We first note that $\lambda\la$ is a doubled distinct $(s,t)$-core partition if and only if $\lambda$ is an $(s,t)$-CSYD and $s/2 \notin \lambda$. Indeed, there is a bijection between the set of $(s,t)$-CSYDs $\lambda$ with $s/2 \notin \lambda$ and the set $\mathcal{NE}((t-1)/2, (s-2)/2)$ by Theorem~\ref{thm:dd2}. Therefore, it is sufficient to show that there is a bijection between the set of $(s,t)$-CSYDs $\lambda$ with $s/2 \in \lambda$ and the set $\mathcal{NE}((t-3)/2,(s-2)/2)$. Note that for an $(s,t)$-CSYD $\lambda$ such that $s/2 \in \lambda$, $Q=\psi(\lambda)$ is a path in the set $\mathcal{NE}((t-1)/2, s/2)$ in the Yin-Yang diagram $B(s,t)$ that must end with an east step preceded by a north step since $B_{1,(t-1)/2}=-s/2$ and $B_{1,(t-3)/2}=-3s/2$. Then, we get a bijection between the set of $(s,t)$-CSYDs $\lambda$ with $s/2 \in \lambda$ and the set $\mathcal{NE}((t-3)/2,(s-2)/2)$. Moreover, the number of $(s,t)$-CSYDs is obtained by counting the corresponding lattice paths. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main1}] It is followed by Remark \ref{rmk:oddoddodd}, Theorems \ref{thm:selfbar}, \ref{thm:dd2}, and \ref{thm:CSYD2} \end{proof} \section{Results on $(s,s+d,s+2d)$-cores}\label{sec:triple} A path $P$ is called a \emph{free Motzkin path of type $(s,t)$} if it is a path from $(0,0)$ to $(s,t)$ which consists of steps $U=(1,1)$, $F=(1,0)$, and $D=(1,-1)$. Let $\mathcal{F}(s,t)$ be the set of free Motzkin paths of type $(s,t)$. For given sets $A,B$ of sequences of steps, we denote $\mathcal{F}(s,t \,;\, A,B)$ the set of free Motzkin paths $P$ of type $(s,t)$, where $P$ does not start with the sequences in the set $A$ and does not end with the sequences in the set $B$. Recently, Cho and Huh \cite[Theorem 8]{ChoHuh} and Yan, Yan, and Zhou \cite[Theorems 1.1 and 1.2]{YYZ2} found a free Motzkin path interpretation of self-conjugate $(s,s+d,s+2d)$-core partitions and enumerated them independently. \begin{thm}\cite{ChoHuh,YYZ2} For coprime positive integers $s$ and $d$, there is a bijection between the sets $\mathcal{SC}_{(s,s+d,s+2d)}$ and \begin{enumerate} \item[(a)] $\mathcal{F}\left((s+d-1)/2,-d/2\right)$ if $s$ is odd and $d$ is even; \item[(b)] $\mathcal{F}\left((s+d)/2,-(d+1)/2 \,;\, \emptyset,\{U\}\right)$ if $s$ is odd and $d$ is odd; \item[(c)] $\mathcal{F}\left((s+d+1)/2,-(d+1)/2 \,;\, \emptyset,\{U\}\right)$ if $s$ is even and $d$ is odd. \end{enumerate} In addition, the number of self-conjugate $(s,s+d,s+2d)$-core partitions is \[ \displaystyle |\mathcal{SC}_{(s,s+d,s+2d)}|= \begin{cases} &\displaystyle\sum_{i=0}^{\lfloor s/4 \rfloor} \binom{(s+d-1)/2 }{i, d/2+i, (s-1)/2-2i} \qquad \text{if $d$ is even,}\\ &\\ &\displaystyle\sum_{i=0}^{\lfloor s/2\rfloor} \binom{\lfloor (s+d-1)/2 \rfloor}{\lfloor i/2 \rfloor, \lfloor (d+i)/2\rfloor, \lfloor s/2 \rfloor -i} \quad \text{if $d$ is odd.} \end{cases} \] \end{thm} Similar to the construction in \cite{ChoHuh}, we give an abacus construction and a path interpretation for each set of $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partitions, doubled distinct $(s,s+d,s+2d)$-core partitions, and $(s,s+d,s+2d)$-CSYDs.\\ \subsection{$(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partitions}\label{sec:bar} For coprime positive integers $s$ and $d$, let the \emph{$(\overline{s+d},d)$-abacus diagram} be a diagram with infinitely many rows labeled by $i \in \mathbb{Z}$ and $\floor*{(s+d+2)/2}$ columns labeled by $j \in \{0,1,\dots,\floor*{(s+d)/2}\}$ from bottom to top and left to right whose position $(i,j)$ is labeled by $(s+d)i+dj$. The following proposition guarantees that, for each positive integer $h$, there is at least one position on the $(\overline{s+d},d)$-abacus diagram labeled by either $h$ or $-h$. \begin{prop} \label{prop:injection} Let $s$ and $d$ be coprime positive integers and $h$ be a positive integer. For a given $(\overline{s+d},d)$-abacus diagram, we get the following properties. \begin{itemize} \item[(a)] If $h\not\equiv 0, (s+d)/2 \pmod{s+d}$, then there exists a unique position labeled by $h$ or $-h$. \item[(b)] If $h\equiv 0 \pmod{s+d}$, then there are two positions labeled by $h$ and $-h$, respectively, in the first column. \item[(c)] If $s+d$ is even and $h\equiv (s+d)/2 \pmod{s+d}$, then there are two positions labeled by $h$ and $-h$, respectively, in the last column. \end{itemize} \end{prop} \begin{proof} In the $(\overline{s+d},d)$-abacus diagram, the absolute values of the labels in column $j$ are congruent to $dj$ or $-dj$ modulo $s+d$. We claim that $dj$ and $-dj$ for $j\in\{0,1,\dots, \floor*{(s+d)/2}$\} are all incongruent modulo $s+d$ except $j=0$ or $(s+d)/2$. For $0 \leq j_1 < j_2\leq \floor*{(s+d)/2}$, it is clear that $dj_1$ and $dj_2$ are incongruent modulo $s+d$. Suppose $dj_1 \equiv -dj_2 \pmod{s+d}$ for some $0 \leq j_1,j_2\leq \floor*{(s+d)/2}$, it follows that $d(j_1+j_2)$ is a multiple of $s+d$. Since $s$ and $d$ are coprime, $d(j_1+j_2)$ is not a multiple of $s+d$ except for $j_1=j_2=0$ or $j_1=j_2=(s+d)/2$, where both $s$ and $d$ are odd. This completes the proof of the claim. The claim implies that, for every positive integer $h$, there exists $j\in\{0,1,\dots, \floor*{(s+d)/2}\}$ such that $h$ is congruent to $dj$ or $-dj$ modulo $s+d$. In addition, if $h\not\equiv 0, (s+d)/2 \pmod{s+d}$, then there exists a unique position labeled by $h$ or $-h$ in the $(\overline{s+d},d)$-abacus diagram, which shows the statement (a). The statements (b) and (c) follows immediately. \end{proof} For a strict partition $\lambda=(\lambda_1,\lambda_2,\dots)$, the \emph{$(\overline{s+d},d)$-abacus of $\lambda$} is obtained from the $(\overline{s+d},d)$-abacus diagram by placing a bead on position labeled by $\lambda_i$ if exists. Otherwise, we place a bead on position labeled by $-\lambda_i$. A position without bead is called a \emph{spacer}. See Figure \ref{fig:abacus_bar} for example. We use this $(\overline{s+d},d)$-abacus when we deal with $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partitions. For the $(\overline{s+d},d)$-abacus of an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition $\lambda$, let $r(j)$ denote the row number such that position $(r(j),j)$ is labeled by a positive integer while position $(r(j)-1,j)$ is labeled by a non-positive integer. The arrangement of beads on the diagram can be determined by the following rules. \begin{lem}\label{lem:beads} Let $\lambda$ be a strict partition. For coprime positive integers $s$ and $d$, if $\lambda$ is an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core, then the $(\overline{s+d},d)$-abacus of $\lambda$ satisfies the following. \begin{enumerate} \item[(a)] If a bead is placed on position $(i,j)$ such that $i> r(j)$, then a bead is also placed on each of positions $(i-1,j), (i-2,j), \dots, (r(j),j)$. \item[(b)] If a bead is placed on position $(i,j)$ such that $i< r(j)-1$, then a bead is also placed on each of positions $(i+1,j), (i+2,j), \dots, (r(j)-1,j)$. \item[(c)] For each $j$, at most one bead is placed on positions $(r(j),j)$ or $(r(j)-1,j)$. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item[(a)] The fact that a bead is placed on position $(i,j)$ with $i>r(j)$ implies that $(s+d)i+dj$ is a part in $\lambda$. Since $\lambda$ is an $(\overline{s+d})$-core, it follows from Proposition~\ref{prop:bar}~(b) that $(s+d)(i-1)+dj$ is a part in $\lambda$. In a similar way, we also have $(s+d)(i-2)+dj, \dots, (s+d)r(j)+dj \in \lambda$ so that a bead is placed on each of positions $(i-1,j), (i-2,j), \dots, (r(j),j)$. \item[(b)] If a bead is placed on position $(i,j)$ with $i<r(j)-1$, then $-(s+d)i-dj$ is a part in $\lambda$. Again, it follows from Proposition~\ref{prop:bar}~(b) that $-(s+d)(i+1)-dj$ is a part in $\lambda$ and so are $-(s+d)(i+2)-dj, \dots, -(s+d)(r(j)-1)-dj \in \lambda$. Thus, we place a bead on positions $(i+1,j), (i+2,j), \dots, (r(j)-1,j)$. \item[(c)] Suppose that beads are placed on both positions $(r(j),j)$ and $(r(j)-1,j)$ labeled by $(s+d)r(j)+dj$ and $(s+d)(r(j)-1)+dj$, respectively. One can notice that $(s+d)(r(j)-1)+dj$ is a non-positive integer and the sum of the absolute values of $(s+d)r(j)+dj$ and $(s+d)(r(j)-1)+dj$ is $s+d$, which contradicts to Proposition~\ref{prop:bar}~(c). In particular, if one of them is labeled by $(s+d)/2$, then the other must be labeled by $-(s+d)/2$, which is also a contradiction to the definition of the $(\overline{s+d},d)$-abacus. \end{enumerate} \end{proof} For an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition $\lambda$, in order to explain the properties of the $(\overline{s+d},d)$-abacus of $\lambda$ more simply, we define the \emph{$(\overline{s+d},d)$-abacus function of $\lambda$} \[ f:\{0,1,\dots,\lfloor (s+d)/2 \rfloor\}\rightarrow \mathbb{Z} \] as follows: For each $j \in \{0,1,\dots,\lfloor (s+d)/2 \rfloor\}$, if there is a bead labeled by a positive integer in column $j$, let $f(j)$ be the largest row number in column $j$, where a bead is placed on. Otherwise, let $f(j)$ be the largest row number in column $j$, where position $(f(j),j)$ is a spacer with a non-positive labeled number. The following propositions give some basic properties of the $(\overline{s+d},d)$-abacus function of an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition. \begin{prop}\label{prop:f_initial} Let $s$ and $d$ be coprime positive integers. If $\lambda$ is an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition, then the $(\overline{s+d},d)$-abacus function $f$ of $\lambda$ satisfies the following. \begin{enumerate} \item[(a)] $f(0)=0$ and $f(1)=0$ or $-1$. \item[(b)] $f(j-1)$ is equal to one of the three values $f(j)-1$, $f(j)$, and $f(j)+1$ for $j=1,2,\dots, \lfloor(s+d)/2\rfloor$. \end{enumerate} \end{prop} \begin{proof} We consider the $(\overline{s+d},d)$-abacus of $\lambda$. \begin{enumerate} \item[(a)] Since positions $(0,0)$ and $(1,0)$ are labeled by $0$ and $s+d$, respectively, there is no bead in column $0$. Hence, $f(0)=0$. Similarly, since positions $(-1,1)$, $(0,1)$, and $(1,1)$ are labeled by $-s$, $d$, and $s+2d$ respectively, there is at most one bead on position $(0,1)$ in column $1$. Hence, $f(1)=0$ or $-1$. \item[(b)] For a fixed $j$, let $f(j)=i$. Suppose that a bead is placed on position $(i,j)$ which is labeled by a positive integer. If position $(i-1,j-1)$ is labeled by a positive integer, then a bead is placed on this position by Proposition~\ref{prop:bar}~(b). Otherwise, position $(i-1,j-1)$ is a spacer by Proposition~\ref{prop:bar}~(c). In any case, it follows from the definition of $f$ that $f(j-1)\geq f(j)-1$. Additionally, since position $(i+1,j)$ is a spacer, position $(i+2,j-1)$ is a spacer by Proposition~\ref{prop:bar}~(b). Hence, $f(j-1)\leq f(j)+1$. Next, suppose that position $(i,j)$ is a spacer which is labeled by a negative integer. Since position $(i-1,j-1)$ is labeled by a negative integer, it is a spacer, so $f(j-1)\geq f(j)-1$. We now assume that $f(j-1)\geq i+2$. If position $(i+2,j-1)$ is labeled by a positive integer, then a bead is placed on this position by Lemma~\ref{lem:beads}~(a). In this case, position $(i+1,j)$ either has with a bead labeled by a positive integer or is a spacer labeled by a negative integer by Proposition~\ref{prop:bar}~(b) and (c), which contradicts to $f(j)=i$. Otherwise, if position $(i+2,j-1)$ is labeled by a negative integer, then it is a spacer. Therefore, position $(i+1,j)$ is a spacer by Proposition~\ref{prop:bar}~(b), which also contradicts to $f(j)=i$. Hence, $f(j-1)\leq f(j)+1$. \end{enumerate} \end{proof} \begin{prop}\label{prop:barf} Let $s$ and $d$ be coprime integers. For an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition $\lambda$, the $(\overline{s+d},d)$-abacus function $f$ of $\lambda$ satisfies the following. \begin{enumerate} \item [(a)] If $s$ is odd and $d$ is even, then $f(\frac{s+d-1}{2})\in \{-\frac{d+2}{2}, -\frac{d}{2}\}$. \item [(b)] If $s$ and $d$ are both odd, then $f(\frac{s+d}{2}) \in \{-\frac{d+1}{2},-\frac{d-1}{2}\}$. In addition, $f(\frac{s+d-2}{2})=-\frac{d+1}{2}$ when $f(\frac{s+d}{2})=-\frac{d-1}{2}$. \item [(c)] If $s$ is even and $d$ is odd, then $f(\frac{s+d-1}{2})\in \{-\frac{d+3}{2}, -\frac{d+1}{2}, -\frac{d-1}{2}\}$. \end{enumerate} \end{prop} \begin{proof} Let position $(a,b)$ denote position $(-\lfloor d/2 \rfloor,\lfloor (s+d)/2 \rfloor)$. \begin{enumerate} \item [(a)] Positions $(a-1,b),(a,b)$, and $(a+1,b)$ are labeled by $-s-3d/2, -d/2$, and $s+d/2$, respectively. First we show that $s+d/2$ and $s+3d/2$ are not parts of $\lambda$. If $s+d/2 \in \lambda$, then $d/2\in\lambda$ by Proposition \ref{prop:bar} (b). It gives a contradiction by Proposition \ref{prop:bar} (c) since $(s+d/2)+d/2=s+d$. One can similarly show that $s+3d/2 \notin \lambda$. Hence, the only possibility of having a bead in column $b$ is putting it on position $(a,b)$. Thus, $f(b)=a-1$ or $a$. \item [(b)] Positions $(a-1,b),(a,b)$, and $(a+1,b)$ are labeled by $-(s+d)/2$, $(s+d)/2$, and $(3s+3d)/2$, respectively. We first claim that there is no bead on position $(a+1,b)$. If $(3s+3d)/2 \in \lambda$, then $(s+d)/2,(s+3d)/2 \in \lambda$ by Proposition \ref{prop:bar} (b), which contradicts to Proposition \ref{prop:bar} (c) since $(s+d)/2 + (s+3d)/2 = s+2d$. This completes a proof of the claim. Therefore, $f(b)=a$ when $(s+d)/2 \in \lambda$ and $f(b)=a-1$ otherwise. Furthermore, we would like to show that $f(b-1)=a-1$ assuming that $f(b)=a$. Consider positions $(a-1,b-1)$ and $(a,b-1)$ which are labeled by $-(s+3d)/2$ and $(s-d)/2$, respectively. Position $(a-1,b-1)$ is a spacer by Proposition \ref{prop:bar} (c) since $(s+3d)/2+(s+d)/2=s+2d$. When $s>d$, position $(a,b-1)$ is also a spacer by Proposition \ref{prop:bar} (c) since $(s-d)/2+(s+d)/2=s$. Otherwise, $(s-d)/2$ is negative and a bead is placed on position $(a,b-1)$ since $(d-s)/2=(s+d)/2-s$. In any case, we conclude that $f(b-1)=a-1$. \item [(c)] Positions $(a-2,b), (a-1,b), (a,b)$, and $(a+1,b)$ are labeled by $-(3s+4d)/2,-(s+2d)/2,s/2$ and $(3s+2d)/2$, respectively. If $(3s+2d)/2 \in \lambda$, then $s/2, (s+2d)/2\in\lambda$ by Proposition \ref{prop:bar} (b), which contradicts to Proposition \ref{prop:bar} (c). Thus, $(3s+2d)/2 \notin \lambda$ and $f(b)<a+1$. Similarly, $(3s+4d)/2 \notin \lambda$ which implies $f(b)\geq a-2$. \end{enumerate} \end{proof} For coprime positive integers $s$ and $d$, it is obvious that the map from the set of $(\ols{s\phantom{d}}, \overline{s+d}, \overline{s+2d})$-core partitions to the set of functions satisfying the conditions in Propositions \ref{prop:f_initial} and \ref{prop:barf} is well-defined and injective. The following proposition shows that this map is surjective. \begin{prop}\label{prop:barinv} For coprime positive integers $s$ and $d$, let $f$ be a function that satisfies Propositions \ref{prop:f_initial} and \ref{prop:barf}. If $\lambda$ is a strict partition such that $f$ is the $(\overline{s+d},d)$-abacus function of $\lambda$, then $\lambda$ is an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition. \end{prop} \begin{proof} We show that $\lambda$ satisfies the conditions in Proposition \ref{prop:bar} (a), (b), and (c). \begin{enumerate} \item [(a)] It follows from Proposition \ref{prop:f_initial} (a) that $s,s+d,s+2d \notin \lambda$. \item [(b)] Assume that $h$ is a part in $\lambda$. If $h > s+d$, then $h - (s+d) \in \lambda$ by Lemma \ref{lem:beads}. Consider the $(\overline{s+d},d)$-abacus diagram and suppose that $h > s$, but $h - s \notin \lambda$ to the contrary. Let position $(i,j)$ be labeled by $a$ such that $|a|=h$ which has a bead on. If $a>0$, then we get $j<\floor*{(s+d)/2}$ or $h=(s+d)/2$ with $s<d$ for odd numbers $s$ and $d$ by Proposition \ref{prop:barf}. First, assume that $j<\floor*{(s+d)/2}$. Then, position $(i-1,j+1)$ is a spacer labeled by $h -s$ which implies $f(j)\geq i$ and $f(j+1)<i-1$, so we get a contradiction to Proposition \ref{prop:f_initial} (b). Now, for odd numbers $s$ and $d$, let $h=(s+d)/2$ with $s<d$. Then, we have a bead on position $(-(d-1)/2,(s+d-2)/2)$ labeled by $(s-d)/2$ by Proposition \ref{prop:barf} (b), which gives a contradiction. If $a<0$, then position $(i+1,j-1)$ labeled by $-h +s$ is a spacer. This implies that $f(j-1) \geq i+1$ and $f(j) < i$, which contradicts to Proposition \ref{prop:f_initial} (b). By the similar argument, one can show that $h > s+2d$ implies $h - (s+2d) \in \lambda$. \item [(c)] By Lemma \ref{lem:beads} (c) and the construction of $f$, it is sufficient to show that there are no $h_1,h_2 \in \lambda$ such that $h_1 \neq h_2$ and $h_1 + h_2 \in \{s,s+2d\}$. Assume that there exist $h_1,h_2 \in \lambda$ satisfying $h_1 + h_2 =s$. If $h_1, h_2\neq (s+d)/2$, then there are positions $(i,j)$ and $(i-1,j+1)$ that are labeled by $h_1$ and $-h_2$, respectively. In this case, we get $f(j)\geq i$ and $f(j+1) < i-1$, which contradicts to Proposition \ref{prop:f_initial} (b). If $h_2=(s+d)/2$ (so both $s$ and $d$ are odd), then positions $(i,(s+d-2)/2)$ and $(i,(s+d)/2)$ are labeled by $h_1$ and $h_2$, respectively, and we get a contradiction to Proposition \ref{prop:barf} (b). Similar argument works for the case when $h_1+h_2 \neq s+2d$. \end{enumerate} \end{proof} \begin{figure}[ht!] \centering \begin{tikzpicture}[scale=.3] \small \draw[color=gray!70] (0,0)--(2,0)--(4,0)--(6,-1.2)--(8,-2.4)--(10,-3.6)--(12,-2.4); \filldraw[color=gray!70] (0,0) circle (2pt) (2,0) circle (2pt) (4,0) circle (2pt) (6,-1.2) circle (2pt) (8,-2.4) circle (2pt) (10,-3.6) circle (2pt) (12,-2.4) circle (2pt) ; \filldraw[color=gray!40] (2,0) circle (18pt); \filldraw[color=gray!40] (4,0) circle (18pt); \filldraw[color=gray!40] (6,-1.2) circle (18pt); \filldraw[color=gray!40] (10,-2.4) circle (18pt); \node at (-4,2.4) {$\mathbf{2}$}; \node at (-4,1.2) {$\mathbf{1}$}; \node at (-4,0) {$\mathbf{0}$}; \node at (-4,-1.2) {$\mathbf{-1}$}; \node at (-4,-2.4) {$\mathbf{-2}$}; \node at (-4,-3.6) {$\mathbf{-3}$}; \node at (-4,-4.8) {$\mathbf{-4}$}; \node at (-3.9,-7.4) {$\mathbf{i~/~j}$}; \node at (0,-7.4) {$\mathbf{0}$}; \node at (2,-7.4) {$\mathbf{1}$}; \node at (4,-7.4) {$\mathbf{2}$}; \node at (6,-7.4) {$\mathbf{3}$}; \node at (8,-7.4) {$\mathbf{4}$}; \node at (10,-7.4) {$\mathbf{5}$}; \node at (12,-7.4) {$\mathbf{6}$}; \foreach \i in {22,26,30,34,38,42} \node at (\i/2-22/2,2.4) {$\i$}; \foreach \i in {11,15,19,23,27,31} \node at (\i/2-11/2,1.2) {$\i$}; \foreach \i in {0,4,8,12,16,20} \node at (\i/2,0) {$\i$}; \foreach \i in {-11,-7,-3,1,5,9} \node at (\i/2+11/2,-1.2) {$\i$}; \foreach \i in {-22,-18,-14,-10,-6,-2} \node at (\i/2+22/2,-2.4) {$\i$}; \foreach \i in {-33,-29,-25,-21,-17,-13} \node at (\i/2+33/2,-3.6) {$\i$}; \foreach \i in {-44,-40,-36,-32,-28,-24} \node at (\i/2+44/2,-4.8) {$\i$}; \node at (5,4) {\vdots}; \node at (5,-5.7) {\vdots}; \draw (-2.5,4)--(-2.5,-8); \draw (-5,-6.7)--(13,-6.7); \node at (3,-9.5) {I. $(\overline{11},4)$-abacus of $(8,4,2,1)$}; \end{tikzpicture} \quad \begin{tikzpicture}[scale=.3] \small \draw[color=gray!70] (0,0)--(2,0)--(4,-1.2)--(6,-2.4)--(8,-3.6)--(10,-2.4)--(12,-2.4); \filldraw[color=gray!70] (0,0) circle (2pt) (2,0) circle (2pt) (4,-1.2) circle (2pt) (6,-2.4) circle (2pt) (8,-3.6) circle (2pt) (10,-2.4) circle (2pt) (12,-2.4) circle (2pt) ; \filldraw[color=gray!40] (2,0) circle (18pt); \filldraw[color=gray!40] (6,-1.2) circle (18pt); \filldraw[color=gray!40] (8,-2.4) circle (18pt); \node at (-4,2.4) {$\mathbf{2}$}; \node at (-4,1.2) {$\mathbf{1}$}; \node at (-4,0) {$\mathbf{0}$}; \node at (-4,-1.2) {$\mathbf{-1}$}; \node at (-4,-2.4) {$\mathbf{-2}$}; \node at (-4,-3.6) {$\mathbf{-3}$}; \node at (-4,-4.8) {$\mathbf{-4}$}; \node at (-3.9,-7.4) {$\mathbf{i~/~j}$}; \node at (0,-7.4) {$\mathbf{0}$}; \node at (2,-7.4) {$\mathbf{1}$}; \node at (4,-7.4) {$\mathbf{2}$}; \node at (6,-7.4) {$\mathbf{3}$}; \node at (8,-7.4) {$\mathbf{4}$}; \node at (10,-7.4) {$\mathbf{5}$}; \node at (12,-7.4) {$\mathbf{6}$}; \foreach \i in {20,23,26,29,32,35} \node at (\i/1.5-20/1.5,2.4) {$\i$}; \foreach \i in {10,13,16,19,22,25} \node at (\i/1.5-10/1.5,1.2) {$\i$}; \foreach \i in {0,3,6,9,12,15} \node at (\i/1.5,0) {$\i$}; \foreach \i in {-10,-7,-4,-1,2,5} \node at (\i/1.5+10/1.5,-1.2) {$\i$}; \foreach \i in {-20,-17,-14,-11,-8,-5} \node at (\i/1.5+20/1.5,-2.4) {$\i$}; \foreach \i in {-30,-27,-24,-21,-18,-15} \node at (\i/1.5+30/1.5,-3.6) {$\i$}; \foreach \i in {-40,-37,-34,-31,-28,-25} \node at (\i/1.5+40/1.5,-4.8) {$\i$}; \node at (5,4) {\vdots}; \node at (5,-5.7) {\vdots}; \draw (-2.5,4)--(-2.5,-8); \draw (-5,-6.7)--(13,-6.7); \node at (3,-9.5) {II. $(\overline{10},3)$-abacus of $(8,3,1)$}; \end{tikzpicture}\\ \begin{tikzpicture}[scale=.3] \small \draw[color=gray!70] (0,0)--(2,0)--(4,-1.2)--(6,-2.4)--(8,-2.4)--(10,-1.2)--(12,-2.4); \filldraw[color=gray!70] (0,0) circle (2pt) (2,0) circle (2pt) (4,-1.2) circle (2pt) (6,-2.4) circle (2pt) (8,-2.4) circle (2pt) (10,-1.2) circle (2pt) (12,-2.4) circle (2pt) ; \filldraw[color=gray!40] (2,0) circle (18pt); \filldraw[color=gray!40] (6,-1.2) circle (18pt); \filldraw[color=gray!40] (10,-1.2) circle (18pt); \node at (-4,2.4) {$\mathbf{2}$}; \node at (-4,1.2) {$\mathbf{1}$}; \node at (-4,0) {$\mathbf{0}$}; \node at (-4,-1.2) {$\mathbf{-1}$}; \node at (-4,-2.4) {$\mathbf{-2}$}; \node at (-4,-3.6) {$\mathbf{-3}$}; \node at (-4,-4.8) {$\mathbf{-4}$}; \node at (-3.9,-7.4) {$\mathbf{i~/~j}$}; \node at (0,-7.4) {$\mathbf{0}$}; \node at (2,-7.4) {$\mathbf{1}$}; \node at (4,-7.4) {$\mathbf{2}$}; \node at (6,-7.4) {$\mathbf{3}$}; \node at (8,-7.4) {$\mathbf{4}$}; \node at (10,-7.4) {$\mathbf{5}$}; \node at (12,-7.4) {$\mathbf{6}$}; \foreach \i in {20,23,26,29,32,35} \node at (\i/1.5-20/1.5,2.4) {$\i$}; \foreach \i in {10,13,16,19,22,25} \node at (\i/1.5-10/1.5,1.2) {$\i$}; \foreach \i in {0,3,6,9,12,15} \node at (\i/1.5,0) {$\i$}; \foreach \i in {-10,-7,-4,-1,2,5} \node at (\i/1.5+10/1.5,-1.2) {$\i$}; \foreach \i in {-20,-17,-14,-11,-8,-5} \node at (\i/1.5+20/1.5,-2.4) {$\i$}; \foreach \i in {-30,-27,-24,-21,-18,-15} \node at (\i/1.5+30/1.5,-3.6) {$\i$}; \foreach \i in {-40,-37,-34,-31,-28,-25} \node at (\i/1.5+40/1.5,-4.8) {$\i$}; \node at (5,4) {\vdots}; \node at (5,-5.7) {\vdots}; \draw (-2.5,4)--(-2.5,-8); \draw (-5,-6.7)--(13,-6.7); \node at (3,-9.5) {III. $(\overline{10},3)$-abacus of $(5,3,1)$}; \end{tikzpicture} \quad \begin{tikzpicture}[scale=.3] \small \draw[color=gray!70] (0,0)--(2,0)--(4,0)--(6,-1.2)--(8,-2.4)--(10,-3.6)--(12,-2.4); \filldraw[color=gray!70] (0,0) circle (2pt) (2,0) circle (2pt) (4,0) circle (2pt) (6,-1.2) circle (2pt) (8,-2.4) circle (2pt) (10,-3.6) circle (2pt) (12,-2.4) circle (2pt) ; \filldraw[color=gray!40] (2,0) circle (18pt); \filldraw[color=gray!40] (4,0) circle (18pt); \filldraw[color=gray!40] (10,-2.4) circle (18pt); \node at (-4,2.4) {$\mathbf{2}$}; \node at (-4,1.2) {$\mathbf{1}$}; \node at (-4,0) {$\mathbf{0}$}; \node at (-4,-1.2) {$\mathbf{-1}$}; \node at (-4,-2.4) {$\mathbf{-2}$}; \node at (-4,-3.6) {$\mathbf{-3}$}; \node at (-4,-4.8) {$\mathbf{-4}$}; \node at (-3.9,-7.4) {$\mathbf{i~/~j}$}; \node at (0,-7.4) {$\mathbf{0}$}; \node at (2,-7.4) {$\mathbf{1}$}; \node at (4,-7.4) {$\mathbf{2}$}; \node at (6,-7.4) {$\mathbf{3}$}; \node at (8,-7.4) {$\mathbf{4}$}; \node at (10,-7.4) {$\mathbf{5}$}; \node at (12,-7.4) {$\mathbf{6}$}; \foreach \i in {22,25,28,31,34,37} \node at (\i/1.5-22/1.5,2.4) {$\i$}; \foreach \i in {11,14,17,20,23,26} \node at (\i/1.5-11/1.5,1.2) {$\i$}; \foreach \i in {0,3,6,9,12,15} \node at (\i/1.5,0) {$\i$}; \foreach \i in {-11,-8,-5,-2,1,4} \node at (\i/1.5+11/1.5,-1.2) {$\i$}; \foreach \i in {-22,-19,-16,-13,-10,-7} \node at (\i/1.5+22/1.5,-2.4) {$\i$}; \foreach \i in {-33,-30,-27,-24,-21,-18} \node at (\i/1.5+33/1.5,-3.6) {$\i$}; \foreach \i in {-44,-41,-38,-35,-32,-29} \node at (\i/1.5+44/1.5,-4.8) {$\i$}; \node at (5,4) {\vdots}; \node at (5,-5.7) {\vdots}; \draw (-2.5,4)--(-2.5,-8); \draw (-5,-6.7)--(13,-6.7); \node at (3,-9.5) {IV. $(\overline{11},3)$-abacus of $(7,6,3)$}; \end{tikzpicture} \caption{The $(\overline{s+d},d)$-abaci of several partitions and the corresponding free Motzkin paths}\label{fig:abacus_bar} \end{figure} For given coprime integers $s$ and $d$, let $\lambda$ be an $(\ols{s\phantom{d}}, \overline{s+d}, \overline{s+2d})$-core partition. For the $(\overline{s+d},d)$-abacus function $f$ of $\lambda$, we set $f(\floor*{(s+d+2)/2})\coloneqq -\floor*{(d+1)/2}$ and define $\phi(\lambda)$ to be the path $P=P_1P_2 \cdots P_{\floor*{(s+d+2)/2}}$, where the $j$th step is given by $P_j=(1,f(j)-f(j-1))$ for each $j$. By Proposition \ref{prop:f_initial} (b), $P_j$ is one of the three steps $U=(1,1)$, $F=(1,0)$, and $D=(1,-1)$, so $P$ is a free Motzkin path. From this construction together with Proposition~\ref{prop:barf}, we obtain a path interpretation of an $(\ols{s\phantom{d}}, \overline{s+d}, \overline{s+2d})$-core partition as described in the following theorem. \begin{thm}\label{thm:barcore} For coprime positive integers $s$ and $d$, there is a bijection between the sets $\mathcal{BC}_{(s,s+d,s+2d)}$ and \begin{enumerate} \item[(a)] $\mathcal{F}(\frac{s+d+1}{2},-\frac{d}{2} \,;\, \{U\},\{D\})$ if $s$ is odd and $d$ is even; \item[(b)] $\mathcal{F}(\frac{s+d+2}{2},-\frac{d+1}{2} \,;\, \{U\},\{FD,DD,U\})$ if both $s$ and $d$ are odd; \item[(c)] $\mathcal{F}(\frac{s+d+1}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset)$ if $s$ is even and $d$ is odd. \end{enumerate} \end{thm} \begin{proof} All the bijections come from Propositions \ref{prop:f_initial}, \ref{prop:barf}, and \ref{prop:barinv}. By drawing line segments that connects the positions $(f(j),j)$ and $(f(j+1),j+1)$ to obtain $P=P_1P_2 \cdots P_{\floor*{(s+d)/2}}$ in the $(\overline{s+d},d)$-abacus, we have the one-to-one correspondences between the sets $\mathcal{BC}_{(s,s+d,s+2d)}$ and { \small \begin{align*} \text{(a) }& \mathcal{F}\left(\frac{s+d-1}{2},-\frac{d}{2}\,;\, \{U\},\emptyset\right)\cup\mathcal{F}\left(\frac{s+d-1}{2}, -\frac{d+2}{2}\,;\, \{U\},\emptyset\right);\\ \text{(b) }& \mathcal{F}\left(\frac{s+d}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset\right) \cup \mathcal{F}\left(\frac{s+d}{2},-\frac{d-1}{2} \,;\, \{U\},\{F,D\}\right);\\ \text{(c) }& \mathcal{F}\left(\frac{s+d-1}{2},-\frac{d-1}{2} \,;\, \{U\},\emptyset\right) \cup \mathcal{F}\left(\frac{s+d-1}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset\right) \\ &\hspace{55mm}\cup\mathcal{F}\left(\frac{s+d-1}{2},-\frac{d+3}{2} \,;\, \{U\},\emptyset\right). \end{align*} } The addition of the last step gives free Motzkin paths of type $(\lfloor (s+d+2)/2\rfloor,-\lfloor (d+1)/2 \rfloor)$ as we desired. \end{proof} \begin{ex} For a $(\overline{7}, \overline{11}, \overline{15})$-core partition $\lambda=(8,4,2,1)$, Diagram I in Figure \ref{fig:abacus_bar} illustrates the $(\overline{11},4)$-abacus of $\lambda$. The $(\overline{11},4)$-abacus function $f$ of $\lambda$ is given by $$f(0)=0,~ f(1)=0,~ f(2)=0,~ f(3)=-1,~ f(4)=-2, ~f(5)=-3, ~f(6)=-2,$$ and its corresponding path is $P=\phi(\lambda)=FFDDDU$. \end{ex} \subsection{Doubled distinct $(s,s+d,s+2d)$-core partitions} Recall that for an $\overline{s}$-core partition $\lambda$ with even $s$, $\lambda\la$ is a doubled distinct $s$-core if and only if $s/2 \notin \lambda$. \begin{prop}\label{prop:dd_f} For a strict partition $\lambda$ such that $\lambda\la$ is a doubled distinct $(s,s+d,s+2d)$-core, the $(\overline{s+d},d)$-abacus function $f$ of $\lambda$ satisfies the following. \begin{enumerate} \item [(a)] If $s$ is odd and $d$ is even, then $f(\frac{s+d-1}{2})\in \{ -\frac{d+2}{2}, -\frac{d}{2}\}$. \item [(b)] If $s$ and $d$ are both odd, then $f(\frac{s+d}{2})=-\frac{d+1}{2}$. \item [(c)] If $s$ is even and $d$ is odd, then $f(\frac{s+d-1}{2})=-\frac{d+1}{2}$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item [(a)] It follows from Proposition \ref{prop:barf} (a) since we do not need to consider the additional property of a doubled distinct core partition. \item [(b)] Positions $(-(d+1)/2,(s+d)/2)$ and $(-(d-1)/2,(s+d)/2)$ are labeled by $-(s+d)/2$ and $(s+d)/2$, respectively. Since $(s+d)/2 \notin \lambda$ by Proposition \ref{prop:dd} (b), there is no bead in column $(s+d)/2$, and $f((s+d)/2)=-(d+1)/2$. \item [(c)] Positions $(-(d+1)/2,(s+d-1)/2)$ and $(-(d-1)/2,(s+d-1)/2)$ are labeled by $-(s+2d)/2$ and $s/2$, respectively. We know that $s/2,(s+2d)/2 \notin \lambda$ by Proposition \ref{prop:dd} (b), so $f((s+d-1)/2)=-(d+1)/2$. \end{enumerate} \end{proof} Similar to the bar-core case considered in Section \ref{sec:bar}, there is a one-to-one correspondence between the set of doubled distinct $(s,s+d,s+2d)$-cores and the set of functions satisfying the conditions in Propositions \ref{prop:f_initial} and \ref{prop:dd_f}. The following proposition completes the existence of the bijection. \begin{prop}\label{prop:dd_inverse} For coprime positive integers $s$ and $d$, let $f$ be a function that satisfies Propositions \ref{prop:f_initial} and \ref{prop:dd_f}. If $\lambda$ is a strict partition such that $f$ is the $(\overline{s+d},d)$-abacus function of $\lambda$, then $\lambda\la$ is a doubled distinct $(s,s+d,s+2d)$-core. \end{prop} \begin{proof} It is sufficient to show that $\lambda$ satisfies Proposition \ref{prop:dd} (b). We consider the case according to the parity of $s$ and $d$. For odd $s$ and even $d$, all of $s,s+d,s+2d$ are odd, so we no longer need to consider the additional property of $\lambda\la$. For odds $s$ and $d$, there is no bead in column $(s+d)/2$ by Proposition \ref{prop:dd_f} (b). Since the only column that has labels whose absolute values are $(s+d)/2$ is the column $(s+d)/2$, it follows that $(s+d)/2 \notin \lambda$. If $s$ is even and $d$ is odd, then $s$ and $s+2d$ are even. In a similar way, $s/2,(s+2d)/2 \notin \lambda$ by Proposition \ref{prop:dd_f} (c). \end{proof} Now we give a path interpretation for the doubled distinct $(s,s+d,s+2d)$-cores. \begin{thm}\label{thm:dd3} For coprime positive integers $s$ and $d$, there is a bijection between the sets $\mathcal{DD}_{(s,s+d,s+2d)}$ and \begin{enumerate} \item[(a)] $\mathcal{F}(\frac{s+d+1}{2},-\frac{d}{2} \,;\, \{U\},\{D\})$ if $s$ is odd and $d$ is even; \item[(b)] $\mathcal{F}(\frac{s+d}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset)$ if both $s$ and $d$ are odd; \item[(c)] $\mathcal{F}(\frac{s+d-1}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset)$ if $s$ is even and $d$ is odd. \end{enumerate} \end{thm} \begin{proof} Part (a) comes from Theorem \ref{thm:barcore} (a). Parts (b) and (c) are followed by Propositions \ref{prop:f_initial} and \ref{prop:dd_f}. Note that the length of the corresponding paths in parts (b) and (c) are different than the original setting. Since parts (b) and (c) in Proposition \ref{prop:dd_f} give only one option for the value of $f$ at the second last step, we no longer need to extend the corresponding path to the end point. \end{proof} \subsection{$(s,s+d,s+2d)$-CSYDs} We recall that for even $s$, $\lambda$ is an $s$-CSYD if and only if $\lambda$ is an $\overline{s}$-core and $3s/2 \notin \lambda$. \begin{prop}\label{prop:csyd_f} For a strict partition $\lambda$ such that $S(\lambda)$ is an $(s,s+d,s+2d)$-CSYD, the $(\overline{s+d},d)$-abacus function $f$ of $\lambda$ satisfies the following. \begin{enumerate} \item [(a)] If $s$ is odd and $d$ is even, then $f(\frac{s+d-1}{2})\in\{-\frac{d+2}{2},-\frac{d}{2}\}$. \item [(b)] If $s$ and $d$ are both odd, then $f(\frac{s+d}{2}) \in \{-\frac{d+1}{2},-\frac{d-1}{2}\}$. In addition, $f(\frac{s+d-2}{2})=-\frac{d+1}{2}$ when $f(\frac{s+d}{2})=-\frac{d-1}{2}$. \item [(c)] If $s$ is even and $d$ is odd, then $f(\frac{s+d-1}{2}), f(\frac{s+d-3}{2}) \in \{ -\frac{d+3}{2}, -\frac{d+1}{2}, -\frac{d-1}{2}\}$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item [(a)] It also follows from Proposition \ref{prop:barf} (a) since we do not need to consider the additional property of an $S(\lambda)$. \item [(b)] From the proof of Proposition \ref{prop:barf} (b), we have $(3s+3d)/2\notin \lambda$ for an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition $\lambda$. Therefore, $\lambda$ is an $(\ols{s\phantom{d}},\overline{s+d},\overline{s+2d})$-core partition if and only if $S(\lambda)$ is an $(s,s+d,s+2d)$-CSYD for odd numbers $s$ and $d$. \item [(c)] Let $(a,b)=(-(d+3)/2,(s+d-3)/2)$. By the proof of Proposition \ref{prop:barf} (c) we have $f(b+1)=a,a+1$, or $a+2$. Note that positions $(a,b)$, $(a+1,b)$, $(a+2,b)$, and $(a+3,b)$ are labeled by $-(3s+6d)/2$, $-(s+4d)/2$, $(s-2d)/2$, and $3s/2$ respectively. Since $3s/2,(3s+6d)/2 \notin \lambda$ by Proposition \ref{prop:CSYD} (b), there is at most one bead labeled by $(s-2d)/2$ or $-(s+4d)/2$ in column $b$. Hence, $f(b)=a,a+1$, or $a+2$. \end{enumerate} \end{proof} Again, we construct a bijection between the set of $(s,s+d,s+2d)$-CSYDs and the set of functions satisfying the conditions in Propositions \ref{prop:f_initial} and \ref{prop:csyd_f}. \begin{prop} For coprime positive integers $s$ and $d$, let $f$ be a function that satisfies Propositions \ref{prop:f_initial} and \ref{prop:csyd_f}. If $\lambda$ is a strict partition such that $f$ is the $(\overline{s+d},d)$-abacus function of $\lambda$, then $S(\lambda)$ is an $(s,s+d,s+2d)$-CSYD. \end{prop} \begin{proof} Similar to Proposition \ref{prop:dd_inverse}, it is sufficient to show that $\lambda$ satisfies Proposition \ref{prop:CSYD} (b). Also, we do not need to check the additional condition when $s$ is odd and $d$ is even. If $s$ and $d$ are both odd, by Proposition \ref{prop:csyd_f} (b), there is at most one bead labeled by $(s+d)/2$ in column $(s+d)/2$. Since no columns but the column $(s+d)/2$ has labels whose absolute values are $(3s+3d)/2$, it follows that $(3s+3d)/2 \notin \lambda$. If $s$ is even and $d$ is odd, then only the column $(s+d-3)/2$ has positions labeled by $-(3s+6d)/2$ and $3s/2$. Since there is at most one bead being labeled by $(-s+4d)/2$ or $(s-2d)/2$ in column $(s+d-3)/2$ by Proposition \ref{prop:csyd_f} (c), we have $3s/2,(3s+6d)/2 \notin \lambda$. It completes the proof. \end{proof} Similarly, we give a path interpretation for $(s,s+d,s+2d)$-CSYDs. \begin{thm}\label{thm:csyd3} For coprime positive integers $s$ and $d$, there is a bijection between the sets $\mathcal{CS}_{(s,s+d,s+2d)}$ and \begin{enumerate} \item[(a)] $\mathcal{F}(\frac{s+d+1}{2},-\frac{d}{2} \,;\, \{U\},\{D\})$ if $s$ is odd and $d$ is even; \item[(b)] $\mathcal{F}(\frac{s+d+2}{2},-\frac{d+1}{2} \,;\, \{U\},\{FD,DD,U\})$ if both $s$ and $d$ are odd; \item[(c)] $\mathcal{F}(\frac{s+d+1}{2},-\frac{d+1}{2} \,;\, \{U\},\{UU,DD\})$ if $s$ is even and $d$ is odd. \end{enumerate} \end{thm} \begin{proof} Parts (a) and (b) follow from Theorem \ref{thm:barcore}. Now we need to construct a bijection for the set $\mathcal{CS}_{(s,s+d,s+2d)}$ when $s$ is even and $d$ is odd. Until the second last step of the corresponding free Motzkin paths, the paths should be in one of the following sets: \begin{align*} &\mathcal{F}\left((s+d-1)/2,-(d-1)/2 \,;\, \{U\},\{D\}\right),\\ &\mathcal{F}\left((s+d-1)/2,-(d+1)/2 \,;\, \{U\},\emptyset\right),\\ &\mathcal{F}\left((s+d-1)/2,-(d+3)/2 \,;\, \{U\},\{U\}\right). \end{align*} By adding the end point of the free Motzkin path, we get the statements. \end{proof} \subsection{Enumerating $(s,s+d,s+2d)$-core partitions} In this subsection we give a proof of Theorem~\ref{thm:unifying}. We begin with a useful lemma. \begin{lem}\label{lem:path1} Let $a$ and $b$ be positive integers. \begin{enumerate} \item[(a)] The total number of free Motzkin paths of type $(a+b,-b)$ for which starts with either a down or a flat step is given by \[ |\mathcal{F}(a+b,-b \,;\, \{U\},\emptyset)|=\sum_{i=0}^{a}\binom{a+b-1}{\lfloor i/2 \rfloor, b+\lfloor (i-1)/2\rfloor, a-i}. \] \item[(b)] The total number of free Motzkin paths of type $(a+b,-b)$ for which starts with either a down or a flat step and ends with either a up or a flat step is \[ |\mathcal{F}(a+b,-b \,;\, \{U\},\{D\})|=\sum_{i=0}^{a-1}\binom{a+b-2}{\lfloor i/2 \rfloor}\binom{a+b-1-\lfloor i/2 \rfloor}{a-i-1}. \] \item[(c)] The total number of free Motzkin paths of type $(a+b,-b)$ for which starts with either a down or a flat step and ends with either a down or a flat step is \[ |\mathcal{F}(a+b,-b \,;\, \{U\},\{U\})|=\sum_{i=0}^{a}\binom{a+b-2}{\lfloor i/2 \rfloor}\binom{a+b-1-\lfloor i/2 \rfloor}{a-i}. \] \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item[(a)] The number of free Motzkin paths of type $(a+b,-b)$ having $k$ up steps (so that it has $b+k$ down steps and $a-2k$ flat steps) for which starts with a down (resp. flat) step is $\binom{a+b-1}{k,b+k-1,a-2k}$ (resp. $\binom{a+b-1}{k,b+k,a-(2k+1)}$). Hence, the total number of free Motzkin paths of type $(a+b,-b)$ for which starts with either a down or a flat step is \[ \sum_{k=0}^{\lfloor a/2 \rfloor}\binom{a+b-1}{k,b+k-1,a-2k} +\sum_{k=0}^{\lfloor (a-1)/2 \rfloor}\binom{a+b-1}{k,b+k,a-(2k+1)}, \] which can be written as in the statement. \item[(b)] Note that $|\mathcal{F}(a+b,-b\,;\,\{U\},\{D\})|$ is equal to the sum of the two values, which are given by (a), \begin{align*} |\mathcal{F}(a+b-1,-b \,;\, \{U\},\emptyset)|&=\sum_{i=0}^{a-1}\binom{a+b-2}{\lfloor i/2 \rfloor, b+\lfloor (i-1)/2\rfloor, a-i-1},\\ |\mathcal{F}(a+b-1,-b-1 \,;\, \{U\},\emptyset)|&=\sum_{i=0}^{a-2}\binom{a+b-2}{\lfloor i/2 \rfloor, b+\lfloor (i+1)/2\rfloor, a-i-2}. \end{align*} Hence, $|\mathcal{F}(a+b,-b\,;\,\{U\},\{D\})|$ is equal to \[ \sum_{i=0}^{a-1}\binom{a+b-2}{\lfloor i/2 \rfloor}\left(\binom{a+b-2-\lfloor i/2 \rfloor}{a-i-1} +\binom{a+b-2-\lfloor i/2 \rfloor}{a-i-2}\right), \] which can be written as in the statement. \item[(c)] Similar to (b), the formula follows. \end{enumerate} \end{proof} For coprime positive integers $s$ and $d$, let $\mathfrak{sc}$, $\mathfrak{bc}$, $\mathfrak{cs}$, and $\mathfrak{dd}$ denote the cardinalities of the sets $\mathcal{SC}_{(s,s+d,s+2d)}$, $\mathcal{BC}_{(s,s+d,s+2d)}$, $\mathcal{CS}_{(s,s+d,s+2d)}$, and $\mathcal{DD}_{(s,s+d,s+2d)}$, respectively. \begin{proof}[Proof of Theorem~\ref{thm:unifying}.] \begin{enumerate} \item[(a)] Recall that for odd $s$ and even $d$, the three sets $\mathcal{BC}_{(s,s+d,s+2d)}$, $\mathcal{DD}_{(s,s+d,s+2d)},$ and $\mathcal{CS}_{(s,s+d,s+2d)}$ are actually the same by Remark~\ref{rmk:oddoddodd}. By Theorem \ref{thm:barcore} (a), the set $\mathcal{BC}_{(s,s+d,s+2d)}$ is bijective with $\mathcal{F}((s+d+1)/2,-d/2 \,;\, \{U\},\{D\}).$ By setting $a=(s+1)/2$ and $b=d/2$ in Lemma~\ref{lem:path1}~(b), we obtain a desired formula. \item[(b)] For odd numbers $s$ and $d$, we have $\mathfrak{bc}=\mathfrak{cs}$ by Theorems \ref{thm:barcore} (b) and \ref{thm:csyd3} (b). By Lemma~\ref{lem:path1}~(a), we get \begin{align*} &\left|\mathcal{F}\left(\frac{s+d}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset\right)\right|=\sum_{i=0}^{(s-1)/2}\binom{(s+d-2)/2}{\lfloor i/2 \rfloor, \lfloor (d+i)/2\rfloor, (s-1)/2-i}, \\ &\left|\mathcal{F}\left(\frac{s+d}{2},-\frac{d-1}{2} \,;\, \{U\},\{F,D\}\right)\right|=\left|\mathcal{F}\left(\frac{s+d-2}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset\right)\right|\\ & \hspace{54.5mm} =\sum_{i=0}^{(s-3)/2}\binom{(s+d-4)/2}{\lfloor i/2 \rfloor, \lfloor (d+i)/2\rfloor, (s-3)/2-i}. \end{align*} As in the proof of Theorem \ref{thm:barcore}, $\mathfrak{bc}$ is equal to the sum of these two terms, which can be written as follows. \[ \mathfrak{bc}=\mathfrak{cs}=\sum_{i=0}^{(s-1)/2}\binom{(d-1)/2+i}{\lfloor i/2 \rfloor}\left( \binom{(s+d-2)/2}{(d-1)/2+i} + \binom{(s+d-4)/2}{(d-1)/2+i}\right). \] \item[(c)] By Theorem \ref{thm:barcore} (c), the set $\mathcal{BC}_{(s,s+d,s+2d)}$ is bijective with the set $\mathcal{F}((s+d+1)/2,-(d+1)/2 \,;\, \{U\},\emptyset)$ for even $s$ and odd $d$. By Lemma~\ref{lem:path1}~(a), \[ \mathfrak{bc}=\sum_{i=0}^{s/2}\binom{(s+d-1)/2}{\lfloor i/2 \rfloor, (d+1)/2+\lfloor (i-1)/2\rfloor, s/2-i}. \] Now we consider the set $\mathcal{CS}_{(s,s+d,s+2d)}$. As in the proof of Theorem \ref{thm:csyd3}, $\mathfrak{cs}=|\mathcal{F}_1|+|\mathcal{F}_2|+|\mathcal{F}_3|$, where \begin{align*} \mathcal{F}_1&\coloneqq\mathcal{F}\left(\frac{s+d-1}{2},-\frac{d-1}{2} \,;\, \{U\},\{D\}\right)\!,\\ \mathcal{F}_2&\coloneqq\mathcal{F}\left(\frac{s+d-1}{2},-\frac{d+1}{2} \,;\, \{U\},\emptyset\right)\!,\\ \mathcal{F}_3&\coloneqq\mathcal{F}\left(\frac{s+d-1}{2},-\frac{d+3}{2} \,;\, \{U\},\{U\}\right)\!. \end{align*} From Lemma~\ref{lem:path1}, we obtain that \begin{align*} |\mathcal{F}_2|&=\sum_{i=0}^{(s-2)/2}\binom{(s+d-3)/2}{\left\lfloor i/2 \right\rfloor} \binom{(s+d-3)/2-\left\lfloor i/2 \right\rfloor}{(s-2)/2-i},\\ |\mathcal{F}_1|+|\mathcal{F}_3|&=\sum_{i=0}^{(s-2)/2}\binom{(s+d-5)/2}{\left\lfloor i/2 \right\rfloor} \binom{(s+d-1)/2-\left\lfloor i/2 \right\rfloor}{(s-2)/2-i}, \end{align*} which completes the proof. \item[(d)] Theorem \ref{thm:dd3} (b) and (c), and Lemma \ref{lem:path1} give an expression of $\mathfrak{dd}$ depending on the parity of $s$. By manipulating binomial terms, one can combine two expressions into one. \end{enumerate} \end{proof} \begin{rem} From the path constructions, we compare the sizes among them. \begin{enumerate} \item[(a)] If $s$ is odd and $d$ is even, then $\mathfrak{sc}<\mathfrak{bc}=\mathfrak{cs}=\mathfrak{dd}$. \item[(b)] If both $s$ and $d$ are odd, then $\mathfrak{sc}=\mathfrak{dd}<\mathfrak{bc}=\mathfrak{cs}$. \item[(c)] If $s$ is even and $d$ is odd, then $\mathfrak{dd}<\mathfrak{cs}<\mathfrak{sc}=\mathfrak{bc}$. \end{enumerate} \end{rem} \section*{Acknowledgments} Hyunsoo Cho was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. 2021R1C1C2007589) and the Ministry of Education (No. 2019R1A6A1A11051177). JiSun Huh was supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. 2020R1C1C1A01008524). Jaebum Sohn was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2020R1F1A1A01066216).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The recent release of the third LIGO--Virgo--KAGRA (LVK) Gravitational-Wave Transient Catalog (GWTC-3) increased the total number of gravitational-wave events detected by LVK to 90 \citep{gwtc-3}. \cite{Olsen2022} detect an additional ten events with the so-called IAS pipeline while \cite{Nitz2021} detect seven events not included in GWTC-3. The GWTC-3 catalog consists mostly of binary black hole mergers with two binary neutron star mergers and $\gtrsim 2$ two neutron star + black hole mergers.\footnote{The event GW190814 \citep{GW190814} could be a binary black hole or a neutron star + black hole binary.} Each event is characterized by $\approx 15$-$17$ astrophysical parameters \citep{lalinference}. There are seven extrinsic parameters describing the location and orientation of the event with respect to the observatory and eight or more intrinsic parameters including the component masses and spin vectors. Tidal parameters are often included for systems with a neutron star candidate \citep{Lackey2015,GW170817_EOS,Chatziioannou2020} and some binary black hole analyses now include parameters characterising orbital eccentricity \citep{eccentricity,gwtc1_eccentricity,GW190521_formation,gayathri22,Lennon2020,gwtc2_eccentricity}. Each event is analyzed with a Bayesian inference pipeline \citep{lalinference,bilby,bilby_gwtc1,Biwer2019,rift} in order to determine its astrophysical parameters. The output of these pipelines typically includes posterior samples---discrete representations of the posterior distribution, which can be used to calculate credible intervals for different combinations of astrophysical parameters (corner plots). However, their usefulness does not end there, as they serve as a reduced data product for some of the most exciting gravitational-wave science. Posterior samples are used for population studies \citep{intro,Vitale2021} to probe how compact binaries are distributed in mass and spin, providing insights into stellar evolution and binary formation (see, e.g., \cite{gwtc-3_pop,gwtc-3_cosmo,o3a_pop,o2_pop}). They are used in standard-siren analyses \citep{Schutz1986,Holz2005} to measure cosmological expansion (see, e.g., \cite{gwtc-3_cosmo}) and to determine the neutron star equation of state \citep{Baiotti2019,Lackey2015,Landry2019,stacking2,Wysocki2020,lvk_eos}. While the output of inference pipelines is crucially important to gravitational-wave astronomy, there are several aspects of these inference products that make life complicated for gravitational-wave astronomers. First, it is often necessary to carry out many different inference calculations, which analyse the same event with subtle differences. In particular, there are several popular waveform approximants used to model gravitational waveforms, each with different capabilities and different systematic errors. It is therefore common to analyse each event with multiple waveforms. Likewise, it is sometimes necessary to analyse the same event with different prior assumptions (for example, assuming one or both compact objects are not spinning \citep{BuildingBetterModels}), which can also lead to multiple runs. There may also be runs carried out with different samplers or different sampler settings, which occasionally yield qualitatively different results (see, e.g., \citep{Chia2021,deep}). Finally, the data itself can have different versions due to variations in calibration \citep{Sun2020}, cleaning \citep{gwosc2021}, and/or glitch subtraction schemes \citep{Chatziioannou2020b}. The large number of inference results associated with each event creates a book-keeping problem. This problem is compounded by a second issue: the high computational cost of gravitational-wave inference. Even a ``fast'' run on an ordinary binary black hole event with a cheap approximant can take $\approx\unit[10]{hrs}$. More ambitious analyses on longer signals and/or with cutting-edge approximants can take weeks. Given the substantial cost in time and CO$_2$ emission associated with the generation of astrophysical inference results, it is becoming increasingly necessary to carefully curate gravitational-wave inference results. Finally, the lack of a centralized repository for inference results makes the current workflow of gravitational-wave astronomers inefficient and susceptible to error. A researcher looking for the output of a particular inference run, (e.g., using the \textsc{IMRPhenomXPHM}approximant \citep{Pratten2021} to analyze GW151226 \citep{GW151226} with special sampler settings) may need to email collaborators to find the results. The results in question may have been subsequently moved or even deleted. And the researcher cannot be certain that the files she tracks down are precisely what she is looking for. The situation is already challenging with 90 events. The difficulties will, of course, increase as the gravitational-wave catalog swells \citep{Baibhav2019} to ${\cal O}(1000)$ events in the A+ era and ${\cal O}(10^6)$ events in the era of third-generation observatories such as Cosmic Explorer \citep{Reitze2019,Evans2021} and the Einstein Telescope \citep{Maggiore2020}. In order to address these challenges, we introduce {\sc GWCloud}\xspace, a searchable repository for the creation and curation of gravitational-wave inference results. There are five pillars underpinning its design philosophy:\footnote{These design pillars are aligned with the Australian Research Data Commons guidelines for ``FAIR'' data, which is findable, accessible, interoperable, and reusable. For more information, see \url{https://ardc.edu.au/resources/aboutdata/fair-data/}.} \begin{enumerate} \item \textbf{Uniformity of results.} Inference results are downloaded and uploaded in a uniform format. Uniformity facilitates validation: new results must pass checks to ensure that the inference output is complete and uncorrupted with the necessary metadata to repeat the analysis. \item \textbf{Reproducibility of results.} By curating the metadata and code version of each result, we insist that every entry in {\sc GWCloud}\xspace can be reproduced. \item \textbf{Stability of results over time.} Each result is assigned a permanent location. Users can locate previous results using a search engine. Before launching a new inference job, users can search to see if the analysis they want has already been performed. Avoiding duplicate analyses reduces the carbon footprint of gravitational-wave astronomy. \item \textbf{Access to results.} While a large fraction of gravitational-wave astronomy effort takes place within the LVK collaboration, significant advances are now made by external groups. {\sc GWCloud}\xspace provides multiple levels of access so that results can be shared both within the LVK collaboration and to the larger astronomical community, facilitating the exchange of ideas among a broad community. \item \textbf{Efficient use of computing resources.} {\sc GWCloud}\xspace enables users to submit inference jobs on multiple computing clusters through a single portal. Each cluster can use different batch queuing protocols (e.g., \textsc{slurm} versus \textsc{condor}) and allow for different user groups (e.g., LVK users versus the general public). In this way, {\sc GWCloud}\xspace helps match users with computing resources. \end{enumerate} {\sc GWCloud}\xspace is not the only tool that has been created to tackle these challenges. The Gravitational-Wave Open Science Centre (GWOSC) provides access to most of the publicly available posterior samples used in LVK papers. The samples can be queried, discovered, and downloaded through the GWOSC Event Portal, at \url{https://gwosc.org/eventapi}. These samples are also available through \url{zenodo.org}. Recently, \cite{Asimov} introduced \textsc{Asimov}, a framework for coordinating parameter estimation workflows. It includes a number of useful features, including a review sign-off system so that key results are vetted by humans. Meanwhile, the program \textsc{PESummary} \citep{pesummary} has helped facilitate the dissemination of uniform results (while simultaneously providing a tool for the visualisation of inference results). It includes functionality to access result files\footnote{\url{https://lscsoft.docs.ligo.org/pesummary/stable_docs/gw/fetch.html}} (both public and private) and functionality to reproduce results\footnote{\url{https://lscsoft.docs.ligo.org/pesummary/stable_docs/gw/cli/summaryrecreate.html}}. \textsc{PESummary}, \textsc{Asimov}, and {\sc GWCloud}\xspace provide complementary services, although the way in which they will interact in the future is not yet clear. The remainder of this paper is organized as follows. In Section~\ref{basics}, we cover the basics of {\sc GWCloud}\xspace: how to submit new inference jobs and how to upload the results of an inference analysis. In Section~\ref{GW150914}, we provide the first of three case studies: we use {\sc GWCloud}\xspace to reanalyze the iconic event GW150914 \citep{GW150914}, but with the assumption that both black holes have negligible spin. In Section~\ref{correlations}, we present the second case study: using {\sc GWCloud}\xspace to investigate correlations between mass and spin parameters using events in the second gravitational-wave transient catalog (GWTC-2) \citep{gwtc-2}. In Section~\ref{GW190521}, we describe the third case study: using {\sc GWCloud}\xspace to download posterior samples for the remarkably high-mass event GW190521 \citep{GW190521} obtained using an eccentric waveform approximant. We conclude in Section~\ref{conclusions} with a discussion of future development plans. Technical details are provided in the appendix (Section~\ref{appendix}). The case studies presented in this paper are supported by {\sc Jupyter}\xspace notebooks available as part of the online supplement here: \url{https://git.ligo.org/gwcloud/paper/}. \section{Basics}\label{basics} \subsection{What is {\sc GWCloud}\xspace?} In this Section, we provide a high-level overview of {\sc GWCloud}\xspace and instructions for the most basic tasks users are likely to perform. {\sc GWCloud}\xspace consists of two components: a portal to launch inference jobs and a database to store the results of inference jobs. Both components can be accessed using a web-based graphical user interface (UI) at \url{https://gwcloud.org.au/}. Users who prefer to access {\sc GWCloud}\xspace entirely with command-line programming may instead use the application programming interface (API). We anticipate that the UI's job submission feature will be most useful for new and casual users. However, the UI's search feature should be useful to any user searching for old inference results. The API is likely to be most useful to experts who sometimes need to submit large batches of jobs. It allows for more complicated job submissions with features that are not supported using the UI (for example, custom priors). The portal launches inference jobs using {\sc Bilby}\xspace \citep{bilby,bilby_gwtc1}. Jobs launched through the UI are at present run on the {\sc OzStar}\xspace cluster based at Swinburne University. Jobs submitted by authenticated LVK users through the API can also be run on computers that form part of the LIGO Data Grid. It is also possible to upload jobs to the {\sc GWCloud}\xspace database that were not run through the {\sc GWCloud}\xspace portal so long as they are in the standard {\sc Bilby}\xspace format (see Section \ref{upload}).\footnote{This feature can be used to upload results from other inference code such as \textsc{LALInference} \citep{lalinference} or \textsc{RIFT} \citep{rift}.} This is useful for storing jobs that were run before the creation of {\sc GWCloud}\xspace or jobs that require special resources to run, for example, computationally expensive Parallel {\sc Bilby}\xspace~\citep{pBilby} analyses that require a high-performance computing cluster. Users visiting the {\sc GWCloud}\xspace landing page are met with a prompt requiring sign-in. Members of the LVK collaboration can sign in using their \texttt{albert.einstein} credentials, which also provides access to the LIGO Data Grid, while other users can create a {\sc GWCloud}\xspace account. After logging in, the user is taken to the ``public jobs'' page, which lists the most recent {\sc GWCloud}\xspace runs; see Fig.~\ref{fig:example_entry} for an example entry from this recent-job list. The search field allows users to find jobs based on their \texttt{description}, the \texttt{user} who submitted the job, the \texttt{job\_name}, and the \texttt{event ID}. \texttt{Labels} are available to distinguish some jobs as special. For example, the \texttt{preferred} label indicates that a job is used for an official LVK result.\footnote{Other currently available labels include \texttt{Bad run}, \texttt{Production run}, \texttt{Review requested}, and \texttt{Reviewed}.} Previous jobs can be viewed and downloaded by clicking on the appropriate \texttt{view} link. Users may create a new job by clicking on \texttt{start a new job} and following the instructions. \begin{figure*} \centering \fbox{ \includegraphics[width=\textwidth]{example_entry.png} } \caption{ Example entry in {\sc GWCloud}\xspace. } \label{fig:example_entry} \end{figure*} In the next subsections we describe how to submit (or upload) a job using the API. Additional information is provided on the {\sc GWCloud}\xspace web page by clicking on \texttt{Python API.} In order to implement the examples below, readers must install the {\sc GWCloud}\xspace API: \begin{verbatim} pip install gwcloud-python \end{verbatim} \subsection{Submitting a new job with the {\sc GWCloud}\xspace API}\label{submit} Here we describe a {\sc Python}\xspace script for submitting a new {\sc GWCloud}\xspace job using the API; see \sloppy \texttt{JobSubmission.ipynb} for the corresponding {\sc Jupyter}\xspace notebook. The corresponding job can be viewed on the {\sc GWCloud}\xspace UI by searching for the name: \texttt{GW150914Example}. The first step is for the user to authenticate by initialising a token identifier generated by {\sc GWCloud}\xspace. At the beginning of any {\sc GWCloud}\xspace script, include the following lines to import the {\sc GWCloud}\xspace API and set up your token: \begin{verbatim} from gwcloud_python import GWCloud gwc = GWCloud(token=`YourTokenHere') \end{verbatim} The next step is to create a {\sc Bilby}\xspace {\tt .ini}\xspace file, which is required to submit a job with {\sc GWCloud}\xspace because it is required to run {\sc Bilby}\xspace. The {\tt .ini}\xspace file for this tutorial is \texttt{GW150914\_example.ini}. The {\tt .ini}\xspace file tells {\sc Bilby}\xspace which data to analyze and how to analyze it.\footnote{ Please see \url{https://lscsoft.docs.ligo.org/bilby/} for additional {\sc Bilby}\xspace documentation.} The {\tt .ini}\xspace file contains local paths to noise power spectral density ({\tt PSD}\xspace) file(s), spline calibration ({\tt .calib}\xspace) file(s), and the {\tt .prior}\xspace file. All of these files are uploaded to {\sc GWCloud}\xspace for reproducibility. With the {\tt .ini}\xspace file ready, we submit the job to the LIGO Data Grid's Caltech cluster like so: \begin{verbatim} new_job = gwc.start_bilby_job_from_file( job_name = ``GW150914Example'', job_description = ``Testing GWCloud'', private=False, ini_file=`GW150914_example.ini', cluster = Cluster.CIT) \end{verbatim} Once this command is executed, a new job with \texttt{job\_name = GW150914Example} becomes visible on the {\sc GWCloud}\xspace UI.\footnote{The job name, combined with the {\sc GWCloud}\xspace user name of the person who submitted the job uniquely define each event. Thus, two different users can have a job named \texttt{GW150914Example}, but one user can not give this name to two different jobs.} Since we set \texttt{private=False}, the job can be viewed by anyone using {\sc GWCloud}\xspace.\footnote{Jobs are marked as \texttt{LVK} or not. They are also marked as \texttt{private} or not. A job with \texttt{LVK=true} and \texttt{private=false} may be viewed by all members of the LVK Collaboration.} The last line of code tells {\sc GWCloud}\xspace to run this job on the Caltech computing cluster. The progress of the new job can be monitored with the {\sc GWCloud}\xspace UI. When the job is complete, the API can be used to retrieve the posterior samples from {\sc GWCloud}\xspace with the following command: \begin{verbatim} job.save_result_json_files(`/path/') \end{verbatim} Which saves the result files containing the posteriors to the specified path. \subsection{Uploading the results of an existing {\sc Bilby}\xspace run}\label{upload} Here we describe a {\sc Python}\xspace script to upload existing results to {\sc GWCloud}\xspace job using the API; see \texttt{JobUpload.ipynb} for the corresponding {\sc Jupyter}\xspace notebook. The corresponding job can be viewed on {\sc GWCloud}\xspace by searching for \texttt{job\_name = GW190412} by \texttt{user = Asa Baker}. As our starting point, we need a {\sc Bilby}\xspace{} {\tt output/}\xspace directory with the requisite subdirectory structure. We modify the \texttt{label} field in the \texttt{*\_config\_complete.ini} file to set the {\sc GWCloud}\xspace job name, e.g., \begin{verbatim} label = `GW150914_Upload_Example' \end{verbatim} Next, create a tar-zipped file of the {\sc Bilby}\xspace{} {\tt output/}\xspace directory, which can be accomplished by running this command: \begin{verbatim} tar -cvf archive.tar.gz . \end{verbatim} Finally, the job is submitted by uploading the tar-zipped file to {\sc GWCloud}\xspace: \begin{verbatim} gwc.upload_job_archive('Example upload with GW159014.', '/path/archive.tar.gz') \end{verbatim} {\sc GWCloud}\xspace checks the submission to make sure all the requisite results and supporting files are included. \section{Case Study I: submit a job to analyze GW150914 with a zero-spin prior}\label{GW150914} The {\sc GWCloud}\xspace graphical UI allows users to submit inference jobs with various default prior settings. While these settings are probably adequate for new users, expert users will need the API in order perform runs with custom priors. Here we provide an example of how the API can be used to carry out an inference calculation with a non-standard prior; see \texttt{CaseStudy1.ipynb} for the corresponding {\sc Jupyter}\xspace notebook. The corresponding jobs can be viewed on {\sc GWCloud}\xspace by searching for the names: \texttt{GW150914Example} and \texttt{GW150914NoSpin} by \texttt{Asa Baker}. Specifically, we reanalyze the iconic first binary black hole event GW150914 \citep{GW150914}, but assuming that both black holes have negligible dimensionless spins $\chi_1 = \chi_2 = 0$ (here the $1$ subscript refers to the more massive ``primary'' black hole while the $2$ subscript refers to the less massive ``secondary'' black hole). This example is motivated by work by \cite{Fuller2019,Miller2020,Roulet2020,BuildingBetterModels,Hoy2022}, which suggest a sub-population of LVK detections are likely characterized by negligible black-hole spin. We prepare two {\tt .ini}\xspace files: \texttt{GW150914.ini} reproduces standard {\sc Bilby}\xspace settings for a short-duration (high-mass) binary black hole signal. The prior for the dimensionless spins $\chi_1, \chi_2$ is uniform on the interval of zero to one. Meanwhile, in \texttt{GW150914\_nospin.ini}, we set $\chi_1=\chi_2=0$. We submit both jobs and download the results using the syntax described in Section~\ref{submit}. In Fig.~\ref{fig:GW150914} we provide a corner plot comparing the credible intervals for various parameters of GW150914 assuming a uniform prior for the dimensionless spins (blue) and a no-spin prior (orange). The different shading indicates one-, two-, and three-sigma credible intervals. The different choice of prior yields subtle but interesting shifts in the posterior distribution. Comparing the marginal likelihoods for each run, we find that the zero-spin hypothesis is preferred with a Bayes factor of $\text{BF}=3.7$, consistent with the conclusions from \cite{Miller2020,Roulet2020,BuildingBetterModels,Hoy2022} that some gravitational-wave events are best described as having negligible spin. \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{GWCloudSpinVsNoSpinCorner.pdf} \caption{ A corner plot showing the marginalised posterior distribution of the first binary black hole event GW150914. The masses are given provided in the lab frame. The default results (calculated with a $U(0,1)$ prior for the dimensionless spins $\chi_1, \chi_2$) is shown in blue while the orange shows the results assuming $\chi_1=\chi_2=0$. The different shades indicate one-, two-, and three-sigma credible intervals. } \label{fig:GW150914} \end{figure*} \section{Case Study II: Download results from GWTC-2 for Correlation study}\label{correlations} In this case study, we provide an example of how the API can be used to download previous inference results to look for trends in the population of merging binary black holes; see \texttt{CaseStudy2.ipynb} for the corresponding {\sc Jupyter}\xspace notebook. The corresponding jobs can be viewed on {\sc GWCloud}\xspace by searching for the keyword: \texttt{GWTC-2}. This example is motivated by work by \cite{Callister2021}, suggesting that black-hole spin is correlated with mass ratio. We download the ``preferred samples'' (used for official LVK analyses) for 47 binary black hole events in GWTC-2 \citep{gwtc-2,o3a_pop}. To retrieve these GWTC-2 jobs, we run the following command: \begin{verbatim} jobs = gwc.get_public_job_list( search="GWTC-2", time_range=TimeRange.ANY) \end{verbatim} In Fig.~\ref{fig:GWCloud_GWTC_masses}, we plot the 90\% credible intervals in the plane of total mass $M$ and mass ratio $q$ for events in GWTC-2. In Fig.~\ref{fig:GWCloud_GWTC_chieff}, meanwhile, we plot credible intervals in the plane of chirp mass $\mathcal{M}$ and the effective inspiral spin $\chi_\text{eff}$. These two plots can be compared to Figs.~6-7 in \cite{gwtc-2}. By examining the distributions of events in two-dimensional planes, it is sometimes possible to see previously unknown correlations. In this case, there is not an obvious correlation present in either plot. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{GWCloud_GWTC12_masses.pdf} \caption{ Compact binary coalescence events from GWTC-2 in the plane of total mass $M$ and mass ratio $q$. Each contour represents the 90\% credible region. Select events are highlighted. The dashed lines mark the border beyond which one or more component has a mass $<\unit[3]{M_\odot}$; objects below this threshold are neutron-star candidates. The events in the grey region are confidently binary black hole events while events in the mauve region may contain a neutron star. The purple region is forbidden by the requirement that $m_1 > m_2$. } \label{fig:GWCloud_GWTC_masses} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{GWCloud_GWTC12_chieff.pdf} \caption{ Compact binary coalescence events of the LVK GWTC-2 catalog in the plane of chirp mass $\mathcal{M}$ and effective inspiral spin $\chi_\text{eff}$. Each contour represents the 90\% credible region for a different event. Select events are highlighted. } \label{fig:GWCloud_GWTC_chieff} \end{figure*} \section{Case Study III: Download results for eccentric analysis of GW190521}\label{GW190521} Binary black holes formed from stellar binaries are expected to merge with quasi-circular orbits. However, a non-zero eccentricity may indicate that the binary was assembled from previously unbound black holes, a process called ``dynamical formation.'' We consider GW190521~\citep{GW190521}, one of the most massive binary black hole events to date, which shows signs of non-zero spin precession and/or eccentricity~\citep{GW190521_formation, gayathri22}. \cite{GW190521_formation} analyzed this event using quasi-circular and eccentric waveforms. The results of this analysis have been uploaded to {\sc GWCloud}\xspace and can be viewed on by searching for: \texttt{job\_name = GW190521, user = Asa Baker} (for results obtained with the circular waveform obtained with \textsc{NRSur7dq4} \cite{Varma2019}) and/or \texttt{job\_name = GW190521\_eccentric, user = Asa Baker} (for the results obtained with \textsc{SEOBNRE} \citep{Cao2017,Liu2020}). To retrieve these jobs from {\sc GWCloud}\xspace, search for \texttt{GW190521} in the public jobs by executing the following command \begin{verbatim} jobs = gwc.get_public_job_list( search="GW190521", time_range=TimeRange.ANY) \end{verbatim} and download the jobs by \texttt{Asa Baker}. In Fig.~\ref{fig:GW190521}, we plot the posterior distribution for the eccentricity of GW190521 at a reference frequency of $\unit[10]{Hz}$ (compare with Fig. 1 of~\citet{GW190521_formation}). See \texttt{CaseStudy3.ipynb} for the corresponding {\sc Jupyter}\xspace notebook. \begin{figure*} \centering \includegraphics[width = 0.6\textwidth]{Eccentricity_posterior.pdf} \caption{The posterior distribution for the eccentricity of GW190521 at a reference frequency of $\unit[10]{Hz}$ obtained with the \textsc{SEOBNRE} waveform \citep{Cao2017,Liu2020}) by \cite{GW190521_formation}. } \label{fig:GW190521} \end{figure*} \section{Future development}\label{conclusions} We close by considering the future of {\sc GWCloud}\xspace, describing new functionality we hope to add in both the short term and long term. As we plan for the future, we invite input from the astronomical community; please visit our git issue tracker to leave a suggestion or to propose a new feature.\footnote{\url{https://gitlab.com/CAS-eResearch/GWDC/projects/gwcloud/issues}} \textbf{Short-term goals.} \begin{enumerate} \item \textit{Making LVK jobs public.} When LVK data is published, jobs that are previously marked as \texttt{LVK} can be changed to \texttt{public}. \item \textit{{\sc GWCloud}\xspace teams.} Share jobs among a small team. Team members can add comments to different jobs, e.g., ``this result does not look fully converged.'' Teams can combine jobs to create catalogs. \item \textit{Archiving complementary information.} Gravitational-wave inference results do not exist in vacuum. In order to generate and interpret them, we rely on a number of other data products including estimates of the noise power spectral density (e.g., \cite{BayesLine}), injection studies used to quantify selection effects \citep{Talbot2022,Gerosa2020}, and probabilities that a given event is astrophysical $p_\text{astro}$ \citep{Kapadia2020}. We hope to extend {\sc GWCloud}\xspace to include these and other data products. \item \textit{Visualization.} Static and dynamic visualization of inference products is useful to understand covariances. Such functionality is currently offered within the \texttt{pe\_summary} toolkit~\cite{pesummary}; a short-term goal is full integration of these visualisation toolkits into the {\sc GWCloud}\xspace workflow. \end{enumerate} \textbf{Long-term goals.} \begin{enumerate} \item \textit{Identify similar jobs.} Warn users if they are about to launch a job that is similar to one already in the database. Users may choose to use existing results rather than waiting for new ones (and potentially generating more CO$_2$ emissions). In some cases, importance sampling can be used to re-weight posterior samples to convert the results from a ``proposal'' distribution to a ``target'' distribution \citep{hom}. \item \textit{Estimate job run time.} Use machine learning to provide estimated time to completion for new jobs. Warn users if they launch a job that is likely to take more than a week to complete. \item \textit{Connecting to other clusters.} Currently, {\sc GWCloud}\xspace provides users access to the computing clusters of the LIGO Data Grid and the OzStar clusters at the Swinburne University of Technology. However, {\sc GWCloud}\xspace could be connected to other computing resources such as the Open Science Grid \citep{Pordes2007}. \item \textit{Automated inference.} The project could be extended to launch automated inference jobs for promising triggers by e.g., integrating with \textsc{Asimov}~\cite{Asimov}. When extra computational resources are available, carry out inference on all data segments. The results can be used to carry out a statistically optimal search for the astrophysical background \citep{tbs} and to construct fully Bayesian detection statistics \citep{VeitchVecchio,bcr,Pratten2021b}. \item \textit{Beyond posterior samples.} The majority of gravitational-wave inference relies on posterior samples. However, in some cases, it can be useful to work with other inference products, for example, machine-learning (and grid) representations of marginal likelihoods \citep{stacking,stacking2,Wysocki2020,rift}. Additional work is required to define a standardised format for such inference products. \end{enumerate} \section*{Acknowledgements} This work is supported by the Gravitational Wave Data Centre, which is funded under the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) Program via Astronomy Australia Ltd. (AAL). This work is supported by the Australian Research Council (ARC) Centre of Excellence CE170100004. PDL is supported by ARC Discovery Project DP22010161.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Reinforcement learning has been used to solve many control and robotics tasks, however, only a handful of papers has been published so far that apply this technique to end-to-end driving \cite{AsyncMethods,EndToEndRaceDriving,LearningToDriveInADay,SSINet,DeepRacer,Szemenyei2019}. Even fewer works focus on reinforcement learning-based driving, trained only in simulations but applied to real-world problems. Generally, bridging the gap between simulation and the real world is an important transfer learning problem related to reinforcement learning and is an unresolved task for researchers. Mnih~et~al.~\cite{AsyncMethods} proposed a method to train vehicle controller policies that predict discrete control actions based on a single image of a forward-facing camera. Jaritz~et~al.~\cite{EndToEndRaceDriving} used WRC6, a realistic racing simulator to train a vision-based road following policy. They assessed the policy's generalization capability by testing on previously unseen tracks and on real diving videos, in an open-loop configuration, but their work didn't extend to evaluation on real vehicles in closed-loop control. Kendall~et~al.~\cite{LearningToDriveInADay} demonstrated real-world driving by training a lane-following policy exclusively on a real vehicle, under the supervision of a safety driver. Shi~et~al.~\cite{SSINet} presented research that involves training reinforcement learning agents in Duckietown similarly to ours, however, they mainly focused on presenting a method that explains the reasoning of the trained agents, rather than on the training methods. Similarly to our research, Balaji~et~al.~\cite{DeepRacer} presented a method for training a road-following policy in a simulator using reinforcement learning and tested the trained agent in the real world, yet their primary contribution is the DeepRacer platform, rather than the in-depth analysis of the road following policy. In this contribution, we study vision-based end-to-end reinforcement learning on vehicle control problems and propose a solution that performs lane following in the real world, using continuous actions, without any real data provided by an expert (as~in~\cite{LearningToDriveInADay}). Also, we perform validation of the trained policies in both the real and simulated domains. \section{Methods} \begin{figure}[tbp] \centering \subfloat[Simulated]{\includegraphics[width=2.5cm]{Salient_objects_mono_sim_LF_26} \label{fig_sal_obj_sim}} \subfloat[Real]{\includegraphics[width=2.5cm]{Salient_objects_mono_real_LF_22} \label{fig_sal_obj_real}} \subfloat[Collision avoidance]{\includegraphics[width=2.5cm]{Salient_objects_mono_real_LFV_29} \label{fig_sal_obj_lfv}} \caption{Salient objects highlighted on observations in different domains and tasks. Blue regions represent high activations throughout the network.} \label{fig_sal_obj} \end{figure} We trained a neural network-based controller that takes images from a forward-looking monocular camera and produces control signals to drive a vehicle in the right lane of a two-way road. The vehicle to be controlled is a small differential-wheeled mobile robot, a so-called Duckiebot, which is part of the Duckietown ecosystem\cite{Duckietown}, a simple and accessible platform for research and education on mobile robotics and autonomous vehicles. The primary objective is to travel as far as possible under a given time, without leaving the road (while lane departure is allowed, but not preferred). Training and evaluation code for this paper will be open sourced after the 5\textsuperscript{th} AI-Driving Olympics and will be available on GitHub\footnote{\url{https://github.com/kaland313/Duckietown-RL}}. \subsection{Reinforcement learning algorithm} To train the policy we used Proximal Policy Optimization algorithm \cite{PPO} for its stability, sample-complexity, and ability to take advantage of multiple parallel workers. Policy optimization algorithms are on-policy reinforcement learning methods that directly update the $\pi_\theta(a_t|s_t)$ policy based on the $a_t$ actions and the $r_t$ reward received for them ($\theta$ denotes the trainable parameters of the policy and $s_t$ is the observation at timestep $t$). The policy used for these algorithms is stochastic and in case of deep reinforcement learning it's implemented by a neural network, which is updated using gradient methods. In simpler versions of the algorithm (such as REINFORCE\cite{REINFORCE}), the gradients are estimated by $\hat{g} = \mathop{\mathbb{\hat{E}}}_{\tau\sim \pi_\theta}\left [ \nabla_\theta \log \pi_\theta(a_t|s_t) {G}^{\pi_\theta}(a_t,s_t)\right ]$, where $G_t$ is the return. Proximal Policy Optimization performs the weight updates using a special loss function to keep the new policy close to the old, thereby improving the stability of the training. Two loss functions were proposed by Schulman et al.~\cite{PPO}: \small\begin{equation} \mathfrak{L}_{CLIP}(\theta)=\mathop{\mathbb{\hat{E}}}\left [\min \left( r_t(\theta) \hat{A}_t, \mathrm{clip} (r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t\right) \right] \end{equation} \begin{equation} \mathfrak{L}_{KLPEN}(\theta)=\mathop{\mathbb{\hat{E}}}\left[r_t(\theta) \hat{A} - \beta \mathrm{KL}\left[\pi_{\theta_{old}}(\cdot|s_t), \pi_{\theta}(\cdot|s_t) \right] \right] \end{equation} \normalsize where $\mathrm{clip}(\cdot)$ and $\mathrm{KL}[\cdot]$ refer to the clipping function and KL-divergence respectively, while $\hat{A}$ is calculated as the generalized advantage estimate~\cite{GAE}. In these loss functions $r_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}$, $\epsilon$ is a constant usually in the $[0.1, 0.3]$ range, while $\beta$ is an adaptive parameter. We used an open-source implementation of the algorithm\cite{RLlib}, which performs the gradient updates based on the weighted sum of these loss functions. \subsection{Policy architecture} \begin{figure}[tbp] \centering \includegraphics[width=8.5cm]{figs/PolicyArchitecture.pdf} \caption{Illustration of the policy architecture with the used notations.} \label{fig:policyarchitecture} \end{figure} The controller policy is realized by a shallow (4-layer) convolutional neural network. We consider this policy end-to-end because the only learning component is the neural network, which directly computes actions based on observations from the environment. Both the policy and the value network use the architecture presented by Mnih et al. \cite{AsyncMethods}, with no weight sharing (with the only difference of using linear activation on the output of the policy network). Some pre- and post-processing is applied to the observations and actions respectively, but these only perform very simple transformations. The input of the policy network is the last three observations (images) scaled, cropped and stacked (along the depth axis). The observations returned by the environment are $640\times480$ (width, height) RGB images whose top third mainly shows the sky, therefore is cropped. Then, the cropped images are scaled down to $84\times84$ resolution (note the uneven scaling), which are then stacked along the depth axis resulting in $84\times84\times9$ input tensors. The last three images are stacked to provide the policy with information about the robot's speed and acceleration. We experimented with multiple action representations (see sec.~\ref{sec_action_representations}), depending on these the policy outputs one or two scalar values which control the vehicle. The policy is stochastic, therefore the output of the neural network produces the parameters of a (multivariate diagonal) normal distribution, which is sampled to acquire actions. \subsection{Action representations} \label{sec_action_representations} The vehicle to be controlled is a differential-wheeled robot, therefore the most general action representation is to directly predict the angular velocities of the two wheels as continuous values in the $\omega_{l,r}\in[-1;1]$ range (where 1 and -1 correspond to rotating forward and backward at full speed). However, this action space allows for actions that are not necessary for the maneuvers we examine in this paper. Moreover, by allowing unnecessary actions, the reinforcement learning algorithm must rule these out, potentially making the exploration of the action space more difficult therefore increasing the steps required to train an agent. Several methods can be used to constrain and simplify the action space, such as discretization, clipping some actions, or mapping to a lower-dimensional space. Most previous works (\hspace{-0.01mm}\cite{AsyncMethods,EndToEndRaceDriving,DeepRacer}) use discrete action spaces, thus the neural network in these policies selects one from a set of hand-crafted actions (steering, throttle combinations), while Kendall et al. \cite{LearningToDriveInADay} utilize continuous actions, as we do. However, they don't predict throttle directly, only a speed set-point for a classical controller. In order to test the reinforcement learning algorithm's ability to solve the most general problem, we experimented with multiple action mappings and simplifications of the action space. These were \subsubsection{Wheel Velocity} The policy directly outputs wheel velocities $\omega_{l,r}\in[-1;1]$ \subsubsection{Wheel Velocity - Positive Only} Only allow positive wheel velocities, because only these are required to move forward. Values predicted outside the $\omega_{l,r}\in[0;1]$ interval are clipped. \subsubsection{Wheel Velocity - Braking} Wheel velocities could still only fall in the $\omega_{l,r}\in[0;1]$ interval, but the predicted values are interpreted as the amount of braking from the maximum speed $\omega_{l,r}=1-y_{pred,l,r}$. The main differentiating factor from the Positive Only option is the bias towards moving forward at full speed. \subsubsection{Steering} Predicting a scalar value that is continuously mapped to combinations of wheel velocities. The 0.0 scalar value corresponds to going straight (at full speed), while -1.0 and 1.0 refer to turning left or right, with one wheel completely stopped and the other one going at full speed. The speed of the robot is always maximal for a particular steering value. \subsection{Reward shaping} \begin{figure}[tbp] \centering \includegraphics[width=8cm]{Preferred-Angle-Drawing3} \caption[]{Explanation of the proposed Orientation reward. (a) explains $\Psi, d$, (b) shows how the desired orientation depends on the lateral error, (d), shows some examples of desired configurations, while (c) shows the function \\ \begin{minipage}{\columnwidth} \[ \Lambda (x) = \left\{\begin{array}{ll} \frac{1}{2} + \frac{1}{2}\cos \left(\pi \frac{x}{\varphi}\right) & \textrm{if } -1\ge x\ge 1\\ \varepsilon (-|\frac{x}{\varphi}| + 1) & \textrm{otherwise } \end{array}\right., \varepsilon \in [10^{-1},10^{-2}] \tag{3} \label{eq_Lambda} \] \end{minipage} } \label{fig_reward} \end{figure} The reward function is a fundamental element of every reinforcement learning problem as it serves the important role of converting a task from a textual description to a mathematical optimization problem. The primary objective for the agent is to travel as far as possible under a given time in the right lane, therefore we propose two rewards that promote this behavior. \subsubsection{Distance traveled} The agent is directly rewarded proportionally to the distance it moved further along the right lane under every step. Only longitudinal motion is counted, and only if the robot stayed in the right lane. \subsubsection{Orientation} The agent is rewarded if it is facing towards and moves in a certain desired orientation, which is determined based on its lateral position. In simple terms, it is rewarded the most if it faces towards the center of the right lane (some example configurations are shown on fig.~\ref{fig_reward}d). A term proportional to the angular velocity of the faster moving wheel is also added to encourage fast motion. This reward is calculated as $r = \lambda_{\Psi} r_{\Psi}(\Psi, d) + \lambda_v r_v(\omega_l,\omega_r)$, where $r_{\Psi}(\cdot), r_v(\cdot)$ are the orientation and velocity based components, while $\lambda_{\Psi}, \lambda_v$ constants scale these to $[-1,1]$. $\Psi, d$ are orientation and lateral error from the desired trajectory, which is the center line of the right lane (see fig.~\ref{fig_reward}a) The orientation-based term is calculated as $r_{\Psi}(\Psi, d) = \Lambda(\Psi_{err}) = \Lambda(\Psi-\Psi_{des}(d))$, where $\Psi_{des}(d)$ is the desired orientation, calculated based on the lateral distance from the desired trajectory (see fig.~\ref{fig_reward}b for the illustration of $\Psi_{des}(d)$). The $\Lambda$ function achieves that $|\Psi_{err}| < \varphi$ error is promoted largely, while error larger than this leads to small negative reward (formal description \eqref{eq_Lambda} and a plot of $\Lambda$ is shown on fig.~\ref{fig_reward}c). $\varphi=50^\circ$ hyper-parameter was selected arbitrarily. The velocity-based component is calculated as $r_v(\omega_l,\omega_r) = \mathrm{max}(\omega_l,\omega_r)$ to reward high speed motion equally in straight and curved sections, where only the outer wheel can rotate as fast as on straight sections. \subsection{Simulation to reality transfer} \begin{figure}[tbp] \centering \includegraphics[width=8cm]{DomainRand} \caption{Examples of domain randomized observations} \label{fig_domain_rand} \end{figure} To train agents, we used an open-source simulation of the Duckietown environment\cite{GymDuckietown}. It models certain physical properties of the real environment accurately (dimensions of the robot, camera parameters, dynamic properties etc.), but several other effects (textures, objects surrounding the roads) and light simulation are less realistic (e.g. compared to modern computer games). These inaccuracies create a gap between simulation and reality which makes it challenging for any reinforcement learning agent to be trained in a simulation but operate in reality. To bridge the simulation to reality gap, and to achieve the generalization capability required for real performance we used domain randomization. This involves training the policy in many different variants of a simulated environment, by varying lighting conditions, object textures, camera, and vehicle dynamics parameters, road structures etc. (for examples of domain randomized observations see fig.~\ref{fig_domain_rand}). In addition to the "built-in" randomization options of Gym-Duckietown, we trained on a diverse set of maps to further improve the agent's generalization capability. \subsection{Collision avoidance} Collision avoidance with other vehicles greatly increases the complexity of the lane-following task. These problems can be solved in different ways, e.g. by overtaking or following from a safe distance. However, the sensing capability of the vehicle and the complexity of the policy determine the solution it can learn. Images from the forward-facing camera of a duckiebot only have $~160^\circ$ horizontal field of view, therefore the policy controlling the vehicle has no information about objects moving next to or behind the robot. Also, for simplicity, we chose a convolutional network and didn't incorporate an LSTM cell into it. For these reasons, it is unable to plan long maneuvers, such as overtaking, which also requires side-vision to check if returning to the right lane is safe. Therefore, we trained a policy in situations where there is a slow vehicle ahead, and the agent has to learn to perform lane following at full speed until it catches up with the vehicle upfront, then it must reduce its speed and keep a safe distance to avoid collision. In these experiments, the \textit{Wheel Velocity - Braking} action mapping was used because this allows the policy to slow down or even stop the vehicle if necessary (unlike the one we call \textit{Steering}). Rewards used to train for collision avoidance were the modified version of the Orientation reward and Distance traveled (unchanged). The simulation we used provides a $p_{coll} $ penalty if the so-called safety circles of two vehicles overlap. The reward term calculated based on this penalty is proportional to its change if it's decreasing, otherwise, it's 0. \stepcounter{equation} \begin{equation} r_{coll}=\left\{\begin{array}{ll} -\lambda_{coll}\cdot\Delta p_{coll} & \textrm{if } \Delta p_{coll}<0 \\ 0 & \textrm{otherwise} \end{array}\right. \end{equation} This term is added to the Orientation reward and intends to encourage the policy to increase the distance from the vehicle ahead if it got too close. Collisions are only penalized by terminating the episode, without giving any negative reward. \subsection{Evaluation} \label{sec_evaluation} \begin{figure}[tbp] \centering \subfloat[Simulated]{ \includegraphics[height=35mm]{TestTrackSim} \label{fig_test_track_sim}} \subfloat[Simulated]{ \includegraphics[width=35mm,angle=90]{TestLoopSimulated}\label{fig_test_track_sim2real_sim}} \subfloat[Real]{ \includegraphics[width=35mm,angle=90]{TestLoopReal}\label{fig_test_track_sim2real_real}} \caption{(a): Test track used for simulated reinforcement learning and baseline evaluations. (b),(c): Real and simulated test track used for the evaluation of the simulation to reality transfer.} \label{fig_test_track} \end{figure} To assess the performance of the reinforcement learning-based controller, we measured multiple performance metrics in the simulation and compared these against two baselines, one using a classical control theory approach, and human driving. To our knowledge no other methods have been published so far, which could be used as a baseline. These metrics are: \subsubsection{Survival time} The time until the robot left the road or the time period of an evaluation. \subsubsection{Distance traveled in ego-lane [m]} The distance traveled along the right-hand-side lane under a fixed time period. Only longitudinal motion is counted, therefore tangential movement counts the most towards this metric. \subsubsection{Distance traveled both lanes [m]} The distance traveled along the road under a fixed time period, but also sections where the agent moved in the oncoming lane count towards this metric. \subsubsection{Lateral deviation [m$\cdot$s]} Lateral deviation from the lane center line integrated over the time of an episode. \subsubsection{Orientation deviation [rad$\cdot$s]} The robot orientation's deviation from the tangent of the lane center line, integrated over the time of an episode. The classical control theory baseline relies on information about the robot's relative location and orientation to the centerline of the lane, which are available in the simulator. This baseline works by controlling the robot to orient itself towards a point on it's desired path ahead and calculating wheel velocities using a proportional-derivative (PD) controller, based on the orientation error of the robot. The parameters of this controller were hand-tuned to achieve sufficiently good performance, but more advanced control schemes could offer better results. In many reinforcement learning problems (e.g. the Atari 2600 games \cite{Atari2600}) the agents are compared to human baselines. Motivated by this benchmark we propose a method to measure how well humans are able to control duckiebots, which could be used as a baseline. The values shown in Table~\ref{tab_results_sim} were recorded by controlling the simulated robot using the arrow keys on a keyboard (therefore via discrete actions), while the observations seen by the human driver were very similar compared to the observations of the reinforcement learning agent. \section{Results} \begin{table}[tbp] \caption{Comparison of the reinforcement learning agent to two baselines in simulation} \begin{center} \begin{tabular}{lcccc} \hline Mean metrics over 5 episodes & & RL & PD & Human \\ & & agent & baseline & baseline \\ \hline Survival time [s] & $\uparrow$ & 15 & 15 & 15 \\ Distance traveled both lanes [m] & $\uparrow$ & 7.1 & 7.6 & 7.0 \\ Distance traveled ego-lane [m] & $\uparrow$ & 7.0 & 7.6 & 6.7 \\ Lateral deviation [m$\cdot$s] & $\downarrow$ & 0.5 & 0.5 & 0.9 \\ Orientation deviation [rad$\cdot$s]& $\downarrow$ & 1.5 & 1.1 & 2.8 \\ \hline \end{tabular} \label{tab_results_sim} \end{center} \end{table} \begin{figure}[tbp] \centering \includegraphics[width=6cm]{Actiontypes_Legend.pdf}\\ \vspace{-3mm} \subfloat[Orientation reward]{\includegraphics[height=3.1cm]{Actiontypes_RrewardOrientation}} \subfloat[Distance travelled reward]{\includegraphics[height=3.1cm]{Actiontypes_RrewardDistanceTraveled_NoAx}} \caption{Learning curves for the reinforcement learning agent with different action representations and reward functions.} \label{fig_rewards_actionmappings} \end{figure} \begin{figure*}[!ht] \centering \subfloat[{$t=0$[s] Initial positions}]{\includegraphics[width=27mm]{TrajectoryLFV_TopView_t0s.png}} \enspace \subfloat[{$t=6$[s] Catching up}]{\includegraphics[width=27mm]{TrajectoryLFV_TopView_t6s.png}} \enspace \subfloat[{$t=8$[s]}]{\includegraphics[trim=842pt 595pt 842pt 595pt, clip,width=27mm]{TrajectoryLFV_TopView_t8s}} \enspace \subfloat[{$t=24$[s]}]{\includegraphics[trim=842pt 595pt 842pt 595pt, clip,width=27mm]{TrajectoryLFV_TopView_t24s}} \enspace \subfloat[Approximate distance between the vehicles.]{\includegraphics[width=58mm]{FollowDistanceTimeplot}\label{fig_lfv_distance_plot}} \caption{Sequence of robot positions in a collision avoidance experiment with a policy trained using the modified Orientation reward. After $t=6[s]$ the controlled robot follows the vehicle in front of it from a short, but safe distance until the end of the episode. (Approximate distance is calculated as the distance between the center points of the robots minus the length of a robot.)} \label{fig_lfv_traj_sequence} \end{figure*} \subsection{Simulation} Even though multiple papers demonstrate the feasibility of training vision-based driving policies using reinforcement learning, adapting to a new environment still poses many challenges. Due to the high dimensionality of the image-like observations, many algorithms converge slowly and are very sensitive to hyperparameter selection. Our method, using Proximal Policy Optimization is able to converge to good lane following policies in 1 million timesteps, thanks to the high sample-complexity of the algorithm. \subsubsection{Comparing against baselines}Table~\ref{tab_results_sim} compares our reinforcement learning agent to the baselines. The performance of the trained policy is measurable to our classical control theory baseline, as well as to how well humans are able to control the robot in the simulation. Most metrics indicate similarly good or equal performance, even though the PD controller baseline relies on high-level data such as position and orientation error, rather than images. \subsubsection{Action representation and reward shaping} Experiments with different action representations show that constrained and preferably biased action spaces allow convergence to good policies (\textit{Wheel Velocity - Braking} and \textit{Steering}), however, more general action spaces (\textit{Wheel Velocity} and it's \textit{Clipped} version) can only converge to inferior policies under the same number of steps (see fig.~\ref{fig_rewards_actionmappings}). The proposed orientation based reward function also leads to as good final performance as "trivially" rewarding based on the distance traveled, however, the latter seems to perform better on more general action representations (because policies using these action spaces and trained with the Orientation reward doesn't learn to move fast). \subsection{Real-world driving} \begin{table}[tbp] \caption{Evaluation results of reinforcement learning agent in the real environment and in matching simulations} \begin{center} \begin{tabular}{llccc} \hline Eval. & Mean metrics over 6 episodes & & Domain & Nominal \\ Domain & & & Rand. & \\ \hline Real & Survival time [s] & $\uparrow$ & 54 & 45 \\ & Distance traveled both lanes [m] & $\uparrow$ & 15.6 & 11.4 \\ & Distance traveled ego-lane [m] & $\uparrow$ & 7.0 & 8.4 \\ \hline Sim. & Survival time [s] & $\uparrow$ & 60 & 60 \\ & Distance traveled [m] & $\uparrow$ & 15.5 & 15.0 \\ \hline& \end{tabular} \label{tab_sim2real} \end{center} \end{table} To measure the quality of the transfer learning process and the performance of the controller in the real world, we selected performance metrics that are easily measurable both in reality and simulation. These were recorded in both domains in matching experiments and compared against each other. The geometry of the tracks, the dimensions, and speed of the robot are simulated accurately enough, to evaluate the robustness of the policy against all inaccurately and not simulated effects. Using this method, we tested policies trained in the domain randomized simulation, but also ones that were trained only in the "nominal" simulation. This allows us to evaluate the transfer learning process and highlight the effects of training with domain randomization. The real and simulated version of the test track used in this analysis is shown on fig.~\ref{fig_test_track_sim2real_sim}~and~\ref{fig_test_track_sim2real_real}. During real evaluations, generally, we experienced that under ideal circumstances (no distracting objects outside the roads and good lighting conditions) the policy trained in the "nominal" simulation is able to drive reasonably well. However, training with domain randomization leads to more reliable robust performance in the real world. Table~\ref{tab_sim2real} show the quantitative results of this evaluation. The two policies seem to perform equally well if comparing them based on their performance in the simulation. However, metrics recorded in the real environment show that the policy trained with domain randomization performs almost as well as in the simulation, while the other policy performs noticeably worse. The lower \textit{Distance traveled ego-lane} metric of the domain randomized policy is because the vehicle tends to drift to the left lane in sharp turns but returns to the right-lane afterward, while the nominal policy usually made more serious mistakes. Note that in these experiments the Orientation based reward and the Steering action representation were used, as this configuration learns to control the robot in the least amount of steps and training time. An online video demonstrates the performance of our trained agent: \url{https://youtu.be/kz7YWEmg1Is} \subsection{Collision avoidance} \begin{table}[tbp] \caption{Evaluation results of policies trained for collision avoidance with different reward functions} \begin{center} \begin{tabular}{lccc} \hline Mean metrics over 15 episodes & & Distance & Orientation \\ & & traveled & $+ r_{coll}$ \\ \hline Survival time (max. 60) [s] & $\uparrow$ & 46 & 52 \\ Distance traveled both lanes [m] & $\uparrow$ & 22.5 & 22.9 \\ Distance traveled ego-lane [m] & $\uparrow$ & 22.7 & 23.1 \\ Lateral deviation [m$\cdot$s] & $\downarrow$ & 1.9 & 1.6 \\ Orientation deviation [rad$\cdot$s]& $\downarrow$ & 6.3 & 5.8 \\ \hline \end{tabular} \label{tab_results_collsion} \end{center} \vspace*{-3mm} \end{table} Fig~\ref{fig_lfv_traj_sequence} demonstrates the learned collision avoidance behavior. In the first few seconds of the simulation, the robot controlled by the reinforcement learning policy accelerates to full speed. Then, as it approaches the slower, non-learning robot, it reduces it's speed and maintains approximately a constant distance from the vehicle ahead (see fig~\ref{fig_lfv_distance_plot}). Table~\ref{tab_results_collsion} shows that training with both reward functions lead to functional lane-following behavior, however the non-maximal \textit{Survival time} values indicate that neither of the policies are capable of performing lane following reliably with the presence of an obstacle robot for 60 seconds. All metrics in Table~\ref{tab_results_collsion} indicate that the modified Orientation reward leads better lane following metrics, than the simpler Distance traveled reward. It should be noted, that these metrics were mainly selected to evaluate the lane following capabilities of an agent, more in-depth analysis of collision avoidance with a vehicle upfront call for more specific metrics. An online video demonstrates the performance of our trained agent: \url{https://youtu.be/8GqAUvTY1po} \enlargethispage{-2.8cm} \subsection{Salient object maps} Visualizing which parts of the input image contribute the most to a particular output (action) is important, because, it provides some explanation of the network's inner workings. Fig.~\ref{fig_sal_obj} shows salient object maps in different scenarios, generated using the method proposed in \cite{SalientObj}. All of these images indicate high activations on lane markings, which is expected. \section{Conclusions} This work presented a solution to the problem of complex, vision-based lane following in the Duckietown environment using reinforcement learning to train an end-to-end steering policy capable of simulation to real transfer learning. We found that the training is sensitive to problem formulation, for example to the representation of actions. We showed that by using domain randomization, a moderately detailed and accurate simulation is sufficient for training end-to-end lane following agents that operate in a real environment. The performance of these agents was evaluated by comparing some basic metrics in matching real and simulated scenarios. Agents were also successfully trained to perform collision avoidance in addition to lane following. Finally, salient object visualization was used to give an illustrative explanation of the inner workings of the policies, in both the real and simulated domains. \section*{Acknowledgment} We would like to show our gratitude to professor Bálint Gyires-Tóth (BME, Dept. of Telecommunications and Media Informatics) for his assistance and comments on the progress of our research. The research reported in this paper and carried out at the Budapest University of Technology and Economics was supported by Continental Automotive Hungary Ltd. and the "TKP2020, Institutional Excellence Program" of the National Research Development and Innovation Office in the field of Artificial Intelligence (BME IE-MI-SC TKP2020).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \setcounter{equation}{0 \setcounter{theorem}{0 \noindent \subsection{Quantization formulae} The establishment of a quantization formula (QF) for the eigenvalues of the Schr\"o\-din\-ger\ operators is a classical mathematical problem of quantum mechanics (see e.g.\cite{FM}). To review the notion of QF, consider first a semiclassical pseudodifferential operator $H$ (for this notion, see e.g.\cite{Ro}) acting on $L^2({\mathcal R}^l)$, $l\geq 1$, of order $m$, self-adjoint with pure-point spectrum, with (Weyl) symbol $\sigma_H(\xi,x)\in C^\infty({\mathcal R}^l\times{\mathcal R}^l;{\mathcal R})$. \begin{definition}\label{quant} {\it We say that $H$ admits an $M$-smooth {\rm exact} QF, $M\geq 2$, if there exists a function $\mu:$ $(A,\hbar)\mapsto \mu(A,\hbar)\in C^M({\mathcal R}^l\times [0,1]; {\mathcal R})$ such that: \begin{enumerate} \item $\mu(A,\hbar)$ admits an asymptotic expansion up to order $M$ in $\hbar$ uniformly on compacts with respect to $A\in{\mathcal R}^l$; \item $\forall\hbar\in]0,1]$, there is a sequence $n_k:=(n_{k_1},\ldots,n_{k_l})\subset \Z^l$ such that all eigenvalues $\lambda_{k}(\hbar)$ of $H$ admit the representation: \begin{equation} \label{FQ1} \lambda_{k}(\hbar)=\mu(n_k\hbar,\hbar). \end{equation} \end{enumerate}} \end{definition} \begin{remark} (Link with the Maslov index) \label{maslov} Consider any function $f:\ {\mathcal R}^l\to{\mathcal R}^l$ with the property $\langle f(A),\nabla\mu(A,0)\rangle $ $=\partial_\hbar\mu(A,0)$. Then we can rewrite the asymptotic expansion of $\mu$ at second order as : \begin{equation} \mu(n_k\hbar,\hbar)=\mu(n_k\hbar+\hbar f(n_k\hbar))+O(\hbar^2). \end{equation} When $f(m\hbar)=\nu, \;\nu\in\Bbb Q^l$, the Maslov index \cite{Ma} is recovered. Moreover, when \begin{equation} \label{QF2} |\lambda_{k}(\hbar)-\mu(n_k\hbar,\hbar)|=O(\hbar^M), \quad \hbar\to 0, \quad M\geq 2 \end{equation} then we speak of {\it approximate} QF of order $M$. \end{remark} \begin{example} (Bohr-Som\-mer\-feld-Ein\-stein for\-mu\-la). Let $\sigma_H$ fulfill the conditions of the Liouville-Arnold theorem (see e.g.\cite{Ar1}, \S 50). Denote $A=(A_1,\ldots,A_l)\in {\mathcal R}^l$ the action variables, and $E(A_1,\ldots,A_l)$ the symbol $\sigma_H$ expressed as a function of the action variables. Then the Bohr-Som\-mer\-feld-Ein\-stein for\-mu\-la (BSE) QF is \begin{equation} \label{QF3} \lambda_{n,\hbar}=E((n_1+\nu/4)\hbar,\ldots,(n_l+\nu/4)\hbar)+O(\hbar^2) \end{equation} where $\nu=\nu(l)\in\Bbb N\cup\{0\}$ is the Maslov index \cite{Ma}. When $H$ is the Schr\"o\-din\-ger\ operator, and $\sigma_H$ the corresponding classical Hamiltonian, (\ref{QF3}) yields the approximate eigenvalues, i.e. the approximate quantum energy levels. In the particular case of a quadratic, positive definite Hamiltonian, which can always be reduced to the harmonic oscillator with frequencies $\omega_1>0,\ldots,\omega_l>0$, the BSE is an exact quantization formula in the sense of Definition 1.1 with $\nu=2$, namely: $$ \mu(A,\hbar)=E(A_1+\hbar/2,\ldots,A_l+\hbar/2) =\sum_{k=1}^l\omega_k(A_k+\hbar/2) $$ \end{example} \vskip 10pt To our knowledge, if $l>1$ the only known examples of exact QF in the sense of Definition 1.1 correspond to classical systems integrable by separation of variables, such that each separated system admits in turn an exact QF, as in the case of the Coulomb potential (for exact QFs for general one-dimensional Schr\"o\-din\-ger\ operators see \cite{Vo}). For general integrable systems, only the approximate BSE formula is valid. Non-integrable systems admit a formal approximate QF, the so-called Einstein-Brillouin-Keller (EBK), recalled below, provided they possess a normal form to all orders. In this paper we consider a perturbation of a linear Hamiltonian on $T^\ast\T^l={\mathcal R}^l\times\T^l$, and prove that the corresponding quantized operator can be unitarily conjugated to a function of the differentiation operators via the construction of a quantum normal form which converges uniformly with respect to $\hbar\in [0,1]$. This yields immediately an exact, $\infty$-smooth QF. The uniformity with respect to $\hbar$ yields also an explicit family of classical Hamiltonians admitting a convergent normal form, thus making the system integrable. \subsection{Statement of the results} Consider the Hamiltonian family ${\mathcal H}_\varepsilon: {\mathcal R}^l\times \T^l\rightarrow {\mathcal R}, (\xi,x)\mapsto {\mathcal H}_\varepsilon(\xi,x)$, indexed by $\varepsilon\in{\mathcal R}$, defined as follows: \begin{equation} {\mathcal H}_\varepsilon(\xi,x):={\mathcal L}_\omega(\xi)+\varepsilon {\mathcal V}(x,\xi);\quad {\mathcal L}_\omega(\xi):=\langle\omega,\xi\rangle, \quad\omega\in{\mathcal R}^l,\quad {\mathcal V}\in C^\infty({\mathcal R}^l\times\T^l;{\mathcal R}). \end{equation} Here $\xi\in{\mathcal R}^l, x\in\T^l$ are canonical coordinates on the phase space ${\mathcal R}^l\times\T^l$, the $2l-$cylinder. ${\mathcal L}_\omega(\xi)$ generates the linear Hamiltonian flow $\xi_i\mapsto \xi_i, x_i\mapsto x_i+\omega_it$ on ${\mathcal R}^l\times\T^l$. For $l>1$ the dependence of ${\mathcal V}$ on $\xi$ makes non-trivial the integrability of the flow of ${\mathcal H}_\varepsilon$ when $\varepsilon\neq 0$, provided the {\it frequencies} $\omega:=(\omega_1,\ldots, \omega_l)$ are independent over $\Bbb Q$ and fulfill a diophantine condition such as (\ref{DC}) below. Under this assumption it is well known that ${\mathcal H}_\varepsilon$ admits a {\it normal form} at any order (for this notion, see e.g. \cite{Ar2}, \cite{SM}). Namely, $\forall\,N\in\Bbb N$ a canonical bijection ${\mathcal C}_{\varepsilon,N}:{\mathcal R}^l\times\T^l\leftrightarrow {\mathcal R}^l\times\T^l$ close to the identity can be constructed in such a way that: \begin{equation} \label{CNF} ({\mathcal H}_\varepsilon\circ {\mathcal C}_{\varepsilon,N})(\xi,x)={\mathcal L}_\omega(\xi)+\sum_{k=1}^N {\mathcal B}_k(\xi;\omega)\varepsilon^k+\varepsilon^{N+1}{\mathcal R}_{N+1,\varepsilon}(\xi,x) \end{equation} This makes the flow of ${\mathcal H}_\varepsilon(\xi,x)$ integrable up to an error of order $\varepsilon^{N+1}$. In turn, ${\mathcal C}_{\varepsilon,N}$ is the Hamiltonian flow at time $1$ generated by \begin{equation} \label{FGen} {\mathcal W}^N_\varepsilon(\xi,x):=\langle\xi,x\rangle+\sum_{k=1}^N{\mathcal W}_k(\xi,x)\varepsilon^k, \end{equation} where the functions ${\mathcal W}_k(\xi,x): {\mathcal R}^l\times \T^l\to{\mathcal R}$ are recursively computed by canonical perturbation theory via the standard Lie transform method of Deprit\cite{De} and Hori\cite{Ho} (see also e.g \cite{Ca}). To describe the quantum counterpart, let $H_\varepsilon=L_\omega+\varepsilon V$ be the operator in $L^2(\T^l)$ of symbol ${\mathcal H}_\varepsilon$, with domain $D(H_\varepsilon)= H^1(\T^l)$ and action specified as follows: \begin{eqnarray} \forall u\in D(H_\varepsilon), \quad H_\varepsilon u= L_\omega u+Vu, \quad L_\omega u=\sum_{k=1}^l\omega_kD_ku, \;\; D_k u:=-i\hbar\partial_{x_k}u, \end{eqnarray} and $V$ is the Weyl quantization of ${\mathcal V}$ (formula (\ref{1erweyl}) below). Since {\it uniform} quantum normal forms (see e.g. \cite{Sj},\cite{BGP},\cite{Po1}, \cite{Po2}) are not so well known as the classical ones, let us recall here their definition. The construction is reviewed in Appendix. \begin{definition} \label{QuNF}[Quantum normal form (QNF)] {\it We say that a family of operators $H_\varepsilon$ $\varepsilon$-close (in the norm resolvent topology) to $H_0=L_\omega$ admits a uniform quantum normal form (QNF) at any order if \begin{itemize} \item[(i)] There exists a sequence of continuous self-adjoint operators $W_k(\hbar)$ in $L^2(\T^l)$, $k=1,\ldots$ and a sequence of functions $B_k(\xi_1,\ldots,\xi_l,\hbar)\in C^\infty({\mathcal R}^l\times [0,1];{\mathcal R})$, such that, defining $\forall\,N\in\Bbb N$ the family of unitary operators: \begin{eqnarray} \label{QNF} U_{N,\varepsilon}(\hbar)=e^{iW_{N,\varepsilon}(\hbar)/\hbar}, \quad W_{N,\varepsilon}(\hbar)=\sum_{k=1}^N W_k(\hbar)\varepsilon^k \end{eqnarray} we have: \begin{eqnarray} \label{AQNF} && U_{N,\varepsilon}(\hbar)H_\varepsilon U_{N,\varepsilon}^\ast(\hbar)=L_\omega+\sum_{k=1}^N B_k(D_1,\ldots,D_l,\hbar)\varepsilon^k+\varepsilon^{N+1}R_{N+1,\varepsilon}(\hbar). \end{eqnarray} \item [(ii)] The operators $B_k(D,\hbar): k=1,2\ldots$, $R_{N+1}$ are continuous in $L^2(\T^l)$; the corresponding symbols ${\mathcal W}_k, {\mathcal B}_k, \mathcal R_{N+1}(\varepsilon)$ belong to $ C^\infty({\mathcal R}^l\times\T^l\times [0,1])$, and reduce to the classical normal form construction (\ref{CNF}) and (\ref{FGen}) as $\hbar\to 0$: \begin{equation} \label{princip} {\mathcal B}_k(\xi;0)={\mathcal B}_k(\xi);\quad {\mathcal W}_k(\xi,x,0)={\mathcal W}_k(\xi,x),\quad \mathcal R_{N+1,\varepsilon}(x,\xi;0)=\mathcal R_{N+1,\varepsilon}(x,\xi) \end{equation} \end{itemize}} \end{definition} (\ref{AQNF}) entails that $H_\varepsilon$ commutes with $H_0$ up to an error of order $\varepsilon^{N+1}$; hence the following approximate QF formula holds for the eigenvalues of $H_\varepsilon$: \begin{equation} \label{AQF} \lambda_{n,\varepsilon}(\hbar)=\hbar\langle n,\omega\rangle+\sum_{k=1}^N {\mathcal B}_k(n_1\hbar,\ldots,n_l\hbar,\hbar)\varepsilon^k+O(\varepsilon^{N+1}). \end{equation} \vskip 6pt\noindent \begin{definition} \label{QNFConv}{(Uniformly convergent quantum normal forms)} {\it We say that the QNF converges $M$-smoothly, $M > 2l$, {\rm uniformly with respect to the Planck constant $\hbar$}, if there is $\varepsilon^\ast>0$ such that \vskip 5pt\noindent \begin{eqnarray} && \label{convunifQ1} \sum_{k=1}^\infty\,\sup_{{\mathcal R}^l\times\T^l\times [0,1]}\sum_{|\alpha|\leq M}|D^\alpha{\mathcal W}_k(\xi,x;\hbar)\varepsilon^k|<+\infty \\ && \label{convunifQ2} \sum_{k=1}^\infty\,\sup_{{\mathcal R}^l\times [0,1]}\sum_{|\alpha|\leq M}|D^\alpha{\mathcal B}_k(\xi,\hbar)\varepsilon^k|<+\infty , \quad |\varepsilon|<\varepsilon^\ast. \end{eqnarray} \vskip 5pt\noindent Here $\displaystyle D^\alpha=\partial ^{\alpha_1}_\xi \partial ^{\alpha_2}_x \partial ^{\alpha_3}_\hbar$, $|\alpha|=|\alpha_1|+|\alpha_2|+\alpha_3$. } \end{definition} \noindent (\ref{convunifQ1},\ref{convunifQ2}) entail that, if $|\varepsilon|<\varepsilon^\ast$, we can define the symbols \vskip 3pt\noindent \begin{eqnarray} \label{somma} && {\mathcal W}_{\infty}(\xi,x;\varepsilon,\hbar):=\langle \xi,x\rangle+\sum_{k=1}^\infty{\mathcal W}_k(\xi,x;\hbar)\varepsilon^k\in C^M({\mathcal R}^l\times\T^l\times [0,\varepsilon^\ast] \times[0,1];\Bbb C), \\ \label{somma1} && {\mathcal B}_{\infty}(\xi;\varepsilon,\hbar):={\mathcal L}_\omega(\xi)+\sum_{k=1}^\infty{\mathcal B}_k(\xi;\hbar)\varepsilon^k \in C^M({\mathcal R}^l\times [0,\varepsilon^\ast] \times[0,1];\Bbb C) \end{eqnarray} \vskip 3pt\noindent By the Calderon-Vaillancourt theorem (see \S 3 below) their Weyl quantizations $W_{\infty}(\varepsilon,\hbar)$, $B_{\infty}(\varepsilon,\hbar)$ are continuous operator in $L^2(\T^l)$. Then: \begin{equation} e^{iW_{\infty}(\varepsilon,\hbar)/\hbar}H_\varepsilon e^{-iW_{\infty}(\varepsilon,\hbar)/\hbar}=B_{\infty}(D_1,\ldots,D_l;\varepsilon,\hbar). \end{equation} Therefore the uniform convergence of the QNF has the following straightforward consequences: \begin{itemize} \item[(A1)] {\it The eigenvalues of $H_\varepsilon$ are given by the {\rm exact} quantization formula:} \begin{equation} \label{QF} \lambda_{n}(\hbar,\varepsilon)={\mathcal B}_{\infty}(n\hbar,\hbar,\varepsilon), \qquad n\in\Z^l, \quad \varepsilon\in {\frak D}^\ast:=\{\varepsilon\in {\mathcal R}\,|\,|\varepsilon|<\varepsilon^\ast\} \end{equation} \item [(A2)] {\it The classical normal form is convergent, uniformly on compacts with respect to $\xi\in{\mathcal R}^l$, and therefore if $\varepsilon\in {\frak D}^\ast$ the Hamiltonian ${\mathcal H}_\varepsilon(\xi,x)$ is integrable.} \end{itemize} Let us now state explicit conditions on $V$ ensuring the uniform convergence of the QNF. \newline Given $\F(t,x)\in C^\infty({\mathcal R}\times\T^l;{\mathcal R})$, consider its Fourier expansion \begin{equation} \label{FFE} \F(t,x)=\sum_{q\in\Z^l}\F_q(t)e^{i\langle q,x\rangle}. \end{equation} and define furthermore $ \F_\omega: {\mathcal R}^l\times\T^l\to {\mathcal R}; \F_\omega \in C^\infty({\mathcal R}^l\times\T^l;{\mathcal R})$ in the following way: \vskip 4pt\noindent \begin{eqnarray} && \label{Fouom} \F_\omega(\xi,x):=\F({\mathcal L}_\omega(\xi),x)=\sum_{q\in\Z^l}\F_{\omega,q}(\xi)e^{i\langle q,x\rangle}, \\ && \F_{\omega,q}(\xi):=(\F_q\circ {\mathcal L}_\omega)(\xi)=\frac1{(2\pi)^{l/2}}\int_{\mathcal R}\widehat{\F}_q(p)e^{-ip{\mathcal L}_\omega(\xi)}\,dp= \\ && = \frac1{(2\pi)^{l/2}}\int_{\mathcal R}\widehat{\F}_q(p)e^{-i\langle p\omega,\xi\rangle}\,dp, \quad p\omega :=(p\omega_1,\ldots,p\omega_l ). \end{eqnarray} \vskip 4pt\noindent Here, as above, ${\mathcal L}_\omega(\xi)=\langle\omega,\xi\rangle$. \vskip 4pt\noindent Given $\rho>0$, introduce the weighted norms: \begin{eqnarray} && \|\F_{\omega,q}(\xi)\|_\rho:=\int_{\mathcal R}|\widehat{\F}_q(p)|e^{\rho |p|}|\,dp \\ && \|\F_\omega(x,\xi)\|_{\rho}:=\sum_{q\in\Z^l}\,e^{\rho |q|}\|\F_{\omega,q}\|_\rho \end{eqnarray} \vskip 4pt\noindent We can now formulate the main result of this paper. Assume: \vskip 4pt\noindent \begin{itemize} \item[(H1)] There exist $\gamma >1, \tau >l-1$ such that the frequencies $\omega$ fulfill the diophantine condition \begin{equation} \label{DC} |\langle\omega,q\rangle|^{-1}\leq \gamma |q|^{\tau}, \quad q \in\Z^l, \; q\neq 0. \end{equation} \item[(H2)] $V_\omega$ is the Weyl quantization of ${\mathcal V}_\omega(\xi,x)$ (see Sect.3 below), that is: \vskip 8pt\noindent \begin{equation}\label{1erweyl} V_\omega f(x)=\int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{{\mathcal V}}_q(p) e^{i\langle q,x\rangle+\hbar p\langle \omega,q\rangle/2}f(x+\hbar p\omega)\,dp, \quad f\in L^2(\T^l). \end{equation} \vskip 5pt\noindent with ${\mathcal V}(\xi,x;\hbar)={\mathcal V}(\langle\omega,\xi\rangle,x)={\mathcal V}_\omega(\xi,x)$ for some function ${\mathcal V}(t;x): {\mathcal R}\times\T^l\to {\mathcal R}$. \vskip 4pt\noindent \item[(H3)] $$ \|{\mathcal V}_\omega\|_{\rho}<+\infty, \qquad \rho >1+16\gamma\tau^\tau. $$ \end{itemize} \vskip 4pt\noindent Clearly under these conditions the operator family $ H_\varepsilon:=L_\omega+\varepsilon V_\omega$, $D(H_\varepsilon) =H^1(\T^l)$, $\varepsilon\in{\mathcal R}$, is self-adjoint in $L^2(\T^l)$ and has pure point spectrum. We can then state the main results. \vskip 4pt\noindent \begin{theorem} \label{mainth} Under conditions (H1-H3), $ H_\varepsilon$ admits a uniformly convergent quantum normal form ${\mathcal B}_{\infty,\omega}(\xi,\varepsilon,\hbar)$ in the sense of Definition 1.5, with radius of convergence no smaller than: \vskip 6pt\noindent \begin{equation} \label{rast} \varepsilon^\ast(\gamma,\tau):=\frac{1}{e^{24(3+2\tau)}2^{2\tau}\|{\mathcal V}\|_{\rho}}. \end{equation} \vskip 6pt\noindent \end{theorem} If in addition to (H1-H2) we assume, for any fixed $r\in\Bbb N$: \begin{itemize} \item[(H4)] \begin{equation} \label{condizrho} \rho> \lambda(\gamma,\tau,r):=1+8\gamma\tau[(2(r+1)^2] \end{equation} \vskip 6pt\noindent \end{itemize} we can sharpen the above result proving smoothness with respect to $\hbar$: \begin{theorem} \label{regolarita} Let conditions (H1-H2-H4) be fulfilled. For $r\in {\Bbb N}$ define ${\frak D}^\ast_r:=\{\varepsilon\in\Bbb C\,|\,|\varepsilon|<\varepsilon^\ast(\gamma,\tau,r)\}$, where: \vskip 8pt\noindent \begin{eqnarray} \label{epastr} \varepsilon^\ast(\gamma,\tau,r):=\frac{1}{e^{24(3+2\tau)}(r+2)^{2\tau}\|{\mathcal V}\|_{\rho}}\end{eqnarray} \vskip 8pt\noindent Then $\displaystyle \hbar\mapsto {\mathcal B}_\infty(t,\varepsilon,\hbar)\in C^\infty ([0,1]; C^\omega( \{t\in\Bbb C\,|\,|\Im t|<{\rho}/{2}\times{\frak D}_r^\ast(\rho) \})$; i.e. there exist $C_r(\varepsilon^\ast)>0$ such that, for $\varepsilon\in {\frak D}_r^\ast$: \vskip 4pt\noindent \begin{eqnarray} \label{stimaG1} \sum_{\gamma=0}^r\max_{\hbar\in [0,1]} \|\partial^\gamma_\hbar {\mathcal B}_{\infty,\omega}(\xi;\varepsilon,\hbar) \|_{\rho/2}\leq C_r, \;\;r=0,1,\ldots \end{eqnarray} \end{theorem} \vskip 4pt In view of Definition \ref{quant}, the following statement is a straightforward consequence of the above Theorems: \begin{corollary}[Quantization formula]\label{QFE} ${\mathcal H}_\varepsilon$ admits an $\infty$-smooth quantization formula in the sense of Definition 1.1. That is, $\forall\,r\in\Bbb N$, $\forall \,|\epsilon|<\varepsilon^\ast (\gamma,\tau,r)$ given by (\ref{epastr}), the eigenvalues of $H_\varepsilon$ are expressed by the formula: \begin{equation} \label{EQF} \lambda(n,\hbar,\varepsilon)={\mathcal B}_{\infty,\omega}(n\hbar,\varepsilon, \hbar) ={\mathcal L}_\omega(n\hbar)+\sum_{s=1}^\infty{\mathcal B}_s({\mathcal L}_\omega(n\hbar),\hbar)\varepsilon^s \end{equation} where ${\mathcal B}_{\infty,\omega}(\xi,\varepsilon, \hbar)$ belongs to $C^r({\mathcal R}^l\times [0,\varepsilon^\ast(\cdot,r)]\times [0,1])$, and admits an asymptotic expansion at order $r$ in $\hbar$, uniformly on compacts with respect to $(\xi,\varepsilon)\in{\mathcal R}^l\times [0,\varepsilon^\ast(\cdot,r)]$. \end{corollary} {\bf Remarks} \begin{itemize} \item[(i)] (\ref{stimaG1}) and (\ref{EQF}) entail also that the Einstein-Brillouin-Keller (EBK) quantization formula: \begin{equation} \label{EBK} \lambda_{n,\varepsilon}^{EBK}(\hbar):={\mathcal L}_\omega(n\hbar)+\sum_{s=1}^\infty {\mathcal B}_s({\mathcal L}_\omega(n\hbar))\varepsilon^s={\mathcal B}_{\infty,\omega}(n\hbar,\varepsilon),\quad n\in\Z^l \end{equation} reproduces here ${\rm Spec}(H_\varepsilon)$ up to order $\hbar$. \item[(ii)] Apart the classical Cherry theorem yielding convergence of the Birkhoff normal form for smooth perturbations of the harmonic flow with {\it complex} frequencies when $l=2$ (see e.g. \cite{SM}, \S 30; the uniform convergence of the QNF under these conditions is proved in \cite{GV}), no simple convergence criterion seems to be known for the QNF nor for the classical NF as well. (See e.g.\cite{PM}, \cite{Zu}, \cite{St} for reviews on convergence of normal forms). Assumptions (1) and (2) of Theorem \ref{mainth} entail Assertion (A2) above. Hence they represent, to our knowledge, a first explicit convergence criterion for the NF. \end{itemize} Remark that ${\mathcal L}_\omega(\xi)$ is also the form taken by harmonic-oscillator Hamiltonian in ${\mathcal R}^{2l}$, $$ {\mathcal P}_0(\eta,y;\omega):= \sum_{s=1}^l\omega_s(\eta^2_s+y_s^2), \quad (\eta_s,y_s)\in{\mathcal R}^2,\quad s=1,\ldots,l $$ if expressed in terms of the action variables $\xi_s>0, \,s=1,\ldots,l$, where $$ \xi_s:=\eta^2_s+y_s^2=z_s\overline{z}_s, \quad z_s:=y_s+i\eta_s. $$ {Assuming} (\ref{DC}) {\it and} the property \begin{equation} \label{Rk1} {\mathcal B}_k(\xi)=(\F_k\circ{\mathcal L}_\omega(\xi))=\F_{k}(\sum_{s=1}^l \omega_s z_s\overline{z}_s), \quad k=0,1,\ldots \end{equation} R\"ussmann \cite{Ru} (see also \cite{Ga}) proved convergence of the Birkhoff NF if the perturbation ${\mathcal V}$, expressed as a function of $(z,\overline{z})$, is {\it in addition} holomorphic at the origin in $\Bbb C^{2l}$. No explicit condition on ${\mathcal V}$ seems to be known ensuring {\it both} (\ref{Rk1}) and the holomorphy. In this case instead we {\it prove} that the assumption ${\mathcal V}(\xi,x)={\mathcal V}({\mathcal L}_\omega(\xi),x)$ entails (\ref{Rk1}), uniformly in $\hbar\in [0,1]$; namely, we construct $\F_s(t;\hbar):{\mathcal R}\times [0,1]\to{\mathcal R}$ such that: \begin{equation} \label{Rk} {\mathcal B}_s(\xi;\hbar)=\F_s({\mathcal L}_\omega(\xi);\hbar):=\F_{\omega,s}(\xi;\hbar), \quad s=0,1,\ldots \end{equation} The conditions of Theorem \ref{mainth} cannot however be transported to R\"ussmann's case: the map \vskip 6pt\noindent $$ {\mathcal T}(\xi,x)=(\eta,y):= \begin{cases} \eta_i=-\sqrt{\xi_i}\sin x_i, \\ y_i=\sqrt{\xi_i}\cos x_i, \end{cases}\quad i=1,\ldots,l, $$ \vskip 6pt\noindent namely, the inverse transformation into action-angle variable, is defined only on ${\mathcal R}_+^l\times\T^l$ and does not preserve the analyticity at the origin. On the other hand, ${\mathcal T}$ is an analytic, canonical map between ${\mathcal R}_+^l\times\T^l$ and ${\mathcal R}^{2l}\setminus\{0,0\}$. Assuming for the sake of simplicity ${\mathcal V}_0=0$ the image of ${\mathcal H}_\varepsilon$ under ${\mathcal T}$ is: \begin{eqnarray} \label{H0} ({\mathcal H}_\varepsilon \circ {\mathcal T})(\eta,y)= \sum_{s=1}^l\omega_s(\eta^2_s+y_s^2)+\varepsilon ({\mathcal V}\circ {\mathcal T})(\eta,y):={\mathcal P}_0(\eta,y)+\varepsilon {\mathcal P}_1(\eta,y) \end{eqnarray} where \begin{eqnarray} && \label{H1} {\mathcal P}_1(\eta,y)=({\mathcal V}\circ {\mathcal T})(\eta,y)={\mathcal P}_{1,R}(\eta,y)+{\mathcal P}_{1,I}(\eta,y), \;(\eta,y)\in{\mathcal R}^{2l}\setminus\{0,0\}. \end{eqnarray} \begin{eqnarray} && \nonumber {\mathcal P}_{1,R}(\eta,y)=\frac12\sum_{k\in\Z^l}(\Re{{\mathcal V}}_k\circ{\mathcal H}_0)(\eta,y)\prod_{s=1}^l \left(\frac{\eta_s-iy_s}{\sqrt{\eta^2_s+y_s^2}}\right)^{k_s} \\ \nonumber && {\mathcal P}_{1,I}(\eta,y)=\frac12\sum_{k\in\Z^l} (\Im{{\mathcal V}}_k\circ{\mathcal H}_0)(\eta,y)\prod_{s=1}^l \left(\frac{\eta_s-iy_s}{\sqrt{\eta^2_s+y_s^2}}\right)^{k_s} \end{eqnarray} \vskip 4pt\noindent If ${\mathcal V}$ fulfills Assumption (H3) of Theorem \ref{mainth}, both these series converge uniformly in any compact of ${\mathcal R}^{2l}$ away from the origin and ${\mathcal P}_1$ is holomorphic on ${\mathcal R}^{2l}\setminus\{0,0\}$. Therefore Theorem \ref{mainth} immediately entails a convergence criterion for the Birkhoff normal form generated by perturbations holomorphic away from the origin. We state it under the form of a corollary: \begin{corollary} \label{mainc} {\rm (A convergence criterion for the Birkhoff normal form)} Under the assumptions of Theorem \ref{mainth} on $\omega$ and ${\mathcal V}$, consider on ${\mathcal R}^{2l}\setminus\{0,0\}$ the holomorphic Hamiltonian family $P_\varepsilon(\eta,y):={\mathcal P}_0(\eta,y)+\varepsilon{\mathcal P}_1(\eta,y)$, $\varepsilon\in{\mathcal R}$, where ${\mathcal P}_0$ and ${\mathcal P}_1$ are defined by (\ref{H0},\ref{H1}). Then the Birkhoff normal form of $H_\varepsilon$ is uniformly convergent on any compact of ${\mathcal R}^{2l}\setminus\{0,0\}$ if $|\varepsilon|<\varepsilon^\ast (\gamma,\tau)$. \end{corollary} \vskip 0.5cm\noindent \subsection{Strategy of the paper} The proof of Theorem \ref{mainth} rests on an implementation in the quantum context of R\"ussmann's argument\cite{Ru} yielding convergence of the KAM iteration when the complex variables $(z,\overline{z})$ belong to an open neighbourhood of the origin in $\Bbb C^{2l}$. Conditions (\ref{DC}, \ref{Rk}) prevent the occurrence of accidental degeneracies among eigenvalues at any step of the quantum KAM iteration, in the same way as they prevent the formation of resonances at the same step in the classical case. However, the global nature of quantum mechanics prevents phase-space localization; therefore, and this is the main difference, at each step the coefficients of the homological equation for the operator symbols not only have an additional dependence on $\hbar$ but also have to be controlled up to infinity. These difficulties are overcome by exploiting the closeness to the identity of the whole procedure, introducing adapted spaces of symbols i(Section \ref{not}), which account also for the properties of differentiability with respect to the Planck constant. The link between quantum and classical settings is provided by a sharp (i.e. without $\hbar^\infty$ approximation) Egorov Theorem established in section \ref{sectionegorov}. Estimates for the solution of the quantum homological equation and their recursive properties are obtained in sections \ref{hom} (Theorem \ref{homo}) and \ref{towkam} (Theorem \ref{resto}) respectively. Recursive estimates are established in Section \ref{recesti} (Theorem \ref{final}) and the proof of our main result is completed in section \ref{iteration}. The link with the usual construction of the quantum normal form described in Appendix. \vskip 1cm\noindent \section{Norms and first estimates} \label{not} \setcounter{equation}{0 \setcounter{theorem}{0 Let $m,l=1,2,\dots$. For $\F\in C^\infty({\mathcal R}^m\times\T^l\times [0,1]; \Bbb C),\ (\xi,x,\hbar)\to\F(\xi,x;\hbar)$, and $\G\in C^\infty({\mathcal R}^m\times [0,1]; \Bbb C),\, (\xi,\hbar)\to\G(\xi;\hbar)$, consider the Fourier transforms \begin{equation} \widehat{\G}(p;\hbar)=\frac1{(2\pi)^{m/2}}\int_{{\mathcal R}^m}\G(\xi;\hbar)e^{-i\langle p,\xi\rangle}\,dx \end{equation} \begin{equation} \F(\xi,q;\hbar):=\frac1{(2\pi)^{m/2}}\int_{\T^l}\F(\xi,x;\hbar)e^{-i\langle q,x\rangle}\,dx . \quad \end{equation} \begin{equation} \label{FE1} \F(\xi,x;\hbar)=\sum_{q\in\Z^l}\F(\xi,q;\hbar)e^{-i\langle q,x\rangle} \end{equation} \begin{equation} \label{FE2} \widehat{\F}(p,q;\hbar)=\frac1{(2\pi)^{m/2}}\int_{{\mathcal R}^m}\F(\xi,q;\hbar)e^{-i\langle p,\xi\rangle}\,dx \end{equation} It is convenient to rewrite the Fourier representations (\ref{FE1}, \ref{FE2}) under the form a single Lebesgue-Stieltjes integral. Consider the product measure on ${\mathcal R}^m\times {\mathcal R}^l$: \begin{eqnarray} \label{pm1} && d\lambda (t):=dp\,d\nu(s), \quad t:=(p,s)\in{\mathcal R}^m\times {\mathcal R}^l; \\ \label{pm2} && dp:=\prod_{k=1}^m\,dp_k;\quad d\nu(s):=\prod_{h=1}^l \sum_{q_h\leq s_h} \delta (s_h-q_h), \;q_h\in\Z, h=1,\ldots,l \end{eqnarray} Then: \begin{equation} \label{IFT} \F(\xi,x;\hbar)=\int_{{\mathcal R}^m\times{\mathcal R}^l}\,\widehat{\F}(p,s;\hbar)e^{i\langle p,\xi\rangle +i\langle s,x\rangle}\,d\lambda(p,s) \end{equation} \begin{definition} {\it For $\rho\geq 0$, $\sigma\geq 0$, we introduce the weighted norms } \vskip 3pt\noindent \begin{eqnarray} \label{norma1} |\G|^\dagger_{\sigma}&:=&\max_{\hbar\in [0,1]}\|\widehat{\G}(.;\hbar)\|_{L^1({\mathcal R}^m,e^{\sigma |p|}dp)}=\max_{\hbar\in [0,1]}\int_{{\mathcal R}^l}\|\widehat{\G}(.;\hbar)\|\,e^{\sigma |p|}\,dp. \\ \label{norma1k} |\G|^\dagger_{\sigma,k}&:=&\max_{\hbar\in [0,1]}\sum_{j=0}^k\|(1+|p|^2)^{\frac{k-j}{2}}\partial^j_\hbar\widehat{\G}(.;\hbar)\|_{L^1({\mathcal R}^m,e^{\sigma |p|}dp)};\quad |\G|^\dagger_{\sigma;0}:=|\G|^\dagger_{\sigma}. \end{eqnarray} \end{definition} \begin{remark} By noticing that $\vert p\vert\leq\vert p^\prime-p\vert+\vert p^\prime\vert$ and that, for $x\geq 0$, $\displaystyle x^je^{-\delta x}\leq \frac 1 e(\frac j{\delta})^j$, we immediately get the inequalities \begin{equation}\label{plus} \vert \F\G\vert^\dagger_{\sigma}\leq\vert \F\vert_{\sigma}\vert \G\vert_{\sigma}, \end{equation} \begin{equation} \label{diff} \vert (I-\Delta^{j/2})\F\vert_{\sigma-\delta}\leq \frac1 e\left(\frac j\delta\right)^j\vert \F\vert_\sigma, \quad k\geq 0. \end{equation} \end{remark} Set now for $ k\in\Bbb N\cup\{0\} $: \begin{equation}\label{muk} \mu_{k}(t):=(1+|t|^{2})^{\frac k 2}=(1+|p|^{2}+|s|^{2})^{\frac k 2}. \end{equation} and note that \begin{equation} \mu_k(t-t^\prime)\leq 2^{\frac k 2} \mu_k(t)\mu_k( t ^\prime). \end{equation} because $|x-x^\prime|^2\leq 2(|x|^2+|x^\prime|^2)$. \begin{definition} {\it Consider $\F(\xi,x;\hbar)\in C^\infty({\mathcal R}^m\times \T^l\times[0,1];\Bbb C)$, with Fourier expansion \begin{equation} \label{FF} \F(\xi,x;\hbar)=\sum_{q\in\Z^l}\,\F(\xi,q;\hbar)e^{i\langle q,x\rangle} \end{equation} \begin{itemize} \item [(1)] Set: \begin{eqnarray} \label{sigmak} \Vert \F\Vert^\dagger_{\rho,k}:=\max_{\hbar\in [0,1]}\sum_{\gamma=0}^k \int_{{\mathcal R}^m\times {\mathcal R}^l}\vert \mu_{k-\gamma}(p,s)\partial^\gamma_\hbar\widehat{\F}(p,s;\hbar)\vert e^{\rho(\vert s\vert+\vert p\vert)}\,d\lambda(p,s). \end{eqnarray} \item [(2)] Let ${\mathcal O}_\omega$ be the set of functions ${\Phi}:{\mathcal R}^l\times\T^l\times[0,1]$ such that $\Phi(\xi,x;\hbar)=\F({\mathcal L}_\omega(\xi),x;\hbar)$ for some $\F:\ {\mathcal R}\times\T^l\times [0,1]\to \Bbb C$. Define, for $\Phi\in {\mathcal O}_\omega$: \begin{eqnarray}\label{sigom} \Vert \Phi\Vert_{\rho,k}:=\max_{\hbar\in [0,1]}\sum_{\gamma=0}^k \int_{{\mathcal R}}\vert \mu_{k-\gamma}( p\omega,q) \partial^\gamma_\hbar\widehat{\F}(p,s;\hbar)\vert e^{\rho(\vert s\vert+\vert p\vert}\,d\lambda(p,s). \end{eqnarray} \item [(3)] Finally we denote $Op^W(\F)$ the Weyl quantization of $\F$ recalled in Section \ref{sectionweyl} and \begin{eqnarray} \label{normsymb'} \J^\dagger_k(\rho)&=&\{\F \,|\,\Vert \F\Vert^\dagger_{\rho,k}<\infty\}, \\ \label{normop'} J^\dagger_k(\rho)&=&\{Op^W(\F)\,|\,\F\in\J^\dagger(\rho,k)\}, \\ \label{normsymb} \J_k(\rho)&=&\{\F\in {\mathcal O}_\omega\,|\,\Vert \F\Vert_{\rho,k}<\infty\}, \\ \label{normop} J_k(\rho)&=&\{\F \,|\,\Vert \F\Vert_{\rho,k}<\infty\}, \end{eqnarray} \end{itemize}} \end{definition} Finally we denote: $L^1_\sigma({\mathcal R}^m):=L^1({\mathcal R}^m,e^{\sigma |p|}dp)$. \begin{remark} Note that, if $\F(\xi,q,\hbar)$ is independent of $q$, i.e. $\F(\xi,q,\hbar)=\F(\xi,\hbar)\delta_{q,0}$, then: \begin{equation} \label{normeid} \|\F\|^\dagger_{\rho,k}=|\F|^\dagger_{\rho,k}; \quad \|\F\|_{\rho,k}=|\F|_{\rho,k} \end{equation} while in general \begin{eqnarray} && \|\F\|_{\rho,k}\leq \|\F\|_{\rho^\prime,k^\prime} \quad {\rm whenever}\; k\geq k^\prime,\,\rho\leq \rho^\prime; \end{eqnarray} \end{remark} \begin{remark} (Regularity properties) Let $\F\in \J_k^\dagger(\rho), k\geq 0$. Then: \begin{enumerate} \item There exists $K(\alpha,\rho,k)$ such that \begin{equation} \label{maggC} \max_{\hbar\in [0,1]}\|\F(\xi,x;\hbar)\|_{C^\alpha({\mathcal R}^m\times\T^l)}\leq K \|\F\|^\dagger_{\rho,k}, \quad \alpha\in\Bbb N \end{equation} and analogous statement for the norm $\|\cdot\|_{\rho,k}$. \item Let $\rho>0$, $k\geq 0$. Then $\F(\xi,x;\hbar)\in C^k([0,1];C^\omega(\{|\Im \xi|<\rho\}\times \{|\Im x|<\rho\})$ and \begin{equation} \label{supc} \sup_{\{|\Im \xi|<d\}\times \{|\Im x|<d\}}\leq \|\F\|^\dagger_{\rho,k}. \end{equation} Analogous statements for $\F\in \J_k(\rho)$. \end{enumerate} \end{remark} We will show in section \ref{sectionweyl} that: \begin{eqnarray} \|Op^W(F)\|_{\mathcal B(L^2)}\leq \|\F\|_{\rho,k}\ \ \ \forall k,\ \rho >0. \end{eqnarray} In what follows we will often use the notation $\F$ also to denote the function $\F({\mathcal L}_\omega(\xi))$, because the indication of the belonging to $J$ or $J^\dagger$, respectively, is already sufficient to mark the distinction of the two cases. \begin{remark} Without loss of generality we may assume: \begin{equation} |\omega |:=|\omega_1|+\ldots+|\omega_l |\leq 1 \end{equation} Indeed, the general case $|\omega|=\alpha |\omega^\prime|$, $|\omega^\prime|\leq 1$, $\alpha>0$ arbitrary reduces to the former one just by the rescaling $\varepsilon\to \alpha\varepsilon$. \end{remark} \vskip 1.0cm\noindent \section{Weyl quantization, matrix elements, commutator estimates}\label{sectionweyl} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \setcounter{equation}{0 \setcounter{theorem}{0 \subsection{Weyl quantization: action and matrix elements} We sum up here the canonical (Weyl) quantization procedure for functions (classical observables) defined on the phase space ${\mathcal R}^l\times\T^l$. In the present case it seems more convenient to consider the representation (unique up to unitary equivalences) of the natural Heisenberg group on ${\mathcal R}^l\times\T^l$. Of course this procedure yields the same quantization as the standard one via the Br\'ezin-Weil-Zak transform (see e.g. \cite{Fo}, \S 1.10) and has already been employed in \cite{CdV}, \cite{Po1},\cite{Po2}). \par Let $\Bbb H_l({\mathcal R}^l\times{\mathcal R}^l\times{\mathcal R})$ be the Heisenberg group over $\displaystyle {\mathcal R}^{2l+1}$ (see e.g.\cite{Fo}, Chapt.1). Since the dual space of ${\mathcal R}^l\times\T^l$ under the Fourier transformation is ${\mathcal R}^l\times\Z^l$, the relevant Heisenberg group here is the subgroup of $\Bbb H_l({\mathcal R}^l\times{\mathcal R}^l\times{\mathcal R})$, denoted by $\Bbb H_l({\mathcal R}^l\times\Z^l\times{\mathcal R})$, defined as follows: \begin{definition} \label{HSG} {\it Let $u:=(p,q), p\in{\mathcal R}^l, q\in\Z^l$, and let $ t\in{\mathcal R}$. Then $\Bbb H_l({\mathcal R}^l\times\Z^l\times{\mathcal R})$ is the subgroup of $\Bbb H_l({\mathcal R}^l\times{\mathcal R}^l\times{\mathcal R})$ topologically equivalent to ${\mathcal R}^l\times\Z^l\times{\mathcal R}$ with group law \begin{equation} \label{HGL} (u,t)\cdot (v,s)= (u+v, t+s+\frac12\Omega(u,v)) \end{equation} Here $\Omega(u,v)$ is the canonical $2-$form on ${\mathcal R}^l\times\Z^l$:} \begin{equation} \label{2forma} \Omega(u,v):=\langle u_1,v_2\rangle-\langle v_1,u_2\rangle \end{equation} \end{definition} $\Bbb H_l({\mathcal R}^l\times\Z^l\times{\mathcal R})$ is the Lie group generated via the exponential map from the Heisenberg Lie algebra ${\mathcal H}{\mathcal L}_l(\Z^l\times{\mathcal R}^l\times{\mathcal R})$ defined as the vector space ${\mathcal R}^l\times\Z^l\times{\mathcal R}$ with Lie bracket \begin{equation} \label{LA} [(u,t)\cdot (v,s)]= (0, 0,\Omega(u,v)) \end{equation} The unitary representations of $\Bbb H_l({\mathcal R}^l\times\Z^l\times{\mathcal R})$ in $L^2(\T^l)$ are defined as follows \begin{equation} \label{UR} (U_\hbar(p,q,t)f)(x):=e^{i\hbar t +i\langle q,x\rangle+\hbar\langle p.q\rangle/2}f(x+\hbar p) \end{equation} $\forall\,\hbar\neq 0$, $\forall\,(p,q,t)\in{\mathcal H}_l$, $\forall\,f\in L^2(\T^l)$. These representations fulfill the Weyl commutation relations \begin{equation} \label{Weyl} U_\hbar(u)^\ast =U_\hbar(-u), \qquad U_\hbar(u)U_\hbar(v)=e^{i\hbar\Omega(u,v)}U(u+v) \end{equation} For any fixed $\hbar>0$ $U_\hbar$ defines the Schr\"o\-din\-ger\ representation of the Weyl commutation relations, which also in this case is unique up to unitary equivalences (see e.g. \cite{Fo}, \S 1.10). Consider now a family of smooth phase-space functions indexed by $\hbar$, ${\mathcal A}(\xi,x,\hbar):{\mathcal R}^l\times\T^l\times [0,1]\to\Bbb C$, written under its Fourier representation \begin{equation} \label{FFR} {\mathcal A}(\xi,x,\hbar)=\int_{{\mathcal R}^l}\sum_{q\in\Z^l}\widehat{{\mathcal A}}(p,q;\hbar)e^{i(\langle p.\xi\rangle +\langle q,x\rangle)}\,dp=\int_{{\mathcal R}^l\times {\mathcal R}^l}\widehat{{\mathcal A}}(p,s;\hbar)e^{i(\langle p.\xi\rangle +\langle s,x\rangle)}\,d\lambda(p,s) \end{equation} \begin{definition} \label{Qdef} {\it The (Weyl) quantization of ${\mathcal A}(\xi,x;\hbar)$ is the operator $A(\hbar)$ definde as} \begin{eqnarray} \label{Wop} && (A(\hbar)f)(x):=\int_{{\mathcal R}^l}\sum_{q\in\Z^l}\widehat{{\mathcal A}}(p,q;\hbar)U_\hbar(p,q)f(x)\,dp \\ && \nonumber = \int_{{\mathcal R}^l\times{\mathcal R}^l}\widehat{{\mathcal A}}(p,s;\hbar)U_\hbar(p,s)f(x)\,d\lambda(p,s) \quad f\in L^2(\T^l) \end{eqnarray} \end{definition} \noindent \begin{remark} Formula \eqref{Wop} can be also be written as \begin{equation} \label{Wopq} (A(\hbar)f)(x)=\sum_{q\in\Z^l}\,A(q,\hbar)f, \quad (A(q,\hbar)f)(x)=\int_{{\mathcal R}^l}\,\widehat{{\mathcal A}}(p,q;\hbar)U_\hbar(p,q)f(x)\,dp \end{equation} \end{remark} \noindent From this we compute the action of $A(\hbar)$ on the canonical basis in $L^2(\T^l)$: $$ e_m(x):=(2\pi)^{-l/2}e^{i\langle m, x\rangle}, \quad x\in\T^l, \;m\in\Z^l . $$ \begin{lemma} \label{azione} \begin{equation} \label{azioneAop} A(\hbar)e_m(x)= \sum_{q\in\Z^l}e^{i\langle (m+q),x\rangle}{\mathcal A}(\hbar (m+q/2),q,\hbar) \end{equation} \end{lemma} \begin{proof} By \eqref{Wopq}, it is enough to prove that the action of $A(q,\hbar)$ is \begin{equation} \label{azioneAq} A(q,\hbar)e_m(x)= e^{i\langle (m+q),x\rangle}{\mathcal A}(\hbar(m+q/2),q,\hbar) \end{equation} Applying Definition \ref{Qdef} we can indeed write: \begin{eqnarray*} && (A(q,\hbar)e_m)(x)=(2\pi)^{-l/2}\int_{{\mathcal R}^l}\widehat{A}(p,q;\hbar)e^{i\langle q,x\rangle+i\hbar \langle p,q\rangle/2}e^{i\langle m,(x+\hbar p)\rangle}\,dp \\ && =(2\pi)^{-l/2}e^{i\langle (m+q),x\rangle}\,\int_{{\mathcal R}^l}\widehat{A}(p;q,\hbar)e^{i\hbar \langle p,(m+q/2)\rangle}\,dp =e^{i\langle (m+q),x\rangle}{\mathcal A}(\hbar(m+q/2),q,\hbar). \end{eqnarray*}. \end{proof} We note for further reference an obvious consequence of \eqref{azioneAq}: \begin{equation} \label{ortogq} \langle A(q,\hbar)e_m,A(q,\hbar)e_n\rangle_{L^2(\T^l)}=0,\;m\neq n;\quad \langle A(r,\hbar)e_m,A(q,\hbar)e_n\rangle_{L^2(\T^l)}=0,\;r\neq q. \end{equation} As in the case of the usual Weyl quantization, formula (\ref{Wop}) makes sense for tempered distributions ${\mathcal A}(\xi,x;\hbar)$ \cite{Fo}. Indeed we prove in this context, for the sake of completeness, a simpler, but less general, version of the standard Calderon-Vaillancourt criterion: \begin{proposition} Let $A(\hbar)$ by defined by (\ref{Wop}). Then \vskip 8pt\noindent \begin{equation} \label{CV} \Vert A(\hbar)\Vert_{L^2\to L^2}\leq \frac{2^{l+1}}{l+2}\cdot \frac{\pi^{(3l-1)/2}}{\Gamma(\frac{l+1}{2})}\,\sum_{|\alpha|\leq 2k}\,\Vert \partial_x^k{\mathcal A}(\xi,x;\hbar)\Vert_{L^\infty({\mathcal R}^l\times\T^l)}. \end{equation} where $$ k=\begin{cases} \frac{l}{2}+1,\quad l\;{\rm even} \\ {} \\ \frac{l+1}{2}+1,\quad l\;{\rm odd}. \end{cases} $$ \end{proposition} \begin{proof} Consider the Fourier expansion $$ u(x)=\sum_{m\in\Z^l}\,\widehat{u}_me_m(x),\quad u\in L^2(\T^l). $$ Since: $$ \|A(q,\hbar)\widehat{u}_me_m\|^2=|{\mathcal A}(\hbar(m+q/2),q,\hbar) |^2\cdot |\widehat{u}_m|^2 $$ by Lemma \ref{azione} and \eqref{ortogq} we get: \begin{eqnarray*} \|A (\hbar)u\|^2&\leq & \sum_{(q,m)\in\Z^l\times\Z^l}\|A(q,\hbar)\widehat{u}_m e_m\|^2 = \sum_{(q,m)\in\Z^l\times\Z^l}|{\mathcal A}(\hbar (m+q/2),q,\hbar)|^2\cdot |\widehat{u}_m|^2 \\ &\leq& \sum_{q\in\Z^l}\,\sup_{\xi\in{\mathcal R}^l}|{\mathcal A}(\xi,q,\hbar) |^2\sum_{m\in\Z^l}|\widehat{u}_m|^2 = \sum_{q\in\Z^l}\,\sup_{\xi\in{\mathcal R}^l}|{\mathcal A}(\xi,q,\hbar) |^2\|u\|^2 \\ &\leq& \big[ \sum_{q\in\Z^l}\,\sup_{\xi\in{\mathcal R}^l}|{\mathcal A}(\xi,q,\hbar) |\big]^2\|u\|^2 \end{eqnarray*} Therefore: \[ \Vert A(\hbar)\Vert_{L^2\to L^2} \leq \sum_{q\in\Z^l}\,\sup_{\xi\in{\mathcal R}^l}\vert{\mathcal A}(\xi,q,\hbar)\vert. \] Integration by parts entails that, for $k\in \Bbb N$, and $\forall \,g\in C^\infty(\T^l)$: \vskip 8pt\noindent \begin{eqnarray*} && \left |\int_{\T^l}e^{i\langle q,x\rangle}g(x)dx\right |=\frac 1 {1+|q|^{2k}}\left |\int_{\T^l} e^{i\langle q,x\rangle}(1+(-\triangle_x)^k) g(x)dx\right | \\ && \leq \frac 1 {1+\vert q\vert^{2k}}(2\pi)^l\sup_{\T^l}\sum_{|\alpha|\leq 2k}\vert \partial_x^\alpha g(x)\vert . \end{eqnarray*} \vskip 8pt\noindent Let us now take: \begin{equation} \label{kappa} k=\begin{cases} \frac{l}{2}+1,\quad l\;{\rm even} \\ {} \\ \frac{l+1}{2}+1,\quad l\;{\rm odd} \end{cases} \Longrightarrow \begin{cases} 2k-l+1=3,\quad l\;{\rm even} \\ 2k-l+1=2,\quad l\;{\rm odd} \end{cases} \end{equation} \vskip 4pt\noindent Then $2k-l+1\geq 2$, and hence: $$ \sum_{q\in\Z^l}\,\frac1{1+\vert q\vert^{2k}}\leq 2\int_{{\mathcal R}^l}\,\frac{du_1\cdots du_l}{1+\|u\|^{2k}}\leq 2\frac{\pi^{(l-1)/2}}{\Gamma(\frac{l+1}{2})}\int_0^\infty\frac{\rho^{l-1}}{1+\rho^{2k}}\,d\rho. $$ Now: \begin{eqnarray*} && \int_0^\infty\frac{\rho^{l-1}}{1+\rho^{2k}}\,d\rho =\frac 1 {2k}\int_0^\infty\,\frac{u^{l/2k -1}}{1+u}\,du \\ && \leq \frac 1 {2k}\left(\int_0^1\, u^{l/2k -1}\,du+\int_1^\infty\,{u^{l/2k -2}}\,du\right)=\frac{1}{(4k-l)(2k-l)} \end{eqnarray*} This allows us to conclude: \begin{eqnarray*} \sum_{q\in\Z^l}\,\sup_\xi\vert{\mathcal A}(\xi,q,\hbar)\vert &\leq &(2\pi)^l \sum_{|\alpha|\leq 2k}\Vert \partial_x^{\alpha}{\mathcal A}(\xi,x;\hbar)\Vert_{L^\infty({\mathcal R}^l\times\T^l)}\cdot \sum_{q\in\Z^l}\,\frac1{1+\vert q\vert^{2k}} \\ &\leq & 2^{l+1}\cdot \frac{\pi^{(3l-1)/2}}{\Gamma(\frac{l+1}{2})}\frac{1}{l+2}\sum_{|\alpha|\leq 2k}\,\Vert \partial_x^k{\mathcal A}(\xi,x;\hbar)\Vert_{L^\infty({\mathcal R}^l\times\T^l)}. \end{eqnarray*} with $k$ given by (\ref{kappa}). This proves the assertion. \end{proof} \begin{remark} Thanks to Lemma \ref{azione} we immediately see that, when ${\mathcal A}(\xi, x,\hbar)=\F({\mathcal L}_\omega(\xi),x;\hbar)$, \vskip 8pt\noindent \begin{eqnarray} \label{quant2} && {\mathcal A}(\hbar)f=\int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{\F}(p,q;\hbar)U_h(p\omega ,q)f\,dp \\ \nonumber && = \int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{\F}(p,q;\hbar)e^{i\langle q,x\rangle+i\hbar p\langle\omega,q\rangle/2}f(x+\hbar p\omega)\,dp\quad f\in L^2(\T^l) \end{eqnarray} \vskip 5pt\noindent where, again, $p\omega:=(p\omega_1,\dots,p\omega_l)$. Explicitly, \eqref{azioneAq} and \eqref{azioneAop} become: \begin{eqnarray} \label{azioneAom} && A(\hbar)e_m(x)= \sum_{q\in\Z^l}e^{i\langle (m+q),x\rangle}{\mathcal A}(\hbar \langle\omega,(m+q/2)\rangle,q,\hbar) \\ && \label{azioneAqom} A(q,\hbar)e_m(x)= e^{i\langle (m+q),x\rangle}{\mathcal A}(\hbar\langle\omega,(m+q/2)\rangle,q,\hbar) \end{eqnarray} \end{remark} \begin{remark} If ${\mathcal A}$ does not depend on $x$, then ${\mathcal A}(\xi,q,\hbar)=0, q\neq 0$, and (\ref{azioneAop}) reduces to the standard (pseudo) differential action \begin{eqnarray} (A(\hbar) u)(x)=\sum_{m\in\Z^l}\overline{{\mathcal A}}(m\hbar ,\hbar) \widehat{u}_m e^{i\langle m,x\rangle}=\sum_{m\in\Z^l}\overline{{\mathcal A}}(-i\hbar\nabla,\hbar) \widehat{u}_m e^{i\langle m,x\rangle} \end{eqnarray} because $-i\hbar\nabla e_m=m\hbar e_m$. On the other hand, if $\F$ does not depend on $\xi$ (\ref{azioneAop}) reduces to the standard multiplicative action \begin{equation} (A (\hbar)u)(x)=\sum_{q\in\Z^l}{\mathcal A}(q,\hbar)e^{i\langle q,x\rangle}\sum_{m\in\Z^l}\widehat{u}_m e^{i\langle m,x\rangle}={\mathcal A}(x,\hbar)u(x) \end{equation} \end{remark} \noindent \begin{corollary} \label{corA} Let $A(\hbar): L^2(\T^l)\to L^2(\T^l)$ be defined as in \ref{Qdef}. Then: \begin{enumerate} \item $\forall\rho\geq 0, \forall\,k\geq 0$ we have: \begin{equation}\label{stimz} \Vert A(\hbar)\Vert_{L^2\to L^2}\leq\Vert{\mathcal A}\Vert^\dagger_{\rho,k} \end{equation} and, if ${\mathcal A}(\xi, x,\hbar)={\mathcal A}({\mathcal L}_\omega(\xi),x;\hbar)$ \begin{equation}\label{stimg} \Vert A(\hbar)\Vert_{L^2\to L^2}\leq\Vert{\mathcal A}\Vert_{\rho,k}. \end{equation} \item \begin{eqnarray} \label{elm44} && \langle e_{m+s}, A(q,\hbar)e_m\rangle =\delta_{q,s}{\mathcal A}( (m+q/2)\rangle\hbar,q,\hbar) \\ && \label{elm55} \langle e_{m+s},A(\hbar)e_m\rangle ={\mathcal A}((m+s/2)\hbar,s,\hbar) \end{eqnarray} and, if ${\mathcal A}(\xi, x,\hbar)=\F({\mathcal L}_\omega(\xi),x;\hbar)$ \begin{eqnarray} \label{elm4} && \langle e_{m+s}, F(q,\hbar)e_m\rangle =\delta_{q,s}\F(\langle\omega, (m+q/2)\rangle\hbar,q,\hbar) =\delta_{q,s}\F({\mathcal L}_\omega (m+s/2)\hbar,q,\hbar) \\ && \label{elm5} \langle e_{m+s},F(\hbar)e_m\rangle =\F(\langle\omega,(m\hbar+s\hbar/2)\rangle,s,\hbar) =\F({\mathcal L}_\omega(m\hbar+s\hbar/2),s,\hbar) \end{eqnarray} Equivalently: \begin{equation} \langle e_m,A(\hbar) e_n\rangle={\mathcal A}((m+n)\hbar/2,m-n,\hbar) \end{equation} \item $A(\hbar)$ is an operator of order $-\infty$, namely there exists $C(k,s)>0$ such that \begin{equation} \|A(\hbar)u\|_{H^k(\T^l)}\leq C(k,s)\|u\|_{H^s(\T^l)}, \quad (k,s)\in{\mathcal R},\; k\geq s \end{equation} \end{enumerate} \end{corollary} \begin{proof} (1) Formulae \eqref{stimz} and \eqref{stimg} are straighforward consequences of Formula (\ref{maggC}). \vskip 5pt\noindent (2) (\ref{elm4}) immediately yields (\ref{elm5}). In turn, (\ref{elm4}) follows at once by \eqref{azioneAq}. \vskip 5pt\noindent (3) The condition ${\mathcal A}\in\J(\rho)$ entails: \begin{eqnarray} \label{stimaexp} \sup_{(\xi;\hbar)\in{\mathcal R}^l\times [0,1]}|{\mathcal A}(\xi;q,\hbar)|e^{\rho |q|}\leq e^{\rho |q|}\max_{\hbar\in [0,1]}\|\widehat{{\mathcal A}}(p;q,\hbar)\|_1\to 0, \;|q|\to \infty. \end{eqnarray} Therefore: \begin{eqnarray*} \|A(\hbar)u\|^2_{H^k}&\leq& \sum_{(q,m)\in\Z^l\times\Z^l}(1+|q|^2)^k{\mathcal A}((m+q/2)\hbar,q,\hbar) |^2\cdot |\widehat{u}_m|^2 \\ &\leq& \sum_{q\in\Z^l}\,\sup_{q,m}(1+|q|^2)^k|{\mathcal A}((m+q/2)\hbar,q,\hbar) |^2\sum_{m\in\Z^l}\,(1+|m|^2)^{s}|\widehat{u}_m|^2 \\ &=& C(k,s)\|u\|^2_{H^s} \\ C(k,s)&:=&\sum_{q\in\Z^l}\,\sup_{q,m}(1+|q|^2)^k|{\mathcal A}((m+q/2)\hbar,q,\hbar)|^2 \end{eqnarray*} where $0<C(k,s)<+\infty$ by (\ref{stimaexp}) above. The Corollary is proved. \end{proof} \subsection{Compositions, Moyal brackets} We first list the main pro\-perties which are straightforward consequences of the definition, as in the case of the standard Weyl quantization in ${\mathcal R}^{2l}$. First introduce the abbreviations \begin{eqnarray} \label{tt} && t:=(p,s); \quad t^\prime=(p^\prime,s^\prime);\quad \omega t:=(p\omega,s) \\ \label{omt} && \Omega_\omega(t^\prime -t,t^\prime):=\langle(p^\prime-p)\omega,s^\prime\rangle- \langle (s^\prime-s),p^\prime\omega \rangle =\langle p^\prime\omega ,s\rangle-\langle s^\prime,p\omega \rangle. \end{eqnarray} Given $\F(\hbar), \G(\hbar)\in \J_k(\rho)$, define their twisted convolutions: \vskip 3pt\noindent \begin{eqnarray} && \label{twc} (\widehat{\F}(\hbar){\widetilde{\ast}} \widehat{\G}(\hbar))(p,q;\hbar):= \int_{{\mathcal R}\times{\mathcal R}^l}\widehat{\F}(t^\prime -t;\hbar) \widehat{\G}(t^\prime;\hbar)e^{ i[\hbar \Omega_\omega(t^\prime -t,t^\prime)/2]}\,d\lambda(t^\prime) \\ \nonumber && {} \\ && \label{flat3} (\F\sharp\G)(x,\xi,\hbar):= \int_{{\mathcal R}\times{\mathcal R}^l} (\widehat{\F}(\hbar){\widetilde{\ast}} \widehat{\G}(\hbar))(t,\hbar)e^{i\langle s,x\rangle+p{\mathcal L}_\omega(\xi)}\,d\lambda(t) \\ \nonumber {} \\ \label{MBB1} && \widehat{\mathcal C}(p,q;\hbar):= \frac{1}{\hbar}\int_{{\mathcal R}\times{\mathcal R}^l}\widehat{\F}(t^\prime -t,\hbar) \widehat{\G}(t^\prime,\hbar)\sin[\hbar \Omega_\omega(t^\prime -t,t^\prime)/2]\,d\lambda(t^\prime) \\ \nonumber {} \\ && \label{IFMM1} {\mathcal C}(x,\xi;\hbar):=\int_{{\mathcal R}\times{\mathcal R}^l} \widehat{\mathcal C}(p,s;\hbar)e^{ip{\mathcal L}_\omega(\xi) +i\langle s,x\rangle}\,d\lambda(t) \end{eqnarray} Once more by the same argument valid for the Weyl quantization in ${\mathcal R}^{2l}$: \begin{proposition} \label{Quant2} The following composition formulas hold: \begin{eqnarray} \label{Comm6} && F(\hbar)G(\hbar)= \int_{{\mathcal R}\times{\mathcal R}^l}(\widehat{\F}(\hbar){\widetilde{\ast}} \widehat{\G}(\hbar))(t;\hbar) U_\hbar(\omega t)\,d\lambda(t). \end{eqnarray} \begin{eqnarray} \label{Comm7} && \frac{[F(\hbar),G(\hbar)]}{i\hbar}= \int_{{\mathcal R}\times{\mathcal R}^l}\widehat{\mathcal C}(t;\hbar)U_\hbar(\omega t)\,d\lambda(t) \end{eqnarray} \end{proposition} \begin{remark} The symbol of the product $F(\hbar)G(\hbar)$ is then $(\F\sharp\G)({\mathcal L}_\omega(\xi),x,\hbar)$ and the symbol of the commutator $[F(\hbar),G(\hbar)]/i\hbar$ is $ {\mathcal C}({\mathcal L}_\omega(\xi),x;\hbar)$, which is by definition the Moyal bra\-cket of the symbols $\F, \G$. From (\ref{MBB1}) we get the asymptotic expansion: \vskip 3pt\noindent \begin{eqnarray} \label{Mo6} && \widehat{\mathcal C}(p,q;\omega;\hbar)=\sum_{j=0}^\infty\frac{(-1)^j \hbar^{2j}}{(2j+1)!} D^j(p,q;\omega) \\ && D^j(p,q;\omega):=\int_{{\mathcal R}\times{\mathcal R}^l}\widehat{\F}(t^\prime-t,\hbar) \widehat{\G}(t^\prime,\hbar)[\Omega_\omega(t^\prime -t,t^\prime)^j\,d\lambda(t^\prime) \end{eqnarray} \vskip 3pt\noindent whence the asymptotic expansion for the Moyal bracket \vskip 3pt\noindent \begin{eqnarray} \label{Moexp} && \{\F, \G\}_M({\mathcal L}_\omega(\xi),x;\hbar)=\{\F, \G\}({\mathcal L}_\omega(\xi),x,\hbar)+ \\ \nonumber && \sum_{|r+j|=0}^\infty\frac{(-1)^{|r|}\hbar^{|r+j|}}{r!sj}[\partial_x^r \omega\partial^j_{\mathcal L} \F({\mathcal L}_\omega(\xi),x)]\cdot [ \omega\partial^j_{\mathcal L} \partial_x^r G({\mathcal L}_\omega(\xi),x,\hbar)]- \\ \nonumber && -\sum_{|r+j|=0}^\infty\frac{(-1)^{|r|}\hbar^{|r+j|}}{r!j!}[\partial_x^r \omega\partial^j_{\mathcal L} \G({\mathcal L}_\omega(\xi),x)]\cdot [ \omega\partial^j_{\mathcal L} \partial_x^r F({\mathcal L}_\omega(\xi),x,\hbar)] \end{eqnarray} Remark that: \begin{equation} \label{Mo5} \{\F, \G\}_M({\mathcal L}_\omega(\xi),x;\hbar)=\{\F, \G\}({\mathcal L}_\omega(\xi),x)+O(\hbar) \end{equation} In particular, since $L_\omega(\xi)$ is linear, we have $\forall\,\F(\xi;x;\hbar)\in C^\infty({\mathcal R}^l\times\T^l\times[0,1])$: \begin{equation} \label{MP} \{\F, {\mathcal L}_\omega(\xi)\}_M({\mathcal L}_\omega(\xi),x;\hbar)=\{\F, {\mathcal L}_\omega(\xi)\}({\mathcal L}_\omega(\xi),x;\hbar) \end{equation} \end{remark} The observables $\F(\xi,x;\hbar)\in\J(\rho)$ enjoy the crucial property of stability under compositions of their dependence on ${\mathcal L}_\omega(\xi)$ (formulae (\ref{flat3}) and (\ref{IFMM1}) above). As in \cite{BGP}, we want to estimate the relevant quantum observables uniformly with respect to $\hbar$, i.e. through the weighted norm (\ref{sigom}). \vskip 3pt\noindent \subsection{Uniform estimates} The following proposition is the heart of the estimates needed for the convergence of the KAM iteration. The proof will be given in the next (sub)section. Even though we could limit ourselves to symbols in $\J(\rho)$, we consider for the sake of generality and further reference also the general case of symbols belonging to $\J^\dagger(\rho)$. \begin{proposition} \label{stimeMo} Let $F$, $G\in J^\dagger_k(\rho)$, $k=0,1,\ldots$, $d=d_1+d_2$. Let $\F, \G$ be the corresponding symbols, and $0<d+d_1<\rho$. Then: \begin{enumerate} \item[\bf{($1^\dagger$)}] $FG\in J^\dagger_k(\rho)$ and fulfills the estimate \vskip 3pt\noindent \begin{equation} \label{2conv'} \|FG\|_{\mathcal B(L^2)}\leq \|\F\sharp\G\|^\dagger_{\rho,k} \leq (k+1)4^k \|\F\|^\dagger_{\rho,k} \cdot \|\G\|^\dagger_{\rho,k} \end{equation} \vskip 3pt\noindent \item[\bf{($2^\dagger$)}] $\displaystyle \frac{[F,G]}{i\hbar}\in J^\dagger_k(\rho-d)$ and fulfills the estimate \vskip 3pt\noindent \begin{eqnarray} \label{normaM2'} \left\Vert\frac{[F,G]}{i\hbar}\right\Vert_{\mathcal B(L^2)}\leq \|\{\F,\G\}_M\|_{\rho-d-d_1,k}^\dagger \leq \frac{(k+1)4^k}{e^2d_1(d+d_1)}\|\F\|_{\rho,k}^\dagger \|\G\|_{\rho-d,k}^\dagger \end{eqnarray} \vskip 3pt\noindent \item[\bf{($3^\dagger$)}] $\F\G \in \J^\dagger_k(\rho)$, and \begin{equation} \label{simple'} \|\F\G\|^\dagger_{\rho,k} \leq (k+1)4^k \|\F\|^\dagger_{\rho,k} \cdot \|\G\|^\dagger_{\rho,k} \end{equation} \end{enumerate} Moreover if $F$, $G\in J_k(\rho)$, $k=0,1,\ldots$, and $\F, \G\in\J_k(\rho)$, then: \begin{enumerate} \item[\bf{(1)}] $FG\in J_k(\rho)$ and fulfills the estimate \vskip 3pt\noindent \begin{equation} \label{2conv} \|FG\|_{\mathcal B(L^2)}\leq \|\F\sharp\G\|_{\rho,k} \leq (k+1)4^k \|\F\|_{\rho,k} \cdot \|\G\|_{\rho,k} \end{equation} \vskip 3pt\noindent \item[\bf{(2)}] $\displaystyle \frac{[F,G]}{i\hbar}\in J_k(\rho-d)$ and fulfills the estimate \vskip 5pt\noindent \begin{equation} \label{normaM2} \left\Vert\frac{[F,G]}{i\hbar}\right\Vert_{\mathcal B(L^2)}\leq \|\{\F,\G\}_M\|_{\rho-d-d_1,k} \leq \frac{(k+1)4^k}{e^2d_1(d+d_1)}\|\F\|_{\rho,k}\cdot \|\G \|_{\rho-d,k} \end{equation} \item[\bf{(3)}] $\F \G \in \J_k(\rho)$ and \begin{equation} \label{simple} \|\F\G\|_{\rho,k} \leq (k+1)4^k \|\F\|_{\rho,k} \cdot \|\G\|_{\rho,k}. \end{equation} \end{enumerate} \vskip 3pt\noindent \end{proposition} \begin{remark} The operators $F(\hbar)$ with the uniform norm $\|F\|_{\rho,k}, k=0,1,\ldots$ form a Banach subalgebra (without unit) of the algebra of the continuous operators in $L^2(\T^l)$. \end{remark} Before turning to the proof we state and prove two further useful results. \begin{corollary} \label{multipleM} Let $\F,\G\in\J_k(\rho)$, and let $0<d<\rho$, $r\in {\Bbb N}$. Then: \vskip 4pt\noindent \begin{eqnarray} \label{stimaMr} \frac{1}{r!}\|\{\F,\{\F,\ldots,\{\F,{\mathcal G}\}_M\}_M\ldots\}_M\|_{\rho-d,k} \leq \frac{\sqrt{2\pi r}(k+1)4^k}{(ed)d^r}\|\F\|_{\rho,k}^r \|{\mathcal G}\|_{\rho,k} \end{eqnarray} \end{corollary} \begin{proof} We follow the argument of \cite{BGP}, Lemma 3.5. If $d=d_1+d_2$, (\ref{normaM2'}) entails: $$ \|\{\F,\G\}_M\|_{\rho-d,k}\leq \frac{C_k}{e^2dd_1}\|\F\|_{\rho,k}\cdot\|\G\|_{\rho-d_2,k},\quad C_k:=(k+1)4^k. $$ because $\|\G\|_{\rho-d,k}\|\leq \|\G\|_{\rho-d_2,k}$ and $d_1(d+d_1)< d_1d$. Set now $\displaystyle d_2=\frac{r-1}{r}d$ which yields $\displaystyle d_1=\frac{d}{r}$. Then: $$ \|\{\F,\G\}_M\|_{\rho-d,k}\leq \frac{C_k}{e^2d\frac{d}{r}}\|\F\|_{\rho,k}\cdot\|\G\|_{\rho-\frac{r-1}{r}d,k}=\frac{C_kr}{(ed)^2} \|\F\|_{\rho,k}\cdot\|\G\|_{\rho-\frac{r-1}{r}d,k} $$ and \begin{eqnarray*} && \|\{\F,\{\F,\G\}_M\}_M\|_{\rho-d,k}\leq \frac{C_k}{ed\frac{d}{r},k}\|\F\|_{\rho,k}\cdot\|\{\F,\G\}_M\|_{\rho-\frac{r-2}{r}d,k}\leq \\ && \leq \frac{(C_kr)^2}{(ed)^3} \|\F\|^2_{\rho,k}\cdot\|\G\|_{\rho-\frac{r-1}{r}d,k} \end{eqnarray*} \vskip 6pt\noindent Iterating $r$ times we get: \vskip 6pt\noindent $$ \frac1{r!}\|\{\F,\{\F,\cdots,\{\F,\G\}_M\}_M,\cdots\}_M\|_{\rho-d,k}\leq \frac{(C_kr)^{r}}{r!}\frac1{(ed)^{r+1}} \|\F\|^r_{\rho,k}\cdot\|\G\|_{\rho-\frac{r-1}{r}d,k}. $$ \vskip 6pt\noindent The Stirling formula and the majorization $\displaystyle \|\G\|_{\rho-\frac{r-1}{r}d,k} \leq \|\G\|_{\rho,k}$ now yield (\ref{stimaMr}). \end{proof} \begin{proposition} \label{stimaP} Let $\F(\xi;x;\hbar)\in\J_k(\rho)$, $\rho>0$, $k=0,1,\ldots$. Then $\{\F,{\mathcal L}_\omega\}_M\in\J_k(\rho-d)$ $\forall\,0<d<\rho$ and the following estimates hold: \begin{equation} \label{stimapp} \|[F,L_\omega]/i\hbar\|_{\rho-d,k}= \|\{\F,{\mathcal L}_\omega\}_M\|_{\rho-d,k} \leq \frac{1}{d}\|\F\|_{\rho,k} \end{equation} \begin{eqnarray} \label{stimaMpp} && \|[F,[\cdots,[F,L_\omega]\cdots]/(i\hbar)^r\|_{\rho-d,k}= \|\{\F,\cdots,\{\F,{\mathcal L}_\omega\}_M\cdots,\}_M\|_{\rho-d,k} \\ && \nonumber \\ && \nonumber \leq \frac{\sqrt{2\pi (r-1)}(k+1)4^k}{(ed)d^r}\|\F\|_{\rho,k}^r \end{eqnarray} \end{proposition} \begin{proof} By (\ref{MP}): $$ \{\F,{\mathcal L}_\omega\}_M=\{\F,{\mathcal L}_\omega\}=-\langle \omega,\nabla_x\rangle\F(\xi,x;\hbar)=\sum_{q\in\Z^l}\langle\omega,q\rangle e^{i\langle q,x\rangle}\int_{{\mathcal R}}\widehat{\F}_q(p;\hbar)e^{ip{\mathcal L}_\omega(\xi)}\,dp $$ and therefore: \begin{eqnarray*} && \|\{\F,{\mathcal L}_\omega\}_M\|_{\rho-d,k}\leq \|\{\F,{\mathcal L}_\omega\}\|_{\rho-d,k}\leq \sum_{q\in\Z^l}|\langle\omega,q\rangle|e^{(\rho-d)|q|}\|\F_q\|_{\rho,k} \leq \\ && \sup_{q\in\Z^l}\langle\omega,q\rangle|e^{-d|q|}\sum_{q\in\Z^l}e^{\rho|q|}\|\F_q\|_{\rho,k} \leq \frac{1}{d}\|\F\|_{\rho,k} \end{eqnarray*} because $|\omega|\leq 1$ by Remark 2.6. This proves (\ref{stimapp}). (\ref{stimaMpp}) is a direct consequence of Corollary \ref{multipleM}. \end{proof} \subsection{Proof of Proposition \ref{stimeMo}} \subsubsection{Three lemmata} \label{2l} The proof will use the three following Lemmata. \begin{lemma} \label{symp} Let $p,p'\in{\mathcal R}^{l},\ s,s^\prime\in{\mathcal R}^l$. Define $t:=(p,s), t^\prime:=(p^\prime,s^\prime)$. Let $\Omega_\omega(\cdot)$ and $\mu_j(\cdot)$ be defined by (\ref{omt}) and (\ref{muk}), respectively. Then: \begin{equation} \vert\Omega_\omega(t,t^\prime)\vert^j\leq 2^j\mu_j(t)\mu_j(t^\prime). \end{equation} \end{lemma} The proof is straightforward, because $\vert\Omega_\omega(t,t^\prime)\vert\leq 2\vert t\vert\vert t^\prime\vert$ and $|\omega|\leq 1$. \begin{lemma}\label{sin} \begin{equation} \left\vert\frac{d^m}{d\hbar^m}\frac{\sin{\hbar x/2}}\hbar\right\vert\leq \frac{\vert x\vert^{m+1}}{2^{m+1}}. \end{equation}. \end{lemma} \begin{proof} Write: \vskip 6pt\noindent \begin{eqnarray*} \frac{d^m}{d\hbar^m}\frac{1}{\hbar}\sin{\hbar x/2} =\frac{d^m}{d\hbar^m}\frac12\int_0^x\cos{\hbar t/2}\,dt =\frac{(-\hbar)^m}{2^{m+1}}\int_0^xt^m\cos^{(m)}{(\hbar t/2)}\,dt \leq\frac{\hbar^m}{2^{m+1}}\int_0^xt^m\,dt. \end{eqnarray*} \vskip 6pt\noindent whence $$ \left\vert\frac{d^m}{d\hbar^m}\frac{\sin{\hbar x/2}}\hbar\right\vert\leq \frac{\hbar^m}{2^{m+1}}\left\vert \int_0^xt^m\,dt\right\vert =\frac{\hbar^m\vert x\vert^{m+1}}{2^{m+1}(m+1)}\leq \frac{\vert x\vert^{m+1}}{2^{m+1}}. $$ \end{proof} \begin{lemma} \label{MoyalS} Let $(\F,G)\in\J^\dagger_\rho$, $0<d+d_1<\rho$, $t=(p,s)$, $t^\prime=(p^\prime,s^\prime)$, $|t|:=|p|+|s|$, $|t^\prime|:=|p^\prime|+|s^\prime|$. Then: \begin{equation} \|\{\F,\G\}_M\|_{\rho-d-d_1}^\dagger \leq \frac{1}{e^2d_1(d+d_1)}\|\F\|_\rho^\dagger \|\G\|_{\rho-d}^\dagger \end{equation} \end{lemma} \begin{proof} We have by definition \begin{eqnarray*} && |\{\F,\G\}_M\|^\dagger_{\rho-d-d_1}\leq \frac{1}{\hbar}\int_{{\mathcal R}^{2l}}e^{(\rho-d-d_1)|t|}d\lambda(t)\int_{{\mathcal R}^{2l}}|\F(t^\prime)\G(t^\prime-t)|\cdot |\sin{\hbar(t^\prime-t)\wedge t^\prime/\hbar}|\,d\lambda(t^\prime) \\ && \leq \int_{{\mathcal R}^{2l}}e^{(\rho-d-d_1)|t|}d\lambda(t)\int_{{\mathcal R}^{2l}}|\F(t^\prime)|\cdot |\G(t^\prime-t)|\cdot |(t^\prime-t)|\cdot |t^\prime|\,d\lambda(t^\prime) \\ && =\int_{{\mathcal R}^{2l}}e^{(\rho-d-d_1)|t|}d\lambda(t)\int_{{\mathcal R}^{2l}}|\F(u+t/2)\G(u-t/2)|\cdot |u-t/2|\cdot |u+t/2|\,d\lambda(u) \\ && =\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}}e^{(\rho-d-d_1)(|x|+|y|)}|\F(x)\G(y)|\cdot |x|\cdot |y|\,d\lambda(x)d\lambda(y) \leq \\ && \frac{1}{d_1(d+d_1)}\int_{{\mathcal R}^{2l}}|\F(x)|e^{\rho |x|}\,d\lambda(x) \int_{{\mathcal R}^{2l}}|\G(y)|e^{(\rho-d) |y|}\,d\lambda(x)\leq \frac{1}{e^2d_1(d+d_1)}\|\F\|_\rho^\dagger\|\G\|_{\rho-d}^\dagger \end{eqnarray*} because $\displaystyle \sup_{\alpha\in{\mathcal R}}|\alpha| e^{-\delta\alpha}=\frac{1}{e\delta}, \delta>0$. \end{proof} \subsubsection{ Assertion {\mbox{\bf ($1^\dagger$)}}}\label{1'} By definition \begin{eqnarray} && \|\F(\hbar)\sharp\G(\hbar)\|_{\rho,k}^\dagger= \nonumber \sum_{\gamma=0}^k\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}}| \partial^\gamma_\hbar [\widehat{\F}(t^\prime-t,\hbar) \widehat{\G}(t^\prime,\hbar)e^{i\hbar\Omega_\omega(t^\prime,t^\prime-t)}] |\mu_{k-\gamma}(t)e^{\rho |t|}\,d\lambda(t^\prime) d\lambda(t) \nonumber \end{eqnarray} whence \begin{eqnarray} && \|\F(\hbar)\sharp\G(\hbar)\|^\dagger_{\rho,k}= \nonumber \\ && \sum_{\gamma=0}^k\sum_{j=0}^\gamma\binom {\gamma}{j}\int_{{\mathcal R}^{2l}\times {\mathcal R}^{2l} }\vert\partial_\hbar^{\gamma-j} [\widehat{\F}(t^\prime-t,\hbar) \widehat{\G}(t^\prime,\hbar)]\vert\Omega_\omega(t^\prime-t,t^\prime)\vert^j \mu_{k-\gamma}(t)e^{\rho |t|}\,d\lambda(t^\prime) d\lambda(t)= \nonumber \\ && \nonumber \sum_{\gamma=0}^k\sum_{j=0}^\gamma\sum_{i=0}^{\gamma-j}\binom {\gamma}{j}\binom{j}{i}\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}} \vert\partial_\hbar^{\gamma-j-i}\widehat{\F}(t^\prime-t,\hbar) \partial_\hbar^{i}\widehat{\G}(t^\prime,\hbar) \vert\vert\Omega_\omega(t^\prime-t,t^\prime)\vert^j\mu_{k-\gamma}(t)e^{\rho|t|}\,d\lambda(t^\prime) d\lambda(t) \end{eqnarray} By Lemma \ref{symp} and the inequality $\displaystyle \mu_k(t^\prime-t)\leq 2^{k/2}\mu_k(t^\prime)\mu_k(t)$ we get, with $t=(p,s): t^\prime=(p^\prime,s^\prime)$ \begin{eqnarray*} && \vert\Omega_\omega(t^\prime-t,t^\prime)\vert^j\mu_{k-\gamma}(t)\leq 2^j\mu_j(t^\prime-t)\mu_j(t^\prime)\mu_{k-\gamma}(t) \\ && \leq 2^j\mu_jt^\prime-t)\mu_j(t^\prime)\mu_{k-\gamma}(t)2^{(k-\gamma)/2}\mu_{k-\gamma}(t^\prime -t)\mu_{k-\gamma}(t) \\ && \leq 2^{j+(k-\gamma)/2}\mu_{k-\gamma+j}(t^\prime -t)\mu_{k-\gamma+j}(t) \end{eqnarray*} Denote now $\gamma-j-i=k-\gamma^\prime$, $i=k-\gamma^{\prime\prime}$ and remark that $j\leq\gamma^\prime$, $i\leq\gamma-j$. Then: \begin{eqnarray*} 2^{j+(k-\gamma)/2}\mu_{k-\gamma+j}(t^\prime -t)\mu_{k-\gamma+j}(t) \leq 2^k\mu_{\gamma^\prime}(t^\prime)\mu_{\gamma^{\prime\prime}}(t) \end{eqnarray*} Since $\displaystyle \binom {\gamma}{j}\binom{j}{i}\leq 4^k$ and the sum over $k$ has $(k+1)$ terms we get: \begin{eqnarray*} && \|\F(\hbar)\sharp\G(\hbar)\|^\dagger_{\rho,k} \leq \\ && (k+1)4^k\,\sum_{\gamma^\prime,\gamma^{\prime\prime}=0}^k\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}} |\partial^{k-\gamma^\prime}_\hbar\widehat{\F}(t^\prime -t,\hbar) |\partial^{k-\gamma^{\prime\prime}}_\hbar\widehat{\G}(t^\prime,\hbar)| \mu_{\gamma^\prime}(t^\prime -t)\mu_{\gamma^{\prime\prime}}(t)e^{\rho |t|}\,d\lambda(t^\prime) d\lambda(t) \end{eqnarray*} Now we can repeat the argument of Lemma \ref{MoyalS} to conclude: \begin{eqnarray*} \|\F(\hbar)\sharp\G(\hbar)\|_{\rho,k}^\dagger \leq (k+1)4^k \|\F\|^\dagger_{\rho,k} \cdot \|\G\|^\dagger_{\rho,k} \end{eqnarray*} which is (\ref{2conv'}). Assertion {\mbox{\bf ($3^\dagger$)}}, formula (\ref{simple'}) is the particular case of (\ref{2conv'}) obtained for $\Omega_\omega=0$, and Assertion ${\bf (3)}$, formula (\ref{simple}), is in turn particular case of (\ref{simple'}) . \subsubsection{ Assertion{\mbox{\bf ($2^\dagger$)}}}\label{2'} By definition: \begin{eqnarray*} \|\{\F(\hbar),\G(\hbar)\}_M\|^\dagger_{\rho,k}= \sum_{\gamma=0}^k\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}}| \partial^\gamma_\hbar [\widehat{\F}(t^\prime -t,\hbar) \widehat{\G}(t^\prime,\hbar)\sin\hbar\Omega(t^\prime-t,t^\prime)/\hbar] |\mu_{k-\gamma}(t)e^{\rho |t|}\,d\lambda(t^\prime) d\lambda(t). \end{eqnarray*} Lemma \ref{sin} entails: $$ \vert\partial_\hbar^j \sin\hbar\Omega(t^\prime-t,t^\prime)/\hbar\vert\leq \vert \Omega(t^\prime-t,t^\prime)\vert^{j+1} $$ and therefore: \begin{eqnarray} && \|\{\F(\hbar),\G(\hbar)\}_M\|_{\rho,k}\leq \nonumber \\ && \sum_{\gamma=0}^k\sum_{j=0}^\gamma\binom {\gamma}{j}\int_{{\mathcal R}^{2l}\times {\mathcal R}^{2l} }\vert\partial_\hbar^{\gamma-j} [\widehat{\F}(t^\prime -t,\hbar) \widehat{\G}(t^\prime,\hbar)]\vert\Omega_\omega(t^\prime-t,t^\prime)\vert^{j+1} \mu_{k-\gamma}(t)e^{\rho(|t|}\,d\lambda(t^\prime) d\lambda(t)= \nonumber \\ && \nonumber \sum_{\gamma=0}^k\sum_{j=0}^\gamma\sum_{i=0}^{\gamma-j}\binom {\gamma}{j}\binom{j}{i}\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}} \vert\partial_\hbar^{\gamma-j-i}\widehat{\F}(t^\prime -t,\hbar) \partial_\hbar^{i}\widehat{\G}(t^\prime,\hbar) \vert\vert\Omega_\omega(t^\prime-t,t^\prime)\vert^{j+1}\mu_{k-\gamma}(t)e^{\rho |t|}\,d\lambda(t^\prime) d\lambda(t) \end{eqnarray} Let us now absorb a factor $\vert\Omega_\omega(t^\prime-t,t^\prime)\vert^{j}$ in exactly the same way as above, and recall that $\vert\Omega_\omega(t^\prime-t,t^\prime)\vert\leq \vert (t^\prime-t)t^\prime\vert$. We end up with the inequality: \begin{eqnarray*} && \|\{\F(\hbar),\G(\hbar)\}_M\|^\dagger_{\rho,k} \leq \\ && (k+1)4^k\,\sum_{\gamma^\prime,\gamma^{\prime\prime}=0}^k\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}} |\partial^{k-\gamma'}_\hbar\widehat{\F}(t^\prime -t,\hbar) |\partial^{k-\gamma"}_\hbar\widehat{\G}(t^\prime,\hbar)| |t^\prime -t||t^\prime| \mu_{\gamma^\prime}(t^\prime -t)\mu_{\gamma^{\prime\prime}}(t^\prime)e^{\rho( |t|}\,d\lambda(t^\prime) d\lambda(t) \end{eqnarray*} Repeating once again the argument of Lemma \ref{MoyalS} we finally get: \begin{eqnarray*} \|\{\F(\hbar),\G(\hbar)\}_M\|^\dagger_{\rho-d-d_1,k} \leq \frac{(k+1)4^k}{e^2d_1(d+d_1)} \|\F\|^\dagger_{\rho,k} \cdot \|\G\|^\dagger_{\rho-d,k} \end{eqnarray*} which is (\ref{normaM2'}). Once more, Assertion ${\bf (2)}$ is a particular case of (\ref{normaM2'}) and Assertion ${\bf (1)}$ a particular case of (\ref{2conv'}). This completes the proof of Proposition 3.10. \vskip 1cm \section{A sharper version of the semiclassical Egorov theorem}\label{sectionegorov} Let us state and prove in this section a particular variant of the semiclassical Egorov theorem (see e.g.\cite{Ro}) which establishes the relation between the unitary transformation $\displaystyle e^{i\varepsilon W/i\hbar}$ and the canoni\-cal transformation $\phi^\varepsilon_{{\mathcal W}_0}$ generated by the flow of the symbol ${\mathcal W}(\xi,x;\hbar)|_{\hbar=0}:={\mathcal W}_0(\xi,x)$ (principal symbol) of $W$ at time $1$. The present version is sharper in the sense that the usual one allows for a $O(\hbar^\infty)$ error term. \begin{theorem} Let $\rho>0, k=0,1,\ldots$ and let $A,W\in J^\dagger_k(\rho)$ with symbols $\mathcal A,\ \mathcal W$. Then: \begin{equation} \nonumber S_\varepsilon:=e^{i\frac {\varepsilon W}\hbar}(L_\omega+A)e^{-i\frac {\varepsilon W}\hbar}=L_\omega+B \end{equation} where: \begin{enumerate} \item $\forall\,0<d<\rho$, $B\in J^\dagger_k(\rho-d)$; \item \begin{eqnarray*} \|{\mathcal B}\|^\dagger_{\rho-d,k}\leq\frac{(k+1)4^k}{(ed)^2}\left[1-|\varepsilon|\|{\mathcal W}\|^\dagger_{\rho,k}/{d}\right]^{-1}\left[\|{\mathcal A}\|^\dagger_{\rho,k}+|\varepsilon|\|{\mathcal W}\|^\dagger_{\rho,k}/de\right] \end{eqnarray*} \item Moreover the symbol $\mathcal B$ of $B$ is such that: $$ {\mathcal L}_\omega+{\mathcal B}=({\mathcal L}_\omega+\mathcal A)\circ \Phi^\varepsilon_{\mathcal W_0}+O(\hbar) $$ where $\Phi^\varepsilon_{\mathcal W_0}$ is the Hamiltonian flow of $\mathcal W_0:=\mathcal W|_{\hbar=0}$ at time $\varepsilon$. \item Assertions (1), (2), (3) hold true when $(A,B,W)\in J_k(\rho)$ with $\|{\mathcal A}\|^\dagger_{\rho,k}$, $\|{\mathcal B}\|^\dagger_{\rho,k}$, $\|{\mathcal W}\|^\dagger_{\rho,k}$ replaced by $\|{\mathcal A}\|_{\rho,k}$, $\|{\mathcal B}\|_{\rho,k}$, $\|{\mathcal W}\|_{\rho,k}$. \end{enumerate} \end{theorem} \begin{proof}The proof is the same in both cases, since it it is based only on Proposition \ref{stimeMo}. Therefore we limit ourselves to the $\J_k(\rho)$ case. By Corollary \ref{corA}, Assertion (3), under the present assumptions $H^1(\T^l)$, the domain of the self-adjoint operator $\F(L_\omega)+A$, is left invariant by the unitary operator $\displaystyle e^{i\frac {\varepsilon W}{\hbar}}$. Therefore on $H^1(\T^l)$ we can write the commutator expansion $$ S_\varepsilon=L_\omega+\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}[W,[W,\ldots,[W,L_\omega]\ldots]+\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}[W,[W,\ldots,[W,A]\ldots] $$ whence the corresponding expansions for the symbols \begin{eqnarray*} && {\mathcal S}(x,\xi;\hbar,\varepsilon)={\mathcal L}_\omega(\xi)+\sum_{m=1}^\infty \frac{\varepsilon^m}{m!}\{{\mathcal W},\{{\mathcal W},\ldots,\{{\mathcal W},{\mathcal L}_\omega\}\ldots\}_M \\ && +\sum_{m=1}^\infty \frac{\varepsilon^m}{m!}\{{\mathcal W},\{{\mathcal W},\ldots,\{{\mathcal W},{\mathcal A}\}_M\ldots\}_M \end{eqnarray*} because $\{{\mathcal W},{\mathcal L}_\omega\}_M=\{{\mathcal W},{\mathcal L}_\omega\}$ by the linearity of ${\mathcal L}_\omega$. Now apply Corollaries \ref{multipleM} and \ref{stimaP}. We get, denoting once again $C_k=(k+1)4^k$: \begin{eqnarray*} && \|\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}[W,[W,\ldots,[W,L_\omega]\ldots]\|_{L^2\to L^2}\leq \|\sum_{m=1}^\infty \frac{\varepsilon^m}{m!}\{{\mathcal W},\{{\mathcal W},\ldots,\{{\mathcal W},{\mathcal L}_\omega\}\ldots\}_M\|_{\rho-d,k} \\ && \leq \sum_{m=1}^\infty \frac{|\varepsilon|^m}{m!}\|\{{\mathcal W},\{{\mathcal W},\ldots,\{-i\langle\omega,\nabla_x\rangle {\mathcal W}\}_M\ldots\}_M\|_{\rho-d,k}\leq \frac{C_k}{ed}\sum_{m=1}^\infty\sqrt{2\pi m} \left(\frac{|\varepsilon|\|{\mathcal W}\|_{\rho,k} }{d}\right)^m \end{eqnarray*} \begin{eqnarray*} && \|\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}[W,[W,\ldots,[W,A]\ldots] \|_{L^2\to L^2} \leq \|\sum_{m=1}^\infty \frac{\varepsilon^m}{m!}\{{\mathcal W},\{{\mathcal W},\ldots,\{{\mathcal W},{\mathcal A}\}_M\ldots\}_M\|_{\rho-d,k} \\ && \leq \frac{C_k}{ed}\|{\mathcal A}\|_{\rho,k}\sum_{m=1}^\infty\sqrt{2\pi m} \left(\frac{|\varepsilon|\|{\mathcal W}\|_{\rho,k} }{d}\right)^m \end{eqnarray*} Now define: \begin{equation} \label{Aprimo} B:=\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}[W,[W,\ldots,[W,{\mathcal L}_\omega]\ldots]+\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}[W,[W,\ldots,[W,A]\ldots] \end{equation} and remark that $\forall\,\eta>0$ we can always find $0<d^\prime<d-\eta$ such that $\displaystyle \sqrt{2\pi m}d^{-m}\leq (d^\prime)^{-m}$. Denoting (abuse of notation) $d^\prime=d$ we can write: \begin{eqnarray*} \|{\mathcal B}\|_{\rho-d,k}\leq\frac{(k+1)4^k}{(ed)^2}\left[1-|\varepsilon|\|{\mathcal W}\|_{\rho,k}/{d}\right]^{-1}\left[\|{\mathcal A}\|_{\rho,k}+|\varepsilon|\|{\mathcal W}\|_{\rho,k}/de\right] \end{eqnarray*} This proves assertions (1) and (2). \newline By Remark 2.9, we have: \begin{eqnarray*} && {\mathcal S}^0_\varepsilon(x,\xi;\hbar)|_{\hbar=0}={\mathcal L}_\omega+{\mathcal B}_\varepsilon(\xi,x;\hbar)|_{\hbar=0}= \\ && \sum_{k=0}^\infty \frac{(\varepsilon)^k}{k!}\{{\mathcal W}_0,\{{\mathcal W},\ldots,\{{\mathcal W}_0,{{\mathcal L}+{\mathcal A}}\}\ldots\}=e^{\varepsilon {\mathcal L}_{{\mathcal W}_0}}({\mathcal L}_\omega+{\mathcal A}) \end{eqnarray*} where ${\mathcal L}_{{\mathcal W}_0}\F=\{{\mathcal W},\F\}$ denote the Lie derivative with respect to the Hamiltonian flow generated by ${\mathcal W}_0$. Now, by Taylor's theorem $$ e^{\varepsilon {\mathcal L}_{{\mathcal W}_0}}({\mathcal L}_\omega+{\mathcal A})=({\mathcal L}_\omega+{\mathcal A})\circ \phi^\varepsilon_{{\mathcal W}_0}(x,\xi) $$ and this concludes the proof of the Theorem. \end{proof} \begin{remark} Let $W$ be a solution of the homological equation (\ref{heq}). Then the explicit expression of ${\mathcal W}_0$ clearly is: $$ {\mathcal W}_0=\frac1{\F^\prime({\mathcal L}_\omega(\xi))}\sum_{q\in\Z^\ell}\frac{{\mathcal V}_q (\xi)}{\langle \omega,q\rangle}e^{i\langle q,x\rangle} $$ and $$ e^{\varepsilon {\mathcal L}_{{\mathcal W}_0}}(\F({\mathcal L}_\omega)+\varepsilon{\mathcal A})=\F(L_\omega)+\varepsilon {\mathcal N}_{0,\varepsilon}({\mathcal L}_\omega)+O(\varepsilon^2). $$ \end{remark} Thus ${\mathcal W}_0$ coincides with the expression obtained by first order canonical perturbation theory. \vskip 1cm\noindent \section{Homological equation: solution and estimate} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \setcounter{equation}{0 \setcounter{theorem}{0 Let us briefly recall the well known KAM iteration in the quantum context. \par The first step consists in looking for an $L^2(\T^l)$-unitary map $U_{0,\varepsilon}=e^{i\varepsilon W_0/\hbar}$, $W_0=W_0^\ast$, such that $$ S_{0,\varepsilon}:=U_{0,\varepsilon}(L_\omega+\varepsilon V_0)U_{0,\varepsilon}^\ast=\F_{1,\varepsilon}(L_\omega)+\varepsilon^2 V_{1,\varepsilon}, \quad V_0:=V, \quad \F_{1,\varepsilon}(L_\omega)=L_\omega+\varepsilon N_0(L_\omega). $$ Expanding to first order near $\varepsilon=0$ we get that the two unknowns $W_0$ and $N_0$ must solve the equation $$ \frac{[L_\omega,W_0]}{i\hbar}+V=N_0 $$ $V_{1,\varepsilon}$ is the second order remainder of the expansion. Iterating the procedure: \vskip 3pt\noindent \begin{eqnarray*} && U_{\ell,\varepsilon}:= e^{i\varepsilon^{2^\ell}W_\ell/\hbar}; \\ && S_{\ell,\varepsilon}:=U_{\ell.\varepsilon}(\F_{\ell,\varepsilon}(L_\omega)+\varepsilon^{2^{\ell}} V_{\ell,\varepsilon})U_{\ell,\varepsilon}^\ast= = \F_{\ell+1,\varepsilon}(L_\omega)+\varepsilon^{2^{\ell+1}} V_{\ell+1}(\varepsilon), \\ && \frac{[\F_{\ell,\varepsilon}(L_\omega),W_{\ell,\varepsilon}]}{i\hbar}+V_{\ell,\varepsilon}=N_{\ell,\varepsilon} \end{eqnarray*} \vskip 3pt\noindent With abuse of notation, we denote by $\F_{\ell,\varepsilon}({\mathcal L}_\omega,\hbar)$, ${\mathcal N}_{\ell,\varepsilon}({\mathcal L}_\omega,\hbar)$, ${\mathcal V}_{\ell,\varepsilon}({\mathcal L}_\omega,\hbar)$ the corresponding symbols. \newline The KAM iteration procedure requires therefore the solution in $J_k(\rho)$ of the operator homological equation in the two unknowns $W$ and $M$ (here we have dropped the dependence on $\ell$ and $\varepsilon$, and changed the notation from $N$ to $M$ to avoid confusion with what follows): \begin{equation} \label{heq} \frac{[\F(L_\omega),W]}{i\hbar}+V=M(L_\omega) \end{equation} with the requirement $M(L_\omega)\in J_k(\rho)$; the solution has to be expressed in terms of the corresponding Weyl symbols $({\mathcal L}_\omega, {\mathcal W}, {\mathcal V}, {\mathcal M})\in\J_k(\rho)$ in order to obtain estimates uniform with respect to $\hbar$. Moreover, the remainder has to be estimated in terms of the estimates for $W, M$. \newline Equation (\ref{heq}), written for the symbols, becomes \begin{equation} \label{Mo} \{\F({\mathcal L}_\omega(\xi),\hbar),{\mathcal W}(x,\xi;\hbar)\}_M+{\mathcal V}(x,L_\omega(\xi);\hbar)={\mathcal M}({\mathcal L}_\omega(\xi),\hbar) \end{equation} \subsection{The homological equation}\label{hom} We will construct and estimate the solution of (\ref{heq}), actually solving (\ref{Mo}) and estimating its solution, under the following assumptions on $\F$: \vskip 5pt\noindent {\textbf{Condition (1)}} {\it $(u,\hbar)\mapsto \F(u;\hbar)\in C^\infty({\mathcal R}\times [0,1]; {\mathcal R})$;} \vskip 4pt\noindent {\textbf{Condition (2)}} $$ \inf_{(u,\hbar)\in{\mathcal R}\times [0,1]}\partial_u\F(u;\hbar)>0; \quad \lim_{|u|\to \infty}\frac{|\F(u,\hbar)|}{|u|}=C>0 $$ {\it uniformly with respect to $\hbar\in [0,1]$.} \vskip 5pt\noindent {\textbf{Condition (3)}} {\it Set: \begin{equation} \label{Kappa} \K_\F(u,\eta,\hbar)=\frac{\eta}{\F(u+\eta,\hbar)-\F(u,\hbar)} \end{equation} Then there is $0<\Lambda(\F)<+\infty$ such that} \begin{equation} \label{KB} \sup_{u\in{\mathcal R},\eta\in{\mathcal R},\hbar\in [0,1]}\vert\K_\F(u,\eta,\hbar)\vert<\Lambda. \end{equation} \vskip 5pt\noindent The first result deals with the identification of the operators $W$ and $M$ through the determination of their matrix elements and corresponding symbols ${\mathcal W}$ and ${\mathcal M}$. \begin{proposition} \label{WN} Let $V\in J(\rho)$, $\rho>0$, and let $W$ and $M$ be the minimal closed operators in $L^2(\T^n)$ generated by the infinite matrices \begin{equation} \label{sheq1} \langle e_m,We_{m+q}\rangle =\frac{i\hbar\langle e_m,Ve_{m+q}\rangle}{\F(\langle \omega,m\rangle\hbar,\hbar)-\F(\langle\omega,(m+q)\rangle\hbar,\hbar)},\quad q\neq 0, \quad \langle e_m,We_m\rangle=0 \end{equation} \begin{equation} \langle e_m,Me_m\rangle=\langle e_m,Ve_m\rangle,\qquad \langle e_m,Me_{m+q}\rangle=0, \quad q\neq 0 \label{sheq2} \end{equation} on the eigenvector basis $e_m: m\in\Z^l$ of $L_\omega$. Then: \begin{enumerate} \item $W$ and $M$ are continuous and solve the homological equation (\ref{heq}); \item The symbols ${\mathcal W}(x,\xi;\hbar)$ and ${\mathcal M}(\xi,\hbar)$ have the expression: \begin{eqnarray} \label{defW} && {\mathcal M}(\xi;\hbar)=\overline{{\mathcal V}}({\mathcal L}_\omega(\xi);\hbar);\quad {\mathcal W}({\mathcal L}_\omega(\xi),x;\hbar)=\sum_{q\in\Z^l,q\neq 0}{\mathcal W}({\mathcal L}_\omega(\xi),q;\hbar)e^{i\langle q,x\rangle} \\ && {\mathcal W}({\mathcal L}_\omega(\xi),q;\hbar):=\frac{i\hbar{\mathcal V}({\mathcal L}_\omega(\xi);q;\hbar)}{\F({\mathcal L}_\omega(\xi);\hbar)-\F({\mathcal L}_\omega(\xi+q),\hbar)}, \;q\neq 0; \quad \overline{{\mathcal W}}({\mathcal L}_\omega(\xi);\hbar)=0. \end{eqnarray} \vskip 4pt\noindent Here the series in (\ref{defW}) is $\|\cdot\|_\rho$ convergent; $\overline{{\mathcal V}}({\mathcal L}_\omega(\xi);\hbar)$ is the $0$-th coefficients in the Fourier expansion of ${\mathcal V}({\mathcal L}_\omega(\xi),x,\hbar)$. \end{enumerate} \end{proposition} \begin{proof} Writing the homological equation in the eigenvector basis $e_m: m\in\Z^l$ we get \vskip 7pt\noindent \begin{equation} \label{mheq} \langle e_m,\frac{[\F(L_\omega),W]}{i\hbar}e_n\rangle+\langle e_m,Ve_n\rangle=\langle e_m,M(L_\omega)e_n\rangle\delta_{m,n} \end{equation} \vskip 5pt\noindent which immediately yields (\ref{sheq1},\ref{sheq2}) setting $n=m+q$. As far the continuity is concerned, we have: \vskip 6pt\noindent $$ \frac{i\hbar}{\F(\langle \omega,m\rangle\hbar,\hbar)-\F(\langle\omega,(m+q)\rangle\hbar,\hbar)}=\langle\omega,q\rangle^{-1}\frac{\eta} {\F(\langle \omega,m\rangle\hbar,\hbar)-\F(\langle\omega,m\rangle\hbar+\eta,\hbar)},\quad \eta:=\langle q,\omega\rangle\hbar. $$ \vskip 7pt\noindent and therefore, by (\ref{KB}) and the diophantine condition: $$ |\langle e_m,We_{m+q}\rangle|\leq \gamma |q|^\tau\Lambda |\langle e_m,Ve_{m+q}\rangle|. $$ The assertion now follows by Corollary \ref{corA}, which also entails the $\|\cdot\|_\rho$ convergence of the series (\ref{defW}) because ${\mathcal V}\in \J_\rho$. Finally, again by Corollary \ref{corA}, formulae \eqref{elm4}, \eqref{elm5}, we can write $$ \langle e_m,We_{m+q}\rangle= {\mathcal W}(\langle \omega,(m+q/2)\rangle\hbar,q,\hbar); \quad \langle e_m,Me_m\rangle={\mathcal M}(\omega,m\rangle\hbar,\hbar)={\mathcal V}({\mathcal L}_\omega(\omega,m\rangle\hbar,0,\hbar) $$ and this concludes the proof of the Proposition. \end{proof} The basic example of $\F$ is the following one. Let: \begin{eqnarray} \label{FNl} && \bullet \qquad \F_{\ell}(u,\varepsilon;\hbar)=u+\Phi_{\ell}(u,\varepsilon,\hbar),\qquad \ell=0,1,2,\ldots \\ && \bullet \qquad \Phi_{\ell}(\varepsilon,\hbar):=\varepsilon{\mathcal N}_{0}(u;\varepsilon,\hbar)+\varepsilon^2{\mathcal N}_{1}(u;\varepsilon,\hbar)+\ldots+\varepsilon_{\ell}{\mathcal N}_{\ell}(u,\varepsilon,\hbar), \quad \varepsilon_{j}:=\varepsilon^{2^{j}}. \end{eqnarray} where we assume holomorphy of $\varepsilon\mapsto {\mathcal N}_s(u,\varepsilon,\hbar)$ in the unit disk and the existence of $\rho_0>\rho_1>\ldots>\rho_{\ell}>0$ such that: \begin{itemize} \item[($N_s$)] $\displaystyle\qquad\qquad\qquad\qquad\quad \max_{|\varepsilon|\leq 1} \vert{\mathcal N}\vert_{\rho_s}<\infty, \qquad .$ \end{itemize} Denote, for $\zeta\in{\mathcal R}$: \vskip 6pt\noindent \begin{equation} \label{gl} g_\ell(u,\zeta;\varepsilon,\hbar):=\frac{\Phi_{\ell-1}(u+\zeta;\varepsilon,\hbar)-\Phi_{\ell-1}(u;\varepsilon,\hbar)}{\zeta} \end{equation} \vskip 6pt\noindent Let furthermore: \begin{eqnarray} \label{ddll} && 0<d_{\ell}<\ldots<d_0<\rho_0, \quad 0<\rho_0:=\rho; \\ && \nonumber \rho_{s+1}=\rho_s-d_{s}>0, \;s=0,\ldots,\ell-1 \\ && \delta_\ell:=\sum_{s=0}^{\ell-1}d_\ell <\rho \end{eqnarray} and set, for $j=1,2,\ldots$: \begin{eqnarray} \label{theta} && \theta_{\ell,k}({\mathcal N},\varepsilon):=\sum_{s=0}^{\ell-1}\frac{|\varepsilon_s|\,|{\mathcal N}_s|_{\rho_s,k}}{ed_{s}}, \qquad \theta_{\ell}({\mathcal N},\varepsilon):=\theta_{\ell,0}({\mathcal N},\varepsilon). \end{eqnarray} By Remark 2.4 we have \begin{eqnarray} \label{Theta} && \theta_{\ell,k}({\mathcal N},\varepsilon)=\sum_{s=0}^{\ell-1}\frac{|\varepsilon_s|\,\|{\mathcal N}_s\|_{\rho_s,k}}{ed_{s}} \end{eqnarray} \begin{lemma} \label{propN} In the above assumptions: \begin{enumerate} \item For any $R>0$ the function $\zeta\mapsto g_\ell(u,\zeta,\varepsilon,\hbar)$ is holomorphic in $\{\zeta\;|\,\,|\zeta|<R\,|\,|\Im\zeta|<\rho\}$, uniformly on compacts with respect to $(u,\varepsilon,\hbar)\in{\mathcal R}\times{\mathcal R}\times [0,1]$; \vskip 5pt\noindent \item For any $n\in\Bbb N\cup\{0\}$: \begin{equation} \label{convN} \sup_{{\zeta\in{\mathcal R}}}\,|[g(u,\zeta,\varepsilon,\hbar)]^n|_{\rho_\ell}\leq [\theta_{\ell}({\mathcal N},\varepsilon)]^{n} \end{equation} \item Let: \begin{equation} \label{epbar} \max_{|\varepsilon|\leq L}{\theta_{\ell}({\mathcal N},\varepsilon)}<1, \qquad L>0. \end{equation} Then: \begin{equation} \label{stimaKg} \sup_{\zeta\in{\mathcal R};u\in{\mathcal R}}|\K_\F(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell}\leq \frac{1}{|\zeta|}\cdot \frac1{1-\theta_{\ell}({\mathcal N},\varepsilon)} \end{equation} \item \begin{eqnarray} && \label{stimadgu} \sup_{\zeta\in{\mathcal R}}\,|\partial^j_u g(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell}\leq \theta_{\ell,j}({\mathcal N},\varepsilon) \\ && \label{stimadgeta} \sup_{\zeta\in{\mathcal R}}\,|\partial^j_\zeta g(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell}\leq \theta_{\ell,j}({\mathcal N},\varepsilon) \\ && \label{stimadgh} \sup_{\zeta\in{\mathcal R}}\,|\partial^j_\hbar g(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell }\leq \theta_{\ell,j}({\mathcal N},\varepsilon). \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} The holomorphy is obvious given the holomorphy of ${\mathcal N}_s(u;\varepsilon,\hbar)$. To prove the estimate (\ref{convN}), denoting $\widehat{{\mathcal N}}_s(p,\varepsilon,\hbar)$ the Fourier transform of ${\mathcal N}_s(\xi,\varepsilon,\hbar)$ we write \vskip 4pt\noindent \begin{eqnarray} && \label{gF} g_\ell(u,\zeta,\varepsilon,\hbar)=\frac{1}{\zeta}\sum_{s=0}^{\ell-1}\,\varepsilon_s\,\int_{\mathcal R}\widehat{{\mathcal N}}_\ell(p,\varepsilon,\hbar)(e^{i\zeta p}-1)e^{iu p}\,dp= \\ \nonumber && \frac{2}{\zeta}\sum_{s=0}^{\ell-1}\,\varepsilon_s\,\int_{\mathcal R}\widehat{{\mathcal N}}_\ell(p,\varepsilon,\hbar)e^{ip(u+\zeta)/2}\sin{\zeta p/2}\,dp\qquad\quad \end{eqnarray} which entails: \begin{eqnarray*} && \sup_{{\zeta\in{\mathcal R}}}|g_\ell(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell}=\sup_{{\zeta\in{\mathcal R}}}\int_{\mathcal R}\,|\widehat{g}_\ell (p,\zeta,\varepsilon,\hbar)|e^{\rho_\ell |p|}\,dp \\ && \leq \max_{\hbar\in [0,1]}\sum_{s=0}^{\ell-1}|\varepsilon_s|\, \int_{\mathcal R}|\widehat{{\mathcal N}}_s(p,\varepsilon,\hbar) p|e^{(\rho_s-d_s) |p|}\,dp \leq \frac1{e}\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\,\frac{|{\mathcal N}_s|_{\rho_s}}{d_s}= \theta_\ell({\mathcal N},\varepsilon,1)\,\qquad 0<d_s<\rho_s. \end{eqnarray*} \vskip 4pt\noindent Hence Assertion (3) of Proposition \ref{stimeMo}, considered for $k=0$, immediately yields (\ref{convN}). Finally, if $g_\ell$ is defined by (\ref{gl}), then: $$ \K_\F(u,\zeta,\varepsilon,\hbar)=\frac{1}{\zeta}\frac{1}{1+ g_\ell(u,\zeta,\varepsilon,\hbar)} $$ and the estimate (\ref{stimaKg}) follows from (\ref{convN}) which makes possible the expansion into the geome\-trical series \begin{equation} \label{sgg} \frac{1}{1+g_\ell(u,\zeta,\varepsilon,\hbar)}=\sum_{n=0}^\infty\,(-1)^n\,g_\ell(u,\zeta,\varepsilon,\hbar)^n \end{equation} \vskip 5pt\noindent convergent in the $\theta_{\ell}({\mathcal N},\varepsilon)$ norm. To see (\ref{stimadgu}), remark that (\ref{gF}) yields: \vskip 5pt\noindent \begin{eqnarray*} && \partial^j_u g_\ell(u,\zeta,\varepsilon,\hbar)=\frac{2}{\zeta}\sum_{s=0}^{\ell-1}\,\varepsilon_s\,\int_{\mathcal R}\widehat{{\mathcal N}}_\ell(p,\varepsilon,\hbar)(ip)^j e^{ip(u+\zeta)/2}\sin{\zeta p/2}\,dp. \end{eqnarray*} Therefore: \begin{eqnarray*} && \sup_{{\zeta\in{\mathcal R}}}\,| \partial^j_u g_\ell(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell}\leq \sup_{{\zeta\in{\mathcal R}}}\,\max_{\hbar\in [0,1]} 2\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\int_{\mathcal R}|\widehat{{\mathcal N}}_s(p,\varepsilon,\hbar)||p|^j|\sin{\zeta p/2}|/{\zeta}|e^{\rho_\ell|p|}\,dp \\ && \leq \sup_{{\zeta\in{\mathcal R}}}\,\max_{\hbar\in [0,1]} 2\sum_{s=}^{\ell-1}\,|\varepsilon_s|\int_{\mathcal R}|\widehat{{\mathcal N}}_s(p,\varepsilon,\hbar)||p|^j|\sin{\zeta p/2}|/{\zeta}|e^{(\rho_s-d_s)|p|}\,dp \\ && \leq \sup_{p\in{\mathcal R}}\,[|p|\,\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\,e^{-d_s |p|}]\max_{\hbar\in [0,1]}\int_{\mathcal R}\,|p|^j\widehat{{\mathcal N}}(p,\varepsilon,\hbar)e^{\rho_s|p|}\,dp \\ && \leq \frac1{e}\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\frac{|{\mathcal N}_s|_{\rho_s,j}}{d_s}\leq \theta_{\ell,j}({\mathcal N},\varepsilon) \end{eqnarray*} (\ref{stimadgeta}) is proved by exactly the same argument. Finally, to show (\ref{stimadgh}) we write: \begin{eqnarray*} && \sup_{{\zeta\in{\mathcal R}}}| \partial^j_\hbar g_\ell(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell} \leq \sup_{{\zeta\in{\mathcal R}}}\max_{\hbar\in [0,1]} 2\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\int_{\mathcal R}|\partial^j_\hbar\widehat{{\mathcal N}_s}(p,\varepsilon,\hbar)|\cdot|\sin{\zeta p/2}|/{\zeta}|e^{\rho_\ell |p|}\,dp \\ && \leq \max_{\hbar\in [0,1]}\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\int_{\mathcal R}|\partial^j_\hbar\widehat{{\mathcal N}}(p,\varepsilon,\hbar)|e^{(\rho_s-d_s)|p|}\,dp \leq \theta_{\ell}({\mathcal N},\varepsilon) \end{eqnarray*} \vskip 4pt\noindent This proves the Lemma. \end{proof} By \textbf{Condition (1)} the operator family $\hbar \mapsto \F(L_\omega;\varepsilon,\hbar)$, defined by the spectral theorem, is self-adjoint in $L^2(\T^l)$; by \textbf{Condition (2)} $D(\F(L_\omega))=H^1(\T^l)$. Since $L_\omega$ is a first order operator with symbol ${\mathcal L}_\omega$, the symbol of $\F(L_\omega;\varepsilon,\hbar)$ is $\F({\mathcal L}_\omega(\xi),\varepsilon,\hbar)$. We can now state the main result of this section. Let $\F_\ell(x,\varepsilon,\hbar)$ be as in Lemma \ref{propN}, which entails the validity of \textbf{Conditions (1), (2), (3)}. \begin{theorem} \label{homo} \label{homeq} Let $V_\ell\in J_k(\rho_\ell)$, $\ell=0,1\ldots$, $V_1\equiv V$ for some $\rho_\ell> \rho_{\ell+1}>0$, $k=0,1,\ldots$. Let ${\mathcal V}_\ell({\mathcal L}_\omega(\xi),x;\varepsilon,\hbar)\in\J_k(\rho)$ be its symbol. Then for any $\displaystyle \theta_{\ell}({\mathcal N},\varepsilon)<1$ the homological equation (\ref{heq}), rewritten as \vskip 4pt\noindent \begin{equation} \label{heqell} \frac{[\F_\ell(L_\omega),W_\ell]}{i\hbar}+V_{\ell}=N_\ell(L_\omega,\varepsilon) \end{equation} \vskip 6pt\noindent \begin{equation} \label{Moell} \{\F_\ell({\mathcal L}_\omega(\xi),\varepsilon,\hbar),{\mathcal W}_\ell(x,\xi;\varepsilon,\hbar)\}_M+{\mathcal V}_{\ell}(x,L_\omega(\xi);\varepsilon,\hbar)={\mathcal N}_\ell({\mathcal L}_\omega(\xi),\varepsilon,\hbar) \end{equation} \vskip 4pt\noindent admits a unique solution $(W_\ell,N_\ell)$ of Weyl symbols ${\mathcal W}_\ell({\mathcal L}_\omega(\xi),x;\varepsilon,\hbar)$, ${\mathcal N}_\ell({\mathcal L}_\omega(\xi),\varepsilon,\hbar)$ such that \begin{enumerate} \item $W_\ell=W^\ast_\ell\in J_k(\rho_\ell)$, with: \begin{eqnarray} && \label{Thm5.1} \|W_\ell\|_{\rho_{\ell+1},k}=\|{\mathcal W}\|_{\rho_{\ell+1},k}\leq A(\ell,k,\varepsilon)\|{\mathcal V}_\ell\|_{\rho_{\ell},k} \\ \nonumber && {} \\ && \label{Adrk} A(\ell,k,\varepsilon)=\gamma \frac{\tau^\tau}{(ed_\ell)^\tau}\left[1+\frac{2^{k+1}(k+1)^{2(k+1)}k^k}{(e\delta_\ell)^{k}[1-\theta_\ell({\mathcal N},\varepsilon)]^{k+1}} \theta_{\ell,k}^{k+1}\right]. \end{eqnarray} \vskip 6pt\noindent \item ${\mathcal N}_\ell=\overline{{\mathcal V}}_\ell$; therefore ${\mathcal N}_\ell\in J_k(\rho_\ell)$ and $ \|{\mathcal N} \|_{\rho_\ell,k} \leq \|{\mathcal V}_\ell\|_{\rho_\ell,k} .$ \end{enumerate} \end{theorem} \begin{proof} The proof of (2) is obvious and follows from the definition of the norms $\Vert\cdot\Vert_\rho$ and $\Vert\cdot\Vert_{\rho,k}$. The self-adjointess property $W=W^*$ is implied by the construction itself, which makes $W$ symmetric and bounded. Consider ${\mathcal W}_\ell$ as defined by (\ref{defW}). Under the present assumptions, by Lemma \ref{propN} we have: \vskip 8pt\noindent $$ {\mathcal W}_\ell({\mathcal L}_\omega(\xi),q;\varepsilon,\hbar):=\frac1{\langle\omega,q\rangle}\frac{i\hbar{\mathcal V}_\ell({\mathcal L}_\omega(\xi);q;\varepsilon,\hbar)}{1+ g_\ell({\mathcal L}_\omega(\xi);\langle\omega,q\rangle\hbar,\varepsilon,\hbar)}, \quad q\neq 0; \quad {\mathcal W}_\ell(\cdot,0;\hbar)=0. $$ \vskip 8pt\noindent By the $\|\cdot\|_{\rho_\ell}$-convergence of the series (\ref{sgg}) we can write \begin{eqnarray} && \partial^\gamma_\hbar {\mathcal W}_\ell({\mathcal L}_\omega(\xi),q;\varepsilon,\hbar)=\sum_{n=0}^\infty\,(-\varepsilon)^n\,\partial^\gamma_\hbar {\mathcal W}_{\ell,n}({\mathcal L}_\omega(\xi),q;\varepsilon,\hbar), \\ && {\mathcal W}_{\ell,n}({\mathcal L}_\omega(\xi),q;\varepsilon,\hbar)=\frac1{\langle\omega,q\rangle}{\mathcal V}_\ell({\mathcal L}_\omega(\xi);q;\varepsilon,\hbar)[g_\ell({\mathcal L}_\omega(\xi);\langle\omega,q\rangle\hbar,\varepsilon,\hbar)]^n \\ \label{derivateWn} && \partial^\gamma _\hbar{\mathcal W}_{\ell,n}({\mathcal L}_\omega(\xi),q;\varepsilon,\hbar)= \\ \nonumber &&\sum_{j=0}^\gamma\,\binom{\gamma}{j}\,\partial^{\gamma-j}_\hbar {\mathcal V}_\ell({\mathcal L}_\omega(\xi);q;\varepsilon,\hbar)D^j_\hbar [g_\ell({\mathcal L}_\omega(\xi);\langle\omega,q\rangle\hbar,\varepsilon,\hbar)]^n \end{eqnarray} \vskip 4pt\noindent where $D_\hbar$ denotes the total derivative with respect to $\hbar$. We need the following preliminary result. \begin{lemma} \label{derivateg} Let $\zeta(\hbar):=\langle\omega,q\rangle\hbar$. Then: \begin{enumerate} \item \begin{eqnarray} \label{stimadghh} |D^j_\hbar g_\ell({\mathcal L}_\omega(\xi),\zeta(\hbar),\varepsilon,\hbar)|_{\rho_\ell} \leq (j+1) ({2|q|})^j \theta_{\ell,j}({\mathcal N},\varepsilon)^2 \end{eqnarray} \item \begin{eqnarray} \label{stimadgjn} |D^j_\hbar [g_\ell({\mathcal L}_\omega(\xi);\zeta(\hbar),\varepsilon,\hbar)]^n|_{\rho_\ell}\leq 2n^j (\theta_\ell({\mathcal N},\varepsilon))^{n-j} [2(j+1)|q|]^j\theta_{\ell,j}({\mathcal N},\varepsilon)^{2j}. \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} The expression of total derivative $D_\hbar g$ is: \begin{equation} \label{Dom} D_\hbar g(\cdot;\langle\omega,q\rangle\hbar,\varepsilon,\hbar)=(\langle\omega,q\rangle\ \frac{\partial}{\partial\zeta}+\frac{\partial}{\partial\hbar})\left.g_\ell(\cdot;\zeta,\varepsilon,\hbar)\right|_{\zeta=\langle\omega,q\rangle\hbar} \end{equation} By Leibnitz's formula we then have: \begin{equation} D^j_\hbar g_\ell(\cdot;\langle\omega,q\rangle\hbar,\varepsilon,\hbar)=\sum_{i=0}^j\,\binom{j}{i}\langle\omega,q\rangle^{j-i}\frac{\partial^{j-i}g_\ell}{\partial\zeta^{j-i}}\frac{\partial^i g_\ell}{\partial\hbar^{i}} \end{equation} Apply now (\ref{simple}) with $k=0$, (\ref{stimadgu}) and (\ref{stimadgh}). We get: \vskip 5pt\noindent \begin{eqnarray*} \left\vert\frac{\partial^{j-i}g_\ell}{\partial\zeta^{j-i}}\frac{\partial^i g_\ell}{\partial\hbar^{i}}\right\vert_{\rho_\ell}\leq (j+1)2^j \theta_{\ell,j}({\mathcal N},\varepsilon)^2 \end{eqnarray*} whence, since $|\omega|\leq 1$: \begin{eqnarray} \label{stimaDjg} \left\vert\frac{D^jg_\ell}{D\hbar^j}\right\vert_{\rho_\ell} \leq (j+1)(2)^j{|q|^j}\theta_{\ell,j}({\mathcal N},\varepsilon)^2 \end{eqnarray} This proves Assertion (1). To prove Assertion (2), let us first note that \begin{equation} D^j_\hbar [g_\ell({\mathcal L}_\omega(\xi);\langle\omega,q\rangle\hbar,\varepsilon,\hbar)]^n=P_{n,j}\left(g_\ell,\frac{Dg_\ell}{D\hbar},\ldots,\frac{D^jg_\ell}{D\hbar^j}\right). \end{equation} \vskip 5pt\noindent where $P_{n,j}(x_1,\ldots,x_j)$ is a homogeneous polynomial of degree $n$ with $n^j$ terms. Explicitly: $$ P_{n,j}\left(g_\ell,\frac{Dg_\ell}{D\hbar},\ldots,\frac{D^jg_\ell}{D\hbar^j}\right)=\sum_{j=1}^n\,{g_\ell}^{n-j}\,\prod_{{k=1}\atop {j_1+\ldots+j_k=j}}^j \frac{D^{j_k}g_\ell}{D\hbar^{j_k}}. $$ Now (\ref{stimadghh}), (\ref{stimaDjg}) and Proposition \ref{stimeMo} (3) entail: \begin{eqnarray*} && |D^j_\hbar [g_\ell({\mathcal L}_\omega(\xi);\langle\omega,q\rangle\hbar,\varepsilon,\hbar)]^n|_{\rho_\ell}\leq n^j|g|_{\rho_\ell}^{n-j} \prod_{{k=1}\atop {j_1+\ldots+j_k=j}}^j 2(j_k+1)\left({2|q|}\right)^{j_k}\theta_{\ell,j_k}({\mathcal N},\varepsilon)^2 \\ && \leq 2n^j (\theta_\ell({\mathcal N},\varepsilon))^{n-j} [2(j+1)|q|]^j\theta_{\ell,j}({\mathcal N},\varepsilon)^{2j}. \end{eqnarray*} This concludes the proof of the Lemma. \end{proof} \noindent To conclude the proof of the theorem, we must estimate the $\|\cdot\|_{\rho_{\ell+1},k}$ norm of the derivatives $\displaystyle \partial^\gamma _\hbar{\mathcal W}_{\ell,n}({\mathcal L}_\omega(\xi),x;\varepsilon,\hbar)$. Obviously: \begin{equation} \label{serieW} \|{\mathcal W}_\ell(\xi,x;\varepsilon,\hbar)\|_{\rho_\ell+1,k}\leq \sum_{n=0}^\infty\,\|{\mathcal W}_{\ell,n}(\xi,x;\varepsilon,\hbar)\|_{\rho_{\ell+1,k}}. \end{equation} \vskip 4pt\noindent For $n=0$: \begin{eqnarray*} && \|{\mathcal W}_{\ell,0}(\xi,x;\varepsilon,\hbar)\|_{\rho_{\ell+1,k}}\leq \gamma\sum_{\gamma=0}^k\int_{{\mathcal R}\times{\mathcal R}^l}|\partial^\gamma_\hbar\widehat{{\mathcal W}}_{\ell,0}(p,s;\cdot)||s|^{\tau}\mu_{k-\gamma}(p\omega,s)\,e^{\rho_{\ell+1} (|p|+|s|)}\,d\lambda(p,s) \\ && \leq \gamma\sum_{\gamma=0}^k\int_{{\mathcal R}\times{\mathcal R}^l}|\partial^\gamma_\hbar\widehat{{\mathcal V}}_{\ell,0}(p,s;\cdot)||s|^{\tau}\mu_{k-\gamma}(p\omega,s)\,e^{\rho_{\ell+1} (|p|+|s|)}\,d\lambda(p,s)\leq \gamma\frac{\tau^\tau}{(ed_\ell)^\tau}\|{\mathcal V}_\ell\|{\rho_{\ell,k}} \end{eqnarray*} where the inequality follows again by the standard majorization \vskip 6pt\noindent $$ e^{\rho_{\ell+1} (|p|+|s|)}=e^{\rho_{\ell} (|p|+|s|)}e^{-d_\ell(|p|+|s|)}, \quad \sup_{s\in{\mathcal R}^l}[|s|^\tau e^{-d_\ell |s|}]\leq \gamma\frac{\tau^\tau}{(ed_\ell)^\tau} $$ \vskip 4pt\noindent on account of the small denominator estimate (\ref{DC}). For $n>0$ we can write, on account of (\ref{pm1},\ref{pm2}): \begin{eqnarray*} && \|{\mathcal W}_{\ell,n}(\xi,x;\cdot)\|_{\rho_{\ell+1},k}=\sum_{\gamma=0}^k\int_{{\mathcal R}\times{\mathcal R}^l}|\partial^\gamma_\hbar\widehat{{\mathcal W}}_{\ell,n}(p,s;\cdot)||s|^{\tau}\mu_{k-\gamma}(p\omega,s)\,e^{\rho_{\ell+1} (|p|+|s|)}\,d\lambda(p,s)\leq \\ && \leq \gamma\frac{\tau^\tau}{(ed_\ell)^\tau}\sum_{\gamma=0}^k\sum_{j=0}^\gamma \,\binom{\gamma}{j}\,\int_{{\mathcal R}^l}{\mathcal Q}(s,\cdot)e^{\rho_\ell |s|}\,d\nu(s) \end{eqnarray*} where \begin{eqnarray*} {\mathcal Q}(s,\cdot):=\int_{\mathcal R}|[\partial^{\gamma-j}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\cdot)]\ast [D^j_\hbar \widehat{g}^{\,\ast_n}_\ell(p;\langle\omega,s\rangle\hbar,\cdot)] \mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell |p|}\,dp \end{eqnarray*} Here $\ast$ denotes convolution with respect only to the $p$ variable, and $\widehat{g}^{\,\ast_i n}_\ell(p,\zeta,\cdot)$ denotes the $n-$th convolution of $\widehat{g}_\ell$ with itself, i.e. the $p$-Fourier transform of $g^n_\ell$. Now, by Assertion (3) of Proposition (\ref{stimeMo}) and the above Lemma: \begin{eqnarray*} && \int_{{\mathcal R}^l}{\mathcal Q}(s,\cdot)e^{\rho_\ell |s|}\,d\nu(s)= \\ && =\int_{{\mathcal R}\times{\mathcal R}^l}|[\partial^{\gamma-j}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\cdot)]\ast_\xi [D^j_\hbar g^{\ast_\xi n}_\ell(p;\langle\omega,s\rangle\hbar,\cdot)] \mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell(|p|+|s|)}\,d\lambda(p,s) \\ && \leq \int_{{\mathcal R}^l}\left[\int_{{\mathcal R}}|[\partial^{\gamma-j}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\hbar)]\ast [D^j_\hbar \widehat{g}^{\,\ast_ n}(p;\langle\omega,s\rangle\hbar,\cdot)]|\mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell |p|}\,dp\right]e^{\rho_\ell |s|} \,d\nu(s) \\ && \leq 2A(j)^j\theta_\ell({\mathcal N},\varepsilon)^{n-j}\int_{{\mathcal R}^l}\int_{{\mathcal R}}|\partial^{\gamma-j}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\cdot)|\mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell |p|}|s|^{j}e^{\rho_\ell |s|} \,\,dp d\nu(s), \end{eqnarray*} with \[ A(j):= 2n (j+1) \theta_{\ell,j}({\mathcal N},\varepsilon)^{2}. \] This yields, with $\delta_\ell$ defined by (\ref{ddll}): \begin{eqnarray*} && \|{\mathcal W}_{\ell,n}(\xi,x;\cdot)\|_{\rho_\ell+1,k}\leq \gamma\frac{\tau^\tau}{(ed_\ell)^\tau}\sum_{\gamma=0}^k\int_{{\mathcal R}\times{\mathcal R}^l}|\partial^\gamma_\hbar\widehat{{\mathcal W}}_{\ell,n}(p,s;\cdot)\mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell(|p|+|s|)}\,d\lambda(p,s)\leq \\ && \leq \frac{\gamma \tau^\tau (k+1)(2A(k))^k}{(ed_\ell)^\tau}\theta_\ell({\mathcal N},\varepsilon)^{n-j}\sum_{\gamma=0}^k\int_{{\mathcal R}\times {\mathcal R}^l} |\partial^{\gamma}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\cdot)|\cdot \mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell |p|}|s|^{j}e^{\rho_\ell |s|} \,\,d\lambda(p,s) \\ && \leq \frac{\gamma \tau^\tau (k+1)(2A(k))^k}{(ed_\ell)^\tau}\frac{k^{k}}{(e\delta_\ell)^{k}}\theta_\ell({\mathcal N},\varepsilon)^{n-j}\sum_{\gamma=0}^k\int_{{\mathcal R}^l}\int_{{\mathcal R}}|\partial^{\gamma}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\cdot)| \mu_{k-\gamma}(p\omega,s)e^{\rho |p|}e^{\rho |s|}\,d\lambda(p,s) \\ && \leq \gamma\frac{\tau^\tau}{(ed_\ell)^\tau}\frac{(k+1)k^{k}}{(e\delta_\ell)^{k}} 2(2n)^k(\theta_\ell({\mathcal N},\varepsilon))^{n-j}(k+1)^k\theta_{\ell,k}^{2k} \|{\mathcal V}_\ell\|_{\rho,k}. \end{eqnarray*} \vskip 4pt\noindent Therefore, by (\ref{serieW}): \begin{eqnarray*} && \|{{\mathcal W}}_\ell(\xi;x;\varepsilon,\hbar)\|_{\rho_{\ell+1},k} \leq \sum_{n=0}^\infty\,{{\mathcal W}}_{\ell,n} (\xi;x;\varepsilon,\hbar)\|_{\rho_{\ell+1},k} \leq \\ && \leq \gamma \frac{\tau^\tau}{(ed_\ell)^\tau}\|{\mathcal V}_\ell\|_{\rho_\ell,k}\left[1+\frac{2^{k+1}(k+1)^{k+1}k^k}{(e\delta_\ell)^{k}} \theta_{\ell,k}^{2k}\sum_{n=1}^\infty\, n^k (\theta_\ell({\mathcal N},\varepsilon))^{n-j}\right] \\ && \leq \gamma \frac{\tau^\tau}{(ed_\ell)^\tau}\|{\mathcal V}_\ell\|_{\rho_\ell,k}\left[1+\frac{2^{k+1}(k+1)^{k+1}k^k}{(e\delta_\ell)^{k}} \theta_{\ell,k}^{2k-j}\sum_{n=1}^\infty\, n^k (\theta_\ell({\mathcal N},\varepsilon))^{n}\right] \\ && \leq\gamma \frac{\tau^\tau}{(ed_\ell)^\tau}\|{\mathcal V}_\ell\|_{\rho_\ell,k}\left[1+\frac{2^{k+1}(k+1)^{2(k+1)}k^k}{(e\delta_\ell)^{k}[(1- \theta_\ell({\mathcal N},\varepsilon)^{k+1}]} \theta_{\ell,k}^{k+1}\right]. \end{eqnarray*} \vskip 4pt\noindent because $j\leq k$, and \begin{eqnarray*} && \sum_{n=1}^\infty \,n^kx^n\leq \sum_{n=1}^\infty\,(n+1)\cdots (n+k) x^n=\frac{d^k}{dx^k}\,\sum_{n=1}^\infty x^{n+k} \\ && =\frac{d^k}{dx^k}\frac{x^{k+1}}{1-x}=(k+1)!\sum_{j=0}^{k+1}\left(k+1-j\atop j\right) \frac{x^{k+1-j}}{(1-x)^j}\leq \frac{2^{k+1}(k+1)!}{(1-x)^{k+1}}. \end{eqnarray*} By the Stirling formula this concludes the proof of the Theorem. \end{proof} \vskip 2pt\noindent \subsection{Towards KAM iteration}\label{towkam} Let us now prove the estimate which represents the starting point of the KAM iteration: \begin{theorem} \label{resto} Let $\F_\ell$ and $V_\ell$ be as in Theorem \ref{homeq}, and let $W_\ell$ be the solution of the homological equation (\ref{heq}) as constructed and estimated in Theorem \ref{homo}. Let (\ref{epbar}) hold and let furthermore \begin{equation} \label{condepell} |\varepsilon|<\overline{\varepsilon}_\ell, \quad \overline{\varepsilon}_\ell:=\left(\frac{d_\ell}{\|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}}\right)^{2^{-\ell}}. \end{equation} Then we have: \begin{equation} \label{resto1} e^{i\varepsilon_\ell W_\ell/\hbar}(\F_\ell(L_\omega)+\varepsilon_\ell V_\ell)e^{-i\varepsilon_\ell W_\ell/\hbar}=(\F_\ell+\varepsilon_\ell N_\ell)(L_\omega)+\varepsilon_\ell^2V_{\ell+1,\varepsilon} \end{equation} where, $\forall\,0<2d_\ell<\rho_\ell$ and $k=0,1,\ldots$: \begin{eqnarray} && \label{resto2} \|V_{\ell+1,\varepsilon}\|_{\rho_\ell-2d_\ell,k}\leq C(\ell,k,\varepsilon) \frac{\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}} {1-{|\varepsilon_\ell |}A(\ell,k,\varepsilon) \|{\mathcal V}\|_{\rho_\ell,k}/{d_\ell}} \\ && \nonumber {} \\ \label{Cdrk} && C(\ell,k,\varepsilon):=\frac{(k+1)^2 4^{2k}}{(ed_\ell)^3}{A(\ell,k.\varepsilon)}\left[2+|\varepsilon_{\ell} |\frac{(k+1) 4^{k}}{(ed_\ell)^2 }{A(\ell,k.\varepsilon)\|{\mathcal V}_\ell\|_{\rho_\ell,k}}{} \right] \end{eqnarray} \vskip 6pt\noindent Here $A(\ell,k,\varepsilon)$ is defined by (\ref{Adrk}). \end{theorem} \begin{remark} We will verify in the next section (Remark \ref{verifica} below) that (\ref{condepell}) is actually fulfilled for $|\varepsilon|<1/|{\mathcal V}|_\rho$. \end{remark} \begin{proof} To prove the theorem we need an auxiliary result, namely: \begin{lemma} \label{RResto4} For $\ell=0,1,\ldots$ let $\rho_\ell>0, \rho_0:=\rho$, $A\in J_k(\rho)$, $W_\ell\in J_k(\rho_\ell)$, $k=0,1,\ldots$. Let $W_\ell^\ast=W_\ell$, and define: \begin{equation} \label{resto5} A_{\varepsilon}(\hbar):=e^{i\varepsilon_\ell W_\ell/\hbar}Ae^{-i\varepsilon_\ell W/\hbar}. \end{equation} Then, for $\displaystyle |\varepsilon|< (d^\prime_\ell/\|{\mathcal W}\|_{\rho_{\ell+1},k})^{2^{-\ell}}$, and $\forall\,0<d^\prime_\ell<\rho_\ell$, $k=0,1,\ldots$: \begin{equation} \label{resto6} \|A_{\varepsilon}(\hbar)\|_{\rho-d^\prime_\ell,k}\leq \frac{(k+1)4^{k}}{ed^\prime_\ell}\frac{\|{\mathcal A}\|_{\rho_\ell,k}}{1-|\varepsilon_\ell| \|{\mathcal W}\|_{\rho_{\ell+1},k}/d^\prime_\ell} \end{equation} \end{lemma} \begin{proof} Since the operators $W_\ell$ and $A$ are bounded, there is ${\varepsilon}_0>0$ such that the commutator expansion for $A_{\varepsilon}(\hbar)$: \vskip 4pt\noindent $$ A_{\varepsilon}(\hbar)=\sum_{m=0}^\infty \frac{(i\varepsilon_\ell)^m}{ \hbar^m m!}[W_\ell,[W_\ell,\ldots,[W_\ell,A]\ldots] $$ \vskip 4pt\noindent is norm convergent for $|\varepsilon|<\varepsilon_0$ if $\hbar\in]0,1[$ is fixed. The corresponding expansion for the symbols is \vskip 4pt\noindent $$ {\mathcal A}_{\varepsilon}(\hbar)=\sum_{m=0}^\infty \frac{(\varepsilon_\ell)^m}{m!}\{{\mathcal W}_\ell,\{{\mathcal W},\ldots,\{{\mathcal W}_\ell,{\mathcal A}\}_M\ldots\}_M $$ \vskip 4pt\noindent Now we can apply once again Corollary \ref{multipleM}. We get, with the same abuse of notation of Theorem 4.1: \begin{equation} \frac{1}{m!}\|\{{\mathcal W}_\ell,\{{\mathcal W}_\ell,\ldots,\{{\mathcal W}_\ell,{\mathcal A}\}_M\ldots\}_M\|_{\rho-d^\prime_\ell,k} \leq \frac{(k+1)4^{k}}{ed_1}\left(\frac{\|{\mathcal W}_\ell\|_{\rho_\ell,k}}{d^\prime_\ell}\right)^m \|{\mathcal A}\|_{\rho_\ell,k} \end{equation} Therefore \vskip 4pt\noindent $$ \|A_{\varepsilon}(\hbar)\|_{\rho_\ell-d^\prime_\ell,k}\leq \frac{(k+1)4^{k}}{ed^\prime_\ell}\|{\mathcal A}\|_{\rho_\ell,k}\sum_{m=0}^\infty |\varepsilon|^m [\|{\mathcal W}\|_{\rho_{\ell+1},k}/d^\prime_\ell]^m=\frac{(k+1)4^{k}}{ed^\prime_\ell}\frac{\|{\mathcal A}\|_{\rho_\ell,k}}{1-|\varepsilon_\ell| \|{\mathcal W}\|_{\rho_{\ell+1},k}/d^\prime_\ell} $$ \vskip 4pt\noindent and this concludes the proof. \end{proof} $W_\ell$ solves the homological equation (\ref{heq}). Then by Theorem \ref{homo} $W_\ell=W_\ell^\ast\in J_k(\rho_\ell-d_\ell)$, $k=0,1,\ldots$; in turn, by Assertion (3) of Corollary \ref{corA} the unitary operator $\displaystyle e^{i\varepsilon_\ell W_\ell/\hbar}$ leaves $H^1(\T^l)$ invariant. Therefore the unitary image of $H_\varepsilon$ under $\displaystyle e^{i\varepsilon_\ell W/\hbar}$ is the real-holomorphic operator family in $L^2(\T^l)$ \begin{equation} \label{S} \varepsilon\mapsto S_{\varepsilon}:=e^{i\varepsilon_\ell W_\ell/\hbar}(\F_\ell(L_\omega)+\varepsilon_\ell V_\ell)e^{-i\varepsilon\_ell W/\hbar}, \quad D(S(\varepsilon))=H^1(\T^l) \end{equation} Computing its Taylor expansion at $\varepsilon_\ell=0$ with second order remainder we obtain: \begin{eqnarray}\label{lemmm} && S_{\varepsilon}u=\F_\ell(L_\omega)u+\varepsilon_\ell N_\ell(L_\omega)u+ \varepsilon_\ell^2 V_{\ell+1,\varepsilon}u, \quad u\in H^1(\T^l) \\ \nonumber && {} \\ && V_{\ell+1,\varepsilon}=\frac12\int_0^{\varepsilon_\ell} (\varepsilon_\ell -t)e^{i t W_\ell/\hbar}\left(\frac{[N_\ell,W_\ell]}{i\hbar}+\frac{[W_\ell,V_\ell]}{i\hbar}+t \frac{[W_\ell,[W_\ell,V_\ell]]}{(i\hbar)^2}\right)e^{-itW_\ell/\hbar}\,dt \end{eqnarray} To see this, first remark that $S_0=\F(L_\omega)$. Next, we compute, as equalities between continuous operators in $L^2(\T^l)$: \begin{eqnarray*} && S^\prime_{\varepsilon}=e^{i\varepsilon_\ell W/\hbar}([\F_\ell(L_\omega),W_\ell]/i\hbar +V_\ell+\varepsilon_\ell [V,W]/i\hbar)e^{-i\varepsilon_\ell W/\hbar}= \\ && e^{i\varepsilon_\ell W/\hbar}(N_\ell+\varepsilon_\ell [V_\ell,W_\ell]/i\hbar)e^{i\varepsilon_\ell W_\ell/\hbar}; \qquad S^\prime_0= N_\ell \\ && S^{\prime\prime}_{\varepsilon}=e^{i\varepsilon_\ell W_\ell/\hbar}([N_\ell,W_\ell]/i\hbar + [V_\ell,W_\ell]/i\hbar +\varepsilon_\ell [W_\ell,[W_\ell,V_\ell]]/(i\hbar)^2)e^{-i\varepsilon_\ell W_\ell/\hbar}, \end{eqnarray*} and this proves (\ref{lemmm}) by the second order Taylor's formula with remainder: $$ S_{\varepsilon}=S(0)+\varepsilon S^\prime_0+\frac12\int_0^{\varepsilon_\ell} (\varepsilon-t)S^{\prime\prime}(t),dt $$ The above formulae obviously yield \begin{equation} \label{stimar2} \| {V}_{l+1,\varepsilon}\|\leq |\varepsilon_\ell |^2 \max_{0\leq |t|\leq |\varepsilon_\ell |}\|S^{\prime\prime}(t)\| \end{equation} Set now: \begin{equation} \label{R1} R_{\ell+1,\varepsilon}:=[N_\ell,W_\ell]/i\hbar + [V_\ell,W_\ell]/i\hbar +\varepsilon_\ell [W_\ell,[W_\ell,V_\ell]]/(i\hbar)^2 \end{equation} $R_{\ell+1,\varepsilon}$ is a continuous operator in $L^2$, corresponding to the symbol \begin{equation} \label{simbR1} {\mathcal R}_{\ell+1,\varepsilon}({\mathcal L}_\omega(\xi),x;\hbar)=\{{\mathcal N}_\ell,{\mathcal W}_\ell\}_M+\{{\mathcal V}_\ell,{\mathcal W}_\ell\}_M+\varepsilon_\ell\{{\mathcal W}_\ell,\{{\mathcal W}_\ell,{\mathcal V}_\ell\}_M\}_M \end{equation} Let us estimate the three terms individually. By Theorems \ref{homo} and \ref{stimeMo} we can write, with $A(\ell,k,\varepsilon)$ given by (\ref{Adrk}): \begin{eqnarray*} && \|[N_\ell,W_\ell]/i\hbar\|_{\rho_\ell-d_\ell,k}\leq \|\{{\mathcal N}_\ell,{\mathcal W}_\ell\}_M\|_{\rho_\ell-d_\ell,k}\leq \frac{(k+1)4^k}{(ed_\ell)^2}\|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}\|{\mathcal N}_\ell\|_{\rho_\ell,k} \\ && \leq \frac{(k+1)4^k}{(ed)^2} A(\ell,k,\varepsilon)\|{\mathcal V}_\ell\|^2_{\rho_\ell,k} \\ && \|[V_\ell,W_\ell]/i\hbar\|_{\rho_\ell-d_\ell,k}\leq\|\{{\mathcal V}_\ell,{\mathcal W}_\ell\}_M\|_{\rho_\ell-d_\ell,k}\leq \frac{(k+1)4^k}{(ed_\ell)^2}\|{\mathcal V}_\ell\|_{\rho_\ell,k}\|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}\leq \\ && \leq \frac{(k+1)4^k}{(ed_\ell)^2}A(\ell,k.\varepsilon)\|{\mathcal V}_\ell\|^2_{\rho_\ell,k} \\ && \|[W_\ell,[W_\ell,V_\ell]]/(i\hbar)^2\|_{\rho_\ell-d_\ell,k}\leq \|\{{\mathcal W}_\ell,\{{\mathcal W}_\ell,{\mathcal V}_\ell\}_M\}_M\|_{\rho_\ell-d_\ell,k}\leq \frac{(k+1)^2 4^{2k}}{(ed_\ell)^4} \|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}^2 \|{\mathcal V}_\ell\|_{\rho_\ell,k} \\ && \leq \frac{(k+1)^2 4^{2k}}{(ed_\ell)^4}A(\ell,k,\varepsilon)^2\|{\mathcal V}_\ell\|_{\rho_\ell,k}^3 \end{eqnarray*} \vskip 6pt\noindent We can now apply Lemma \ref{RResto4}, which yields: \vskip 2pt\noindent \begin{eqnarray*} && \|e^{i\varepsilon_\ell W_\ell/\hbar}[N_\ell,W_\ell] e^{-i\varepsilon_\ell W_\ell/\hbar}/i\hbar\|_{\rho_\ell-d_\ell-d^\prime_\ell,k}\leq \frac{(k+1)^2 4^{2k}}{(ed_\ell)^2 ed^\prime_\ell}\Xi(\ell,k) \\ && \|e^{i\varepsilon_\ell W_\ell/\hbar}[V_\ell,W_\ell] e^{-i\varepsilon_\ell W_\ell/\hbar}/i\hbar\|_{\rho_\ell-d_\ell-d^\prime_\ell,k}\leq \frac{(k+1)^2 4^{2k}}{(ed_\ell)^2 ed^\prime_\ell}\Xi(\ell,k) \\ && \|e^{i\varepsilon_\ell W_\ell/\hbar}[W_\ell,[W_\ell,V_\ell]] e^{-i\varepsilon_\ell W_\ell/\hbar}/(i\hbar)^2\|_{\rho_\ell-d_\ell-d^\prime_\ell,k}\leq \frac{(k+1)^3 4^{3k}}{(ed_\ell)^4 ed^\prime_\ell}\Xi_1(\ell,k) \end{eqnarray*} where \begin{eqnarray} && \label{Xi} \Xi(\ell,k):= A(\ell,k)\cdot\frac{\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}} {1-|\varepsilon_\ell |\|{\mathcal W}\|_{\rho_{\ell+1},k}/d^\prime_\ell} \\ && \label{Xi1} \Xi_1(\ell,k)=A(\ell,k,\varepsilon)^2\cdot \frac{\|{\mathcal V}\|^3_{\rho_\ell,k}} {1-|\varepsilon_\ell |\|{\mathcal W}\|_{\rho_{\ell+1},k}/d^\prime_\ell} \end{eqnarray} \vskip 6pt\noindent Therefore, summing the three inequalities we get \vskip 3pt\noindent \begin{eqnarray*} && \|V_{\ell+1,\varepsilon}\|_{\rho_\ell-d_\ell-d^\prime_\ell,k}\leq \frac{(k+1)^2 4^{2k}}{(ed_\ell)^2 ed^\prime_\ell}A(\ell,k,\varepsilon)\times \\ && \times \frac{\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}} {1-|\varepsilon_\ell |\|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}/d^\prime_\ell}\left[2+|\varepsilon_\ell|\frac{(k+1) 4^{k}}{(ed_\ell)^2 }A(\ell,k,\varepsilon){\|{\mathcal V}_\ell\|_{\rho_\ell,k}} \right] \end{eqnarray*} \vskip 8pt\noindent If we choose $d^\prime_\ell=d_\ell$ this is (\ref{resto2}) on account of Theorem \ref{homo}. This concludes the proof of Theorem \ref{resto}. \end{proof} \vskip 1cm\noindent \section{Recursive estimates}\label{recesti} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \def{\mathcal P}{{\mathcal P}} \setcounter{equation}{0 \setcounter{theorem}{0 Consider the $\ell$-th step of the KAM iteration. Summing up the results of the preceding Section we can write: \begin{eqnarray*} && \bullet\ S_{\ell,\varepsilon}:=e^{i\varepsilon_\ell W_\ell/\hbar}\cdots e^{i\varepsilon_2 W_1/\hbar}e^{i\varepsilon W_0/\hbar}(\F(L_\omega)+\varepsilon V)e^{-i\varepsilon W_0/\hbar}e^{-i\varepsilon_2 W_1/\hbar}\cdots e^{-i\varepsilon_\ell W_\ell/\hbar} \\ && = e^{i\varepsilon_\ell W_\ell/\hbar}(\F_{\ell,\varepsilon}(L_\omega)+\varepsilon^{2^\ell} V_{\ell,\varepsilon})e^{-i\varepsilon_\ell W_\ell/\hbar} =\F_{\ell+1,\varepsilon}(L_\omega)+\varepsilon_{\ell +1} V_{\ell +1,\varepsilon}, \\ && \bullet\ \F_{\ell,\varepsilon}(L_\omega)=\F(L_\omega)+\sum_{k=1}^{\ell-1} \varepsilon_kN_k(L_\omega), \quad [\F_{\ell}(L_\omega),W_\ell]/i\hbar +V_{\ell,\varepsilon} =N_\ell(L_\omega,\varepsilon) \\ && \bullet V_{\ell+1,\varepsilon}=\frac12\int_0^{\varepsilon_\ell} (\varepsilon_\ell -t)e^{i t W_\ell/\hbar}R_{\ell+1,t}e^{-itW_\ell/\hbar}\,dt \\ && \bullet\ R_{\ell+1,\varepsilon}:=[N_{\ell},W_{\ell}]/\hbar+[W_{\ell},V_{\ell,\varepsilon}]/\hbar+\varepsilon_{\ell} [W_{\ell},[W_{\ell},V_{\ell,\varepsilon}]]/\hbar^2 \end{eqnarray*} We now proceed to obtain recursive estimates for the above quantities in the $\|\cdot\|_{\rho_\ell,k}$ norm. Consider (\ref{resto2}) and denote: \vskip 6pt\noindent \begin{eqnarray} && \label{stimaPsi} \Psi(\ell,k)=\frac{(k+1)^24^k}{(ed_\ell)^3}\Pi(\ell,k); \quad \Pi(\ell,k):= \frac{[2(k+1)^2]^{k+1}k^k}{e^{k}\delta_\ell^{k}} \\ \label{Pll} && P(\ell,k,\varepsilon):=\frac{\theta_{\ell,k}({\mathcal N},\varepsilon)^{k+1}}{[1-\theta_\ell({\mathcal N},\varepsilon)]^{k+1}} \end{eqnarray} \vskip 6pt\noindent where $\theta_{\ell,k}({\mathcal N},\varepsilon)$ is defined by (\ref{Theta}). (\ref{stimaPsi}) and (\ref{Pll}) yield \begin{equation} \label{alk} A(\ell,k,\varepsilon)= \gamma \frac{\tau^\tau}{(ed_\ell)^\tau}[1+\Pi(\ell,k)P(\ell,k,\varepsilon)]. \end{equation} Set furthermore: \begin{eqnarray} && \label{resto22} E({\ell}, k,\varepsilon) := \frac{\Psi(\ell,k)B(\ell,k,\varepsilon)[2+ |\varepsilon_{\ell}| e\Psi(\ell,k)A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}]}{1-|\varepsilon_\ell |A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}/d_\ell} \end{eqnarray} Then we have: \begin{lemma} \label{stimaVl+1} Let: \begin{equation} \label{stimadenE} |\varepsilon_\ell |A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}/d_\ell<1. \end{equation} Then: \begin{equation} \label{restoll} \|V_{\ell+1,\varepsilon}\|_{\rho_{\ell+1},k}\leq E({\ell}, k,\varepsilon)\|V_{\ell,\varepsilon}\|^{2}_{\rho_{\ell},k} \end{equation} \vskip 3pt\noindent \end{lemma} \begin{remark} The validity of the assumption (\ref{stimadenE}) is to be verified in Proposition \ref{estl} below. \end{remark} \begin{proof} Since $d_\ell <1$, by (\ref{Cdrk}), (\ref{stimaPsi}) and (\ref{alk}) we can write: \begin{equation} C(\ell,k,\varepsilon)\leq \Psi(\ell,k)A(\ell,k,\varepsilon))\left[2+ |\varepsilon_{\ell}| e\Psi(\ell,k)A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}\right] \end{equation} and therefore, by (\ref{resto2}): \begin{eqnarray*} && \|V_{\ell+1,\varepsilon}\|_{\rho_\ell-2d_\ell,k}\leq C(\ell,k,\varepsilon) \frac{\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}} {1-|\varepsilon_\ell|A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k}/d_\ell} \\ && {} \\ && \leq \frac{\Psi(\ell,k)A(\ell,k,\varepsilon)\left[2+ |\varepsilon_{\ell}| e\Psi(\ell,k)A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}\right]}{1-|\varepsilon_\ell |A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}/d_\ell}\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}= E(\ell,k,\varepsilon)\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}. \end{eqnarray*} \vskip 6pt\noindent This yields (\ref{restoll}) and proves the Lemma. \end{proof} Now recall that the sequence $\{\rho_j\}$ is decreasing. Therefore: \begin{equation} \|{\mathcal N}_{j,\varepsilon}\|_{\rho_\ell,k}\leq \|{\mathcal N}_{j,\varepsilon}\|_{\rho_j,k}= \|\overline{{\mathcal V}}_{j,\varepsilon}\|_{\rho_j,k} \leq \|{{\mathcal V}}_{j,\varepsilon}\|_{\rho_j,k}, \quad \;j=0,\ldots,\ell-1. \end{equation} \vskip 4pt\noindent At this point we can specify the sequence $d_\ell, \ell=1,2,\ldots$, setting: \vskip 4pt\noindent \begin{equation} \label{ddelta} d_\ell:=\frac{\rho}{(\ell+1)^2}, \qquad \ell=0,1,2,\ldots \end{equation} \vskip 4pt\noindent Remark that (\ref{ddelta}) yields $$ d- \sum_{\ell=0}^\infty d_\ell=\rho-\frac{\pi^2}{6}>\frac{\rho}{2}. $$ as well as the following estimate \begin{equation} \label{stimapigreco} \Pi(\ell,k)\leq \frac{[2(k+1)^2]^{k+1}}{e^{k}\rho^{k}} \end{equation} \vskip 2pt\noindent We are now in position to discuss the convergence of the recurrence (\ref{restoll}). \begin{proposition} \label{estl} Let: \begin{equation} \label{condep} |\varepsilon|< \varepsilon^\ast(\gamma,\tau,k):= \frac{1}{e^{24(3+2\tau)}(k+2)^{2\tau}\|{\mathcal V}\|_{\rho,k}} \end{equation} \vskip 4pt\noindent \begin{equation} \label{condrho} \rho>\lambda(k):=1+8\gamma\tau^\tau [2(k+1)^2]. \end{equation} Then the following estimate holds: \vskip 8pt\noindent \begin{equation} \label{rec2} \|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k} \leq \left(e^{8(3+2\tau)} \|V_0\|_{\rho,k}\right)^{2^{\ell}}=\left(e^{8(3+2\tau)} \|{\mathcal V}_0\|_{\rho,k}\right)^{2^{\ell}}, \quad \ell=0,1,2,\ldots \quad V_0:=V. \end{equation} \end{proposition} \vskip 4pt\noindent \begin{proof} We proceed by induction. The assertion is true for $\ell=0$. Now assume inductively: \vskip 6pt\noindent \begin{equation} \label{Hell} |\varepsilon_j|\|{\mathcal V}_{j,\varepsilon}\|_{\rho_j,k}\leq (k+2)^{-2\tau( j+1)}, \qquad\quad 0\leq j\leq \ell. \end{equation} \vskip 6pt\noindent Out of (\ref{Hell}) we prove the validity of (\ref{rec2}) and of (\ref{stimadenE}); to complete the induction it will be enough to show that (\ref{rec2}) implies the validity of (\ref{Hell}) for $j=\ell+1$. \vskip 6pt Let us first estimate $\theta_\ell({\mathcal N},\varepsilon)$ as defined by \eqref{theta} assuming the validity of \eqref{Hell} . We obtain: \begin{eqnarray*} && \theta_\ell({\mathcal N},\varepsilon)\leq \theta_{\ell,k}({\mathcal N},\varepsilon) \leq \sum_{s=0}^{\ell-1}|\varepsilon_s|\|{\mathcal V}\|_{\rho_s,k}/d_s = \frac{1}{\rho}\sum_{s=0}^{\ell-1}\,(s+1)^2(k+2)^{-2\tau (s+1)}= \\ && \frac{1}{4\rho}\frac{d^2}{d\tau^2}\sum_{s=0}^{\ell-1}\,(k+2)^{-2\tau (s+1)} =\frac{1}{4\rho}\frac{d^2}{d\tau^2}[(k+2)^{-2\tau}\frac{1-(k+2)^{-2\tau \ell}}{1-(k+2)^{-2\tau}} \leq \frac{1}{\rho}(k+2)^{-2}\leq \frac{1}{\rho} \end{eqnarray*} because $\tau>l-1\geq 1$. Now $\rho>1$ entails that \begin{equation} \label{dentheta} \frac1{1-\theta_\ell}<\frac{\rho}{\rho-1}. \end{equation} \vskip 4pt\noindent Hence we get, by (\ref{Pll}) and (\ref{Theta}), the further $(\ell,\varepsilon)-$in\-de\-pen\-dent estimate: \begin{equation} \label{Hells} P(\ell,k,\varepsilon)\leq \frac{\rho^{k+1}}{(\rho-1)^{k+1}}\left((k+2)^2{\rho}\right)^{-k-1}=\left(\frac{1}{(\rho-1)(k+2)^2}\right)^{k+1}. \end{equation} whence, by (\ref{alk}): \begin{eqnarray} \nonumber && A(\ell,k,\varepsilon)\leq \gamma\frac{\tau^\tau (\ell+1)^{2\tau}}{(e\rho)^\tau}[1+[2(k+1)^2]^{k+1}\left[(\rho-1)(k+2)^2\right]^{-(k+1)}(e\rho^3)^{-k}] \\ \label{stimaAell} && \leq \gamma\frac{\tau^\tau (\ell+1)^{2\tau}}{(e\rho)^\tau}[1+\frac{2}{(\rho-1)^{k+1}}(e\rho^3)^{-k}]. \end{eqnarray} \vskip 5pt\noindent Upon application of the inductive assumption we get: \vskip 5pt\noindent \begin{eqnarray*} && |\varepsilon_\ell | \Psi_{\ell,k}A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k}/d_\ell\leq \frac{ 4^k [2(k+1)^2]^{k+3}}{e^{k+3}\rho^{k+4}}(\ell+1)^{2\tau+8}|\varepsilon_\ell | A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k} \\ && \leq \gamma\frac{\tau^{\tau} (\ell+1)^{2(\tau+4)}}{(e\rho)^{\tau}}[1+\frac{2}{(\rho-1)^{k+1}}(e\rho^3)^{-k}]\frac{ 4^k [2(k+1)^2]^{k+3}}{e^{k+3}\rho^{k+4}} (k+2)^{-2(\ell+1)\tau} \\ && \leq \left(\frac{2(\tau+4)}{2\tau\ln{(k+2)}}\right)^{2(\tau+4)}(k+2)^{-\frac{4(\tau+4)}{2\tau\ln{(k+2)}}}\frac{ 4^k [2(k+1)^2]^{k+3}}{e^{k+3}\rho^{k+4}}\frac{\gamma\tau^{\tau}} {(e\rho)^{\tau}}[1+\frac{2}{(\rho-1)^{k+1}}(e\rho^3)^{-k}] \end{eqnarray*} \vskip 5pt\noindent because $$ \sup_{\ell\geq 0} (\ell+1)^{2(\tau+4)}(k+2)^{-2(\ell+1)\tau} =\left(\frac{2(\tau+4)}{2\tau\ln{(k+2)}}\right)^{2(\tau+4)}(k+2)^{-\frac{4(\tau+4)}{2\tau\ln{(k+2)}}}. $$ \vskip 4pt\noindent Hence: \begin{equation} \label{stimaBpsi} |\varepsilon_\ell | \Psi_{\ell,k}A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k}/d_\ell\leq \frac1{2e} \end{equation} provided \begin{equation} \label{2condep} \rho\geq \lambda(k); \qquad \lambda(k)=1+8\gamma\tau^\tau [2(k+1)^2]. \end{equation} \vskip 4pt\noindent Since $\Psi_{\ell,k}\geq 1$, if (\ref{2condep}) holds, (\ref{stimaBpsi}) a fortiori yields \vskip 4pt\noindent $$ |\varepsilon_\ell | A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k}/d_\ell\leq \frac1{2}. $$ \vskip 4pt\noindent Therefore, by (\ref{resto22}): $$ E(\ell,k,\varepsilon) \leq 3 \Psi_{\ell,k}A(\ell,k,\varepsilon) \leq 6 \gamma\frac{\tau^\tau (\ell+1)^{2\tau}}{(e\rho)^\tau} \Psi_{\ell,k} $$ and (\ref{restoll}) in turn entails: $$ \|{\mathcal V}_{\ell+1}\|_{\rho_\ell+1,k}\leq \Phi_{\ell,k} \|{\mathcal V}_\ell\|_{\rho_\ell,k}^2, \quad \Phi_{\ell,k}:=6 \gamma\frac{\tau^\tau (\ell+1)^{2\tau}}{(e\rho)^\tau} \Psi_{\ell,k}. $$ This last inequality immediately yields \begin{equation} \label{rec3} \|{\mathcal V}_{\ell+1}\|_{\rho_\ell,k} \leq [\|{\mathcal V}\|_{\rho,k}]^{2^{\ell+1}}\prod_{m=0}^{\ell}\Phi_{\ell -m,k}^{2m}. \end{equation} \vskip 3pt\noindent Now: \begin{eqnarray*} \Phi_{\ell,k}= 6 \gamma\frac{\tau^\tau (\ell+1)^{2\tau}}{(e\rho)^\tau} \frac{(k+1)^24^{2k}}{ed_{\ell}^3}\frac{[2(k+1)^2]^{k+1}}{e^{k+\tau}d_\ell^{\tau}\delta_\ell^{k}}\leq \gamma\nu(k,\tau,\rho)(\ell+1)^{6+4\tau} \end{eqnarray*} \vskip 5pt\noindent \begin{eqnarray*} \label{nu} && \nu(k,\tau,\rho):=6\frac{\tau^{\tau} 4^{2k}[2(k+1)^2]^{k+2}}{e^{k+\tau+1}\rho^{k+\tau+3}}\leq 6\frac{\tau^{\tau} 4^{2k}[2(k+1)^2]^{k+2}}{e^{k+\tau+1}\lambda(k)^{k+\tau+3}} \leq \\ && \leq 6\frac{\tau^{\tau} 4^{2k}[2(k+1)^2]^{k+2}}{e^{k+\tau+1}[8\gamma\tau^\tau 2(k+1)^2]^{k+\tau+3}} \leq 6\left(\frac{2}{e}\right)^k\frac{1}{e^{\tau+1}\gamma^{k+\tau+3}[2(k+1)^2]^{\tau+1}} \leq \\ && \leq \frac{6}{\gamma^{\tau+3}\tau^{\tau^2+2}(2e)^{\tau+1}} \end{eqnarray*} \vskip 5pt\noindent Therefore \vskip 4pt\noindent \begin{equation} \label{gammanu} \gamma\nu(k,\tau,\rho)\leq \frac{6}{\gamma^{\tau+2}\tau^{\tau^2+2}(2e)^{\tau+1}} <1 \end{equation} \vskip 6pt\noindent because $\tau>1$ and $\gamma>1$. As a consequence, since $\Phi_{j,k}\leq \Phi_{\ell,k}, j=1,\ldots$, we get: \vskip 5pt\noindent \begin{eqnarray*} && \prod_{m=1}^{\ell}\Phi^{2m}_{\ell+1-m,k} \leq [\Phi_{\ell,k}]^{\ell(\ell+1)}\leq [\gamma\nu(k,\tau,\rho)]^{\ell(\ell+1)} (\ell+1)^{(6+4\tau)\ell(\ell+1)}\leq (\ell+1)^{(6+4\tau)\ell(\ell+1)} \end{eqnarray*} \vskip 3pt\noindent Now $\ell(\ell+1)<2^{\ell+1}$, $\forall\,\ell\in{\mathcal N}$. Hence we can write: $$ (\ell+1)^{(6+4\tau)\ell(\ell+1)} < [e^{(24+16\tau)}]^{2^{\ell+1}}. $$ The following estimate is thus established \begin{eqnarray} \label{stimapsi} && \prod_{m=0}^{\ell}\Psi^{2m}_{\ell -m,k} \leq [e^{8(3+2\tau)}] ^{2^{\ell+1}}. \end{eqnarray} If we now define: \begin{eqnarray} \label{mu} && \mu :=e^{8(3+2\tau)}, \qquad \mu_\ell:=\mu^{2^\ell} \end{eqnarray} then (\ref{rec3}) and (\ref{stimapsi}) yield: \vskip 6pt\noindent \begin{eqnarray} && \label{GVS} \|{\mathcal V}_{\ell+1,\varepsilon}\|_{\ell+1,k} \leq \left[\mu_\ell\|{\mathcal V}_\ell\|_{\rho_\ell,k}\right]^{2}\leq \left[\|{\mathcal V}\|_{\rho,k}\,\mu\right]^{2^{\ell+1}} \\ \label{GVSS} && \varepsilon_{\ell+1}\|{\mathcal V}_{\ell+1,\varepsilon}\|_{\ell+1,k} \leq \left[ \|{\mathcal V}\|_{\rho_\ell,k}\,\mu_\ell\varepsilon_\ell\right]^{2} \leq \left[ \|{\mathcal V}\|_{\rho,k}\,\mu\varepsilon\right]^{2^{\ell+1}} \end{eqnarray} \vskip 5pt\noindent Let us now prove out of (\ref{GVS},\ref{GVSS}) that the condition (\ref{Hell}) preserves its validity also for $j=\ell+1$. We have indeed, by the inductive assumption (\ref{Hell}) and (\ref{GVS}): \begin{eqnarray*} \label{verifica} && |\varepsilon_{\ell+1}|{\mathcal V}_{\ell+1,\varepsilon}\|_{\ell+1,k} \leq \left[ \|{\mathcal V}\|_{\rho_\ell,k}\,\mu_\ell\varepsilon_\ell\right]^{2}\leq (k+2)^{-2\tau(\ell+1)}\varepsilon_\ell(\mu_\ell)^2\|{\mathcal V}\|_{\rho_\ell,k} \\ && \leq (k+2)^{-2\tau(\ell+1)}\left[\varepsilon\mu^3\|{\mathcal V}\|_{\rho,k}\right]^{2^\ell}\leq (k+2)^{-2\tau(\ell+2)} \end{eqnarray*} provided \vskip 4pt\noindent \begin{equation} \label{epsast} |\varepsilon|< \frac{1}{\mu^3\|{\mathcal V}\|_{\rho,k}(k+2)^{2\tau}}= \frac{1}{e^{24(3+2\tau)}\|{\mathcal V}\|_{\rho,k}(k+2)^{2\tau}}:=\varepsilon^\ast(\gamma,\tau,k) \end{equation} \vskip 8pt\noindent where the last expression follows from (\ref{mu}). This proves (\ref{condep}), and concludes the proof of the Proposition. \end{proof} \noindent \begin{theorem}\label{final}[Final estimates of $W_\ell$, $N_\ell$, $V_\ell$] \newline Let ${\mathcal V}$ fulfill Assumption (H2-H4). Then the following estimates hold, $\forall \ell\in\Bbb N$: \vskip 4pt\noindent \begin{eqnarray} \label{stimafw} \varepsilon_\ell \|W_{\ell,\varepsilon}\|_{\rho_{\ell+1},k}\leq \gamma\left(\frac{\tau}{e}\right)^\tau (\ell+1)^{2\tau}(1+8\gamma\tau^\tau [2(k+1)^2])^{-\tau}\cdot (\mu \varepsilon \|{\mathcal V}\|_{\rho})^{2^{\ell}}. \end{eqnarray} \begin{eqnarray} \label{stimafn} \varepsilon_\ell \|N_{\ell,\varepsilon}\|_{\rho_\ell,k}\leq \varepsilon_\ell \|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}\leq \left[ \|{\mathcal V}\|_{\rho}\,\varepsilon \mu\right]^{2^{\ell}}. \end{eqnarray} \begin{eqnarray} \label{stimafv} \varepsilon_{\ell+1} \|V_{\ell+1,\varepsilon}\|_{\rho_{\ell+1},k}\leq \left[ \|V\|_{\rho}\,\varepsilon \mu\right]^{2^{\ell+1}}. \end{eqnarray} \end{theorem} \begin{proof} Since ${\mathcal V}$ does not depend on $\hbar$, obviously $ |{\mathcal V}\|_{\rho,k}\equiv \|{\mathcal V}\|_{\rho}$. Then formula (\ref{Thm5.1}) yields, on account of (\ref{stimaAell}), (\ref{dentheta}), (\ref{2condep}), (\ref{GVS}), (\ref{GVSS}) and of the obvious inequalities $e\rho^{-3}<1$, $\rho/(\rho -1) >1$ when $\rho >\lambda(k)$: \vskip 3pt\noindent \begin{eqnarray*} \nonumber && \label{stimaWfl} \varepsilon_\ell \|W_{\ell,\varepsilon}\|_{\rho_{\ell},k} \leq \gamma\frac{\tau^\tau (\ell+1)^{2\tau}}{(e\rho)^\tau}[1+\frac{2}{(\rho-1)^{k+1}}(e\rho^3)^{-k}](\mu \varepsilon \|{\mathcal V}\|_{\rho})^{2^{\ell}} \\ && \leq 2 \gamma\frac{\tau^\tau (\ell+1)^{2\tau}}{(e\rho)^\tau}(\mu \varepsilon \|{\mathcal V}\|_{\rho})^{2^{\ell}}\leq \gamma\left(\frac{\tau}{e}\right)^\tau (\ell+1)^{2\tau}(1+8\gamma\tau^\tau [2(k+1)^2])^{-\tau}\cdot (\mu \varepsilon \|{\mathcal V}\|_{\rho})^{2^{\ell}}. \end{eqnarray*} \vskip 5pt\noindent because of the straightforward inequality $$ [1+\frac{2}{(\rho-1)^{k+1}}(e\rho^3)^{-k}] <1 $$ which in turn follows from $\gamma>1$. This proves (\ref{stimafw}). Moreover, since ${\mathcal N}_{\ell,\varepsilon}=\overline{{\mathcal V}}_{\ell,\varepsilon}$, again by (\ref{GVS}), (\ref{GVSS}): \begin{equation}\nonumber \label{stiman} \varepsilon_\ell \|{\mathcal N}_{\ell,\varepsilon}\|_{\rho_\ell,k}= \varepsilon_\ell \|\overline{{\mathcal V}}_{\ell,\varepsilon}\|_{\rho_\ell,k}\leq \left[ \|{\mathcal V}\|_{\rho}\,\varepsilon \mu\right]^{2^{\ell}}. \end{equation} The remaining assertion follows once more from (\ref{GVSS}). This concludes the proof of the Theorem. \end{proof} \begin{remark} \label{verifica1} (\ref{stimafw}) yields, with $\displaystyle K:= \gamma\left(\frac{\tau}{e}\right)^\tau (1+8\gamma\tau^\tau [2(k+1)^2])^{-\tau}$: \vskip 4pt\noindent $$ \varepsilon_\ell \frac{\|W_{\ell,\varepsilon}\|_{\rho_{\ell+1},k}}{d_\ell}\leq K\varepsilon^{2^\ell}(\ell+1)^{2(\tau+1)}\|{\mathcal V}\|_\rho^{2^\ell} $$ This yields: $$ |\varepsilon|\left(\frac{\|W_{\ell,\varepsilon}\|_{\rho_{\ell+1},k}}{d_\ell}\right)^{2^{-\ell}}\leq [K(\ell+1)^{2(\tau+1)}]^{2^{-\ell}}\|{\mathcal V}\|_{\rho}\to \|{\mathcal V}\|_\rho, \quad \ell\to\infty $$ so that (\ref{condepell}) is actually fulfilled for $\displaystyle |\varepsilon|< \frac1{\|{\mathcal V}\|_\rho}.$ \end{remark} \begin{corollary} \label{maincc} In the above assumptions set: \begin{equation} \label{Un} U_{n,\varepsilon}(\hbar):= \prod_{s=0}^ne^{i\varepsilon_{n-s}W_{n-s,\varepsilon}}, \quad n=0,1,\ldots. \end{equation} Then: \begin{enumerate} \item $U_{n,\varepsilon}(\hbar)$ is a unitary operator in $L^2(\T^l)$, with $$ U_{n,\varepsilon}(\hbar)^\ast=U_{n,\varepsilon}(\hbar)^{-1}=\prod_{s=0}^ne^{-i\varepsilon_{s}W_{s,\varepsilon}} $$ \item Let: \begin{equation} S_{n,\varepsilon}(\hbar):=U_{n,\varepsilon}(\hbar)({\mathcal L}_\omega+\varepsilon V)U_{n,\varepsilon}(\hbar)^{-1} \end{equation} Then: \begin{eqnarray} S_n&=&D_{n,\varepsilon}(\hbar)+\varepsilon_{n+1}V_{n+1,\varepsilon} \\ D_{n,\varepsilon}(\hbar)&=&L_\omega+\sum_{s=1}^n\varepsilon_sN_{s,\varepsilon} \end{eqnarray} The corresponding symbols are: \begin{eqnarray} && {\mathcal S}_n(\xi,x;\hbar)={\mathcal D}_{n,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)+\varepsilon_{n+1}V_{n+1,\varepsilon}({\mathcal L}_\omega(\xi),x;\hbar) \\ \label{sumD} && {\mathcal D}_{n,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)={\mathcal L}_\omega(\xi)+\sum_{s=1}^n \varepsilon_s{\mathcal N}_{s,\varepsilon}({\mathcal L}_\omega (\xi),\hbar). \end{eqnarray} Here the operators $W_{s,\varepsilon}$, $N_{s,\varepsilon}$, $V_{\ell+1,\varepsilon}$ and their symbols ${\mathcal W}_{s,\varepsilon}$, ${\mathcal N}_{s,\varepsilon}$, ${\mathcal V}_{\ell+1,\varepsilon}$ fulfill the above estimates. \item Let $\varepsilon^\ast$ be defined as in (\ref{condep}). Remark that $ \varepsilon^\ast(\cdot,k)> \varepsilon^\ast(\cdot,k+1), \,k=0,1,\ldots$. Then, if $|\varepsilon|<\varepsilon(k,\cdot)$: \begin{equation} \lim_{n\to\infty}{\mathcal D}_{n,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)={\mathcal D}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar) \end{equation} where in the convergence takes place in the $C^k([0,1];C^\omega (\rho/2))$ topology, namely \begin{equation} \label{limD} \lim_{n\to\infty}\|{\mathcal D}_{n,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)-{\mathcal D}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)\|_{\rho/2,k}=0. \end{equation} \end{enumerate} \end{corollary} \begin{proof} Since Assertions (1) and (2) are straightforward, we limit ourselves to the simple verification of Assertion (3). If $|\varepsilon|<\varepsilon^\ast(\cdot,k)$ then $\displaystyle \|V\|_{\rho,k}\mu \varepsilon < \Lambda<1$. Recalling that $\|\cdot\|_{\rho,,k} \leq \|\cdot\|_{\rho^\prime,k}$ whenever $\rho\leq \rho^\prime$, and that $\rho_\ell <\rho/2$, $\forall\,\ell \in {\Bbb N}$, (\ref{stimafv}) yields: \begin{eqnarray*} && \varepsilon_{n+1}\|{\mathcal V}_{n+1,\varepsilon}\|_{\rho/{2},k}\leq \varepsilon_{n+1}\|{\mathcal V}_{n+1,\varepsilon}\|_{\rho_{n+1},k}\leq \\ && \left[\|V\|_{\rho,k}\mu \varepsilon\right]^{2^{n+1}}\to 0, \quad n\to\infty, \;k\;{\rm fixed}. \end{eqnarray*} In the same way, by (\ref{stimafn}): \begin{eqnarray*} && \|{\mathcal N}_{n,\varepsilon}\|_{\rho/{2},k}\leq \|{\mathcal N}_{n,\varepsilon}\|_{\rho_{n},k}= \|\overline{{\mathcal V}}_{n,\varepsilon}\|_{\rho_{n},k}\leq \|{\mathcal V}_{n,\varepsilon}\|_{\rho_{n},,k}\leq \\ && \left[\|V\|_{\rho,k}\mu \varepsilon\right]^{2^{n}}\to 0, \quad n\to\infty, \;k\;{\rm fixed}. \to 0, \quad n\to\infty, \;k\;{\rm fixed}. \end{eqnarray*} This concludes the proof of the Corollary. \end{proof} \vskip 1cm\noindent \section{Convergence of the iteration and of the normal form.} \label{iteration} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \setcounter{equation}{0 \setcounter{theorem}{0 Let us first prove the uniform convergence of the unitary transformation sequence as $n\to\infty$. Recall that $\varepsilon^\ast(\cdot,k)> \varepsilon^\ast(\cdot,k+1), \; k=0,1,\ldots$, and recall the abbreviation $\|\cdot\|_{\rho,0}:=\|\cdot\|_{\rho}$. Define moreover: \begin{equation} \label{epszero} \varepsilon^\ast:=\varepsilon^\ast_0=\varepsilon^\ast(\gamma,\tau,0). \end{equation} where $\varepsilon^\ast(\gamma,\tau,0)$ is defined by (\ref{epsast}). Then: \begin{lemma} \label{Wsequence} Let $\hbar$ be fixed, and $\displaystyle |\varepsilon|<\varepsilon^\ast_0$. Consider the sequence $\displaystyle \{U_{n,\varepsilon}(\hbar)\}$ of unitary operators in $L^2(\T^l)$ defined by (\ref{Un}). Then there is a unitary operator $U_{\infty,\varepsilon}(\hbar)$ in $L^2(\T^l)$ such that $$ \lim_{n\to\infty}\|U_{n,\varepsilon}(\hbar)-U_{\infty,\varepsilon}(\hbar)\|_{L^2\to L^2}=0 $$ \end{lemma} \begin{proof} Without loss we can take $\hbar=1$. We have, for $p=1,2,\ldots$: \begin{eqnarray*} && U_{n+p,\varepsilon}-U_{n,\varepsilon}=\Delta_{n+p,\varepsilon}e^{i\varepsilon_n W_n}\cdots e^{i\varepsilon W_1}, \quad \Delta_{n+p,\varepsilon}:=(e^{i\varepsilon_{n+p}W_{n+p}}\cdots e^{i\varepsilon_{n+1}W_{n+1}}-I) \\ && \|U_{n+p,\varepsilon}-U_{n,\varepsilon}\|_{L^2\to L^2}\leq 2\|\Delta_{n+p,\varepsilon}\|_{L^2\to L^2} \end{eqnarray*} Now we apply the mean value theorem and obtain $$ e^{i\varepsilon_\ell W_{\ell,\varepsilon}}=1+\beta_{\ell,\varepsilon} \quad \beta_{\ell,\varepsilon}:=i\varepsilon_\ell W_{\ell,\varepsilon} \int_0^{\varepsilon_\ell}e^{i\varepsilon^\prime_\ell W_{\ell,\varepsilon}}\,d\varepsilon^\prime_\ell , $$ whence, by (\ref{stimafw}) in which we make $k=0$: \begin{equation} \label{stimaesp} \|\beta_{\ell,\varepsilon}\|\leq \varepsilon_\ell \|W_{\ell,\varepsilon}\|_{\rho_{\ell}}\leq \varepsilon_\ell \|W_{\ell,\varepsilon}\|_{\rho_{\ell},k} \leq \gamma\tau^\tau (\ell+1)^{2\tau}\frac{(1+8\gamma\tau^\tau [2(k+1)^2])^{2-\tau}}{64\gamma^2\tau^{2\tau}[2(k+1)^2]^4}\cdot (\mu \varepsilon \|{\mathcal V}\|_{\rho})^{2^{\ell}} \leq A^\ell \end{equation} for some $A<1$. Now: \begin{eqnarray*} && \Delta_{n+p,\varepsilon}=[(1+\beta_{n+p,\varepsilon}\varepsilon_{n+p})(1+\beta_{n+p-1,\varepsilon}\varepsilon_{n+p-1})\cdots (1+\beta_{n+1,\varepsilon}\varepsilon_{n+1})]=\sum_{j=1}^p\beta_{n+j,\varepsilon}\varepsilon_{n+j} \\ && +\sum_{j_1<j_2=1}^p\beta_{n+j_1,\varepsilon}\varepsilon_{n+j_1}\beta_{n+j_2,\varepsilon}\varepsilon_{n+j_2}+ \sum_{j_1<j_2<j_3=1}^p\beta_{n+j_1,\varepsilon}\varepsilon_{n+j_1}\beta_{n+j_2,\varepsilon}\varepsilon_{n+j_2}\beta_{n+j_3,\varepsilon}\varepsilon_{n+j_3} \\ && +\ldots +\beta_{n+1,\varepsilon}\cdots\beta_{n+p,\varepsilon}\varepsilon_{n+1}\cdots\varepsilon_{n+p} \end{eqnarray*} Therefore, by (\ref{stimaesp}): \begin{eqnarray*} && \|\Delta_{n+p,\varepsilon}\|_{L^2\to L^2}\leq \sum_{j=1}^pA^j+\sum_{j_1<j_2=1}^pA^{n+j_1}A^{n+j_2}+\sum_{j_1<j_2<j_3=1}^pA^{n+j_1}A^{n+j_2}A^{n+j_3}+\ldots \\ && \leq A^n\frac{A}{1-A^n}+A^{2n}\left(\frac{A}{1-A^n}\right)^2+\ldots +A^{pn}\left(\frac{A}{1-A^n}\right)^p= \\ && \frac{A^n}{1-A^n}\left[1+A^n\left(\frac{A}{1-A}\right)+\ldots+A^{(p-1)n}\left(\frac{A}{1-A^n}\right)^{p-1}\right]= \\ && \frac{A^n}{1-A^n}\frac{1}{1-A^n\frac{A}{1-A}}\to 0,\quad n\to\infty,\quad \forall\,p>0 \end{eqnarray*} \vskip 5pt\noindent Hence $\{U_{n,\varepsilon}(\hbar)\}_{n\in{\Bbb N}}$ is a Cauchy sequence in the operator norm, uniformly with respect to $|\varepsilon|<\varepsilon^\ast_0$, and the Lemma is proved. \end{proof} We are now in position to prove existence and analyticity of the limit of the KAM iteration, whence the uniform convergence of the QNF. \vskip 0.3cm\noindent {\bf Proof of Theorems \ref{mainth} and \ref{regolarita}} \newline The operator family $H_\varepsilon$ is self-adjoint in $L^2(T^l)$ with pure point spectrum $\forall\,\varepsilon\in{\mathcal R}$ because $V$ is a continuous operator. By Corollary \ref{maincc}, the operator sequence $\{D_{n,\varepsilon}(\hbar)\}_{n\in {\Bbb N}}$ admits for $|\varepsilon|<\varepsilon^\ast_0$ the uniform norm limit $$ D_{\infty,\hbar}(L_\omega,\hbar)=L_\omega+\sum_{m=0}^\infty\varepsilon^{2^m}N_{m,\varepsilon}(L_\omega,\hbar) $$ of symbol ${\mathcal D}_{\infty,\hbar}({\mathcal L}_\omega(\xi))$. The series is norm-convergent by (\ref{stimafn}). By Lemma (\ref{Wsequence}), $D_{\infty,\hbar}(L_\omega,\hbar)$ is unitarily equivalent to $H_\varepsilon$. The operator family $\varepsilon\mapsto D_{\infty,\varepsilon}(\hbar)$ is holomorphic for $|\varepsilon|<\varepsilon^\ast_0$, uniformly with respect to $\hbar\in[0,1]$. As a consequence, $D_{\infty,\varepsilon}(\hbar)$ admits the norm-convergent expansion: $$ D_{\infty,\varepsilon}(L_\omega,\hbar)=L_\omega+\sum_{s=1}^\infty B_s(L_\omega,\hbar)\varepsilon^s, \quad |\varepsilon|<\varepsilon^\ast_0 $$ which is the convergent quantum normal form. On the other hand, (\ref{limD}) entails that the symbol ${\mathcal D}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)$ is a $\J(\rho/2)$-valued holomorphic function of $\varepsilon$, $|\varepsilon|<\varepsilon^\ast_0$, continuous with respect to $\hbar\in [0,1]$. Therefore it admits the expansion \begin{equation} \label{fnormale} {\mathcal D}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)={\mathcal L}_\omega(\xi)+\sum_{s=1}^\infty{\mathcal B}_s({\mathcal L}_\omega(\xi),\hbar)\varepsilon^s, \quad |\varepsilon|<\varepsilon^\ast \end{equation} convergent in the $\|\cdot\|_{\rho/2}$-norm, with radius of convergence $\varepsilon^\ast_0$. Hence, in the notation of Theorem \ref{mainth}, ${\mathcal D}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)\equiv {\mathcal B}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)$. By construction, ${\mathcal B}_s({\mathcal L}_\omega(\xi),\hbar)$ is the symbol of $B_s(L_\omega,\hbar)$. ${\mathcal B}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)$ is the symbol yielding the quantum normal form via Weyl's quan\-ti\-za\-tion. Likewise, the symbol ${\mathcal W}_{\infty,\varepsilon}(\xi,x,\hbar)$ is a $J(\rho/2)$-valued holomorphic function of $\varepsilon$, $|\varepsilon|<\varepsilon^\ast$, continuous with respect to $\hbar\in [0,1]$, and admits the expansion: \begin{equation} \label{fgen} {\mathcal W}_{\infty,\varepsilon}(\xi,x,\hbar)=\langle\xi,x\rangle+\sum_{s=1}^\infty{\mathcal W}_s(\xi,x,\hbar)\varepsilon^s, \quad |\varepsilon|<\varepsilon^\ast_0 \end{equation} convergent in the $\|\cdot\|_{\rho/2}$-norm, once more with radius of convergence $\varepsilon^\ast_0$. Since Since $\|{\mathcal B}_s\|_1 \leq \|{\mathcal B}_s\|_{\rho/2}$, $\|{\mathcal W}_s \|_1\leq \|{\mathcal W}_s\|_{\rho/2} $ $\forall\,\rho>0$. By construction, ${\mathcal B}_{\infty,\varepsilon}(\xi,x,\hbar)={\mathcal B}_{\infty,\varepsilon}(t,x,\hbar)|_{t={\mathcal L}_\omega(\xi)}$. Theorem \ref{mainth} is proved . Remark that the principal symbol of ${\mathcal B}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)$ is just the convergent Birkhoff normal form: $$ {\mathcal B}_{\infty,\varepsilon}={\mathcal L}_\omega(\xi)+\sum_{s=1}^\infty{\mathcal B}_s({\mathcal L}_\omega(\xi))\varepsilon^s, \quad |\varepsilon|<\varepsilon^\ast_0 $$ Theorem (\ref{regolarita}) is a direct consequence of (\ref{limD}) on account of the fact that $$ \sum_{\gamma=0}^r\max_{\hbar\in [0,1]} \|\partial^\gamma_\hbar {\mathcal B}_\infty(t;\varepsilon,\hbar) \|_{\rho/2}\leq \|{\mathcal B}_\infty\|_{\rho/2,k} $$ Remark indeed that by (\ref{limD}) the series (\ref{fnormale}) converges in the $\|\cdot\|_{\rho/2,r}$ norm if $|\varepsilon|<\varepsilon^\ast(\cdot,r)$. Therefore ${\mathcal B}_s(t,\hbar)\in C^r([0,1];C^\omega(\{t\in\Bbb C\,|\,|\Im t|<\rho/2\})$ and the formula (\ref{EQF}) follows from (\ref{fnormale}) upon Weyl quantization. This concludes the proof of the Theorem. \vskip 1.0cm\noindent \begin{appendix} \section{The quantum normal form} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \setcounter{equation}{0 \setcounter{theorem}{0 \noindent The quantum normal form in the framework of semiclassical analysis has been introduced by Sj\"ostrand \cite{Sj}. We follow here the presentation of \cite{BGP}. \vskip 6pt\noindent {\bf 1. The formal construction} Given the operator family $\varepsilon\mapsto H_\varepsilon=L_\omega+\varepsilon V$, look for a unitary transformation $\displaystyle U(\omega,\varepsilon,\hbar)=e^{i W(\varepsilon)/\hbar}: L^2(\T^l)\leftrightarrow L^2(\T^l)$, $W(\varepsilon)=W^\ast(\varepsilon)$, such that: \begin{equation} \label{A1} S(\varepsilon):=UH_\varepsilon U^{-1}=L(\omega)+\varepsilon B_1+\varepsilon^2 B_2+\ldots+ \varepsilon^k R_k(\varepsilon) \end{equation} where $[B_p,L_0]=0$, $p=1,\ldots,k-1$. Recall the formal commutator expansion: \begin{equation} S(\varepsilon)=e^{it W(\varepsilon)/\hbar}He^{-it W(\varepsilon)/\hbar}=\sum_{l=0}^\infty t^lH_l,\quad H_0:=H,\quad H_l:=\frac{[W,H_{l-1}]}{i\hbar l}, \;l\geq 1 \label{A2} \end{equation} and look for $W(\varepsilon)$ under the form of a power series: $W(\varepsilon)=\varepsilon W_1+\varepsilon^2W_2+\ldots$. Then (\ref{A2}) becomes: \begin{equation} \label{A3} S(\varepsilon)=\sum_{s=0}^{k-1}\varepsilon^s P_s +\varepsilon^{k}{R}^{(k)} \end{equation} where \begin{equation} \label{A4} P_0=L_\omega;\quad {P}_s:=\frac{[W_s,H_0]}{i\hbar}+V_s,\quad s\geq 1, \;V_1\equiv V \end{equation} \begin{eqnarray*} V_s =\sum_{r=2}^s\frac{1}{r!}\sum_{{j_1+\ldots+j_r=s}\atop {j_l\geq 1}}\frac{[W_{j_1},[W_{j_2},\ldots,[W_{j_r},H_0]\ldots]}{(i\hbar)^r} +\sum_{r=2}^{s-1}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=s-1}\atop {j_l\geq 1}}\frac{[W_{j_1},[W_{j_2},\ldots,[W_{j_r},V]\ldots]}{(i\hbar)^r} \end{eqnarray*} \begin{eqnarray*} {R}^{(k)}=\sum_{r=k}^\infty\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k}\atop {j_l\geq 1}}\frac{[W_{j_1},[W_{j_2},\ldots,[W_{j_r},L_\omega]\ldots]}{(i\hbar)^r} +\sum_{r=k-1}^{\infty}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k-1}\atop {j_l\geq 1}}\frac{[W_{j_1},[W_{j_2},\ldots,[W_{j_r},V]\ldots]}{(i\hbar)^r} \end{eqnarray*} Since $V_s$ depends on $W_1,\ldots,W_{s-1}$, (A1) and (A3) yield the recursive homological equations: \begin{equation} \label{A5} \frac{[W_s,P_0]}{i\hbar} +V_s=B_s, \qquad [L_0,B_s]=0 \end{equation} To solve for $S$, $W_s$, $B_s$, we can equivalently look for their symbols. The equations (\ref{A2}), (\ref{A3}), (\ref{A4}) become, once written for the symbols: \begin{equation} \label{A6} \Sigma(\varepsilon)=\sum_{l=0}^\infty {{\mathcal H}}_l,\quad {{\mathcal H}}_0:={\mathcal L}_\omega+\varepsilon {\mathcal V},\quad {{\mathcal H}}_l:=\frac{\{w,{{\mathcal H}}_{l-1}\}_M}{ l}, \;l\geq 1\end{equation} \begin{equation} \label{A7} \Sigma(\varepsilon)=\sum_{s=0}^{k}\varepsilon^s {\mathcal P}_s +\varepsilon^{k+1}{{\mathcal R}}^{(k+1)} \end{equation} where \begin{equation} \label{A8} {\mathcal P}_0={\mathcal L}_\omega;\qquad {\mathcal P}_s :=\{{\mathcal W}_s,{\mathcal P}_0 \}_M+{\mathcal V}_s,\quad s=1, \ldots,\qquad {\mathcal V}_1\equiv {\mathcal V}_0={\mathcal V} \end{equation} \begin{eqnarray*} && {\mathcal V}_s :=\sum_{r=2}^s\frac{1}{r!}\sum_{{j_1+\ldots+j_r=s}\atop {j_l\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal L}_\omega\}_M\ldots\}_M + \\ && +\sum_{r=1}^{s-1}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=s-1}\atop {j_l\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal V}\}_M\ldots\}_M, \quad s>1 \end{eqnarray*} \begin{eqnarray*} && {{\mathcal R}}^{(k)}=\sum_{r=k}^\infty\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k}\atop {j_l\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal L}_\omega\}_M\ldots\}_M+ \\ && \sum_{r=k-1}^{\infty}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k-1}\atop {j_l\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal V}\}_M\ldots\}_M\end{eqnarray*} In turn, the recursive homological equations become: \begin{equation} \label{A9} \{{\mathcal W}_s,{\mathcal L}_{\omega}\}_M +{\mathcal V}_s={\mathcal B}_s, \qquad \{{\mathcal L}_{\omega},{\mathcal B}_s\}_M =0 \end{equation} \vskip 6pt\noindent {\bf 2. Solution of the homological equation and estimates of the solution} \vskip 3pt\noindent The key remark is that $\{{\mathcal A},{\mathcal L}_\omega\}_M=\{{\mathcal A},{\mathcal L}_\omega\}$ for any smooth symbol ${\mathcal A}(\xi;x;\hbar)$ because ${\mathcal L}_\omega$ is linear in $\xi$. The homological equation (A.9) becomes therefore \begin{equation} \label{A10} \{{\mathcal W}_s,{\mathcal L}_\omega\} +{\mathcal V}_s={\mathcal B}_s, \qquad \{{\mathcal L}_\omega,{\mathcal B}_s\} =0 \end{equation} We then have: \begin{proposition} Let ${\mathcal V}_s(\xi,x;\hbar)\in\J(\rho_s)$. Then the equation \begin{equation} \label{A11} \{{\mathcal W}_s,{\mathcal L}_\omega\} +{\mathcal V}_s={\mathcal B}_s, \qquad \{{\mathcal L}_\omega,{\mathcal B}_s\} =0 \end{equation} admits $\forall\,0<d_s<\rho_s$ the solutions ${\mathcal B}_s({\mathcal L}_\omega(\xi;)\hbar)\in \J(\rho_s)$, ${\mathcal W}\in\J(\rho-d_s)$ given by: \begin{equation} \label{A12} {\mathcal B}_s(\xi;\hbar)=\overline{{\mathcal V}_s}; \quad {\mathcal W}_s(\xi,x;\hbar)={\mathcal L}_\omega^{-1}{\mathcal V}_s, \quad {\mathcal L}_\omega^{-1}{\mathcal V}_s:=\sum_{0\neq\in\Z^l }\frac{{\mathcal V}_{s.q}({\mathcal L}_\omega(\xi))}{i\langle\omega,q\rangle}e^{i\langle q,x\rangle}. \end{equation} Moreover: \begin{equation} \label{stimaWs} \|{\mathcal B}_s\|_{\rho_s}\leq \|{\mathcal V}_s\|_{\rho_s}; \qquad \|{\mathcal W}_s\|_{\rho_s-d_s} \leq \gamma \left(\frac{\tau}{d_s}\right)^\tau \|{\mathcal V}_s\|_{\rho_s}. \end{equation} \end{proposition} \begin{proof} ${\mathcal B}_s$ and ${\mathcal W}_s$ defined by (A.12) clearly solve the homological equation (A.11). The estimate for ${\mathcal B}_s$ is obvious, and the estimate for ${\mathcal W}_s$ follows once more by the small denominator inequality (\ref{DC}). \end{proof} By definition of $\|\cdot\|_{\rho}$ norm: \begin{equation} \label{A13} \|B_s\|_{L^2\to L^2}\leq \|B_s\|_{\rho} \leq \|{\mathcal V}_s\|_{\rho_s}; \quad \|B_s\|_{L^2\to{\mathcal L}^2}\leq \|B_s\|_{\rho} \leq \|{\mathcal V}_s\|_{\rho_s} \end{equation} Hence all terms of the quantum normal form and the remainder can be recursively estimated in terms of $\|{\mathcal V}\|_{\rho}$ by Corollary 3.11. Setting now, for $s\geq 1$: \begin{eqnarray*} && \rho_s:=\rho- s d_s, \quad d_s< \frac{\rho}{s+1}; \qquad \rho_0:=\rho \\ && \mu_s:=8\gamma \tau^\tau \frac{E}{d_s^\tau\delta_s^2}, \quad E:=\|{\mathcal V}\|_{\rho}. \end{eqnarray*} we actually have, applying without modification the argument of \cite{BGP}, Proposition 3.2: \begin{proposition} Let $\mu_s<1/2, s=1,\ldots,k$. Set: $$ K:=\frac{8\cdot 2^{\tau+5}\gamma\tau^\tau}{\rho^{2+\tau}}. $$ Then the following estimates hold for the quantum normal form: \begin{eqnarray*} && \sum_{s=1}^k \|B_s\|_{\rho/2}\varepsilon^s \leq \sum_{s=1}^k \|{\mathcal B}_s\|_{\rho/2}\varepsilon^s\leq \sum_{s=1}^k E^sK^s s^{(\tau+2)s}\varepsilon^s \\ && {} \\ && \|R_{k+1}\|_{\rho/2}\leq \|{\mathcal R}_{k+1}\|_{\rho/2}\leq (EK)^{k+1}(k+1)^{(\tau+2)(k+1)}\varepsilon^{k+1} \end{eqnarray*} \end{proposition} \vskip 1cm\noindent \end{appendix} \newpage \section{Introduction} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \setcounter{equation}{0}% \setcounter{theorem}{0 \noindent \subsection{Quantization formulae} The establishment of a quantization formula (QF) for the eigenvalues of the Schr\"o\-din\-ger\ operators is a classical mathematical problem of quantum mechanics (see e.g.\cite{FM}). To review the notion of QF, consider first a semiclassical pseudodifferential operator $H$ (for this notion, see e.g.\cite{Ro}) acting on $L^2({\mathcal R}^l)$, $l\geq 1$, of order $m$, self-adjoint with pure-point spectrum, with (Weyl) symbol $\sigma_H(\xi,x)\in C^\infty({\mathcal R}^l\times{\mathcal R}^l;{\mathcal R})$. \begin{definition}\label{quant} {\it We say that $H$ admits an $M$-smooth {\rm exact} QF, $M\geq 2$, if there exists a function $\mu:$ $(A,\hbar)\mapsto \mu(A,\hbar)\in C^M({\mathcal R}^l\times [0,1]; {\mathcal R})$ such that: \begin{enumerate} \item $\mu(A,\hbar)$ admits an asymptotic expansion up to order $M$ in $\hbar$ uniformly on compacts with respect to $A\in{\mathcal R}^l$; \item $\forall\hbar\in]0,1]$, there is a sequence $n_k:=(n_{k_1},\ldots,n_{k_l})\subset \Z^l$ such that all eigenvalues $\lambda_{k}(\hbar)$ of $H$ admit the representation: \begin{equation} \label{FQ1} \lambda_{k}(\hbar)=\mu(n_k\hbar,\hbar). \end{equation} \end{enumerate}} \end{definition} \begin{remark} (Link with the Maslov index) \label{maslov} Consider any function $f:\ {\mathcal R}^l\to{\mathcal R}^l$ with the property $\langle f(A),\nabla\mu(A,0)\rangle $ $=\partial_\hbar\mu(A,0)$. Then we can rewrite the asymptotic expansion of $\mu$ at second order as : \begin{equation} \mu(n_k\hbar,\hbar)=\mu(n_k\hbar+\hbar f(n_k\hbar))+O(\hbar^2). \end{equation} When $f(m\hbar)=\nu, \;\nu\in\Bbb Q^l$, the Maslov index \cite{Ma} is recovered. Moreover, when \begin{equation} \label{QF2} |\lambda_{k}(\hbar)-\mu(n_k\hbar,\hbar)|=O(\hbar^M), \quad \hbar\to 0, \quad M\geq 2 \end{equation} then we speak of {\it approximate} QF of order $M$. \end{remark} \begin{example} (Bohr-Som\-mer\-feld-Ein\-stein for\-mu\-la). Let $\sigma_H$ fulfill the conditions of the Liouville-Arnold theorem (see e.g.\cite{Ar1}, \S 50). Denote $A=(A_1,\ldots,A_l)\in {\mathcal R}^l$ the action variables, and $E(A_1,\ldots,A_l)$ the symbol $\sigma_H$ expressed as a function of the action variables. Then the Bohr-Som\-mer\-feld-Ein\-stein for\-mu\-la (BSE) QF is \begin{equation} \label{QF3} \lambda_{n,\hbar}=E((n_1+\nu/4)\hbar,\ldots,(n_l+\nu/4)\hbar)+O(\hbar^2) \end{equation} where $\nu=\nu(l)\in\Bbb N\cup\{0\}$ is the Maslov index \cite{Ma}. When $H$ is the Schr\"o\-din\-ger\ operator, and $\sigma_H$ the corresponding classical Hamiltonian, (\ref{QF3}) yields the approximate eigenvalues, i.e. the approximate quantum energy levels. In the particular case of a quadratic, positive definite Hamiltonian, which can always be reduced to the harmonic oscillator with frequencies $\omega_1>0,\ldots,\omega_l>0$, the BSE is an exact quantization formula in the sense of Definition 1.1 with $\nu=2$, namely: $$ \mu(A,\hbar)=E(A_1+\hbar/2,\ldots,A_l+\hbar/2) =\sum_{k=1}^l\omega_k(A_k+\hbar/2) $$ \end{example} \vskip 10pt To our knowledge, if $l>1$ the only known examples of exact QF in the sense of Definition 1.1 correspond to classical systems integrable by separation of variables, such that each separated system admits in turn an exact QF, as in the case of the Coulomb potential (for exact QFs for general one-dimensional Schr\"o\-din\-ger\ operators see \cite{Vo}). For general integrable systems, only the approximate BSE formula is valid. Non-integrable systems admit a formal approximate QF, the so-called Einstein-Brillouin-Keller (EBK), recalled below, provided they possess a normal form to all orders. In this paper we consider a perturbation of a linear Hamiltonian on $T^\ast\T^l={\mathcal R}^l\times\T^l$, and prove that the corresponding quantized operator can be unitarily conjugated to a function of the differentiation operators via the construction of a quantum normal form which converges uniformly with respect to $\hbar\in [0,1]$. This yields immediately an exact, $\infty$-smooth QF. The uniformity with respect to $\hbar$ yields also an explicit family of classical Hamiltonians admitting a convergent normal form, thus making the system integrable. \subsection{Statement of the results} Consider the Hamiltonian family ${\mathcal H}_\varepsilon: {\mathcal R}^l\times \T^l\rightarrow {\mathcal R}, (\xi,x)\mapsto {\mathcal H}_\varepsilon(\xi,x)$, indexed by $\varepsilon\in{\mathcal R}$, defined as follows: \begin{equation} {\mathcal H}_\varepsilon(\xi,x):={\mathcal L}_\omega(\xi)+\varepsilon {\mathcal V}(x,\xi);\quad {\mathcal L}_\omega(\xi):=\langle\omega,\xi\rangle, \quad\omega\in{\mathcal R}^l,\quad {\mathcal V}\in C^\infty({\mathcal R}^l\times\T^l;{\mathcal R}). \end{equation} Here $\xi\in{\mathcal R}^l, x\in\T^l$ are canonical coordinates on the phase space ${\mathcal R}^l\times\T^l$, the $2l-$cylinder. ${\mathcal L}_\omega(\xi)$ generates the linear Hamiltonian flow $\xi_i\mapsto \xi_i, x_i\mapsto x_i+\omega_it$ on ${\mathcal R}^l\times\T^l$. For $l>1$ the dependence of ${\mathcal V}$ on $\xi$ makes non-trivial the integrability of the flow of ${\mathcal H}_\varepsilon$ when $\varepsilon\neq 0$, provided the {\it frequencies} $\omega:=(\omega_1,\ldots, \omega_l)$ are independent over $\Bbb Q$ and fulfill a diophantine condition such as (\ref{DC}) below. Under this assumption it is well known that ${\mathcal H}_\varepsilon$ admits a {\it normal form} at any order (for this notion, see e.g. \cite{Ar2}, \cite{SM}). Namely, $\forall\,N\in\Bbb N$ a canonical bijection ${\mathcal C}_{\varepsilon,N}:{\mathcal R}^l\times\T^l\leftrightarrow {\mathcal R}^l\times\T^l$ close to the identity can be constructed in such a way that: \begin{equation} \label{CNF} ({\mathcal H}_\varepsilon\circ {\mathcal C}_{\varepsilon,N})(\xi,x)={\mathcal L}_\omega(\xi)+\sum_{k=1}^N {\mathcal B}_k(\xi;\omega)\varepsilon^k+\varepsilon^{N+1}{\mathcal R}_{N+1,\varepsilon}(\xi,x) \end{equation} This makes the flow of ${\mathcal H}_\varepsilon(\xi,x)$ integrable up to an error of order $\varepsilon^{N+1}$. Here ${\mathcal C}_{\varepsilon,N}$ is the flow at time $1$ generated by the Hamiltonian \begin{equation} \label{FGen} {\mathcal W}^N_\varepsilon(\xi,x):=\langle\xi,x\rangle+\sum_{k=1}^N{\mathcal W}_k(\xi,x)\varepsilon^k. \end{equation} The functions ${\mathcal W}_k(\xi,x): {\mathcal R}^l\times \T^l\to{\mathcal R}$ are recursively computed by canonical perturbation theory via the standard Lie transform method of Deprit\cite{De} and Hori\cite{Ho} (see also e.g \cite{Ca}). To describe the quantum counterpart, let $H_\varepsilon=L_\omega+\varepsilon V$ be the operator in $L^2(\T^l)$ of symbol ${\mathcal H}_\varepsilon$, with domain $D(H_\varepsilon)= H^1(\T^l)$ and action specified as follows: \begin{eqnarray} \forall u\in D(H_\varepsilon), \quad H_\varepsilon u= L_\omega u+Vu, \quad L_\omega u=\sum_{k=1}^l\omega_kD_ku, \;\; D_k u:=-i\hbar\partial_{x_k}u. \end{eqnarray} $V$ is the Weyl quantization of ${\mathcal V}$ (formula (\ref{1erweyl}) below). Since {\it uniform} quantum normal forms (see e.g. \cite{Sj},\cite{BGP},\cite{Po1}, \cite{Po2}) are not so well known as the classical ones, let us recall here their definition. The construction is reviewed in Appendix. \begin{definition} [Quantum normal form (QNF)]\label{QuNF} {\it We say that a family of operators $H_\varepsilon$ $\varepsilon$-close (in the norm resolvent topology) to $H_0=L_\omega$ admits a uniform quantum normal form (QNF) at any order if \begin{itemize} \item[(i)] There exists a sequence of continuous self-adjoint operators $W_k(\hbar)$ in $L^2(\T^l)$, $k=1,\ldots$ and a sequence of functions $B_k(\xi_1,\ldots,\xi_l,\hbar)\in C^\infty({\mathcal R}^l\times [0,1];{\mathcal R})$, such that, defining $\forall\,N\in\Bbb N$ the family of unitary operators: \begin{eqnarray} \label{QNF} U_{N,\varepsilon}(\hbar)=e^{iW_{N,\varepsilon}(\hbar)/\hbar}, \quad W_{N,\varepsilon}(\hbar)=\sum_{k=1}^N W_k(\hbar)\varepsilon^k \end{eqnarray} we have: \begin{eqnarray} \label{AQNF} && U_{N,\varepsilon}(\hbar)H_\varepsilon U_{N,\varepsilon}^\ast(\hbar)=L_\omega+\sum_{k=1}^N B_k(D_1,\ldots,D_l,\hbar)\varepsilon^k+\varepsilon^{N+1}R_{N+1,\varepsilon}(\hbar). \end{eqnarray} \item [(ii)] The continuous operators $W_k$, $B_k(D,\hbar)$, $R_{N+1}$ admit smooth symbols ${\mathcal W}_k, {\mathcal B}_k, \mathcal R_{N+1}(\varepsilon)$, which reduce to the classical normal form construction (\ref{CNF}) and (\ref{FGen}) as $\hbar\to 0$: \begin{equation} \label{princip} {\mathcal B}_k(\xi;0)={\mathcal B}_k(\xi);\quad {\mathcal W}_k(\xi,x,0)={\mathcal W}_k(\xi,x),\quad \mathcal R_{N+1,\varepsilon}(x,\xi;0)=\mathcal R_{N+1,\varepsilon}(x,\xi) \end{equation} \end{itemize}} \end{definition} (\ref{AQNF}) entails that $H_\varepsilon$ commutes with $H_0$ up to an error of order $\varepsilon^{N+1}$; hence the following approximate QF formula holds for the eigenvalues of $H_\varepsilon$: \begin{equation} \label{AQF} \lambda_{n,\varepsilon}(\hbar)=\hbar\langle n,\omega\rangle+\sum_{k=1}^N {\mathcal B}_k(n_1\hbar,\ldots,n_l\hbar,\hbar)\varepsilon^k+O(\varepsilon^{N+1}). \end{equation} \vskip 6pt\noindent \begin{definition} \label{QNFConv}{(Smoothly and uniformly convergent quantum normal forms)} {\it We say that the QNF is smoothlly (with respect to $(\xi,x)\in{\mathcal R}^l\times\T^l)$ and uniformly (with respect to $\hbar$)} convergent, {\it if there is $\varepsilon^\ast>0$ such that, for $|\varepsilon|<\varepsilon^\ast$ and any $\alpha,\beta,\gamma\in\mathbb N^l$, one has} \vskip 5pt\noindent \begin{eqnarray} && \label{convunifQ1} \sum_{k=1}^\infty\,\sup_{{\mathcal R}^l\times\T^l\times [0,1]}|D^\alpha_\xi D^\beta_x{\mathcal W}_k(\xi,x;\hbar)\varepsilon^k|<+\infty \\ && \label{convunifQ2} \sum_{k=1}^\infty\,\sup_{{\mathcal R}^l\times [0,1]}|D^\gamma_\xi{\mathcal B}_k(\xi,\hbar)\varepsilon^k|<+\infty. \end{eqnarray} \vskip 5pt\noindent \end{definition} \noindent (\ref{convunifQ1},\ref{convunifQ2}) entail that, if $|\varepsilon|<\varepsilon^\ast$, we can define the symbols \vskip 3pt\noindent \begin{eqnarray} \label{somma} && {\mathcal W}_{\infty}(\xi,x;\varepsilon,\hbar):=\langle \xi,x\rangle+\sum_{k=1}^\infty{\mathcal W}_k(\xi,x;\hbar)\varepsilon^k\in C^M({\mathcal R}^l\times\T^l\times [0,\varepsilon^\ast] \times[0,1];\Bbb C), \\ \label{somma1} && {\mathcal B}_{\infty}(\xi;\varepsilon,\hbar):={\mathcal L}_\omega(\xi)+\sum_{k=1}^\infty{\mathcal B}_k(\xi;\hbar)\varepsilon^k \in C^M({\mathcal R}^l\times [0,\varepsilon^\ast] \times[0,1];\Bbb C) \end{eqnarray} such that, $\forall\,\alpha,\beta,\gamma\in\mathbb N^l$ \begin{eqnarray} \label{stimasomma} && \sup_{{\mathcal R}^l\times\T^l\times [0,1]}|D^\alpha_\xi D^\beta_x{\mathcal W}_{\infty}(\xi,x;\varepsilon,\hbar)-\langle\xi,x\rangle|<+\infty, \\ \label{stimasomma1} && \sup_{{\mathcal R}^l\times [0,1]} |D^\gamma{\mathcal B}_{\infty}(\xi;\varepsilon,\hbar)|<+infty \end{eqnarray} \vskip 3pt\noindent The uniform convergence of the QNF has the following straightforward consequences: \begin{itemize} \item[(A1)] {\it By the Calderon-Vaillancourt theorem (see \S 3 below) the Weyl quantizations $W_{\infty}(\varepsilon,\hbar)$, $B_{\infty}(\varepsilon,\hbar)$ of ${\mathcal W}_{\infty}(\xi,x;\varepsilon,\hbar)$, ${\mathcal B}_{\infty}(\varepsilon,\hbar)$ are continuous operator in $L^2(\T^l)$. Then:} \begin{eqnarray*} && e^{iW_{\infty}(\varepsilon,\hbar)/\hbar}H_\varepsilon e^{-iW_{\infty}(\varepsilon,\hbar)/\hbar}=B_{\infty}(D_1,\ldots,D_l;\varepsilon,\hbar). \\ && B_{\infty}(D_1,\ldots,D_l;\varepsilon,\hbar):=L_\omega+\sum_{k=1}^\infty B_k(D_1,\ldots,D_l;\hbar)\varepsilon^k. \end{eqnarray*} \item[(A2)] {\it The eigenvalues of $H_\varepsilon$ are given by the {\rm exact} quantization formula:} \begin{equation} \label{QF} \lambda_{n}(\hbar,\varepsilon)={\mathcal B}_{\infty}(n\hbar,\hbar,\varepsilon), \qquad n\in\Z^l, \quad \varepsilon\in {\frak D}^\ast:=\{\varepsilon\in {\mathcal R}\,|\,|\varepsilon|<\varepsilon^\ast\} \end{equation} \item [(A3)] {\it The classical normal form is convergent, uniformly on compacts with respect to $\xi\in{\mathcal R}^l$, and therefore if $\varepsilon\in {\frak D}^\ast$ the Hamiltonian ${\mathcal H}_\varepsilon(\xi,x)$ is integrable.} \end{itemize} Let us now state explicit conditions on $V$ ensuring the uniform convergence of the QNF. \newline Given $\F(t,x)\in C^\infty({\mathcal R}\times\T^l;{\mathcal R})$, consider its Fourier expansion \begin{equation} \label{FFE} \F(t,x)=\sum_{q\in\Z^l}\F_q(t)e^{i\langle q,x\rangle}. \end{equation} and define $ \F_\omega \in C^\infty({\mathcal R}^l\times\T^l;{\mathcal R})$ in the following way: \vskip 4pt\noindent \begin{eqnarray} && \label{Fouom} \F_\omega(\xi,x):=\F({\mathcal L}_\omega(\xi),x)=\sum_{q\in\Z^l}\F_{\omega,q}(\xi)e^{i\langle q,x\rangle}, \\ && \F_{\omega,q}(\xi):=(\F_q\circ {\mathcal L}_\omega)(\xi)=\frac1{(2\pi)^{l/2}}\int_{\mathcal R}\widehat{\F}_q(p)e^{-ip{\mathcal L}_\omega(\xi)}\,dp= \\ && = \frac1{(2\pi)^{l/2}}\int_{\mathcal R}\widehat{\F}_q(p)e^{-i\langle p\omega,\xi\rangle}\,dp, \quad p\omega :=(p\omega_1,\ldots,p\omega_l ). \end{eqnarray} \vskip 4pt\noindent Here, as above, ${\mathcal L}_\omega(\xi)=\langle\omega,\xi\rangle$. \vskip 4pt\noindent Given $\rho>0$, introduce the weighted norms: \begin{eqnarray} && \|\F_{\omega,q}(\xi)\|_\rho:=\int_{\mathcal R}|\widehat{\F}_q(p)|e^{\rho |p|}|\,dp \\ && \|\F_\omega(x,\xi)\|_{\rho}:=\sum_{q\in\Z^l}\,e^{\rho |q|}\|\F_{\omega,q}\|_\rho \end{eqnarray} \vskip 4pt\noindent We can now formulate the main result of this paper. Assume: \vskip 4pt\noindent \begin{itemize} \item[(H1)] There exist $\gamma >0, \tau \geq l$ such that the frequencies $\omega$ fulfill the diophantine condition \begin{equation} \label{DC} |\langle\omega,q\rangle|^{-1}\leq \gamma |q|^{\tau}, \quad q \in\Z^l, \; q\neq 0. \end{equation} \item[(H2)] $V_\omega$ is the Weyl quantization of ${\mathcal V}_\omega(\xi,x)$ (see Sect.3 below), that is: \vskip 8pt\noindent \begin{equation}\label{1erweyl} V_\omega f(x)=\int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{{\mathcal V}}_q(p) e^{i\langle q,x\rangle+\hbar p\langle \omega,q\rangle/2}f(x+\hbar p\omega)\,dp, \quad f\in L^2(\T^l). \end{equation} \vskip 5pt\noindent Here ${\mathcal V}_\omega(\xi,x)={\mathcal V}(\langle\omega,\xi\rangle,x)$ for some smooth function ${\mathcal V}(t;x): {\mathcal R}\times\T^l\to {\mathcal R}$. \vskip 10pt\noindent \item[(H3)] There is $ \rho>2 $ such that $ \|{\mathcal V}_\omega\|_{\rho}<+\infty.$ \end{itemize} \vskip 7pt\noindent Clearly under these conditions the operator family $ H_\varepsilon:=L_\omega+\varepsilon V_\omega$, $D(H_\varepsilon) =H^1(\T^l)$, $\varepsilon\in{\mathcal R}$, is self-adjoint in $L^2(\T^l)$ and has pure point spectrum. We can then state the main results. \vskip 4pt\noindent \begin{theorem} \label{mainth} {\rm (Uniform convergence)} \newline Assume the validity of conditions (H1-H3). Let the diophantine constants $\gamma$, $\tau$ be such that: \vskip 7pt\noindent \begin{equation} \label{DC1} \gamma\tau^{\tau}(\tau+2)^{4(\tau+2)}< \frac12. \end{equation} \vskip 5pt\noindent Then $ H_\varepsilon$ admits a smoothly, uniformly convergent quantum normal form ${\mathcal B}_{\infty,\omega}(\xi,\varepsilon,\hbar)$, in the sense of Definition 1.5. The radius of convergence is not smaller than: \vskip 6pt\noindent \begin{equation} \label{rast} \varepsilon^\ast(\tau):= \frac{1}{2^{2\tau}e^{24(2+\tau)}\|{\mathcal V}_\omega\|_{\rho}}. \end{equation} Furthermore ${\mathcal B}_{\infty}(t,\varepsilon,\hbar)$ is holomorphic with respect to $t$ in $\{t\in\Bbb C\,|\,|\Im t|<\rho/2\}$. \end{theorem} \vskip 6pt Our second result concerns the regularity of ${\mathcal B}_{\infty,\omega}(\xi;\varepsilon,\hbar)$ with respect to $\hbar$. This property will depend on the radius of convergence as shown in the following Theorem. Although this point is not discussed here, we believe that ${\mathcal B}_{\infty,\omega}(\xi;\varepsilon,\hbar)$ has Gevrey regularity with respect to the Planck constant. \begin{theorem} \label{regolarita} {\rm (Regularity with respect to $\hbar$).} \newline For $r=0,1,\ldots$ let the diophantine constants $\gamma$, $\tau$ be such that: \vskip 7pt\noindent \begin{equation} \label{DC2} \gamma\tau^{\tau}(r+\tau+2)^{4(r+\tau+2)}< \frac12. \end{equation} \vskip 4pt\noindent and let: \begin{eqnarray} && \label{Dr} {\frak D}(\tau,r):=\{\varepsilon\in\Bbb C\,|\,|\varepsilon|<\varepsilon^\ast(\tau,r)\}, \\ \label{epastr} && \varepsilon^\ast(\tau,r):= \frac{1}{e^{24(2+r+\tau)}(r+2)^{2\tau}\|{\mathcal V}_\omega\|_{\rho}}=e^{-24 r}\left(\frac 2{2+r}\right)^{2\tau}\varepsilon^\ast(\tau) \end{eqnarray} \vskip 4pt\noindent Then, under the validity of conditions (H1-H3), there exists $C_r=C_r(\varepsilon^\ast)>0$ such that, for $\varepsilon\in {\frak D}(\tau,r)$: \vskip 4pt\noindent \begin{eqnarray} \label{stimaG1} \sum_{\gamma=0}^r\max_{\hbar\in [0,1]} \|\partial^\gamma_\hbar {\mathcal B}_{\infty,\omega}(.;\varepsilon,\hbar) \|_{\rho/2}\leq C_r, \;\;r=0,1,\ldots \end{eqnarray} In particular: ${\mathcal B}_{\infty,\omega}(\xi;\varepsilon,.)\in C^r([0,1])$ uniformly w.r.t. $\xi\in{\mathcal R}^l$ and $|\varepsilon|<\varepsilon^\ast(\tau,r)$. \end{theorem} \noindent {\bf Remark} \newline Since (see \S 2 below) functions $\F(t,\varepsilon,\hbar)$ such that $\displaystyle \sup_{\hbar\in [0,1]}\|\F(\cdot,\varepsilon,\hbar)\|_\rho$ are holomorphic w. r. t. $t$ in $\{t\in\Bbb C\,|\,|\Im t|<\rho\}$, \eqref{stimaG1} taken for $r=0$ yields a quantitative restatement of Theorem \ref{mainth}. \vskip 4pt In view of Definition \ref{quant}, the following statement is a straightforward consequence of the above Theorems: \begin{corollary}[Quantization formula]\label{QFE} ${\mathcal H}_\varepsilon$ admits an exact, $\infty$-smooth quantization formula in the sense of Definition 1.1. That is, $\forall\,r\in\Bbb N$, $\forall \,|\epsilon|<\varepsilon^\ast (\tau,k)$ given by (\ref{epastr}), the eigenvalues of $H_\varepsilon$ are expressed by the formula: \begin{equation} \label{EQF} \lambda(n,\hbar,\varepsilon)={\mathcal B}_{\infty,\omega}(n\hbar,\varepsilon, \hbar) ={\mathcal L}_\omega(n\hbar)+\sum_{s=1}^\infty{\mathcal B}_s({\mathcal L}_\omega(n\hbar),\hbar)\varepsilon^s \end{equation} where ${\mathcal B}_{\infty,\omega}(\xi,\varepsilon, \hbar)$ belongs to $C^r({\mathcal R}^l\times [0,\varepsilon^\ast(\cdot,r)]\times [0,1])$, and admits an asymptotic expansion at order $r$ in $\hbar$, uniformly on compacts with respect to $(\xi,\varepsilon)\in{\mathcal R}^l\times [0,\varepsilon^\ast(\cdot,r)]$. \end{corollary} {\bf Remarks} \begin{itemize} \item[(i)] (\ref{stimaG1}) and (\ref{EQF}) entail also that the Einstein-Brillouin-Keller (EBK) quantization formula: \begin{equation} \label{EBK} \lambda_{n,\varepsilon}^{EBK}(\hbar):={\mathcal L}_\omega(n\hbar)+\sum_{s=1}^\infty {\mathcal B}_s({\mathcal L}_\omega(n\hbar))\varepsilon^s={\mathcal B}_{\infty,\omega}(n\hbar,\varepsilon),\quad n\in\Z^l \end{equation} reproduces here ${\rm Spec}(H_\varepsilon)$ up to order $\hbar$. \item[(ii)] Apart the classical Cherry theorem yielding convergence of the Birkhoff normal form for smooth perturbations of the harmonic flow with {\it complex} frequencies when $l=2$ (see e.g. \cite{SM}, \S 30; the uniform convergence of the QNF under these conditions is proved in \cite{GV}), no simple convergence criterion seems to be known for the QNF nor for the classical NF as well. (See e.g.\cite{PM}, \cite{Zu}, \cite{St} for reviews on convergence of normal forms). Assumptions (1) and (2) of Theorem \ref{mainth} entail Assertion (A2) above. Hence they represent, to our knowledge, a first explicit convergence criterion for the NF. \item[(iii)] In comparison to earlier results on QNF and quantization formulas \cite{Sj}, \cite{BGP}, \cite{Po1}, \cite{Po2}, we remark that the present ones are {\it exact} and {\it purely quantum}: i.e. it they are valid for $\hbar$ fixed, and not only asymptotically as $\hbar \to 0$ modulo an error term of order $\hbar^\infty$ or $e^{-C/\hbar}$ . \end{itemize} Remark that ${\mathcal L}_\omega(\xi)$ is also the form taken by harmonic-oscillator Hamiltonian in ${\mathcal R}^{2l}$, $$ {\mathcal P}_0(\eta,y;\omega):= \sum_{s=1}^l\omega_s(\eta^2_s+y_s^2), \quad (\eta_s,y_s)\in{\mathcal R}^2,\quad s=1,\ldots,l $$ if expressed in terms of the action variables $\xi_s>0, \,s=1,\ldots,l$, where $$ \xi_s:=\eta^2_s+y_s^2=z_s\overline{z}_s, \quad z_s:=y_s+i\eta_s. $$ {Assuming} (\ref{DC}) {\it and} the property \begin{equation} \label{Rk1} {\mathcal B}_k(\xi)=(\F_k\circ{\mathcal L}_\omega(\xi))=\F_{k}(\sum_{s=1}^l \omega_s z_s\overline{z}_s), \quad k=0,1,\ldots \end{equation} R\"ussmann \cite{Ru} (see also \cite{Ga}) proved convergence of the Birkhoff NF if the perturbation ${\mathcal V}$, expressed as a function of $(z,\overline{z})$, is {\it in addition} holomorphic at the origin in $\Bbb C^{2l}$. No explicit condition on ${\mathcal V}$ seems to be known ensuring {\it both} (\ref{Rk1}) and the holomorphy. In this case instead we {\it prove} that the assumption ${\mathcal V}(\xi,x)={\mathcal V}({\mathcal L}_\omega(\xi),x)$ entails (\ref{Rk1}), uniformly in $\hbar\in [0,1]$; namely, we construct $\F_s(t;\hbar):{\mathcal R}\times [0,1]\to{\mathcal R}$ such that: \begin{equation} \label{Rk} {\mathcal B}_s(\xi;\hbar)=\F_s({\mathcal L}_\omega(\xi);\hbar):=\F_{\omega,s}(\xi;\hbar), \quad s=0,1,\ldots \end{equation} The conditions of Theorem \ref{mainth} cannot however be transported to R\"ussmann's case: the map \vskip 6pt\noindent $$ {\mathcal T}(\xi,x)=(\eta,y):= \begin{cases} \eta_i=-\sqrt{\xi_i}\sin x_i, \\ y_i=\sqrt{\xi_i}\cos x_i, \end{cases}\quad i=1,\ldots,l, $$ \vskip 6pt\noindent namely, the inverse transformation into action-angle variable, is defined only on ${\mathcal R}_+^l\times\T^l$ and does not preserve the analyticity at the origin. On the other hand, ${\mathcal T}$ is an analytic, canonical map between ${\mathcal R}_+^l\times\T^l$ and ${\mathcal R}^{2l}\setminus\{0,0\}$. Assuming for the sake of simplicity ${\mathcal V}_0=0$ the image of ${\mathcal H}_\varepsilon$ under ${\mathcal T}$ is: \begin{eqnarray} \label{H0} ({\mathcal H}_\varepsilon \circ {\mathcal T})(\eta,y)= \sum_{s=1}^l\omega_s(\eta^2_s+y_s^2)+\varepsilon ({\mathcal V}\circ {\mathcal T})(\eta,y):={\mathcal P}_0(\eta,y)+\varepsilon {\mathcal P}_1(\eta,y) \end{eqnarray} where \begin{eqnarray} && \label{H1} {\mathcal P}_1(\eta,y)=({\mathcal V}\circ {\mathcal T})(\eta,y)={\mathcal P}_{1,R}(\eta,y)+{\mathcal P}_{1,I}(\eta,y), \;(\eta,y)\in{\mathcal R}^{2l}\setminus\{0,0\}. \end{eqnarray} \begin{eqnarray} && \nonumber {\mathcal P}_{1,R}(\eta,y)=\frac12\sum_{k\in\Z^l}(\Re{{\mathcal V}}_k\circ{\mathcal H}_0)(\eta,y)\prod_{s=1}^l \left(\frac{\eta_s-iy_s}{\sqrt{\eta^2_s+y_s^2}}\right)^{k_s} \\ \nonumber && {\mathcal P}_{1,I}(\eta,y)=\frac12\sum_{k\in\Z^l} (\Im{{\mathcal V}}_k\circ{\mathcal H}_0)(\eta,y)\prod_{s=1}^l \left(\frac{\eta_s-iy_s}{\sqrt{\eta^2_s+y_s^2}}\right)^{k_s} \end{eqnarray} \vskip 4pt\noindent If ${\mathcal V}$ fulfills Assumption (H3) of Theorem \ref{mainth}, both these series converge uniformly in any compact of ${\mathcal R}^{2l}$ away from the origin and ${\mathcal P}_1$ is holomorphic on ${\mathcal R}^{2l}\setminus\{0,0\}$. Therefore Theorem \ref{mainth} immediately entails a convergence criterion for the Birkhoff normal form generated by perturbations holomorphic away from the origin. We state it under the form of a corollary: \begin{corollary} \label{mainc} {\rm (A convergence criterion for the Birkhoff normal form)} Under the assumptions of Theorem \ref{mainth} on $\omega$ and ${\mathcal V}$, consider on ${\mathcal R}^{2l}\setminus\{0,0\}$ the holomorphic Hamiltonian family $P_\varepsilon(\eta,y):={\mathcal P}_0(\eta,y)+\varepsilon{\mathcal P}_1(\eta,y)$, $\varepsilon\in{\mathcal R}$, where ${\mathcal P}_0$ and ${\mathcal P}_1$ are defined by (\ref{H0},\ref{H1}). Then the Birkhoff normal form of $H_\varepsilon$ is uniformly convergent on any compact of ${\mathcal R}^{2l}\setminus\{0,0\}$ if $|\varepsilon|<\varepsilon^\ast (\gamma,\tau)$. \end{corollary} \vskip 0.5cm\noindent \subsection{Strategy of the paper} The proof of Theorem \ref{mainth} rests on an implementation in the quantum context of R\"ussmann's argument\cite{Ru} yielding convergence of the KAM iteration when the complex variables $(z,\overline{z})$ belong to an open neighbourhood of the origin in $\Bbb C^{2l}$. Conditions (\ref{DC}, \ref{Rk}) prevent the occurrence of accidental degeneracies among eigenvalues at any step of the quantum KAM iteration, in the same way as they prevent the formation of resonances at the same step in the classical case. However, the global nature of quantum mechanics prevents phase-space localization; therefore, and this is the main difference, at each step the coefficients of the homological equation for the operator symbols not only have an additional dependence on $\hbar$ but also have to be controlled up to infinity. These difficulties are overcome by exploiting the closeness to the identity of the whole procedure, introducing adapted spaces of symbols i(Section \ref{not}), which account also for the properties of differentiability with respect to the Planck constant. The link between quantum and classical settings is provided by a sharp (i.e. without $\hbar^\infty$ approximation) Egorov Theorem established in section \ref{sectionegorov}. Estimates for the solution of the quantum homological equation and their recursive properties are obtained in sections \ref{hom} (Theorem \ref{homo}) and \ref{towkam} (Theorem \ref{resto}) respectively. Recursive estimates are established in Section \ref{recesti} (Theorem \ref{final}) and the proof of our main result is completed in section \ref{iteration}. The link with the usual construction of the quantum normal form described in Appendix. \vskip 1cm\noindent \section{Norms and first estimates} \label{not} \setcounter{equation}{0}% \setcounter{theorem}{0 Let $m,l=1,2,\dots$. For $(\xi,x,\hbar)\mapsto\F(\xi,x;\hbar)\in C^\infty({\mathcal R}^m\times\T^l\times [0,1]; \Bbb C)$ and $ (\xi,\hbar)\mapsto\G(\xi;\hbar)\in C^\infty({\mathcal R}^m\times [0,1]; \Bbb C)$, consider, for $p\in{\mathcal R}^m$ and $q\in\Z^m$ the following Fourier transforms \vskip 6pt\noindent \begin{definition}[Fourier transforms]\label{deffour} \begin{equation} \widehat{\G}(p;\hbar)=\frac1{(2\pi)^{m/2}}\int_{{\mathcal R}^m}\G(\xi;\hbar)e^{-i\langle p,\xi\rangle}\,dx \end{equation} \begin{equation} \widetilde{\F}(\xi,q;\hbar):=\frac1{(2\pi)^{m/2}}\int_{\T^l}\F(\xi,x;\hbar)e^{-i\langle q,x\rangle}\,dx . \quad \end{equation} \end{definition} Note that \begin{equation} \label{FE1} \F(\xi,x;\hbar)=\sum_{q\in\Z^l}\widetilde{\F}(\xi,q;\hbar)e^{-i\langle q,x\rangle} \end{equation} \begin{equation} \label{FE2} \widehat{\F}(p,q;\hbar)=\frac1{(2\pi)^{m/2}}\int_{{\mathcal R}^m}\widetilde{\F}(\xi,q;\hbar)e^{-i\langle p,\xi\rangle}\,dx \end{equation} \vskip 10pt\noindent It is convenient to rewrite the Fourier representations (\ref{FE1}, \ref{FE2}) under the form a single Lebesgue-Stieltjes integral. Consider the product measure on ${\mathcal R}^m\times {\mathcal R}^l$: \begin{eqnarray} \label{pm1} && d\lambda (t):=dp\,d\nu(s), \quad t:=(p,s)\in{\mathcal R}^m\times {\mathcal R}^l; \\ \label{pm2} && dp:=\prod_{k=1}^m\,dp_k;\quad d\nu(s):=\prod_{h=1}^l \sum_{q_h\leq s_h} \delta (s_h-q_h), \;q_h\in\Z, h=1,\ldots,l \end{eqnarray} Then: \begin{equation} \label{IFT} \F(\xi,x;\hbar)=\int_{{\mathcal R}^m\times{\mathcal R}^l}\,\widehat{\F}(p,s;\hbar)e^{i\langle p,\xi\rangle +i\langle s,x\rangle}\,d\lambda(p,s) \end{equation} \begin{definition}[Norms I] {\it For $\rho\geq 0$, $\sigma\geq 0$, we introduce the weighted norms } \vskip 3pt\noindent \begin{eqnarray} \label{norma1} |\G|^\dagger_{\sigma}&:=&\max_{\hbar\in [0,1]}\|\widehat{\G}(.;\hbar)\|_{L^1({\mathcal R}^m,e^{\sigma |p|}dp)}=\max_{\hbar\in [0,1]}\int_{{\mathcal R}^l}\|\widehat{\G}(.;\hbar)\|\,e^{\sigma |p|}\,dp. \\ \label{norma1k} |\G|^\dagger_{\sigma,k}&:=&\max_{\hbar\in [0,1]}\sum_{j=0}^k\|(1+|p|^2)^{\frac{k-j}{2}}\partial^j_\hbar\widehat{\G}(.;\hbar)\|_{L^1({\mathcal R}^m,e^{\sigma |p|}dp)};\quad |\G|^\dagger_{\sigma;0}:=|\G|^\dagger_{\sigma}. \end{eqnarray} \end{definition} \begin{remark} By noticing that $\vert p\vert\leq\vert p^\prime-p\vert+\vert p^\prime\vert$ and that, for $x\geq 0$, $\displaystyle x^je^{-\delta x}\leq \frac 1 e\left(\frac j{\delta}\right)^j$, we immediately get the inequalities \begin{equation}\label{plus} \vert \F\G\vert^\dagger_{\sigma}\leq\vert \F\vert_{\sigma}^\dagger\cdot \vert \G\vert_{\sigma}^\dagger, \end{equation} \vskip 6pt\noindent \begin{equation} \label{diff} \vert (I-\Delta^{j/2})\F\vert_{\sigma-\delta}^\dagger\leq \frac1 e\left(\frac j\delta\right)^j\vert \F\vert_\sigma^\dagger, \quad k\geq 0. \end{equation} \end{remark} Set now for $ k\in\Bbb N\cup\{0\} $: \begin{equation}\label{muk} \mu_{k}(t):=(1+|t|^{2})^{\frac k 2}=(1+|p|^{2}+|s|^{2})^{\frac k 2}. \end{equation} and note that \begin{equation} \mu_k(t-t^\prime)\leq 2^{\frac k 2} \mu_k(t)\mu_k( t ^\prime). \end{equation} because $|x-x^\prime|^2\leq 2(|x|^2+|x^\prime|^2)$. \begin{definition}[Norms II] {\it Consider $\F(\xi,x;\hbar)\in C^\infty({\mathcal R}^m\times \T^l\times[0,1];\Bbb C)$, with Fourier expansion \begin{equation} \label{FF} \F(\xi,x;\hbar)=\sum_{q\in\Z^l}\,\widetilde{\F}(\xi,q;\hbar)e^{i\langle q,x\rangle} \end{equation} \begin{itemize} \item [(1)] Set: \begin{eqnarray} \label{sigmak} \Vert \F\Vert^\dagger_{\rho,k}:=\max_{\hbar\in [0,1]}\sum_{\gamma=0}^k \int_{{\mathcal R}^m\times {\mathcal R}^l}\vert \mu_{k-\gamma}(p,s)\partial^\gamma_\hbar\widehat{\F}(p,s;\hbar)\vert e^{\rho(\vert s\vert+\vert p\vert)}\,d\lambda(p,s). \end{eqnarray} \vskip 4pt\noindent \item [(2)] Let ${\mathcal O}_\omega$ be the set of functions ${\Phi}:{\mathcal R}^l\times\T^l\times[0,1]\to\Bbb C$ such that $\Phi(\xi,x;\hbar)=\F({\mathcal L}_\omega(\xi),x;\hbar)$ for some $\F:\ {\mathcal R}\times\T^l\times [0,1]\to \Bbb C$. Define, for $\Phi\in {\mathcal O}_\omega$: \begin{eqnarray}\label{sigom} \Vert \Phi\Vert_{\rho,k}:=\max_{\hbar\in [0,1]}\sum_{\gamma=0}^k \int_{{\mathcal R}}\vert \mu_{k-\gamma}( p\omega,q) \partial^\gamma_\hbar\widehat{\F}(p,s;\hbar)\vert e^{\rho(\vert s\vert+\vert p\vert}\,d\lambda(p,s). \end{eqnarray} \vskip 4pt\noindent \item [(3)] Finally we denote $Op^W(\F)$ the Weyl quantization of $\F$ recalled in Section \ref{sectionweyl} and \begin{eqnarray} \label{normsymb'} \J^\dagger_k(\rho)&=&\{\F \,|\,\Vert \F\Vert^\dagger_{\rho,k}<\infty\}, \\ \label{normop'} J^\dagger_k(\rho)&=&\{Op^W(\F)\,|\,\F\in\J^\dagger_k(\rho)\}, \\ \label{normsymb} \J_k(\rho)&=&\{\F\in {\mathcal O}_\omega\,|\,\Vert \F\Vert_{\rho,k}<\infty\}, \\ \label{normop} J_k(\rho)&=&\{Op^W(\F)\,|\,\F\in\J_k(\rho)\}. \end{eqnarray} \end{itemize}} \end{definition} Finally we denote: $L^1_\sigma({\mathcal R}^m):=L^1({\mathcal R}^m,e^{\sigma |p|}dp)$. \begin{remark} Note that, if $\F(\xi,x,\hbar)$ is independent of $x$, i.e. $\widetilde{\F}(\xi,q,\hbar)=\F(\xi,\hbar)\delta_{q,0}$, then: \begin{equation} \label{normeid} \|\F\|^\dagger_{\rho,k}=|\F|^\dagger_{\rho,k}; \quad \|\F\|_{\rho,k}=|\F|_{\rho,k} \end{equation} while in general \begin{eqnarray} && \|\F\|_{\rho,k}\leq \|\F\|_{\rho^\prime,k^\prime} \quad {\rm whenever}\; k\geq k^\prime,\,\rho\leq \rho^\prime; \end{eqnarray} \end{remark} \begin{remark} (Regularity properties) \vskip 4pt\noindent Let $\F\in \J_k^\dagger(\rho), k\geq 0$. Then: \begin{enumerate} \item There exists $K(\alpha,\rho,k)$ such that \begin{equation} \label{maggC} \max_{\hbar\in [0,1]}\|\F(\xi,x;\hbar)\|_{C^\alpha({\mathcal R}^m\times\T^l)}\leq K \|\F\|^\dagger_{\rho,k}, \quad \alpha\in\Bbb N \end{equation} and analogous statement for the norm $\|\cdot\|_{\rho,k}$. \item Let $\rho>0$, $k\geq 0$. Then $\F(\xi,x;\hbar)\in C^k([0,1];C^\omega(\{|\Im \xi|<\rho\}\times \{|\Im x|<\rho\})$ and \vskip 4pt\noindent \begin{equation} \label{supc} \sup_{\{|\Im \xi|<\rho\}\times \{|\Im x|<\rho\}}|\F(\xi,x;\hbar)|\leq \|\F\|^\dagger_{\rho,k}. \end{equation} \vskip 6pt\noindent Analogous statements for $\F\in \J_k(\rho)$. \end{enumerate} \end{remark} We will show in section \ref{sectionweyl} that: \begin{eqnarray} \|Op^W(F)\|_{\mathcal B(L^2)}\leq \|\F\|_{\rho,k}\ \ \ \forall k,\ \rho >0. \end{eqnarray} In what follows we will often use the notation $\F$ also to denote the function $\F({\mathcal L}_\omega(\xi))$, and, correspondingly, $\|\F\|_{\rho,k}$ to denote $\|\F_\omega\|_{\rho,k}$, because the indication of the belonging to $J$ or $J^\dagger$, respectively, is already sufficient to mark the distinction of the two cases. \begin{remark} Without loss of generality we may assume: \begin{equation} |\omega |:=|\omega_1|+\ldots+|\omega_l |\leq 1 \end{equation} Indeed, the general case $|\omega|=\alpha |\omega^\prime|$, $|\omega^\prime|\leq 1$, $\alpha>0$ arbitrary reduces to the former one just by the rescaling $\varepsilon\to \alpha\varepsilon$. \end{remark} \vskip 1.0cm\noindent \section{Weyl quantization, matrix elements, commutator estimates}\label{sectionweyl} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \setcounter{equation}{0}% \setcounter{theorem}{0 \subsection{Weyl quantization: action and matrix elements} We sum up here the canonical (Weyl) quantization procedure for functions (classical observables) defined on the phase space ${\mathcal R}^l\times\T^l$. In the present case it seems more convenient to consider the representation (unique up to unitary equivalences) of the natural Heisenberg group on ${\mathcal R}^l\times\T^l$. Of course this procedure yields the same quantization as the standard one via the Br\'ezin-Weil-Zak transform (see e.g. \cite{Fo}, \S 1.10) and has already been employed in \cite{CdV}, \cite{Po1},\cite{Po2}). \par Let $\Bbb H_l({\mathcal R}^l\times{\mathcal R}^l\times{\mathcal R})$ be the Heisenberg group over $\displaystyle {\mathcal R}^{2l+1}$ (see e.g.\cite{Fo}, Chapt.1). Since the dual space of ${\mathcal R}^l\times\T^l$ under the Fourier transformation is ${\mathcal R}^l\times\Z^l$, the relevant Heisenberg group here is the subgroup of $\Bbb H_l({\mathcal R}^l\times{\mathcal R}^l\times{\mathcal R})$, denoted by $\Bbb H_l({\mathcal R}^l\times\Z^l\times{\mathcal R})$, defined as follows: \begin{definition}[Heisenberg group] \label{HSG} {\it Let $u:=(p,q), p\in{\mathcal R}^l, q\in\Z^l$, and let $ t\in{\mathcal R}$. Then $\Bbb H_l({\mathcal R}^l\times\Z^l\times{\mathcal R})$ is the subgroup of $\Bbb H_l({\mathcal R}^l\times{\mathcal R}^l\times{\mathcal R})$ topologically equivalent to ${\mathcal R}^l\times\Z^l\times{\mathcal R}$ with group law \begin{equation} \label{HGL} (u,t)\cdot (v,s)= (u+v, t+s+\frac12\Omega(u,v)) \end{equation} Here $\Omega(u,v)$ is the canonical $2-$form on ${\mathcal R}^l\times\Z^l$:} \begin{equation} \label{2forma} \Omega(u,v):=\langle u_1,v_2\rangle-\langle v_1,u_2\rangle \end{equation} \end{definition} $\Bbb H_l({\mathcal R}^l\times\Z^l\times{\mathcal R})$ is the Lie group generated via the exponential map from the Heisenberg Lie algebra ${\mathcal H}{\mathcal L}_l(\Z^l\times{\mathcal R}^l\times{\mathcal R})$ defined as the vector space ${\mathcal R}^l\times\Z^l\times{\mathcal R}$ with Lie bracket \begin{equation} \label{LA} [(u,t)\cdot (v,s)]= (0, 0,\Omega(u,v)) \end{equation} The unitary representations of $\Bbb H_l({\mathcal R}^l\times\Z^l\times{\mathcal R})$ in $L^2(\T^l)$ are defined as follows \begin{equation} \label{UR} (U_\hbar(p,q,t)f)(x):=e^{i\hbar t +i\langle q,x\rangle+\hbar\langle p.q\rangle/2}f(x+\hbar p) \end{equation} $\forall\,\hbar\neq 0$, $\forall\,(p,q,t)\in{\mathcal H}_l$, $\forall\,f\in L^2(\T^l)$. These representations fulfill the Weyl commutation relations \begin{equation} \label{Weyl} U_\hbar(u)^\ast =U_\hbar(-u), \qquad U_\hbar(u)U_\hbar(v)=e^{i\hbar\Omega(u,v)}U(u+v) \end{equation} For any fixed $\hbar>0$ $U_\hbar$ defines the Schr\"o\-din\-ger\ representation of the Weyl commutation relations, which also in this case is unique up to unitary equivalences (see e.g. \cite{Fo}, \S 1.10). Consider now a family of smooth phase-space functions indexed by $\hbar$, ${\mathcal A}(\xi,x,\hbar):{\mathcal R}^l\times\T^l\times [0,1]\to\Bbb C$, written under its Fourier representation \vskip 4pt\noindent \begin{equation} \label{FFR} {\mathcal A}(\xi,x,\hbar)=\int_{{\mathcal R}^l}\sum_{q\in\Z^l}\widehat{{\mathcal A}}(p,q;\hbar)e^{i(\langle p.\xi\rangle +\langle q,x\rangle)}\,dp=\int_{{\mathcal R}^l\times {\mathcal R}^l}\widehat{{\mathcal A}}(p,s;\hbar)e^{i(\langle p.\xi\rangle +\langle s,x\rangle)}\,d\lambda(p,s) \end{equation} \vskip 6pt\noindent \begin{definition}[Weyl quantization] \label{Qdef} {\it The (Weyl) quantization of ${\mathcal A}(\xi,x;\hbar)$ is the operator $A(\hbar)$ definde as} \begin{eqnarray} \label{Wop} && (A(\hbar)f)(x):=\int_{{\mathcal R}^l}\sum_{q\in\Z^l}\widehat{{\mathcal A}}(p,q;\hbar)U_\hbar(p,q)f(x)\,dp \\ && \nonumber = \int_{{\mathcal R}^l\times{\mathcal R}^l}\widehat{{\mathcal A}}(p,s;\hbar)U_\hbar(p,s)f(x)\,d\lambda(p,s) \quad f\in L^2(\T^l) \end{eqnarray} \end{definition} \noindent \begin{remark} Formula \eqref{Wop} can be also be written as \begin{equation} \label{Wopq} (A(\hbar)f)(x)=\sum_{q\in\Z^l}\mbox{ where }A(q,\hbar)f, \quad (A(q,\hbar)f)(x)=\int_{{\mathcal R}^l}\,\widehat{{\mathcal A}}(p,q;\hbar)U_\hbar(p,q)f(x)\,dp \end{equation} \end{remark} \noindent From this we compute the action of $A(\hbar)$ on the canonical basis in $L^2(\T^l)$: $$ e_m(x):=(2\pi)^{-l/2}e^{i\langle m, x\rangle}, \quad x\in\T^l, \;m\in\Z^l . $$ We have: \begin{lemma} \label{azione} \begin{equation} \label{azioneAop} A(\hbar)e_m(x)= \sum_{q\in\Z^l}e^{i\langle (m+q),x\rangle}\widetilde{{\mathcal A}}(\hbar (m+q/2),q,\hbar) \end{equation} \end{lemma} \begin{proof} By \eqref{Wopq}, it is enough to prove that the action of $A(q,\hbar)$ is \begin{equation} \label{azioneAq} A(q,\hbar)e_m(x)= e^{i\langle (m+q),x\rangle}\widetilde{{\mathcal A}}(\hbar(m+q/2),q,\hbar) \end{equation} Applying Definition \ref{Qdef} we can indeed write: \begin{eqnarray*} && (A(q,\hbar)e_m)(x)=(2\pi)^{-l/2}\int_{{\mathcal R}^l}\widehat{A}(p,q;\hbar)e^{i\langle q,x\rangle+i\hbar \langle p,q\rangle/2}e^{i\langle m,(x+\hbar p)\rangle}\,dp \\ && =(2\pi)^{-l/2}e^{i\langle (m+q),x\rangle}\,\int_{{\mathcal R}^l}\widehat{A}(p;q,\hbar)e^{i\hbar \langle p,(m+q/2)\rangle}\,dp =e^{i\langle (m+q),x\rangle}\widetilde{{\mathcal A}}(\hbar(m+q/2),q,\hbar). \end{eqnarray*} \end{proof} We note for further reference an obvious consequence of \eqref{azioneAq}: \begin{equation} \label{ortogq} \langle A(q,\hbar)e_m,A(q,\hbar)e_n\rangle_{L^2(\T^l)}=0,\;m\neq n;\quad \langle A(r,\hbar)e_m,A(q,\hbar)e_n\rangle_{L^2(\T^l)}=0,\;r\neq q. \end{equation} As in the case of the usual Weyl quantization, formula (\ref{Wop}) makes sense for tempered distributions ${\mathcal A}(\xi,x;\hbar)$ \cite{Fo}. Indeed we prove in this context, for the sake of completeness, a simpler, but less general, version of the standard Calderon-Vaillancourt criterion: \begin{proposition} Let $A(\hbar)$ by defined by (\ref{Wop}). Then \vskip 8pt\noindent \begin{equation} \label{CV} \Vert A(\hbar)\Vert_{L^2\to L^2}\leq \frac{2^{l+1}}{l+2}\cdot \frac{\pi^{(3l-1)/2}}{\Gamma(\frac{l+1}{2})}\,\sum_{|\alpha|\leq 2k}\,\Vert \partial_x^k{\mathcal A}(\xi,x;\hbar)\Vert_{L^\infty({\mathcal R}^l\times\T^l)}. \end{equation} where $$ k=\begin{cases} \frac{l}{2}+1,\quad l\;{\rm even} \\ {} \\ \frac{l+1}{2}+1,\quad l\;{\rm odd}. \end{cases} $$ \end{proposition} \begin{proof} Consider the Fourier expansion $$ u(x)=\sum_{m\in\Z^l}\,\widehat{u}_me_m(x),\quad u\in L^2(\T^l). $$ Since: $$ \|A(q,\hbar)\widehat{u}_me_m\|^2=|\widetilde{{\mathcal A}}(\hbar(m+q/2),q,\hbar) |^2\cdot |\widehat{u}_m|^2 $$ by Lemma \ref{azione} and \eqref{ortogq} we get: \begin{eqnarray*} \|A (\hbar)u\|^2&\leq & \sum_{(q,m)\in\Z^l\times\Z^l}\|A(q,\hbar)\widehat{u}_m e_m\|^2 = \sum_{(q,m)\in\Z^l\times\Z^l}|{\mathcal A}(\hbar (m+q/2),q,\hbar)|^2\cdot |\widehat{u}_m|^2 \\ &\leq& \sum_{q\in\Z^l}\,\sup_{\xi\in{\mathcal R}^l}|{\mathcal A}(\xi,q,\hbar) |^2\sum_{m\in\Z^l}|\widehat{u}_m|^2 = \sum_{q\in\Z^l}\,\sup_{\xi\in{\mathcal R}^l}|{\mathcal A}(\xi,q,\hbar) |^2\|u\|^2 \\ &\leq& \big[ \sum_{q\in\Z^l}\,\sup_{\xi\in{\mathcal R}^l}|{\mathcal A}(\xi,q,\hbar) |\big]^2\|u\|^2 \end{eqnarray*} Therefore: \[ \Vert A(\hbar)\Vert_{L^2\to L^2} \leq \sum_{q\in\Z^l}\,\sup_{\xi\in{\mathcal R}^l}\vert{\mathcal A}(\xi,q,\hbar)\vert. \] Integration by parts entails that, for $k\in \Bbb N$, and $\forall \,g\in C^\infty(\T^l)$: \vskip 8pt\noindent \begin{eqnarray*} && \left |\int_{\T^l}e^{i\langle q,x\rangle}g(x)dx\right |=\frac 1 {1+|q|^{2k}}\left |\int_{\T^l} e^{i\langle q,x\rangle}(1+(-\triangle_x)^k) g(x)dx\right | \\ && \leq \frac 1 {1+\vert q\vert^{2k}}(2\pi)^l\sup_{\T^l}\sum_{|\alpha|\leq 2k}\vert \partial_x^\alpha g(x)\vert . \end{eqnarray*} \vskip 8pt\noindent Let us now take: \begin{equation} \label{kappa} k=\begin{cases} \frac{l}{2}+1,\quad l\;{\rm even} \\ {} \\ \frac{l+1}{2}+1,\quad l\;{\rm odd} \end{cases} \Longrightarrow \begin{cases} 2k-l+1=3,\quad l\;{\rm even} \\ 2k-l+1=2,\quad l\;{\rm odd} \end{cases} \end{equation} \vskip 4pt\noindent Then $2k-l+1\geq 2$, and hence: $$ \sum_{q\in\Z^l}\,\frac1{1+\vert q\vert^{2k}}\leq 2\int_{{\mathcal R}^l}\,\frac{du_1\cdots du_l}{1+\|u\|^{2k}}\leq 2\frac{\pi^{(l-1)/2}}{\Gamma(\frac{l+1}{2})}\int_0^\infty\frac{\rho^{l-1}}{1+\rho^{2k}}\,d\rho. $$ Now: \begin{eqnarray*} && \int_0^\infty\frac{\rho^{l-1}}{1+\rho^{2k}}\,d\rho =\frac 1 {2k}\int_0^\infty\,\frac{u^{l/2k -1}}{1+u}\,du \\ && \leq \frac 1 {2k}\left(\int_0^1\, u^{l/2k -1}\,du+\int_1^\infty\,{u^{l/2k -2}}\,du\right)=\frac{1}{(4k-l)(2k-l)} \end{eqnarray*} This allows us to conclude: \begin{eqnarray*} \sum_{q\in\Z^l}\,\sup_\xi\vert{\mathcal A}(\xi,q,\hbar)\vert &\leq &(2\pi)^l \sum_{|\alpha|\leq 2k}\Vert \partial_x^{\alpha}{\mathcal A}(\xi,x;\hbar)\Vert_{L^\infty({\mathcal R}^l\times\T^l)}\cdot \sum_{q\in\Z^l}\,\frac1{1+\vert q\vert^{2k}} \\ &\leq & 2^{l+1}\cdot \frac{\pi^{(3l-1)/2}}{\Gamma(\frac{l+1}{2})}\frac{1}{l+2}\sum_{|\alpha|\leq 2k}\,\Vert \partial_x^k{\mathcal A}(\xi,x;\hbar)\Vert_{L^\infty({\mathcal R}^l\times\T^l)}. \end{eqnarray*} with $k$ given by (\ref{kappa}). This proves the assertion. \end{proof} \begin{remark} Thanks to Lemma \ref{azione} we immediately see that, when ${\mathcal A}(\xi, x,\hbar)=\F({\mathcal L}_\omega(\xi),x;\hbar)$, \vskip 8pt\noindent \begin{eqnarray} \label{quant2} && A(\hbar)f=\int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{\F}(p,q;\hbar)U_h(p\omega ,q)f\,dp \\ \nonumber && = \int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{\F}(p,q;\hbar)e^{i\langle q,x\rangle+i\hbar p\langle\omega,q\rangle/2}f(x+\hbar p\omega)\,dp\quad f\in L^2(\T^l) \end{eqnarray} \vskip 5pt\noindent where, again, $p\omega:=(p\omega_1,\dots,p\omega_l)$. Explicitly, \eqref{azioneAq} and \eqref{azioneAop} become: \begin{eqnarray} \label{azioneAom} && A(\hbar)e_m(x)= \sum_{q\in\Z^l}e^{i\langle (m+q),x\rangle}\widetilde{\F}(\hbar \langle\omega,(m+q/2)\rangle,q,\hbar) \\ && \label{azioneAqom} A(q,\hbar)e_m(x)= e^{i\langle (m+q),x\rangle}\widetilde{\F}(\hbar\langle\omega,(m+q/2)\rangle,q,\hbar) \end{eqnarray} \end{remark} \begin{remark} If ${\mathcal A}$ does not depend on $x$, then $\widetilde{{\mathcal A}}(\xi,q,\hbar)=0, q\neq 0$, and (\ref{azioneAop}) reduces to the standard (pseudo) differential action \begin{eqnarray} (A(\hbar) u)(x)=\sum_{m\in\Z^l}\overline{{\mathcal A}}(m\hbar ,\hbar) \widehat{u}_m e^{i\langle m,x\rangle}=\sum_{m\in\Z^l}\overline{{\mathcal A}}(-i\hbar\nabla,\hbar) \widehat{u}_m e^{i\langle m,x\rangle} \end{eqnarray} because $-i\hbar\nabla e_m=m\hbar e_m$. On the other hand, if ${\mathcal A}$ does not depend on $\xi$ (\ref{azioneAop}) reduces to the standard multiplicative action \begin{equation} (A (\hbar)u)(x)=\sum_{q\in\Z^l}\widetilde{{\mathcal A}}(q,\hbar)e^{i\langle q,x\rangle}\sum_{m\in\Z^l}\widehat{u}_m e^{i\langle m,x\rangle}={\mathcal A}(x,\hbar)u(x) \end{equation} \end{remark} \noindent \begin{corollary} \label{corA} Let $A(\hbar): L^2(\T^l)\to L^2(\T^l)$ be defined by \eqref{Wop} (Definition \ref{Qdef}) and $A(q,\hbar)$ by \eqref{Wopq}. Then: \begin{enumerate} \item $\forall\rho\geq 0, \forall\,k\geq 0$ we have: \begin{equation}\label{stimz} \Vert A(\hbar)\Vert_{L^2\to L^2}\leq\Vert{\mathcal A}\Vert^\dagger_{\rho,k} \end{equation} and, if ${\mathcal A}(\xi, x,\hbar)={\mathcal A}({\mathcal L}_\omega(\xi),x;\hbar)$ \begin{equation}\label{stimg} \Vert A(\hbar)\Vert_{L^2\to L^2}\leq\Vert{\mathcal A}\Vert_{\rho,k}. \end{equation} \item \begin{eqnarray} \label{elm44} && \langle e_{m+s}, A(q,\hbar)e_m\rangle =\delta_{q,s}\widetilde{{\mathcal A}}( (m+q/2)\rangle\hbar,q,\hbar) \\ && \label{elm55} \langle e_{m+s},A(\hbar)e_m\rangle =\widetilde{{\mathcal A}}((m+s/2)\hbar,s,\hbar)={(2\pi)^{-\frac m 2}}\int\limits_{\T^m}{\mathcal A}((m+s/2)\hbar,x,\hbar)e^{-i\langle s,x\rangle}dx \end{eqnarray} and, if ${\mathcal A}(\xi, x,\hbar)=\F({\mathcal L}_\omega(\xi),x;\hbar)$ \begin{eqnarray} \label{elm4} && \langle e_{m+s}, A(q,\hbar)e_m\rangle =\delta_{q,s}\widetilde{\F}(\langle\omega, (m+q/2)\rangle\hbar,q,\hbar) =\delta_{q,s}\widetilde{\F}({\mathcal L}_\omega (m+s/2)\hbar,q,\hbar) \\ && \label{elm5} \langle e_{m+s},A(\hbar)e_m\rangle =\widetilde{\F}(\langle\omega,(m\hbar+s\hbar/2)\rangle,s,\hbar) =\widetilde{\F}({\mathcal L}_\omega(m\hbar+s\hbar/2),s,\hbar) \end{eqnarray} Equivalently: \begin{equation} \langle e_m,A(\hbar) e_n\rangle=\widetilde{\F}(\langle \omega,(m+n)\rangle\hbar/2,m-n,\hbar) \end{equation} \item $A(\hbar)$ is an operator of order $-\infty$, namely there exists $C(k,s)>0$ such that \begin{equation} \|A(\hbar)u\|_{H^k(\T^l)}\leq C(k,s)\|u\|_{H^s(\T^l)}, \quad (k,s)\in{\mathcal R},\; k\geq s \end{equation} \end{enumerate} \end{corollary} \begin{proof} (1) Formulae \eqref{stimz} and \eqref{stimg} are straighforward consequences of Formula (\ref{maggC}). \vskip 5pt\noindent (2) \eqref{azioneAop} and \eqref{azioneAq} immediately yield \eqref{elm44} and \eqref{elm55}. Moreover, (\ref{elm4}) immediately yields (\ref{elm5}). In turn, (\ref{elm4}) follows at once by \eqref{azioneAq}. \vskip 5pt\noindent (3) The condition ${\mathcal A}\in\J(\rho)$ entails: \begin{eqnarray} \label{stimaexp} \sup_{(\xi;\hbar)\in{\mathcal R}^l\times [0,1]}|{\mathcal A}(\xi;q,\hbar)|e^{\rho |q|}\leq e^{\rho |q|}\max_{\hbar\in [0,1]}\|\widehat{{\mathcal A}}(p;q,\hbar)\|_1\to 0, \;|q|\to \infty. \end{eqnarray} Therefore: \begin{eqnarray*} \|A(\hbar)u\|^2_{H^k}&\leq& \sum_{(q,m)\in\Z^l\times\Z^l}(1+|q|^2)^k\widetilde{{\mathcal A}}((m+q/2)\hbar,q,\hbar) |^2\cdot |\widehat{u}_m|^2 \\ &\leq& \sum_{q\in\Z^l}\,\sup_{q,m}(1+|q|^2)^k|\widetilde{{\mathcal A}}((m+q/2)\hbar,q,\hbar) |^2\sum_{m\in\Z^l}\,(1+|m|^2)^{s}|\widehat{u}_m|^2 \\ &=& C(k,s)\|u\|^2_{H^s} \\ C(k,s)&:=&\sum_{q\in\Z^l}\,\sup_{q,m}(1+|q|^2)^k|\widetilde{{\mathcal A}}((m+q/2)\hbar,q,\hbar)|^2 \end{eqnarray*} where $0<C(k,s)<+\infty$ by (\ref{stimaexp}) above. The Corollary is proved. \end{proof} \subsection{Compositions, Moyal brackets} We first list the main pro\-perties which are straightforward consequences of the definition, as in the case of the standard Weyl quantization in ${\mathcal R}^{2l}$. First introduce the abbreviations \begin{eqnarray} \label{tt} && t:=(p,s); \quad t^\prime=(p^\prime,s^\prime);\quad \omega t:=(p\omega,s) \\ \label{omt} && \Omega_\omega(t^\prime -t,t^\prime):=\langle(p^\prime-p)\omega,s^\prime\rangle- \langle (s^\prime-s),p^\prime\omega \rangle =\langle p^\prime\omega ,s\rangle-\langle s^\prime,p\omega \rangle. \end{eqnarray} Given $\F(\hbar), \G(\hbar)\in \J_k(\rho)$, define their twisted convolutions: \vskip 3pt\noindent \begin{eqnarray} && \label{twc} (\widehat{\F}(\hbar){\widetilde{\ast}} \widehat{\G}(\hbar))(p,q;\hbar):= \int_{{\mathcal R}\times{\mathcal R}^l}\widehat{\F}(t^\prime -t;\hbar) \widehat{\G}(t^\prime;\hbar)e^{ i[\hbar \Omega_\omega(t^\prime -t,t^\prime)/2]}\,d\lambda(t^\prime) \\ \nonumber && {} \\ && \label{flat3} (\F\sharp\G)(x,\xi,\hbar):= \int_{{\mathcal R}\times{\mathcal R}^l} (\widehat{\F}(\hbar){\widetilde{\ast}} \widehat{\G}(\hbar))(t,\hbar)e^{i\langle s,x\rangle+p{\mathcal L}_\omega(\xi)}\,d\lambda(t) \\ \nonumber {} \\ \label{MBB1} && \widehat{\mathcal C}(p,q;\hbar):= \frac{1}{\hbar}\int_{{\mathcal R}\times{\mathcal R}^l}\widehat{\F}(t^\prime -t,\hbar) \widehat{\G}(t^\prime,\hbar)\sin[\hbar \Omega_\omega(t^\prime -t,t^\prime)/2]\,d\lambda(t^\prime) \\ \nonumber {} \\ && \label{IFMM1} {\mathcal C}(x,\xi;\hbar):=\int_{{\mathcal R}\times{\mathcal R}^l} \widehat{\mathcal C}(p,s;\hbar)e^{ip{\mathcal L}_\omega(\xi) +i\langle s,x\rangle}\,d\lambda(t) \end{eqnarray} \vskip 5pt\noindent Once more by the same argument valid for the Weyl quantization in ${\mathcal R}^{2l}$: \begin{proposition} \label{Quant2} The following composition formulas hold: \vskip 5pt\noindent \begin{eqnarray} \label{Comm6} && F(\hbar)G(\hbar)= \int_{{\mathcal R}\times{\mathcal R}^l}(\widehat{\F}(\hbar){\widetilde{\ast}} \widehat{\G}(\hbar))(t;\hbar) U_\hbar(\omega t)\,d\lambda(t). \end{eqnarray} \vskip 3pt\noindent \begin{eqnarray} \label{Comm7} && \frac{[F(\hbar),G(\hbar)]}{i\hbar}= \int_{{\mathcal R}\times{\mathcal R}^l}\widehat{\mathcal C}(t;\hbar)U_\hbar(\omega t)\,d\lambda(t) \end{eqnarray} \vskip 5pt\noindent \end{proposition} \begin{remark} The symbol of the product $F(\hbar)G(\hbar)$ is then $(\F\sharp\G)({\mathcal L}_\omega(\xi),x,\hbar)$ and the symbol of the commutator $[F(\hbar),G(\hbar)]/i\hbar$ is $ {\mathcal C}({\mathcal L}_\omega(\xi),x;\hbar)$, which is by definition the Moyal bra\-cket of the symbols $\F, \G$. From (\ref{MBB1}) we get the asymptotic expansion: \vskip 3pt\noindent \begin{eqnarray} \label{Mo6} && \widehat{\mathcal C}(p,q;\omega;\hbar)=\sum_{j=0}^\infty\frac{(-1)^j \hbar^{2j}}{(2j+1)!} D^j(p,q;\omega) \\ && D^j(p,q;\omega):=\int_{{\mathcal R}\times{\mathcal R}^l}\widehat{\F}(t^\prime-t,\hbar) \widehat{\G}(t^\prime,\hbar)[\Omega_\omega(t^\prime -t,t^\prime)^j\,d\lambda(t^\prime) \end{eqnarray} \vskip 3pt\noindent whence the asymptotic expansion for the Moyal bracket \vskip 3pt\noindent \begin{eqnarray} \label{Moexp} && \{\F, \G\}_M({\mathcal L}_\omega(\xi),x;\hbar)=\{\F, \G\}({\mathcal L}_\omega(\xi),x,\hbar)+ \\ \nonumber && \sum_{|r+j|=0}^\infty\frac{(-1)^{|r|}\hbar^{|r+j|}}{r!sj}[\partial_x^r \omega\partial^j_{\mathcal L} \F({\mathcal L}_\omega(\xi),x)]\cdot [ \omega\partial^j_{\mathcal L} \partial_x^r G({\mathcal L}_\omega(\xi),x,\hbar)]- \\ \nonumber && -\sum_{|r+j|=0}^\infty\frac{(-1)^{|r|}\hbar^{|r+j|}}{r!j!}[\partial_x^r \omega\partial^j_{\mathcal L} \G({\mathcal L}_\omega(\xi),x)]\cdot [ \omega\partial^j_{\mathcal L} \partial_x^r F({\mathcal L}_\omega(\xi),x,\hbar)] \end{eqnarray} Remark that: \begin{equation} \label{Mo5} \{\F, \G\}_M({\mathcal L}_\omega(\xi),x;\hbar)=\{\F, \G\}({\mathcal L}_\omega(\xi),x)+O(\hbar) \end{equation} In particular, since $L_\omega(\xi)$ is linear, we have $\forall\,\F(\xi;x;\hbar)\in C^\infty({\mathcal R}^l\times\T^l\times[0,1])$: \begin{equation} \label{MP} \{\F, {\mathcal L}_\omega(\xi)\}_M({\mathcal L}_\omega(\xi),x;\hbar)=\{\F, {\mathcal L}_\omega(\xi)\}({\mathcal L}_\omega(\xi),x;\hbar) \end{equation} \end{remark} The observables $\F(\xi,x;\hbar)\in\J(\rho)$ enjoy the crucial property of stability under compositions of their dependence on ${\mathcal L}_\omega(\xi)$ (formulae (\ref{flat3}) and (\ref{IFMM1}) above). As in \cite{BGP}, we want to estimate the relevant quantum observables uniformly with respect to $\hbar$, i.e. through the weighted norm (\ref{sigom}). \vskip 3pt\noindent \subsection{Uniform estimates} The following proposition is the heart of the estimates needed for the convergence of the KAM iteration. The proof will be given in the next (sub)section. Even though we could limit ourselves to symbols in $\J(\rho)$, we consider for the sake of generality and further reference also the general case of symbols belonging to $\J^\dagger(\rho)$. \begin{proposition} \label{stimeMo} Let $F$, $G\in J^\dagger_k(\rho)$, $k=0,1,\ldots$, $d=d_1+d_2$. Let $\F, \G$ be the corresponding symbols, and $0<d+d_1<\rho$. Then: \begin{enumerate} \item[\bf{($1^\dagger$)}] $FG\in J^\dagger_k(\rho)$ and fulfills the estimate \vskip 3pt\noindent \begin{equation} \label{2conv'} \|FG\|_{\mathcal B(L^2)}\leq \|\F\sharp\G\|^\dagger_{\rho,k} \leq (k+1)4^k \|\F\|^\dagger_{\rho,k} \cdot \|\G\|^\dagger_{\rho,k} \end{equation} \vskip 3pt\noindent \item[\bf{($2^\dagger$)}] $\displaystyle \frac{[F,G]}{i\hbar}\in J^\dagger_k(\rho-d)$ and fulfills the estimate \vskip 3pt\noindent \begin{eqnarray} \label{normaM2'} \left\Vert\frac{[F,G]}{i\hbar}\right\Vert_{\mathcal B(L^2)}\leq \|\{\F,\G\}_M\|_{\rho-d-d_1,k}^\dagger \leq \frac{(k+1)4^k}{e^2d_1(d+d_1)}\|\F\|_{\rho,k}^\dagger \|\G\|_{\rho-d,k}^\dagger \end{eqnarray} \vskip 3pt\noindent \item[\bf{($3^\dagger$)}] $\F\G \in \J^\dagger_k(\rho)$, and \begin{equation} \label{simple'} \|\F\G\|^\dagger_{\rho,k} \leq (k+1)4^k \|\F\|^\dagger_{\rho,k} \cdot \|\G\|^\dagger_{\rho,k} \end{equation} \end{enumerate} Moreover if $F$, $G\in J_k(\rho)$, $k=0,1,\ldots$, and $\F, \G\in\J_k(\rho)$, then: \begin{enumerate} \item[\bf{(1)}] $FG\in J_k(\rho)$ and fulfills the estimate \vskip 3pt\noindent \begin{equation} \label{2conv} \|FG\|_{\mathcal B(L^2)}\leq \|\F\sharp\G\|_{\rho,k} \leq (k+1)4^k \|\F\|_{\rho,k} \cdot \|\G\|_{\rho,k} \end{equation} \vskip 3pt\noindent \item[\bf{(2)}] $\displaystyle \frac{[F,G]}{i\hbar}\in J_k(\rho-d)$ and fulfills the estimate \vskip 5pt\noindent \begin{equation} \label{normaM2} \left\Vert\frac{[F,G]}{i\hbar}\right\Vert_{\mathcal B(L^2)}\leq \|\{\F,\G\}_M\|_{\rho-d-d_1,k} \leq \frac{(k+1)4^k}{e^2d_1(d+d_1)}\|\F\|_{\rho,k}\cdot \|\G \|_{\rho-d,k} \end{equation} \item[\bf{(3)}] $\F \G \in \J_k(\rho)$ and \begin{equation} \label{simple} \|\F\G\|_{\rho,k} \leq (k+1)4^k \|\F\|_{\rho,k} \cdot \|\G\|_{\rho,k}. \end{equation} \end{enumerate} \vskip 3pt\noindent \end{proposition} \begin{remark} The operators $F(\hbar)$ with the uniform norm $\|F\|_{\rho,k}, k=0,1,\ldots$ form a Banach subalgebra (without unit) of the algebra of the continuous operators in $L^2(\T^l)$. \end{remark} Before turning to the proof we state and prove two further useful results. \begin{corollary} \label{multipleM} Let $\F,\G\in\J_k(\rho)$, and let $0<d<\rho$, $r\in {\Bbb N}$. Then: \vskip 4pt\noindent \begin{eqnarray} \label{stimaMr} \frac{1}{r!}\|\underbrace{\{\F,\{\F,\ldots,\{\F}_{r\ times},{\mathcal G}\}_M\}_M\ldots\}_M\|_{\rho-d,k} \leq \left(\frac{(k+1)4^k}{ed^2}\right)^r\|\F\|_{\rho,k}^r \|{\mathcal G}\|_{\rho,k} \end{eqnarray} \end{corollary} \begin{proof} We follow the argument of \cite{BGP}, Lemma 3.5. If $d+d_1=d_2$, ({3.42}) entails: \vskip 6pt\noindent $$ \|\{\F,\G\}_M\|_{\rho-d_2,k}\leq \frac{C_k}{e^2d_2d_1}\|\F\|_{\rho,k}\cdot\|\G\|_{\rho-d,k},\quad C_k:=(k+1)4^k. $$ \vskip 6pt\noindent Set now $\displaystyle d=\frac{r-1}{r}d_2$ which yields $\displaystyle d_1=\frac{d_2}{r}$. Then: \vskip 4pt\noindent \begin{eqnarray} \label{1} && \|\{\F,\G\}_M\|_{\rho-d_2,k}\leq \frac{C_k}{e^2d_2\frac{d_2}{r}}\|\F\|_{\rho,k}\cdot\|\G\|_{\rho-\frac{r-1}{r}d_2,k}=\frac{C_kr}{(ed_2)^2} \|\F\|_{\rho,k}\cdot\|\G\|_{\rho-\frac{r-1}{r}d_2,k}. \end{eqnarray} \vskip 4pt\noindent Therefore: \vskip 4pt\noindent \begin{eqnarray*} \|\{\F,\{\F,\G\}_M\}_M\|_{\rho-d_2,k}\leq \frac{C_kr}{(ed_2)^2} \|\F\|_{\rho,k}\cdot\|\{\F,\G\}_M\|_{\rho-\frac{r-1}{r}d_2,k}. \end{eqnarray*} \vskip 4pt\noindent To estimate $\|\{\F,\G\}_M\|_{\rho-\frac{r-1}{r}d_2,k}$ we repeat the argument yielding \eqref{1} with $\displaystyle \frac{r-1}{r}d_2$ in place of $d_2$. We get: $$ \frac{r-1}{r}d_2= \frac{r-2}{r}d_2+ \frac{1}{r}d_2 $$ and therefore \vskip 4pt\noindent $$ \|\{\F,\G\}_M\|_{\rho-\frac{r-1}{r}d_2,k}\leq \frac{C_k}{ed_2(\frac{r-1}{r})\frac{d_2}{r}}\|\F\|_{\rho,k}\cdot\|\G\|_{\rho -\frac{r-2}{r}d_2,k} $$ $$ \leq \frac{C_kr}{(ed_2)^2}\left(\frac{r}{r-1}\right) \|\F\|_{\rho,k}\cdot \Vert \G\Vert_{\rho-\frac{r-2}{r}d_2,k} $$ \vskip 6pt\noindent whence \begin{eqnarray*} && \|\{\F,\{\F,\G\}_M\}_M\|_{\rho-d_2,k}\leq \frac{(C_kr)^2}{(ed_2)^4}\left(\frac{r}{r-1}\right) \|\F\|_{\rho,k}^2\cdot \Vert \G\Vert_{\rho-\frac{r-2}{r}d_2,k}. \end{eqnarray*} Iterating $r$ times we get: \vskip 6pt\noindent \begin{equation} \label{2} \frac1{r!}\|\underbrace{\{\F,\{\F,\cdots,\{\F}_{r\ times},\G\}_M\}_M,\cdots\}_M\|_{\rho-d_2,k}\leq \frac{(C_kr)^{r}r^{2r+1}} {(ed_2)^{2r}r!^2} \|\F\|^r_{\rho,k}\cdot\|\G\|_{\rho,k}. \end{equation} \vskip 6pt\noindent By the Stirling formula: \vskip 6pt\noindent \begin{eqnarray*} \frac{r^{2r+1}} {(ed_2)^{2r}r!^2}\leq \frac1{(ed_2^2)^{r}} \frac 1{\sqrt{2\pi }} \frac1{(d_2^2)^{r}} \leq \frac1{(d_2^2)^{r}}. \end{eqnarray*} \vskip 6pt\noindent Since $C_k=(k+1)4^k$, \eqref{2} yields \eqref{stimaMr} up to the abuse of notation $d_2=d$. \end{proof} \begin{corollary} \label{stimaP} Let $\F(\xi;x;\hbar)\in\J_k(\rho)$, $\rho>0$, $k=0,1,\ldots$. Then $\{\F,{\mathcal L}_\omega\}_M\in\J_k(\rho-d)$ $\forall\,0<d<\rho$ and the following estimates hold: \begin{equation} \label{stimapp} \|[F,L_\omega]/i\hbar\|_{\rho-d,k}= \|\{\F,{\mathcal L}_\omega\}_M\|_{\rho-d,k} \leq \frac{1}{ed}\|\F\|_{\rho,k} \end{equation} \begin{eqnarray} \label{stimaMpp} && \|\underbrace{[F,[\cdots,[F}_{r\ times},L_\omega]\cdots]/(i\hbar)^r\|_{\rho-d,k}= \|\{\F,\cdots,\{\F,{\mathcal L}_\omega\}_M\cdots\}_M\|_{\rho-d,k} \\ && \nonumber \\ && \nonumber \leq \frac1{ed}\left(\frac{(k+1)4^k}{ed^2}\right)^r\|\F\|_{\rho,k}^r \end{eqnarray} \end{corollary} \begin{proof} By (\ref{MP}): $$ \{\F,{\mathcal L}_\omega\}_M=\{\F,{\mathcal L}_\omega\}=-\langle \omega,\nabla_x\rangle\F(\xi,x;\hbar)=\sum_{q\in\Z^l}\langle\omega,q\rangle e^{i\langle q,x\rangle}\int_{{\mathcal R}}\widehat{\F}_q(p;\hbar)e^{ip{\mathcal L}_\omega(\xi)}\,dp $$ and therefore: \begin{eqnarray*} && \|\{\F,{\mathcal L}_\omega\}_M\|_{\rho-d,k}\leq \|\{\F,{\mathcal L}_\omega\}\|_{\rho-d,k}\leq \sum_{q\in\Z^l}|\langle\omega,q\rangle|e^{(\rho-d)|q|}\|\F_q\|_{\rho,k} \leq \\ && \sup_{q\in\Z^l}\langle\omega,q\rangle|e^{-d|q|}\sum_{q\in\Z^l}e^{\rho|q|}\|\F_q\|_{\rho,k} \leq \frac{1}{ed}\|\F\|_{\rho,k} \end{eqnarray*} because $|\omega|\leq 1$ by Remark 2.6. This proves (\ref{stimapp}). (\ref{stimaMpp}) is a direct consequence of Corollary \ref{multipleM}. \end{proof} \subsection{Proof of Proposition \ref{stimeMo}} \subsubsection{Three lemmata} \label{2l} The proof will use the three following Lemmata. \begin{lemma} \label{symp} Let $p,p'\in{\mathcal R}^{l},\ s,s^\prime\in{\mathcal R}^l$. Define $t:=(p,s), t^\prime:=(p^\prime,s^\prime)$. Let $\Omega_\omega(\cdot)$ and $\mu_j(\cdot)$ be defined by (\ref{omt}) and (\ref{muk}), respectively. Then: \begin{equation} \vert\Omega_\omega(t,t^\prime)\vert^j\leq 2^j\mu_j(t)\mu_j(t^\prime). \end{equation} \end{lemma} The proof is straightforward, because $\vert\Omega_\omega(t,t^\prime)\vert\leq 2\vert t\vert\vert t^\prime\vert$ and $|\omega|\leq 1$. \begin{lemma}\label{sin} \begin{equation} \left\vert\frac{d^m}{d\hbar^m}\frac{\sin{\hbar x/2}}\hbar\right\vert\leq \frac{\vert x\vert^{m+1}}{2^{m+1}}. \end{equation}. \end{lemma} \begin{proof} Write: \vskip 6pt\noindent \begin{eqnarray*} \frac{d^m}{d\hbar^m}\frac{1}{\hbar}\sin{\hbar x/2} =\frac{d^m}{d\hbar^m}\frac12\int_0^x\cos{\hbar t/2}\,dt =\frac{(-\hbar)^m}{2^{m+1}}\int_0^xt^m\cos^{(m)}{(\hbar t/2)}\,dt \leq\frac{\hbar^m}{2^{m+1}}\int_0^xt^m\,dt. \end{eqnarray*} \vskip 6pt\noindent whence $$ \left\vert\frac{d^m}{d\hbar^m}\frac{\sin{\hbar x/2}}\hbar\right\vert\leq \frac{\hbar^m}{2^{m+1}}\left\vert \int_0^xt^m\,dt\right\vert =\frac{\hbar^m\vert x\vert^{m+1}}{2^{m+1}(m+1)}\leq \frac{\vert x\vert^{m+1}}{2^{m+1}}. $$ \end{proof} \begin{lemma} \label{MoyalS} Let $(\F,G)\in\J^\dagger_\rho$, $0<d+d_1<\rho$, $t=(p,s)$, $t^\prime=(p^\prime,s^\prime)$, $|t|:=|p|+|s|$, $|t^\prime|:=|p^\prime|+|s^\prime|$. Then: \begin{equation} \|\{\F,\G\}_M\|_{\rho-d-d_1}^\dagger \leq \frac{1}{e^2d_1(d+d_1)}\|\F\|_\rho^\dagger \|\G\|_{\rho-d}^\dagger \end{equation} \end{lemma} \begin{proof} We have by definition \begin{eqnarray*} && |\{\F,\G\}_M\|^\dagger_{\rho-d-d_1}\leq \frac{1}{\hbar}\int_{{\mathcal R}^{2l}}e^{(\rho-d-d_1)|t|}d\lambda(t)\int_{{\mathcal R}^{2l}}|\F(t^\prime)\G(t^\prime-t)|\cdot |\sin{\hbar(t^\prime-t)\wedge t^\prime/\hbar}|\,d\lambda(t^\prime) \\ && \leq \int_{{\mathcal R}^{2l}}e^{(\rho-d-d_1)|t|}d\lambda(t)\int_{{\mathcal R}^{2l}}|\F(t^\prime)|\cdot |\G(t^\prime-t)|\cdot |(t^\prime-t)|\cdot |t^\prime|\,d\lambda(t^\prime) \\ && =\int_{{\mathcal R}^{2l}}e^{(\rho-d-d_1)|t|}d\lambda(t)\int_{{\mathcal R}^{2l}}|\F(u+t/2)\G(u-t/2)|\cdot |u-t/2|\cdot |u+t/2|\,d\lambda(u) \\ && =\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}}e^{(\rho-d-d_1)(|x|+|y|)}|\F(x)\G(y)|\cdot |x|\cdot |y|\,d\lambda(x)d\lambda(y) \leq \\ && \frac{1}{d_1(d+d_1)}\int_{{\mathcal R}^{2l}}|\F(x)|e^{\rho |x|}\,d\lambda(x) \int_{{\mathcal R}^{2l}}|\G(y)|e^{(\rho-d) |y|}\,d\lambda(x)\leq \frac{1}{e^2d_1(d+d_1)}\|\F\|_\rho^\dagger\|\G\|_{\rho-d}^\dagger \end{eqnarray*} because $\displaystyle \sup_{\alpha\in{\mathcal R}}|\alpha| e^{-\delta\alpha}=\frac{1}{e\delta}, \delta>0$. \end{proof} \subsubsection{ Assertion {\mbox{\bf ($1^\dagger$)}}}\label{1'} By definition \begin{eqnarray} && \|\F(\hbar)\sharp\G(\hbar)\|_{\rho,k}^\dagger= \nonumber \sum_{\gamma=0}^k\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}}| \partial^\gamma_\hbar [\widehat{\F}(t^\prime-t,\hbar) \widehat{\G}(t^\prime,\hbar)e^{i\hbar\Omega_\omega(t^\prime,t^\prime-t)}] |\mu_{k-\gamma}(t)e^{\rho |t|}\,d\lambda(t^\prime) d\lambda(t) \nonumber \end{eqnarray} whence \begin{eqnarray} && \|\F(\hbar)\sharp\G(\hbar)\|^\dagger_{\rho,k}= \nonumber \\ && \sum_{\gamma=0}^k\sum_{j=0}^\gamma\binom {\gamma}{j}\int_{{\mathcal R}^{2l}\times {\mathcal R}^{2l} }\vert\partial_\hbar^{\gamma-j} [\widehat{\F}(t^\prime-t,\hbar) \widehat{\G}(t^\prime,\hbar)]\vert\Omega_\omega(t^\prime-t,t^\prime)\vert^j \mu_{k-\gamma}(t)e^{\rho |t|}\,d\lambda(t^\prime) d\lambda(t)= \nonumber \\ && \nonumber \sum_{\gamma=0}^k\sum_{j=0}^\gamma\sum_{i=0}^{\gamma-j}\binom {\gamma}{j}\binom{j}{i}\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}} \vert\partial_\hbar^{\gamma-j-i}\widehat{\F}(t^\prime-t,\hbar) \partial_\hbar^{i}\widehat{\G}(t^\prime,\hbar) \vert\vert\Omega_\omega(t^\prime-t,t^\prime)\vert^j\mu_{k-\gamma}(t)e^{\rho|t|}\,d\lambda(t^\prime) d\lambda(t) \end{eqnarray} By Lemma \ref{symp} and the inequality $\displaystyle \mu_k(t^\prime-t)\leq 2^{k/2}\mu_k(t^\prime)\mu_k(t)$ we get, with $t=(p,s): t^\prime=(p^\prime,s^\prime)$ \begin{eqnarray*} && \vert\Omega_\omega(t^\prime-t,t^\prime)\vert^j\mu_{k-\gamma}(t)\leq 2^j\mu_j(t^\prime-t)\mu_j(t^\prime)\mu_{k-\gamma}(t) \\ && \leq 2^j\mu_jt^\prime-t)\mu_j(t^\prime)\mu_{k-\gamma}(t)2^{(k-\gamma)/2}\mu_{k-\gamma}(t^\prime -t)\mu_{k-\gamma}(t) \\ && \leq 2^{j+(k-\gamma)/2}\mu_{k-\gamma+j}(t^\prime -t)\mu_{k-\gamma+j}(t) \end{eqnarray*} Denote now $\gamma-j-i=k-\gamma^\prime$, $i=k-\gamma^{\prime\prime}$ and remark that $j\leq\gamma^\prime$, $i\leq\gamma-j$. Then: \begin{eqnarray*} 2^{j+(k-\gamma)/2}\mu_{k-\gamma+j}(t^\prime -t)\mu_{k-\gamma+j}(t) \leq 2^k\mu_{\gamma^\prime}(t^\prime)\mu_{\gamma^{\prime\prime}}(t) \end{eqnarray*} Since $\displaystyle \binom {\gamma}{j}\binom{j}{i}\leq 4^k$ and the sum over $k$ has $(k+1)$ terms we get: \begin{eqnarray*} && \|\F(\hbar)\sharp\G(\hbar)\|^\dagger_{\rho,k} \leq \\ && (k+1)4^k\,\sum_{\gamma^\prime,\gamma^{\prime\prime}=0}^k\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}} |\partial^{k-\gamma^\prime}_\hbar\widehat{\F}(t^\prime -t,\hbar) |\partial^{k-\gamma^{\prime\prime}}_\hbar\widehat{\G}(t^\prime,\hbar)| \mu_{\gamma^\prime}(t^\prime -t)\mu_{\gamma^{\prime\prime}}(t)e^{\rho |t|}\,d\lambda(t^\prime) d\lambda(t) \end{eqnarray*} Now we can repeat the argument of Lemma \ref{MoyalS} to conclude: \begin{eqnarray*} \|\F(\hbar)\sharp\G(\hbar)\|_{\rho,k}^\dagger \leq (k+1)4^k \|\F\|^\dagger_{\rho,k} \cdot \|\G\|^\dagger_{\rho,k} \end{eqnarray*} which is (\ref{2conv'}). Assertion {\mbox{\bf ($3^\dagger$)}}, formula (\ref{simple'}) is the particular case of (\ref{2conv'}) obtained for $\Omega_\omega=0$, and Assertion ${\bf (3)}$, formula (\ref{simple}), is in turn particular case of (\ref{simple'}) . \subsubsection{ Assertion{\mbox{\bf ($2^\dagger$)}}}\label{2'} By definition: \begin{eqnarray*} \|\{\F(\hbar),\G(\hbar)\}_M\|^\dagger_{\rho,k}= \sum_{\gamma=0}^k\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}}| \partial^\gamma_\hbar [\widehat{\F}(t^\prime -t,\hbar) \widehat{\G}(t^\prime,\hbar)\sin\hbar\Omega(t^\prime-t,t^\prime)/\hbar] |\mu_{k-\gamma}(t)e^{\rho |t|}\,d\lambda(t^\prime) d\lambda(t). \end{eqnarray*} Lemma \ref{sin} entails: $$ \vert\partial_\hbar^j \sin\hbar\Omega(t^\prime-t,t^\prime)/\hbar\vert\leq \vert \Omega(t^\prime-t,t^\prime)\vert^{j+1} $$ and therefore: \begin{eqnarray} && \|\{\F(\hbar),\G(\hbar)\}_M\|_{\rho,k}\leq \nonumber \\ && \sum_{\gamma=0}^k\sum_{j=0}^\gamma\binom {\gamma}{j}\int_{{\mathcal R}^{2l}\times {\mathcal R}^{2l} }\vert\partial_\hbar^{\gamma-j} [\widehat{\F}(t^\prime -t,\hbar) \widehat{\G}(t^\prime,\hbar)]\vert\Omega_\omega(t^\prime-t,t^\prime)\vert^{j+1} \mu_{k-\gamma}(t)e^{\rho(|t|}\,d\lambda(t^\prime) d\lambda(t)= \nonumber \\ && \nonumber \sum_{\gamma=0}^k\sum_{j=0}^\gamma\sum_{i=0}^{\gamma-j}\binom {\gamma}{j}\binom{j}{i}\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}} \vert\partial_\hbar^{\gamma-j-i}\widehat{\F}(t^\prime -t,\hbar) \partial_\hbar^{i}\widehat{\G}(t^\prime,\hbar) \vert\vert\Omega_\omega(t^\prime-t,t^\prime)\vert^{j+1}\mu_{k-\gamma}(t)e^{\rho |t|}\,d\lambda(t^\prime) d\lambda(t) \end{eqnarray} Let us now absorb a factor $\vert\Omega_\omega(t^\prime-t,t^\prime)\vert^{j}$ in exactly the same way as above, and recall that $\vert\Omega_\omega(t^\prime-t,t^\prime)\vert\leq \vert (t^\prime-t)t^\prime\vert$. We end up with the inequality: \begin{eqnarray*} && \|\{\F(\hbar),\G(\hbar)\}_M\|^\dagger_{\rho,k} \leq \\ && (k+1)4^k\,\sum_{\gamma^\prime,\gamma^{\prime\prime}=0}^k\int_{{\mathcal R}^{2l}\times{\mathcal R}^{2l}} |\partial^{k-\gamma'}_\hbar\widehat{\F}(t^\prime -t,\hbar) |\partial^{k-\gamma"}_\hbar\widehat{\G}(t^\prime,\hbar)| |t^\prime -t||t^\prime| \mu_{\gamma^\prime}(t^\prime -t)\mu_{\gamma^{\prime\prime}}(t^\prime)e^{\rho( |t|}\,d\lambda(t^\prime) d\lambda(t) \end{eqnarray*} Repeating once again the argument of Lemma \ref{MoyalS} we finally get: \begin{eqnarray*} \|\{\F(\hbar),\G(\hbar)\}_M\|^\dagger_{\rho-d-d_1,k} \leq \frac{(k+1)4^k}{e^2d_1(d+d_1)} \|\F\|^\dagger_{\rho,k} \cdot \|\G\|^\dagger_{\rho-d,k} \end{eqnarray*} which is (\ref{normaM2'}). Once more, Assertion ${\bf (2)}$ is a particular case of (\ref{normaM2'}) and Assertion ${\bf (1)}$ a particular case of (\ref{2conv'}). This completes the proof of Proposition \ref{stimeMo}. \vskip 1cm \section{A sharper version of the semiclassical Egorov theorem}\label{sectionegorov} Let us state and prove in this section a particular variant of the semiclassical Egorov theorem (see e.g.\cite{Ro}) which establishes the relation between the unitary transformation $\displaystyle e^{i\varepsilon W/i\hbar}$ and the canoni\-cal transformation $\phi^\varepsilon_{{\mathcal W}_0}$ generated by the flow of the symbol ${\mathcal W}(\xi,x;\hbar)|_{\hbar=0}:={\mathcal W}_0(\xi,x)$ (principal symbol) of $W$ at time $1$. The present version is sharper in the sense that the usual one allows for a $O(\hbar^\infty)$ error term. \begin{theorem} Let $\rho>0, k=0,1,\ldots$ and let $A,W\in J^\dagger_k(\rho)$ with symbols $\mathcal A,\ \mathcal W$. Then: \begin{equation} \nonumber S_\varepsilon:=e^{i\frac {\varepsilon W}\hbar}(L_\omega+A)e^{-i\frac {\varepsilon W}\hbar}=L_\omega+B \end{equation} where: \begin{enumerate} \item $\forall\,0<d<\rho$, $B\in J^\dagger_k(\rho-d)$; \item \begin{eqnarray*} \|{\mathcal B}\|^\dagger_{\rho-d,k}\leq\frac{|\varepsilon|(k+1)4^k \|{\mathcal W}\|_{\rho,k}}{ed^2}\left[1-|\varepsilon|(k+1)4^k\|{\mathcal W}\|_{\rho,k}/{ed^2}\right]^{-1}\left[\|{\mathcal A}\|_{\rho,k}+1/{de}\right]\end{eqnarray*} \item Moreover the symbol $\mathcal B$ of $B$ is such that: $$ {\mathcal L}_\omega+{\mathcal B}=({\mathcal L}_\omega+\mathcal A)\circ \Phi^\varepsilon_{\mathcal W_0}+O(\hbar) $$ where $\Phi^\varepsilon_{\mathcal W_0}$ is the Hamiltonian flow of $\mathcal W_0:=\mathcal W|_{\hbar=0}$ at time $\varepsilon$. \item Assertions (1), (2), (3) hold true when $(A,B,W)\in J_k(\rho)$ with $\|{\mathcal A}\|^\dagger_{\rho,k}$, $\|{\mathcal B}\|^\dagger_{\rho,k}$, $\|{\mathcal W}\|^\dagger_{\rho,k}$ replaced by $\|{\mathcal A}\|_{\rho,k}$, $\|{\mathcal B}\|_{\rho,k}$, $\|{\mathcal W}\|_{\rho,k}$. \end{enumerate} \end{theorem} \begin{proof}The proof is the same in both cases, since it it is based only on Proposition \ref{stimeMo}. Therefore we limit ourselves to the $\J_k(\rho)$ case. By Corollary \ref{corA}, Assertion (3), under the present assumptions $H^1(\T^l)$, the domain of the self-adjoint operator $\F(L_\omega)+A$, is left invariant by the unitary operator $\displaystyle e^{i\frac {\varepsilon W}{\hbar}}$. Therefore on $H^1(\T^l)$ we can write the commutator expansion $$ S_\varepsilon=L_\omega+\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}\underbrace{[W,[W,\ldots,[W}_{m\ times},L_\omega]\ldots]+ \sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}\underbrace{[W,[W,\ldots,[W}_{m\ times},A]\ldots] $$ whence the corresponding expansions for the symbols (from now on we'll skip the $\underbrace{\ldots\ldots\ldots}_{m\ times}$ notation) \begin{eqnarray*} && {\mathcal S}(x,\xi;\hbar,\varepsilon)={\mathcal L}_\omega(\xi)+\sum_{m=1}^\infty \frac{\varepsilon^m}{m!}\{{\mathcal W},\{{\mathcal W},\ldots,\{{\mathcal W},{\mathcal L}_\omega\}_M\ldots\}_M \\ && +\sum_{m=1}^\infty \frac{\varepsilon^m}{m!}\{{\mathcal W},\{{\mathcal W},\ldots,\{{\mathcal W},{\mathcal A}\}_M\ldots\}_M \end{eqnarray*} because $\{{\mathcal W},{\mathcal L}_\omega\}_M=\{{\mathcal W},{\mathcal L}_\omega\}$ by the linearity of ${\mathcal L}_\omega$. Now apply Corollaries \ref{multipleM} and \ref{stimaP}. We get, denoting once again $C_k=(k+1)4^k$: \begin{eqnarray*} && \|\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}[W,[W,\ldots,[W,L_\omega]\ldots]\|_{L^2\to L^2}\leq \|\sum_{m=1}^\infty \frac{\varepsilon^m}{m!}\{{\mathcal W},\{{\mathcal W},\ldots,\{{\mathcal W},{\mathcal L}_\omega\}_M\ldots\}_M\|_{\rho-d,k} \\ && \leq \sum_{m=1}^\infty \frac{|\varepsilon|^m}{m!}\|\{{\mathcal W},\{{\mathcal W},\ldots,\{-i\langle\omega,\nabla_x\rangle {\mathcal W}\}_M\ldots\}_M\|_{\rho-d,k}\leq \frac{1}{ed}\sum_{m=1}^\infty \left(\frac{|\varepsilon|C_k\|{\mathcal W}\|_{\rho,k} }{ed^2}\right)^m \end{eqnarray*} \begin{eqnarray*} && \|\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}[W,[W,\ldots,[W,A]\ldots] \|_{L^2\to L^2} \leq \|\sum_{m=1}^\infty \frac{\varepsilon^m}{m!}\{{\mathcal W},\{{\mathcal W},\ldots,\{{\mathcal W},{\mathcal A}\}_M\ldots\}_M\|_{\rho-d,k} \\ && \leq \|{\mathcal A}\|_{\rho,k}\sum_{m=1}^\infty \left(\frac{|\varepsilon|C_k\|{\mathcal W}\|_{\rho,k} }{ed^2}\right)^m \end{eqnarray*} Now define: \begin{equation} \label{Aprimo} B:=\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}[W,[W,\ldots,[W,{\mathcal L}_\omega]\ldots]+\sum_{m=1}^\infty \frac{(i\varepsilon)^m}{ \hbar^m m!}[W,[W,\ldots,[W,A]\ldots]. \end{equation} Then we can write: \vskip 4pt\noindent \begin{eqnarray*} && \|{\mathcal B}\|_{\rho-d,k}\leq\frac{|\varepsilon|C_k\|{\mathcal W}\|_{\rho,k}}{ed^2}\left[1-|\varepsilon|C_k\|{\mathcal W}\|_{\rho,k}/{ed^2}\right]^{-1}\left[\|{\mathcal A}\|_{\rho,k}+1/{de}\right] \\ && =\frac{|\varepsilon|(k+1)4^k \|{\mathcal W}\|_{\rho,k}}{ed^2}\left[1-|\varepsilon|(k+1)4^k\|{\mathcal W}\|_{\rho,k}/{ed^2}\right]^{-1}\left[\|{\mathcal A}\|_{\rho,k}+1/{de}\right] \end{eqnarray*} \vskip 4pt\noindent This proves assertions (1) and (2). \newline By Remark 2.9, we have: \begin{eqnarray*} && {\mathcal S}^0_\varepsilon(x,\xi;\hbar)|_{\hbar=0}={\mathcal L}_\omega+{\mathcal B}_\varepsilon(\xi,x;\hbar)|_{\hbar=0}= \\ && \sum_{k=0}^\infty \frac{(\varepsilon)^k}{k!}\{{\mathcal W}_0,\{{\mathcal W},\ldots,\{{\mathcal W}_0,{{\mathcal L}+{\mathcal A}}\}\ldots\}=e^{\varepsilon {\mathcal L}_{{\mathcal W}_0}}({\mathcal L}_\omega+{\mathcal A}) \end{eqnarray*} where ${\mathcal L}_{{\mathcal W}_0}\F=\{{\mathcal W},\F\}$ denote the Lie derivative with respect to the Hamiltonian flow generated by ${\mathcal W}_0$. Now, by Taylor's theorem $$ e^{\varepsilon {\mathcal L}_{{\mathcal W}_0}}({\mathcal L}_\omega+{\mathcal A})=({\mathcal L}_\omega+{\mathcal A})\circ \phi^\varepsilon_{{\mathcal W}_0}(x,\xi) $$ and this concludes the proof of the Theorem. \end{proof} \begin{remark} Let $W$ be a solution of the homological equation (\ref{heq}). Then the explicit expression of ${\mathcal W}_0$ clearly is: $$ {\mathcal W}_0=\frac1{\F^\prime({\mathcal L}_\omega(\xi))}\sum_{q\in\Z^\ell}\frac{{\mathcal V}_q (\xi)}{\langle \omega,q\rangle}e^{i\langle q,x\rangle} $$ and $$ e^{\varepsilon {\mathcal L}_{{\mathcal W}_0}}(\F({\mathcal L}_\omega)+\varepsilon{\mathcal A})=\F(L_\omega)+\varepsilon {\mathcal N}_{0,\varepsilon}({\mathcal L}_\omega)+O(\varepsilon^2). $$ \end{remark} Thus ${\mathcal W}_0$ coincides with the expression obtained by first order canonical perturbation theory. \vskip 1cm\noindent \section{Homological equation: solution and estimate} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \setcounter{equation}{0}% \setcounter{theorem}{0 Let us briefly recall the well known KAM iteration in the quantum context. \par The first step consists in looking for an $L^2(\T^l)$-unitary map $U_{0,\varepsilon}=e^{i\varepsilon W_0/\hbar}$, $W_0=W_0^\ast$, such that $$ S_{0,\varepsilon}:=U_{0,\varepsilon}(L_\omega+\varepsilon V_0)U_{0,\varepsilon}^\ast=\F_{1,\varepsilon}(L_\omega)+\varepsilon^2 V_{1,\varepsilon}, \quad V_0:=V, \quad \F_{1,\varepsilon}(L_\omega)=L_\omega+\varepsilon N_0(L_\omega). $$ Expanding to first order near $\varepsilon=0$ we get that the two unknowns $W_0$ and $N_0$ must solve the equation $$ \frac{[L_\omega,W_0]}{i\hbar}+V=N_0 $$ $V_{1,\varepsilon}$ is the second order remainder of the expansion. Iterating the procedure: \vskip 3pt\noindent \begin{eqnarray*} && U_{\ell,\varepsilon}:= e^{i\varepsilon^{2^\ell}W_\ell/\hbar}; \\ && S_{\ell,\varepsilon}:=U_{\ell.\varepsilon}(\F_{\ell,\varepsilon}(L_\omega)+\varepsilon^{2^{\ell}} V_{\ell,\varepsilon})U_{\ell,\varepsilon}^\ast= = \F_{\ell+1,\varepsilon}(L_\omega)+\varepsilon^{2^{\ell+1}} V_{\ell+1}(\varepsilon), \\ && \frac{[\F_{\ell,\varepsilon}(L_\omega),W_{\ell,\varepsilon}]}{i\hbar}+V_{\ell,\varepsilon}=N_{\ell,\varepsilon} \end{eqnarray*} \vskip 3pt\noindent With abuse of notation, we denote by $\F_{\ell,\varepsilon}({\mathcal L}_\omega,\hbar)$, ${\mathcal N}_{\ell,\varepsilon}({\mathcal L}_\omega,\hbar)$, ${\mathcal V}_{\ell,\varepsilon}({\mathcal L}_\omega,\hbar)$ the corresponding symbols. \newline The KAM iteration procedure requires therefore the solution in $J_k(\rho)$ of the operator homological equation in the two unknowns $W$ and $M$ (here we have dropped the dependence on $\ell$ and $\varepsilon$, and changed the notation from $N$ to $M$ to avoid confusion with what follows): \begin{equation} \label{heq} \frac{[\F(L_\omega),W]}{i\hbar}+V=M(L_\omega) \end{equation} with the requirement $M(L_\omega)\in J_k(\rho)$; the solution has to be expressed in terms of the corresponding Weyl symbols $({\mathcal L}_\omega, {\mathcal W}, {\mathcal V}, {\mathcal M})\in\J_k(\rho)$ in order to obtain estimates uniform with respect to $\hbar$. Moreover, the remainder has to be estimated in terms of the estimates for $W, M$. \newline Equation (\ref{heq}), written for the symbols, becomes \begin{equation} \label{Mo} \{\F({\mathcal L}_\omega(\xi),\hbar),{\mathcal W}(x,\xi;\hbar)\}_M+{\mathcal V}(x,L_\omega(\xi);\hbar)={\mathcal M}({\mathcal L}_\omega(\xi),\hbar) \end{equation} \subsection{The homological equation}\label{hom} We will construct and estimate the solution of (\ref{heq}), actually solving (\ref{Mo}) and estimating its solution, under the following assumptions on $\F$: \vskip 5pt\noindent {\textbf{Condition (1)}} {\it $(u,\hbar)\mapsto \F(u;\hbar)\in C^\infty({\mathcal R}\times [0,1]; {\mathcal R})$;} \vskip 4pt\noindent {\textbf{Condition (2)}} $$ \inf_{(u,\hbar)\in{\mathcal R}\times [0,1]}\partial_u\F(u;\hbar)>0; \quad \lim_{|u|\to \infty}\frac{|\F(u,\hbar)|}{|u|}=C>0 $$ {\it uniformly with respect to $\hbar\in [0,1]$.} \vskip 5pt\noindent {\textbf{Condition (3)}} {\it Set: \begin{equation} \label{Kappa} \K_\F(u,\eta,\hbar)=\frac{\eta}{\F(u+\eta,\hbar)-\F(u,\hbar)} \end{equation} Then there is $0<\Lambda(\F)<+\infty$ such that} \begin{equation} \label{KB} \sup_{u\in{\mathcal R},\eta\in{\mathcal R},\hbar\in [0,1]}\vert\K_\F(u,\eta,\hbar)\vert<\Lambda. \end{equation} \vskip 5pt\noindent The first result deals with the identification of the operators $W$ and $M$ through the determination of their matrix elements and corresponding symbols ${\mathcal W}$ and ${\mathcal M}$. \begin{proposition} \label{WN} Let $V\in J(\rho)$, $\rho>0$, and let $W$ and $M$ be the minimal closed operators in $L^2(\T^n)$ generated by the infinite matrices \vskip 5pt\noindent \begin{equation} \label{sheq1} \langle e_m,We_{m+q}\rangle =\frac{i\hbar\langle e_m,Ve_{m+q}\rangle}{\F(\langle \omega,m\rangle\hbar,\hbar)-\F(\langle\omega,(m+q)\rangle\hbar,\hbar)},\quad q\neq 0, \quad \langle e_m,We_m\rangle=0 \end{equation} \vskip 6pt\noindent \begin{equation} \langle e_m,Me_m\rangle=\langle e_m,Ve_m\rangle,\qquad \langle e_m,Me_{m+q}\rangle=0, \quad q\neq 0 \label{sheq2} \end{equation} on the eigenvector basis $e_m: m\in\Z^l$ of $L_\omega$. Then: \begin{enumerate} \item $W$ and $M$ are continuous and solve the homological equation (\ref{heq}); \item The symbols ${\mathcal W}(x,\xi;\hbar)$ and ${\mathcal M}(\xi,\hbar)$ have the expression: \begin{eqnarray} \label{defW} && {\mathcal M}(\xi;\hbar)=\overline{\widetilde{{\mathcal V}}}({\mathcal L}_\omega(\xi);\hbar);\quad {\mathcal W}({\mathcal L}_\omega(\xi),x;\hbar)=\sum_{q\in\Z^l,q\neq 0}\widetilde{{\mathcal W}}({\mathcal L}_\omega(\xi),q;\hbar)e^{i\langle q,x\rangle} \\ && \widetilde{{\mathcal W}}({\mathcal L}_\omega(\xi),q;\hbar):=\frac{i\hbar\widetilde{{\mathcal V}}({\mathcal L}_\omega(\xi);q;\hbar)}{\F({\mathcal L}_\omega(\xi);\hbar)-\F({\mathcal L}_\omega(\xi+q),\hbar)}, \;q\neq 0; \quad \overline{\widetilde{{\mathcal W}}}({\mathcal L}_\omega(\xi);\hbar)=0. \end{eqnarray} \vskip 4pt\noindent Here the series in (\ref{defW}) is $\|\cdot\|_\rho$ convergent; $\overline{\widetilde{{\mathcal V}}}({\mathcal L}_\omega(\xi);\hbar)$ is the $0$-th coefficient in the Fourier expansion of ${\mathcal V}({\mathcal L}_\omega(\xi),x,\hbar)$: $$ {\mathcal V}({\mathcal L}_\omega(\xi),x,\hbar)=\sum_{q\in\Z^l}\,{\widetilde{{\mathcal V}}}({\mathcal L}_\omega(\xi),q;\hbar)e^{i\langle q,x\rangle}. $$ \end{enumerate} \end{proposition} \begin{proof} Writing the homological equation in the eigenvector basis $e_m: m\in\Z^l$ we get \vskip 7pt\noindent \begin{equation} \label{mheq} \langle e_m,\frac{[\F(L_\omega),W]}{i\hbar}e_n\rangle+\langle e_m,Ve_n\rangle=\langle e_m,M(L_\omega)e_n\rangle\delta_{m,n} \end{equation} \vskip 5pt\noindent which immediately yields (\ref{sheq1},\ref{sheq2}) setting $n=m+q$. As far the continuity is concerned, we have: \vskip 6pt\noindent $$ \frac{i\hbar}{\F(\langle \omega,m\rangle\hbar,\hbar)-\F(\langle\omega,(m+q)\rangle\hbar,\hbar)}=\langle\omega,q\rangle^{-1}\frac{\eta} {\F(\langle \omega,m\rangle\hbar,\hbar)-\F(\langle\omega,m\rangle\hbar+\eta,\hbar)},\quad \eta:=\langle q,\omega\rangle\hbar. $$ \vskip 7pt\noindent and therefore, by (\ref{KB}) and the diophantine condition: $$ |\langle e_m,We_{m+q}\rangle|\leq \gamma |q|^\tau\Lambda |\langle e_m,Ve_{m+q}\rangle|. $$ The assertion now follows by Corollary \ref{corA}, which also entails the $\|\cdot\|_\rho$ convergence of the series (\ref{defW}) because ${\mathcal V}\in \J_\rho$. Finally, again by Corollary \ref{corA}, formulae \eqref{elm4}, \eqref{elm5}, we can write $$ \langle e_m,We_{m+q}\rangle= \widetilde{{\mathcal W}}(\langle \omega,(m+q/2)\rangle\hbar,q,\hbar); \quad \langle e_m,Me_m\rangle={\mathcal M}(\langle\omega,m\rangle\hbar,\hbar)=\widetilde{{\mathcal V}}({\mathcal L}_\omega(m\hbar),0,\hbar) $$ and this concludes the proof of the Proposition. \end{proof} The basic example of $\F$ is the following one. Let: \begin{eqnarray} \label{FNl} && \bullet \qquad \F_{\ell}(u,\varepsilon;\hbar)=u+\Phi_{\ell}(u,\varepsilon,\hbar),\qquad \ell=0,1,2,\ldots \\ && \bullet \qquad \Phi_{\ell}(\varepsilon,\hbar):=\varepsilon{\mathcal N}_{0}(u;\varepsilon,\hbar)+\varepsilon^2{\mathcal N}_{1}(u;\varepsilon,\hbar)+\ldots+\varepsilon_{\ell}{\mathcal N}_{\ell}(u,\varepsilon,\hbar), \quad \varepsilon_{j}:=\varepsilon^{2^{j}}. \end{eqnarray} where we assume holomorphy of $\varepsilon\mapsto {\mathcal N}_s(u,\varepsilon,\hbar)$ in the unit disk and the existence of $\rho_0>\rho_1>\ldots>\rho_{\ell}>0$ such that: \begin{itemize} \item[($N_s$)] $\displaystyle\qquad\qquad\qquad\qquad\quad \max_{|\varepsilon|\leq 1} \vert{\mathcal N}\vert_{\rho_s}<\infty, \qquad .$ \end{itemize} Denote, for $\zeta\in{\mathcal R}$: \vskip 6pt\noindent \begin{equation} \label{gl} g_\ell(u,\zeta;\varepsilon,\hbar):=\frac{\Phi_{\ell-1}(u+\zeta;\varepsilon,\hbar)-\Phi_{\ell-1}(u;\varepsilon,\hbar)}{\zeta} \end{equation} \vskip 6pt\noindent Let furthermore: \begin{eqnarray} \label{ddll} && 0<d_{\ell}<\ldots<d_0<\rho_0, \quad 0<\rho_0:=\rho; \\ && \nonumber \rho_{s+1}=\rho_s-d_{s}>0, \;s=0,\ldots,\ell-1 \\ && \delta_\ell:=\sum_{s=0}^{\ell-1}d_\ell <\rho \end{eqnarray} and set, for $j=1,2,\ldots$: \begin{eqnarray} \label{theta} && \theta_{\ell,k}({\mathcal N},\varepsilon):=\sum_{s=0}^{\ell-1}\frac{|\varepsilon_s|\,|{\mathcal N}_s|_{\rho_s,k}}{ed_{s}}, \qquad \theta_{\ell}({\mathcal N},\varepsilon):=\theta_{\ell,0}({\mathcal N},\varepsilon). \end{eqnarray} By Remark 2.4 we have \begin{eqnarray} \label{Theta} && \theta_{\ell,k}({\mathcal N},\varepsilon)=\sum_{s=0}^{\ell-1}\frac{|\varepsilon_s|\,\|{\mathcal N}_s\|_{\rho_s,k}}{ed_{s}} \end{eqnarray} \begin{lemma} \label{propN} In the above assumptions: \begin{enumerate} \item For any $R>0$ the function $\zeta\mapsto g_\ell(u,\zeta,\varepsilon,\hbar)$ is holomorphic in $\{\zeta\;|\,\,|\zeta|<R\,|\,|\Im\zeta|<\rho\}$, uniformly on compacts with respect to $(u,\varepsilon,\hbar)\in{\mathcal R}\times{\mathcal R}\times [0,1]$; \vskip 5pt\noindent \item For any $n\in\Bbb N\cup\{0\}$: \begin{equation} \label{convN} \sup_{{\zeta\in{\mathcal R}}}\,|[g(u,\zeta,\varepsilon,\hbar)]^n|_{\rho_\ell}\leq [\theta_{\ell}({\mathcal N},\varepsilon)]^{n} \end{equation} \item Let: \begin{equation} \label{epbar} \max_{|\varepsilon|\leq L}{\theta_{\ell}({\mathcal N},\varepsilon)}<1, \qquad L>0. \end{equation} Then: \begin{equation} \label{stimaKg} \sup_{\zeta\in{\mathcal R};u\in{\mathcal R}}|\K_\F(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell}\leq \frac{1}{|\zeta|}\cdot \frac1{1-\theta_{\ell}({\mathcal N},\varepsilon)} \end{equation} \item \begin{eqnarray} && \label{stimadgu} \sup_{\zeta\in{\mathcal R}}\,|\partial^j_u g(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell}\leq \theta_{\ell,j}({\mathcal N},\varepsilon) \\ && \label{stimadgeta} \sup_{\zeta\in{\mathcal R}}\,|\partial^j_\zeta g(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell}\leq \theta_{\ell,j}({\mathcal N},\varepsilon) \\ && \label{stimadgh} \sup_{\zeta\in{\mathcal R}}\,|\partial^j_\hbar g(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell }\leq \theta_{\ell,j}({\mathcal N},\varepsilon). \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} The holomorphy is obvious given the holomorphy of ${\mathcal N}_s(u;\varepsilon,\hbar)$. To prove the estimate (\ref{convN}), denoting $\widehat{{\mathcal N}}_s(p,\varepsilon,\hbar)$ the Fourier transform of ${\mathcal N}_s(\xi,\varepsilon,\hbar)$ we write \vskip 4pt\noindent \begin{eqnarray} && \label{gF} g_\ell(u,\zeta,\varepsilon,\hbar)=\frac{1}{\zeta}\sum_{s=0}^{\ell-1}\,\varepsilon_s\,\int_{\mathcal R}\widehat{{\mathcal N}}_\ell(p,\varepsilon,\hbar)(e^{i\zeta p}-1)e^{iu p}\,dp= \\ \nonumber && \frac{2i}{\zeta}\sum_{s=0}^{\ell-1}\,\varepsilon_s\,\int_{\mathcal R}\widehat{{\mathcal N}}_\ell(p,\varepsilon,\hbar)e^{ip(u+\zeta/2)}\sin{\zeta p/2}\,dp\qquad\quad \end{eqnarray} which entails: \begin{eqnarray*} && \sup_{{\zeta\in{\mathcal R}}}|g_\ell(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell}=\sup_{{\zeta\in{\mathcal R}}}\int_{\mathcal R}\,|\widehat{g}_\ell (p,\zeta,\varepsilon,\hbar)|e^{\rho_\ell |p|}\,dp \\ && \leq \max_{\hbar\in [0,1]}\sum_{s=0}^{\ell-1}|\varepsilon_s|\, \int_{\mathcal R}|\widehat{{\mathcal N}}_s(p,\varepsilon,\hbar) p|e^{(\rho_s-d_s) |p|}\,dp \leq \frac1{e}\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\,\frac{|{\mathcal N}_s|_{\rho_s}}{d_s}= \theta_\ell({\mathcal N},\varepsilon,1)\,\qquad 0<d_s<\rho_s. \end{eqnarray*} \vskip 4pt\noindent Hence Assertion (3) of Proposition \ref{stimeMo}, considered for $k=0$, immediately yields (\ref{convN}). Finally, if $g_\ell$ is defined by (\ref{gl}), then: $$ \K_\F(u,\zeta,\varepsilon,\hbar)=\frac{1}{\zeta}\frac{1}{1+ g_\ell(u,\zeta,\varepsilon,\hbar)} $$ and the estimate (\ref{stimaKg}) follows from (\ref{convN}) which makes possible the expansion into the geome\-trical series \begin{equation} \label{sgg} \frac{1}{1+g_\ell(u,\zeta,\varepsilon,\hbar)}=\sum_{n=0}^\infty\,(-1)^n\,g_\ell(u,\zeta,\varepsilon,\hbar)^n \end{equation} \vskip 5pt\noindent convergent in the $\theta_{\ell}({\mathcal N},\varepsilon)$ norm. To see (\ref{stimadgu}), remark that (\ref{gF}) yields: \vskip 5pt\noindent \begin{eqnarray*} && \partial^j_u g_\ell(u,\zeta,\varepsilon,\hbar)=\frac{2}{\zeta}\sum_{s=0}^{\ell-1}\,\varepsilon_s\,\int_{\mathcal R}\widehat{{\mathcal N}}_\ell(p,\varepsilon,\hbar)(ip)^j e^{ip(u+\zeta)/2}\sin{\zeta p/2}\,dp. \end{eqnarray*} Therefore: \begin{eqnarray*} && \sup_{{\zeta\in{\mathcal R}}}\,| \partial^j_u g_\ell(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell}\leq \sup_{{\zeta\in{\mathcal R}}}\,\max_{\hbar\in [0,1]} 2\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\int_{\mathcal R}|\widehat{{\mathcal N}}_s(p,\varepsilon,\hbar)||p|^j|\sin{\zeta p/2}|/{\zeta}|e^{\rho_\ell|p|}\,dp \\ && \leq \sup_{{\zeta\in{\mathcal R}}}\,\max_{\hbar\in [0,1]} 2\sum_{s=}^{\ell-1}\,|\varepsilon_s|\int_{\mathcal R}|\widehat{{\mathcal N}}_s(p,\varepsilon,\hbar)||p|^j|\sin{\zeta p/2}|/{\zeta}|e^{(\rho_s-d_s)|p|}\,dp \\ && \leq \sup_{p\in{\mathcal R}}\,[|p|\,\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\,e^{-d_s |p|}]\max_{\hbar\in [0,1]}\int_{\mathcal R}\,|p|^j\widehat{{\mathcal N}}(p,\varepsilon,\hbar)e^{\rho_s|p|}\,dp \\ && \leq \frac1{e}\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\frac{|{\mathcal N}_s|_{\rho_s,j}}{d_s}\leq \theta_{\ell,j}({\mathcal N},\varepsilon) \end{eqnarray*} (\ref{stimadgeta}) is proved by exactly the same argument. Finally, to show (\ref{stimadgh}) we write: \begin{eqnarray*} && \sup_{{\zeta\in{\mathcal R}}}| \partial^j_\hbar g_\ell(u,\zeta,\varepsilon,\hbar)|_{\rho_\ell} \leq \sup_{{\zeta\in{\mathcal R}}}\max_{\hbar\in [0,1]} 2\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\int_{\mathcal R}|\partial^j_\hbar\widehat{{\mathcal N}_s}(p,\varepsilon,\hbar)|\cdot|\sin{\zeta p/2}|/{\zeta}|e^{\rho_\ell |p|}\,dp \\ && \leq \max_{\hbar\in [0,1]}\sum_{s=0}^{\ell-1}\,|\varepsilon_s|\int_{\mathcal R}|\partial^j_\hbar\widehat{{\mathcal N}}(p,\varepsilon,\hbar)|e^{(\rho_s-d_s)|p|}\,dp \leq \theta_{\ell}({\mathcal N},\varepsilon) \end{eqnarray*} \vskip 4pt\noindent This proves the Lemma. \end{proof} By \textbf{Condition (1)} the operator family $\hbar \mapsto \F(L_\omega;\varepsilon,\hbar)$, defined by the spectral theorem, is self-adjoint in $L^2(\T^l)$; by \textbf{Condition (2)} $D(\F(L_\omega))=H^1(\T^l)$. Since $L_\omega$ is a first order operator with symbol ${\mathcal L}_\omega$, the symbol of $\F(L_\omega;\varepsilon,\hbar)$ is $\F({\mathcal L}_\omega(\xi),\varepsilon,\hbar)$. We can now state the main result of this section. Let $\F_\ell(x,\varepsilon,\hbar)$ be as in Lemma \ref{propN}, which entails the validity of \textbf{Conditions (1), (2), (3)}. \begin{theorem} \label{homo} \label{homeq} Let $V_\ell\in J_k(\rho_\ell)$, $\ell=0,1\ldots$, $V_1\equiv V$ for some $\rho_\ell> \rho_{\ell+1}>0$, $k=0,1,\ldots$. Let ${\mathcal V}_\ell({\mathcal L}_\omega(\xi),x;\varepsilon,\hbar)\in\J_k(\rho)$ be its symbol. Then for any $\displaystyle \theta_{\ell}({\mathcal N},\varepsilon)<1$ the homological equation (\ref{heq}), rewritten as \vskip 4pt\noindent \begin{equation} \label{heqell} \frac{[\F_\ell(L_\omega),W_\ell]}{i\hbar}+V_{\ell}=N_\ell(L_\omega,\varepsilon) \end{equation} \vskip 6pt\noindent \begin{equation} \label{Moell} \{\F_\ell({\mathcal L}_\omega(\xi),\varepsilon,\hbar),{\mathcal W}_\ell(x,\xi;\varepsilon,\hbar)\}_M+{\mathcal V}_{\ell}(x,L_\omega(\xi);\varepsilon,\hbar)={\mathcal N}_\ell({\mathcal L}_\omega(\xi),\varepsilon,\hbar) \end{equation} \vskip 4pt\noindent admits a unique solution $(W_\ell,N_\ell)$ of Weyl symbols ${\mathcal W}_\ell({\mathcal L}_\omega(\xi),x;\varepsilon,\hbar)$, ${\mathcal N}_\ell({\mathcal L}_\omega(\xi),\varepsilon,\hbar)$ such that \begin{enumerate} \item $W_\ell=W^\ast_\ell\in J_k(\rho_\ell)$, with: \begin{eqnarray} && \label{Thm5.1} \|W_\ell\|_{\rho_{\ell+1},k}=\|{\mathcal W}\|_{\rho_{\ell+1},k}\leq A(\ell,k,\varepsilon)\|{\mathcal V}_\ell\|_{\rho_{\ell},k} \\ \nonumber && {} \\ && \label{Adrk} A(\ell,k,\varepsilon)=\gamma \frac{\tau^\tau}{(ed_\ell)^\tau}\left[1+\frac{2^{k+1}(k+1)^{2(k+1)}k^k}{(e\delta_\ell)^{k}[1-\theta_\ell({\mathcal N},\varepsilon)]^{k+1}} \theta_{\ell,k}^{k+1}\right]. \end{eqnarray} \vskip 6pt\noindent \item ${\mathcal N}_\ell=\overline{{\mathcal V}}_\ell$; therefore ${\mathcal N}_\ell\in J_k(\rho_\ell)$ and $ \|{\mathcal N} \|_{\rho_\ell,k} \leq \|{\mathcal V}_\ell\|_{\rho_\ell,k} .$ \end{enumerate} \end{theorem} \begin{proof} The proof of (2) is obvious and follows from the definition of the norms $\Vert\cdot\Vert_\rho$ and $\Vert\cdot\Vert_{\rho,k}$. The self-adjointess property $W=W^*$ is implied by the construction itself, which makes $W$ symmetric and bounded. Consider ${\mathcal W}_\ell$ as defined by (\ref{defW}). Under the present assumptions, by Lemma \ref{propN} we have: \vskip 8pt\noindent $$ \widetilde{{\mathcal W}}_\ell({\mathcal L}_\omega(\xi),q;\varepsilon,\hbar):=\frac1{\langle\omega,q\rangle}\frac{i\hbar\widetilde{{\mathcal V}}_\ell({\mathcal L}_\omega(\xi);q;\varepsilon,\hbar)}{1+ g_\ell({\mathcal L}_\omega(\xi);\langle\omega,q\rangle\hbar,\varepsilon,\hbar)}, \quad q\neq 0; \quad \widetilde{{\mathcal W}}_\ell(\cdot,0;\hbar)=0. $$ \vskip 8pt\noindent By the $\|\cdot\|_{\rho_\ell}$-convergence of the series (\ref{sgg}) we can write \begin{eqnarray} && \partial^\gamma_\hbar \widetilde{{\mathcal W}}_\ell({\mathcal L}_\omega(\xi),q;\varepsilon,\hbar)=\sum_{n=0}^\infty\,(-\varepsilon)^n\,\partial^\gamma_\hbar \widetilde{{\mathcal W}}_{\ell,n}({\mathcal L}_\omega(\xi),q;\varepsilon,\hbar), \\ && \widetilde{{\mathcal W}}_{\ell,n}({\mathcal L}_\omega(\xi),q;\varepsilon,\hbar)=\frac1{\langle\omega,q\rangle}\widetilde{{\mathcal V}}_\ell({\mathcal L}_\omega(\xi);q;\varepsilon,\hbar)[g_\ell({\mathcal L}_\omega(\xi);\langle\omega,q\rangle\hbar,\varepsilon,\hbar)]^n \\ \label{derivateWn} && \partial^\gamma _\hbar\widetilde{{\mathcal W}}_{\ell,n}({\mathcal L}_\omega(\xi),q;\varepsilon,\hbar)= \\ \nonumber &&\sum_{j=0}^\gamma\,\binom{\gamma}{j}\,\partial^{\gamma-j}_\hbar \widetilde{{\mathcal V}}_\ell({\mathcal L}_\omega(\xi);q;\varepsilon,\hbar)D^j_\hbar [g_\ell({\mathcal L}_\omega(\xi);\langle\omega,q\rangle\hbar,\varepsilon,\hbar)]^n \end{eqnarray} \vskip 4pt\noindent where $D_\hbar$ denotes the total derivative with respect to $\hbar$. We need the following preliminary result. \begin{lemma} \label{derivateg} Let $\zeta(\hbar):=\langle\omega,q\rangle\hbar$. Then: \begin{enumerate} \item \begin{eqnarray} \label{stimadghh} |D^j_\hbar g_\ell({\mathcal L}_\omega(\xi),\zeta(\hbar),\varepsilon,\hbar)|_{\rho_\ell} \leq (j+1) ({2|q|})^j \theta_{\ell,j}({\mathcal N},\varepsilon)^2 \end{eqnarray} \item \begin{eqnarray} \label{stimadgjn} |D^j_\hbar [g_\ell({\mathcal L}_\omega(\xi);\zeta(\hbar),\varepsilon,\hbar)]^n|_{\rho_\ell}\leq 2n^j (\theta_\ell({\mathcal N},\varepsilon))^{n-j} [2(j+1)|q|]^j\theta_{\ell,j}({\mathcal N},\varepsilon)^{2j}. \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} The expression of total derivative $D_\hbar g$ is: \begin{equation} \label{Dom} D_\hbar g(\cdot;\langle\omega,q\rangle\hbar,\varepsilon,\hbar)=(\langle\omega,q\rangle\ \frac{\partial}{\partial\zeta}+\frac{\partial}{\partial\hbar})\left.g_\ell(\cdot;\zeta,\varepsilon,\hbar)\right|_{\zeta=\langle\omega,q\rangle\hbar} \end{equation} By Leibnitz's formula we then have: \begin{equation} D^j_\hbar g_\ell(\cdot;\langle\omega,q\rangle\hbar,\varepsilon,\hbar)=\sum_{i=0}^j\,\binom{j}{i}\langle\omega,q\rangle^{j-i}\frac{\partial^{j-i}g_\ell}{\partial\zeta^{j-i}}\frac{\partial^i g_\ell}{\partial\hbar^{i}} \end{equation} Apply now (\ref{simple}) with $k=0$, (\ref{stimadgu}) and (\ref{stimadgh}). We get: \vskip 5pt\noindent \begin{eqnarray*} \left\vert\frac{\partial^{j-i}g_\ell}{\partial\zeta^{j-i}}\frac{\partial^i g_\ell}{\partial\hbar^{i}}\right\vert_{\rho_\ell}\leq (j+1)2^j \theta_{\ell,j}({\mathcal N},\varepsilon)^2 \end{eqnarray*} whence, since $|\omega|\leq 1$: \begin{eqnarray} \label{stimaDjg} \left\vert\frac{D^jg_\ell}{D\hbar^j}\right\vert_{\rho_\ell} \leq (j+1)(2)^j{|q|^j}\theta_{\ell,j}({\mathcal N},\varepsilon)^2 \end{eqnarray} This proves Assertion (1). To prove Assertion (2), let us first note that \begin{equation} D^j_\hbar [g_\ell({\mathcal L}_\omega(\xi);\langle\omega,q\rangle\hbar,\varepsilon,\hbar)]^n=P_{n,j}\left(g_\ell,\frac{Dg_\ell}{D\hbar},\ldots,\frac{D^jg_\ell}{D\hbar^j}\right). \end{equation} \vskip 5pt\noindent where $P_{n,j}(x_1,\ldots,x_j)$ is a homogeneous polynomial of degree $n$ with $n^j$ terms. Explicitly: $$ P_{n,j}\left(g_\ell,\frac{Dg_\ell}{D\hbar},\ldots,\frac{D^jg_\ell}{D\hbar^j}\right)=\sum_{j=1}^n\,{g_\ell}^{n-j}\,\prod_{{k=1}\atop {j_1+\ldots+j_k=j}}^j \frac{D^{j_k}g_\ell}{D\hbar^{j_k}}. $$ Now (\ref{stimadghh}), (\ref{stimaDjg}) and Proposition \ref{stimeMo} (3) entail: \begin{eqnarray*} && |D^j_\hbar [g_\ell({\mathcal L}_\omega(\xi);\langle\omega,q\rangle\hbar,\varepsilon,\hbar)]^n|_{\rho_\ell}\leq n^j|g|_{\rho_\ell}^{n-j} \prod_{{k=1}\atop {j_1+\ldots+j_k=j}}^j 2(j_k+1)\left({2|q|}\right)^{j_k}\theta_{\ell,j_k}({\mathcal N},\varepsilon)^2 \\ && \leq 2n^j (\theta_\ell({\mathcal N},\varepsilon))^{n-j} [2(j+1)|q|]^j\theta_{\ell,j}({\mathcal N},\varepsilon)^{2j}. \end{eqnarray*} This concludes the proof of the Lemma. \end{proof} \noindent To conclude the proof of the theorem, we must estimate the $\|\cdot\|_{\rho_{\ell+1},k}$ norm of the derivatives $\displaystyle \partial^\gamma _\hbar{\mathcal W}_{\ell,n}({\mathcal L}_\omega(\xi),x;\varepsilon,\hbar)$. Obviously: \begin{equation} \label{serieW} \|{\mathcal W}_\ell(\xi,x;\varepsilon,\hbar)\|_{\rho_\ell+1,k}\leq \sum_{n=0}^\infty\,\|{\mathcal W}_{\ell,n}(\xi,x;\varepsilon,\hbar)\|_{\rho_{\ell+1,k}}. \end{equation} \vskip 4pt\noindent For $n=0$: \begin{eqnarray*} && \|{\mathcal W}_{\ell,0}(\xi,x;\varepsilon,\hbar)\|_{\rho_{\ell+1,k}}\leq \gamma\sum_{\gamma=0}^k\int_{{\mathcal R}\times{\mathcal R}^l}|\partial^\gamma_\hbar\widehat{{\mathcal W}}_{\ell,0}(p,s;\cdot)||s|^{\tau}\mu_{k-\gamma}(p\omega,s)\,e^{\rho_{\ell+1} (|p|+|s|)}\,d\lambda(p,s) \\ && \leq \gamma\sum_{\gamma=0}^k\int_{{\mathcal R}\times{\mathcal R}^l}|\partial^\gamma_\hbar\widehat{{\mathcal V}}_{\ell,0}(p,s;\cdot)||s|^{\tau}\mu_{k-\gamma}(p\omega,s)\,e^{\rho_{\ell+1} (|p|+|s|)}\,d\lambda(p,s)\leq \gamma\frac{\tau^\tau}{(ed_\ell)^\tau}\|{\mathcal V}_\ell\|{\rho_{\ell,k}} \end{eqnarray*} where the inequality follows again by the standard majorization \vskip 6pt\noindent $$ e^{\rho_{\ell+1} (|p|+|s|)}=e^{\rho_{\ell} (|p|+|s|)}e^{-d_\ell(|p|+|s|)}, \quad \sup_{s\in{\mathcal R}^l}[|s|^\tau e^{-d_\ell |s|}]\leq \gamma\frac{\tau^\tau}{(ed_\ell)^\tau} $$ \vskip 4pt\noindent on account of the small denominator estimate (\ref{DC}). For $n>0$ we can write, on account of (\ref{pm1},\ref{pm2}): \begin{eqnarray*} && \|{\mathcal W}_{\ell,n}(\xi,x;\cdot)\|_{\rho_{\ell+1},k}=\sum_{\gamma=0}^k\int_{{\mathcal R}\times{\mathcal R}^l}|\partial^\gamma_\hbar\widehat{{\mathcal W}}_{\ell,n}(p,s;\cdot)||s|^{\tau}\mu_{k-\gamma}(p\omega,s)\,e^{\rho_{\ell+1} (|p|+|s|)}\,d\lambda(p,s)\leq \\ && \leq \gamma\frac{\tau^\tau}{(ed_\ell)^\tau}\sum_{\gamma=0}^k\sum_{j=0}^\gamma \,\binom{\gamma}{j}\,\int_{{\mathcal R}^l}{\mathcal Q}(s,\cdot)e^{\rho_\ell |s|}\,d\nu(s) \end{eqnarray*} where \begin{eqnarray*} {\mathcal Q}(s,\cdot):=\int_{\mathcal R}|[\partial^{\gamma-j}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\cdot)]\ast [D^j_\hbar \widehat{g}^{\,\ast_n}_\ell(p;\langle\omega,s\rangle\hbar,\cdot)] \mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell |p|}\,dp \end{eqnarray*} Here $\ast$ denotes convolution with respect only to the $p$ variable, and $\widehat{g}^{\,\ast_i n}_\ell(p,\zeta,\cdot)$ denotes the $n-$th convolution of $\widehat{g}_\ell$ with itself, i.e. the $p$-Fourier transform of $g^n_\ell$. Now, by Assertion (3) of Proposition (\ref{stimeMo}) and the above Lemma: \begin{eqnarray*} && \int_{{\mathcal R}^l}{\mathcal Q}(s,\cdot)e^{\rho_\ell |s|}\,d\nu(s)= \\ && =\int_{{\mathcal R}\times{\mathcal R}^l}|[\partial^{\gamma-j}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\cdot)]\ast_\xi [D^j_\hbar g^{\ast_\xi n}_\ell(p;\langle\omega,s\rangle\hbar,\cdot)] \mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell(|p|+|s|)}\,d\lambda(p,s) \\ && \leq \int_{{\mathcal R}^l}\left[\int_{{\mathcal R}}|[\partial^{\gamma-j}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\hbar)]\ast [D^j_\hbar \widehat{g}^{\,\ast_ n}(p;\langle\omega,s\rangle\hbar,\cdot)]|\mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell |p|}\,dp\right]e^{\rho_\ell |s|} \,d\nu(s) \\ && \leq 2A(j)^j\theta_\ell({\mathcal N},\varepsilon)^{n-j}\int_{{\mathcal R}^l}\int_{{\mathcal R}}|\partial^{\gamma-j}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\cdot)|\mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell |p|}|s|^{j}e^{\rho_\ell |s|} \,\,dp d\nu(s), \end{eqnarray*} with \[ A(j):= 2n (j+1) \theta_{\ell,j}({\mathcal N},\varepsilon)^{2}. \] This yields, with $\delta_\ell$ defined by (\ref{ddll}): \begin{eqnarray*} && \|{\mathcal W}_{\ell,n}(\xi,x;\cdot)\|_{\rho_\ell+1,k}\leq \gamma\frac{\tau^\tau}{(ed_\ell)^\tau}\sum_{\gamma=0}^k\int_{{\mathcal R}\times{\mathcal R}^l}|\partial^\gamma_\hbar\widehat{{\mathcal W}}_{\ell,n}(p,s;\cdot)\mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell(|p|+|s|)}\,d\lambda(p,s)\leq \\ && \leq \frac{\gamma \tau^\tau (k+1)(2A(k))^k}{(ed_\ell)^\tau}\theta_\ell({\mathcal N},\varepsilon)^{n-j}\sum_{\gamma=0}^k\int_{{\mathcal R}\times {\mathcal R}^l} |\partial^{\gamma}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\cdot)|\cdot \mu_{k-\gamma}(p\omega,s)\,e^{\rho_\ell |p|}|s|^{j}e^{\rho_\ell |s|} \,\,d\lambda(p,s) \\ && \leq \frac{\gamma \tau^\tau (k+1)(2A(k))^k}{(ed_\ell)^\tau}\frac{k^{k}}{(e\delta_\ell)^{k}}\theta_\ell({\mathcal N},\varepsilon)^{n-j}\sum_{\gamma=0}^k\int_{{\mathcal R}^l}\int_{{\mathcal R}}|\partial^{\gamma}_\hbar \widehat{{\mathcal V}}_{\ell}(p;s;\cdot)| \mu_{k-\gamma}(p\omega,s)e^{\rho |p|}e^{\rho |s|}\,d\lambda(p,s) \\ && \leq \gamma\frac{\tau^\tau}{(ed_\ell)^\tau}\frac{(k+1)k^{k}}{(e\delta_\ell)^{k}} 2(2n)^k(\theta_\ell({\mathcal N},\varepsilon))^{n-j}(k+1)^k\theta_{\ell,k}^{2k} \|{\mathcal V}_\ell\|_{\rho,k}. \end{eqnarray*} \vskip 4pt\noindent Therefore, by (\ref{serieW}): \begin{eqnarray*} && \|{{\mathcal W}}_\ell(\xi;x;\varepsilon,\hbar)\|_{\rho_{\ell+1},k} \leq \sum_{n=0}^\infty\,{{\mathcal W}}_{\ell,n} (\xi;x;\varepsilon,\hbar)\|_{\rho_{\ell+1},k} \leq \\ && \leq \gamma \frac{\tau^\tau}{(ed_\ell)^\tau}\|{\mathcal V}_\ell\|_{\rho_\ell,k}\left[1+\frac{2^{k+1}(k+1)^{k+1}k^k}{(e\delta_\ell)^{k}} \theta_{\ell,k}^{2k}\sum_{n=1}^\infty\, n^k (\theta_\ell({\mathcal N},\varepsilon))^{n-j}\right] \\ && \leq \gamma \frac{\tau^\tau}{(ed_\ell)^\tau}\|{\mathcal V}_\ell\|_{\rho_\ell,k}\left[1+\frac{2^{k+1}(k+1)^{k+1}k^k}{(e\delta_\ell)^{k}} \theta_{\ell,k}^{2k-j}\sum_{n=1}^\infty\, n^k (\theta_\ell({\mathcal N},\varepsilon))^{n}\right] \\ && \leq\gamma \frac{\tau^\tau}{(ed_\ell)^\tau}\|{\mathcal V}_\ell\|_{\rho_\ell,k}\left[1+\frac{2^{k+1}(k+1)^{2(k+1)}k^k}{(e\delta_\ell)^{k}[(1- \theta_\ell({\mathcal N},\varepsilon)^{k+1}]} \theta_{\ell,k}^{k+1}\right]. \end{eqnarray*} \vskip 4pt\noindent because $j\leq k$, and \begin{eqnarray*} && \sum_{n=1}^\infty \,n^kx^n\leq \sum_{n=1}^\infty\,(n+1)\cdots (n+k) x^n=\frac{d^k}{dx^k}\,\sum_{n=1}^\infty x^{n+k} \\ && =\frac{d^k}{dx^k}\frac{x^{k+1}}{1-x}=(k+1)!\sum_{j=0}^{k+1}\left(k+1-j\atop j\right) \frac{x^{k+1-j}}{(1-x)^j}\leq \frac{2^{k+1}(k+1)!}{(1-x)^{k+1}}. \end{eqnarray*} By the Stirling formula this estimate concludes the proof of the Theorem. \end{proof} \vskip 2pt\noindent \subsection{Towards KAM iteration}\label{towkam} Let us now prove the estimate which represents the starting point of the KAM iteration: \begin{theorem} \label{resto} Let $\F_\ell$ and $V_\ell$ be as in Theorem \ref{homeq}, and let $W_\ell$ be the solution of the homological equation (\ref{heq}) as constructed and estimated in Theorem \ref{homo}. Let (\ref{epbar}) hold and let furthermore \begin{equation} \label{condepell} |\varepsilon|<\overline{\varepsilon}_\ell, \quad \overline{\varepsilon}_\ell:=\left(\frac{d_\ell}{\|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}}\right)^{2^{-\ell}}. \end{equation} Then we have: \begin{equation} \label{resto1} e^{i\varepsilon_\ell W_\ell/\hbar}(\F_\ell(L_\omega)+\varepsilon_\ell V_\ell)e^{-i\varepsilon_\ell W_\ell/\hbar}=(\F_\ell+\varepsilon_\ell N_\ell)(L_\omega)+\varepsilon_\ell^2V_{\ell+1,\varepsilon} \end{equation} where, $\forall\,0<2d_\ell<\rho_\ell$ and $k=0,1,\ldots$: \begin{eqnarray} && \label{resto2} \|V_{\ell+1,\varepsilon}\|_{\rho_\ell-2d_\ell,k}\leq C(\ell,k,\varepsilon) \frac{\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}} {1-{|\varepsilon_\ell |}(k+1)4^k A(\ell,k,\varepsilon) \|{\mathcal V}\|_{\rho_\ell,k}/{(ed_\ell)^2}} \\ && \nonumber {} \\ \label{Cdrk} && C(\ell,k,\varepsilon):=\frac{(k+1) 4^{k}}{(ed_\ell)^2}{A(\ell,k,\varepsilon)}\left[2+|\varepsilon_{\ell} |\frac{(k+1) 4^{k}}{(ed_\ell)^2 }{A(\ell,k.\varepsilon)\|{\mathcal V}_\ell\|_{\rho_\ell,k}}{} \right] \end{eqnarray} \vskip 6pt\noindent Here $A(\ell,k,\varepsilon)$ is defined by (\ref{Adrk}). \end{theorem} \begin{remark} We will verify in the next section (Remark \ref{verifica} below) that (\ref{condepell}) is actually fulfilled for $|\varepsilon|<1/|{\mathcal V}|_\rho$. \end{remark} \begin{proof} To prove the theorem we need an auxiliary result, namely: \begin{lemma} \label{RResto4} For $\ell=0,1,\ldots$ let $\rho_\ell>0, \rho_0:=\rho$, $A\in J_k(\rho)$, $W_\ell\in J_k(\rho_\ell)$, $k=0,1,\ldots$. Let $W_\ell^\ast=W_\ell$, and define: \begin{equation} \label{resto5} A_{\varepsilon}(\hbar):=e^{i\varepsilon_\ell W_\ell/\hbar}Ae^{-i\varepsilon_\ell W/\hbar}. \end{equation} Then, for $\displaystyle |\varepsilon_\ell|< [e d^2_\ell/((k+1)4^k\|{\mathcal W}\|_{\rho_{\ell+1},k})]^{2^{-\ell}}$, and $\forall\,0<d_\ell<\rho_\ell$, $k=0,1,\ldots$: \vskip 8pt\noindent \begin{equation} \label{resto6} \|A_{\varepsilon}(\hbar)\|_{\rho_\ell-d_\ell,k}\leq \frac{\|{\mathcal A}\|_{\rho_\ell,k}}{1-|\varepsilon_\ell | (k+1)4^k \|{\mathcal W}\|_{\rho_{\ell+1},k}/(ed^2_\ell)} \end{equation} \vskip 4pt\noindent \end{lemma} \begin{proof} Since the operators $W_\ell$ and $A$ are bounded, there is ${\varepsilon}_0>0$ such that the commutator expansion for $A_{\varepsilon}(\hbar)$: \vskip 4pt\noindent $$ A_{\varepsilon}(\hbar)=\sum_{m=0}^\infty \frac{(i\varepsilon_\ell)^m}{ \hbar^m m!}[W_\ell,[W_\ell,\ldots,[W_\ell,A]\ldots] $$ \vskip 4pt\noindent is norm convergent for $|\varepsilon|<\varepsilon_0$ if $\hbar\in]0,1[$ is fixed. The corresponding expansion for the symbols is \vskip 4pt\noindent $$ {\mathcal A}_{\varepsilon}(\hbar)=\sum_{m=0}^\infty \frac{(\varepsilon_\ell)^m}{m!}\{{\mathcal W}_\ell,\{{\mathcal W},\ldots,\{{\mathcal W}_\ell,{\mathcal A}\}_M\ldots\}_M $$ \vskip 4pt\noindent Now we can apply once again Corollary \ref{multipleM}. We get: \vskip 5pt\noindent \begin{equation} \frac{1}{m!}\|\{{\mathcal W}_\ell,\{{\mathcal W}_\ell,\ldots,\{{\mathcal W}_\ell,{\mathcal A}\}_M\ldots\}_M\|_{\rho_\ell-d_\ell,k} \leq \left(\frac{(k+1)4^{k}\|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}}{ed^2_\ell}\right)^m \|{\mathcal A}\|_{\rho_\ell,k} \end{equation} \vskip 5pt \noindent Therefore: \vskip 4pt\noindent \begin{eqnarray*} \|A_{\varepsilon}(\hbar)\|_{\rho_\ell-d_\ell,k} &\leq & \|{\mathcal A}\|_{\rho_\ell,k}\sum_{m=0}^\infty |\varepsilon_\ell|^m [(k+1)4^{k}\|{\mathcal W}\|_{\rho_{\ell+1},k}/(ed^2_\ell)]^m \\ &=& \frac{\|{\mathcal A}\|_{\rho_\ell,k}}{1-|\varepsilon_\ell | (k+1)4^k \|{\mathcal W}\|_{\rho_{\ell+1},k}/(ed^2_\ell)} \end{eqnarray*} \vskip 4pt\noindent and this concludes the proof. \end{proof} $W_\ell$ solves the homological equation (\ref{heq}). Then by Theorem \ref{homo} $W_\ell=W_\ell^\ast\in J_k(\rho_\ell-d_\ell)$, $k=0,1,\ldots$; in turn, by Assertion (3) of Corollary \ref{corA} the unitary operator $\displaystyle e^{i\varepsilon_\ell W_\ell/\hbar}$ leaves $H^1(\T^l)$ invariant. Therefore the unitary image of $H_\varepsilon$ under $\displaystyle e^{i\varepsilon_\ell W/\hbar}$ is the real-holomorphic operator family in $L^2(\T^l)$ \begin{equation} \label{S} \varepsilon_\ell\mapsto S_{\varepsilon_\ell}:=e^{i\varepsilon_\ell W_\ell/\hbar}(\F_\ell(L_\omega)+\varepsilon_\ell V_\ell)e^{-i\varepsilon_\ell W/\hbar}, \quad D(S(\varepsilon_\ell))=H^1(\T^l) \end{equation} Computing its Taylor expansion at $\varepsilon_\ell=0$ with second order remainder we obtain: \begin{eqnarray}\label{lemmm} && S_{\varepsilon_\ell}u=\F_\ell(L_\omega)u+\varepsilon_\ell N_\ell(L_\omega)u+ \varepsilon_\ell^2 V_{\ell+1,\varepsilon}u, \quad u\in H^1(\T^l) \\ \nonumber && {} \\ && V_{\ell+1,\varepsilon_\ell}=\frac12\int_0^{\varepsilon_\ell} (\varepsilon_\ell -t)e^{i t W_\ell/\hbar}\left(\frac{[N_\ell,W_\ell]}{i\hbar}+\frac{[W_\ell,V_\ell]}{i\hbar}+t \frac{[W_\ell,[W_\ell,V_\ell]]}{(i\hbar)^2}\right)e^{-itW_\ell/\hbar}\,dt \end{eqnarray} \vskip 5pt\noindent To see this, first remark that $S_0=\F(L_\omega)$. Next, we compute, as equalities between continuous operators in $L^2(\T^l)$: \begin{eqnarray*} && S^\prime_{\varepsilon_\ell}=e^{i\varepsilon_\ell W/\hbar}([\F_\ell(L_\omega),W_\ell]/i\hbar +V_\ell+\varepsilon_\ell [V,W]/i\hbar)e^{-i\varepsilon_\ell W/\hbar}= \\ && e^{i\varepsilon_\ell W/\hbar}(N_\ell+\varepsilon_\ell [V_\ell,W_\ell]/i\hbar)e^{i\varepsilon_\ell W_\ell/\hbar}; \qquad S^\prime_0= N_\ell \\ && S^{\prime\prime}_{\varepsilon_\ell}=e^{i\varepsilon_\ell W_\ell/\hbar}([N_\ell,W_\ell]/i\hbar + [V_\ell,W_\ell]/i\hbar +\varepsilon_\ell [W_\ell,[W_\ell,V_\ell]]/(i\hbar)^2)e^{-i\varepsilon_\ell W_\ell/\hbar}, \end{eqnarray*} and this proves (\ref{lemmm}) by the second order Taylor's formula with remainder: $$ S_{\varepsilon_\ell}=S(0)+\varepsilon_\ell S^\prime_0+\frac12\int_0^{\varepsilon_\ell} (\varepsilon_\ell-t)S^{\prime\prime}(t),dt $$ The above formulae obviously yield \begin{equation} \label{stimar2} \| {V}_{l+1,\varepsilon_\ell}\|\leq |\varepsilon_\ell |^2 \max_{0\leq |t|\leq |\varepsilon_\ell |}\|S^{\prime\prime}(t)\| \end{equation} Set now: \begin{equation} \label{R1} R_{\ell+1,\varepsilon_\ell}:=[N_\ell,W_\ell]/i\hbar + [V_\ell,W_\ell]/i\hbar +\varepsilon_\ell [W_\ell,[W_\ell,V_\ell]]/(i\hbar)^2 \end{equation} $R_{\ell+1,\varepsilon_\ell}$ is a continuous operator in $L^2$, corresponding to the symbol \begin{equation} \label{simbR1} {\mathcal R}_{\ell+1,\varepsilon_\ell}({\mathcal L}_\omega(\xi),x;\hbar)=\{{\mathcal N}_\ell,{\mathcal W}_\ell\}_M+\{{\mathcal V}_\ell,{\mathcal W}_\ell\}_M+\varepsilon_\ell\{{\mathcal W}_\ell,\{{\mathcal W}_\ell,{\mathcal V}_\ell\}_M\}_M \end{equation} Let us estimate the three terms individually. By Theorems \ref{homo} and \ref{stimeMo} we can write, with $A(\ell,k,\varepsilon)$ given by (\ref{Adrk}): \begin{eqnarray*} && \|[N_\ell,W_\ell]/i\hbar\|_{\rho_\ell-d_\ell,k}\leq \|\{{\mathcal N}_\ell,{\mathcal W}_\ell\}_M\|_{\rho_\ell-d_\ell,k}\leq \frac{(k+1)4^k}{(ed_\ell)^2}\|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}\|{\mathcal N}_\ell\|_{\rho_\ell,k} \\ && \leq \frac{(k+1)4^k}{(ed_\ell)^2} A(\ell,k,\varepsilon)\|{\mathcal V}_\ell\|^2_{\rho_\ell,k} \\ && \|[V_\ell,W_\ell]/i\hbar\|_{\rho_\ell-d_\ell,k}\leq\|\{{\mathcal V}_\ell,{\mathcal W}_\ell\}_M\|_{\rho_\ell-d_\ell,k}\leq \frac{(k+1)4^k}{(ed_\ell)^2}\|{\mathcal V}_\ell\|_{\rho_\ell,k}\|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}\leq \\ && \leq \frac{(k+1)4^k}{(ed_\ell)^2}A(\ell,k,\varepsilon)\|{\mathcal V}_\ell\|^2_{\rho_\ell,k} \\ && \|[W_\ell,[W_\ell,V_\ell]]/(i\hbar)^2\|_{\rho_\ell-d_\ell,k}\leq \|\{{\mathcal W}_\ell,\{{\mathcal W}_\ell,{\mathcal V}_\ell\}_M\}_M\|_{\rho_\ell-d_\ell,k}\leq \frac{(k+1)^2 4^{2k}}{(ed_\ell)^4} \|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}^2 \|{\mathcal V}_\ell\|_{\rho_\ell,k} \\ && \leq \frac{(k+1)^2 4^{2k}}{(ed_\ell)^4}A(\ell,k,\varepsilon)^2\|{\mathcal V}_\ell\|_{\rho_\ell,k}^3 \end{eqnarray*} \vskip 6pt\noindent We can now apply Lemma \ref{RResto4}, which yields: \vskip 2pt\noindent \begin{eqnarray*} && \|e^{i\varepsilon_\ell W_\ell/\hbar}[N_\ell,W_\ell] e^{-i\varepsilon_\ell W_\ell/\hbar}/i\hbar\|_{\rho_\ell-d_\ell-d^\prime_\ell,k}\leq \frac{(k+1) 4^{k}}{(ed_\ell)^2}\Xi(\ell,k) \\ && \|e^{i\varepsilon_\ell W_\ell/\hbar}[V_\ell,W_\ell] e^{-i\varepsilon_\ell W_\ell/\hbar}/i\hbar\|_{\rho_\ell-d_\ell-d^\prime_\ell,k}\leq \frac{(k+1) 4^{k}}{(ed_\ell)^2 }\Xi(\ell,k) \\ && \|e^{i\varepsilon_\ell W_\ell/\hbar}[W_\ell,[W_\ell,V_\ell]] e^{-i\varepsilon_\ell W_\ell/\hbar}/(i\hbar)^2\|_{\rho_\ell-d_\ell-d^\prime_\ell,k}\leq \frac{(k+1)^2 4^{2k}}{(ed_\ell)^4 }\Xi_1(\ell,k) \end{eqnarray*} where \begin{eqnarray} && \label{Xi} \Xi(\ell,k):= A(\ell,k,\varepsilon)\cdot\frac{\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}} {1-|\varepsilon_\ell (k+1)4^k |\|{\mathcal W}\|_{\rho_{\ell+1},k}/(ed^2_\ell)} \\ && \label{Xi1} \Xi_1(\ell,k)=A(\ell,k,\varepsilon)^2\cdot \frac{\|{\mathcal V}\|^3_{\rho_\ell,k}} {1-|\varepsilon_\ell (k+1)4^k |\|{\mathcal W}\|_{\rho_{\ell+1},k}/(ed^2_\ell)} \end{eqnarray} \vskip 6pt\noindent Therefore, summing the three inequalities we get \vskip 3pt\noindent \begin{eqnarray*} && \|V_{\ell+1,\varepsilon}\|_{\rho_\ell-d_\ell-d^\prime_\ell,k} \leq \\ && \frac{(k+1) 4^{k}}{(ed_\ell)^2 }A(\ell,k,\varepsilon) \frac{\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}} {1-|\varepsilon_\ell | (k+1)4^k \|{\mathcal W}_\ell\|_{\rho_{\ell+1},k}/(ed^2_\ell)}\left[2+|\varepsilon_\ell|\frac{(k+1) 4^{k}}{(ed_\ell)^2 }A(\ell,k,\varepsilon){\|{\mathcal V}_\ell\|_{\rho_\ell,k}} \right] \end{eqnarray*} \vskip 8pt\noindent If we choose $d^\prime_\ell=d_\ell$ this is (\ref{resto2}) on account of Theorem \ref{homo}. This concludes the proof of Theorem \ref{resto}. \end{proof} \vskip 1cm\noindent \section{Recursive estimates}\label{recesti} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \def{\mathcal P}{{\mathcal P}} \setcounter{equation}{0}% \setcounter{theorem}{0 Consider the $\ell$-th step of the KAM iteration. Summing up the results of the preceding Section we can write: \begin{eqnarray*} && \bullet\ S_{\ell,\varepsilon}:=e^{i\varepsilon_\ell W_\ell/\hbar}\cdots e^{i\varepsilon_2 W_1/\hbar}e^{i\varepsilon W_0/\hbar}(\F(L_\omega)+\varepsilon V)e^{-i\varepsilon W_0/\hbar}e^{-i\varepsilon_2 W_1/\hbar}\cdots e^{-i\varepsilon_\ell W_\ell/\hbar} \\ && = e^{i\varepsilon_\ell W_\ell/\hbar}(\F_{\ell,\varepsilon}(L_\omega)+\varepsilon^{2^\ell} V_{\ell,\varepsilon})e^{-i\varepsilon_\ell W_\ell/\hbar} =\F_{\ell+1,\varepsilon}(L_\omega)+\varepsilon_{\ell +1} V_{\ell +1,\varepsilon}, \\ && \bullet\ \F_{\ell,\varepsilon}(L_\omega)=\F(L_\omega)+\sum_{k=1}^{\ell-1} \varepsilon_kN_k(L_\omega), \quad [\F_{\ell}(L_\omega),W_\ell]/i\hbar +V_{\ell,\varepsilon} =N_\ell(L_\omega,\varepsilon) \\ && \bullet V_{\ell+1,\varepsilon}=\frac12\int_0^{\varepsilon_\ell} (\varepsilon_\ell -t)e^{i t W_\ell/\hbar}R_{\ell+1,t}e^{-itW_\ell/\hbar}\,dt \\ && \bullet\ R_{\ell+1,\varepsilon}:=[N_{\ell},W_{\ell}]/\hbar+[W_{\ell},V_{\ell,\varepsilon}]/\hbar+\varepsilon_{\ell} [W_{\ell},[W_{\ell},V_{\ell,\varepsilon}]]/\hbar^2 \end{eqnarray*} \vskip 6pt\noindent We now proceed to obtain recursive estimates for the above quantities in the $\|\cdot\|_{\rho_\ell,k}$ norm. Consider (\ref{resto2}) and denote: \vskip 6pt\noindent \begin{eqnarray} && \label{stimaPsi} \Psi(\ell,k)=\frac{(k+1)4^k}{(ed_\ell)^2}; \quad \Pi(\ell,k):= \frac{[2(k+1)^2]^{k+1}k^k}{e^{k}d_\ell^{k}} \\ \label{Pll} && P(\ell,k,\varepsilon):=\frac{\theta_{\ell,k}({\mathcal N},\varepsilon)^{k+1}}{[1-\theta_\ell({\mathcal N},\varepsilon)]^{k+1}} \end{eqnarray} \vskip 6pt\noindent where $\theta_{\ell,k}({\mathcal N},\varepsilon)$ is defined by (\ref{Theta}). (\ref{stimaPsi}) and (\ref{Pll}) yield \begin{equation} \label{alk} A(\ell,k,\varepsilon)= \gamma \frac{\tau^\tau}{(ed_\ell)^\tau}[1+\Pi(\ell,k)P(\ell,k,\varepsilon)]. \end{equation} Set furthermore: \begin{eqnarray} && \label{resto22} E({\ell}, k,\varepsilon) := \frac{\Psi(\ell,k)A(\ell,k,\varepsilon)[2+ |\varepsilon_{\ell}| \Psi(\ell,k)A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}]}{1-|\varepsilon_\ell | \Psi(\ell,k) A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}} \end{eqnarray} Then we have: \begin{lemma} \label{stimaVl+1} Let: \begin{equation} \label{stimadenE} |\varepsilon_\ell | \Psi(\ell,k)A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}<1. \end{equation} Then: \begin{equation} \label{restoll} \|V_{\ell+1,\varepsilon}\|_{\rho_{\ell+1},k}\leq E({\ell}, k,\varepsilon)\|V_{\ell,\varepsilon}\|^{2}_{\rho_{\ell},k} \end{equation} \vskip 3pt\noindent \end{lemma} \begin{remark} The validity of the assumption (\ref{stimadenE}) is to be verified in Proposition \ref{estl} below. \end{remark} \begin{proof} By (\ref{Cdrk}), (\ref{stimaPsi}) and (\ref{alk}) we can write: \begin{equation} C(\ell,k,\varepsilon)\leq \Psi(\ell,k)A(\ell,k,\varepsilon))\left[2+ |\varepsilon_{\ell}| \Psi(\ell,k)A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}\right] \end{equation} and therefore, by (\ref{resto2}): \begin{eqnarray*} && \|V_{\ell+1,\varepsilon}\|_{\rho_\ell-2d_\ell,k}\leq C(\ell,k,\varepsilon) \frac{\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}} {1-|\varepsilon_\ell| \Psi(\ell,k) A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k}} \\ && {} \\ && \leq \frac{\Psi(\ell,k)A(\ell,k,\varepsilon)\left[2+ |\varepsilon_{\ell}| \Psi(\ell,k)A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}\right]}{1-|\varepsilon_\ell | \Psi(\ell,k)A(\ell,k,\varepsilon)\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}}\|{\mathcal V}_\ell\|^2_{\rho_\ell,k} \\ && = E(\ell,k,\varepsilon)\|{\mathcal V}_\ell\|^2_{\rho_\ell,k}. \end{eqnarray*} \vskip 6pt\noindent This yields (\ref{restoll}) and proves the Lemma. \end{proof} Now recall that the sequence $\{\rho_j\}$ is decreasing. Therefore: \begin{equation} \|{\mathcal N}_{j,\varepsilon}\|_{\rho_\ell,k}\leq \|{\mathcal N}_{j,\varepsilon}\|_{\rho_j,k}= \|\overline{{\mathcal V}}_{j,\varepsilon}\|_{\rho_j,k} \leq \|{{\mathcal V}}_{j,\varepsilon}\|_{\rho_j,k}, \quad \;j=0,\ldots,\ell-1. \end{equation} \vskip 4pt\noindent At this point we can specify the sequence $d_\ell, \ell=1,2,\ldots$, setting: \vskip 4pt\noindent \begin{equation} \label{ddelta} d_\ell:=\frac{\rho}{(\ell+1)^2}, \qquad \ell=0,1,2,\ldots \end{equation} \vskip 4pt\noindent Remark that (\ref{ddelta}) yields $$ d- \sum_{\ell=0}^\infty d_\ell=\rho-\frac{\pi^2}{6}>\frac{\rho}{2}. $$ as well as the following estimate \begin{equation} \label{stimapigreco} \Pi(\ell,k)\leq \frac{[2(k+1)^2]^{k+1}k^k(\ell+1)^{2k}}{e^{k}\rho^{k}} \end{equation} \vskip 2pt\noindent We are now in position to discuss the convergence of the recurrence (\ref{restoll}). \begin{proposition} \label{estl} Let: \begin{equation} \label{condrho} \rho> 2 \end{equation} \begin{equation} \label{condep} |\varepsilon|< \varepsilon^\ast(\tau,k):= \frac{1}{e^{24(2+k+\tau)}(k+2)^{2\tau}\|{\mathcal V}\|_{\rho,k}} \end{equation} \vskip 4pt\noindent \begin{equation} \label{3condep} \gamma\tau^\tau (k+\tau+2)^{4(k+\tau+2)} < \frac{1}{2} \end{equation} \vskip 8pt\noindent Then the following estimate holds: \vskip 8pt\noindent \begin{equation} \label{rec2} \|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k} \leq \left(e^{8( 2+k+\tau)} \|{\mathcal V}\|_{\rho,k}\right)^{2^{\ell}}, \quad \ell=1,2,\ldots. \end{equation} \end{proposition} \vskip 4pt\noindent \begin{proof} We proceed by induction. The assertion is true for $\ell=0$. Now assume inductively: \vskip 6pt\noindent \begin{equation} \label{Hell} |\varepsilon_j|\|{\mathcal V}_{j,\varepsilon}\|_{\rho_j,k}\leq (k+2)^{-2\tau( j+1)}, \end{equation} \vskip 6pt\noindent for $0\leq j\leq \ell$. Out of this we prove the validity of (\ref{rec2}) and of (\ref{stimadenE}); to complete the induction it will be enough to show that (\ref{rec2}) implies the validity of (\ref{Hell}) for $j=\ell+1$. \vskip 6pt A preliminary result is the estimate of $ |\varepsilon_\ell | \Psi_{\ell,k}A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k}$: \begin{lemma} \label{stimadenE1} Let \eqref{Hell} hold. Then: \begin{equation} \label{stimadenE2} |\varepsilon_\ell | \Psi_{\ell,k} | A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k}\leq \frac1{2}. \end{equation} \end{lemma} \begin{proof} Let us first estimate $\theta_\ell({\mathcal N},\varepsilon)$ as defined by \eqref{theta} assuming the validity of \eqref{Hell} . We obtain: \begin{eqnarray*} && \theta_\ell({\mathcal N},\varepsilon)\leq \theta_{\ell,k}({\mathcal N},\varepsilon) \leq \sum_{s=0}^{\ell-1}|\varepsilon_s|\|{\mathcal V}\|_{\rho_s,k}/d_s = \frac{1}{\rho}\sum_{s=0}^{\ell-1}\,(s+1)^2(k+2)^{-2\tau (s+1)}= \\ && \frac{1}{4\rho}\frac{d^2}{d\tau^2}\sum_{s=0}^{\ell-1}\,(k+2)^{-2\tau (s+1)} =\frac{1}{4\rho}\frac{d^2}{d\tau^2}[(k+2)^{-2\tau}\frac{1-(k+2)^{-2\tau \ell}}{1-(k+2)^{-2\tau}} \leq \frac{1}{\rho}(k+2)^{-2}\leq \frac{1}{\rho} \end{eqnarray*} because $\tau>l-1\geq 1$. Now $\rho>1$ entails that \begin{equation} \label{dentheta} \frac1{1-\theta_\ell}<\frac{\rho}{\rho-1}. \end{equation} \vskip 4pt\noindent Hence we get, by (\ref{Pll}) and (\ref{Theta}), the further $(\ell,\varepsilon)-$in\-de\-pen\-dent estimate: \begin{equation} \label{Hells} P(\ell,k,\varepsilon)\leq \frac{\rho^{k+1}}{(\rho-1)^{k+1}}\left((k+2)^2{\rho}\right)^{-k-1}\leq \left(\frac{1}{(k+2)^2}\right)^{k+1}. \end{equation} whence, by (\ref{alk}): \begin{eqnarray} \nonumber && A(\ell,k,\varepsilon)\leq \gamma\frac{\tau^\tau (\ell+1)^{2\tau}}{(e\rho)^\tau}[1+[2(k+1)^2]^{k+1}\left[(k+2)^2\right]^{-(k+1)}k^{k}(\ell+1)^{2k}] \\ \label{stimaAell} && \leq 4\gamma\frac{\tau^\tau (\ell+1)^{2(\tau+k)}k^k}{(e\rho)^\tau}. \end{eqnarray} \vskip 5pt\noindent Upon application of the inductive assumption and \eqref{stimaAell} we get: \vskip 5pt\noindent \begin{eqnarray*} && |\varepsilon_\ell | \Psi_{\ell,k}A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k}\leq \frac{ 4^{k} (k+1)}{e^{2}\rho^{2}}(\ell+1)^{4}|\varepsilon_\ell | A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k} \\ && \leq \gamma\tau^{\tau} \frac{ 4^{k+1} (k+1)}{(e\rho)^{\tau+2}} (\ell+1)^{2(k+\tau+2)}k^k|\varepsilon_\ell |\|{\mathcal V}\|_{\rho_\ell,k} \\ && \leq \gamma\tau^{\tau} \frac{ 4^{k+1} (k+1)}{(e\rho)^{\tau+2}} (\ell+1)^{2(k+\tau+2)}k^k(k+2)^{-2(\ell+1)\tau} \end{eqnarray*} \vskip 5pt\noindent whence \vskip 4pt\noindent \begin{eqnarray} \label{stimaAell2} && |\varepsilon_\ell | \Psi_{\ell,k}A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k}\leq \gamma\tau^{\tau} \frac{ 4^{k+1} (k+1)}{(e\rho)^{\tau+2}}k^k\kappa(k,\tau)^{2(k+\tau+2)}(k+2)^{-2\kappa(k,\tau)}\\ && \kappa(k,\tau):=\frac{k+\tau+2}{\tau\ln{(k+\tau)}} \end{eqnarray} because $$ \sup_{\ell\geq 0} (\ell+1)^{2(\tau+k+2)}(k+2)^{-2(\ell+1)\tau} =\kappa(k,\tau)^{2(k+\tau+2)}(k+2)^{-2\kappa(k,\tau)}. $$ \vskip 4pt\noindent Hence: \begin{equation} \label{stimaBpsi} |\varepsilon_\ell | \Psi_{\ell,k}A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k}\leq \frac1{2} \end{equation} provided \eqref{condrho} and \eqref{3condep} hold. As a matter of fact: \vskip 5pt\noindent \begin{eqnarray*} && |\varepsilon_\ell | \Psi_{\ell,k}A(\ell,k,\varepsilon)\|{\mathcal V}\|_{\rho_\ell,k} \leq \gamma\tau^{\tau} \frac{ 4^{k+1} (k+1)}{(e\rho)^{\tau+2}}k^k\kappa(k,\tau)^{2(k+\tau+2)}(k+2)^{-2\kappa(k,\tau)} \\ && \leq \gamma\tau^{\tau} [4(k+1)]^{k+1}(k+\tau+2)^{2(k+\tau+2)} \leq \gamma\tau^{\tau} (k+\tau+2)^{4(k+\tau+2)} \end{eqnarray*} \vskip 5pt\noindent because $e\rho >1$, $4(k+1)< (k+\tau+2)^2$ since $\tau\geq 2$, and $\kappa(k,\tau)\leq (k+\tau+2)$. Hence \eqref{stimaBpsi} is implied by the inequality \begin{equation} \label{4condep} \gamma\tau^{\tau} (k+\tau+2)^{4(k+\tau+2)} <\frac12 \end{equation} \vskip 6pt\noindent which is \eqref{3condep}. The Lemma is proved. \end{proof} \vskip 4pt\noindent {\it Proof of Proposition \ref{estl}}. By (\ref{resto22}): $$ E(\ell,k,\varepsilon) \leq 5 \Psi_{\ell,k}A(\ell,k,\varepsilon) \leq 20 \gamma\tau^\tau (\ell+1)^{2(\tau+k)}k^k \Psi_{\ell,k} $$ once more because $e\rho>1$. (\ref{restoll}) in turn entails: $$ \|{\mathcal V}_{\ell+1,\varepsilon}\|_{\rho_\ell+1,k}\leq \Phi_{\ell,k} \|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}^2, \quad \Phi_{\ell,k}:=20 \gamma\tau^\tau (\ell+1)^{2(\tau+k)} \Psi_{\ell,k}. $$ This last inequality immediately yields \begin{equation} \label{rec3} \|{\mathcal V}_{\ell+1,\varepsilon}\|_{\rho_{\ell+1},k} \leq [\|{\mathcal V}\|_{\rho,k}]^{2^{\ell+1}}\prod_{m=0}^{\ell}\Phi_{\ell -m,k}^{2m}. \end{equation} \vskip 3pt\noindent Now: \begin{eqnarray*} && \Phi_{\ell,k}= 20 \gamma\tau^\tau (\ell+1)^{2(\tau+k)}k^k \frac{(k+1)4^{k}}{(ed_{\ell})^2}\leq \nu(k,\tau)(\ell+1)^{2(k+\tau+2)} \\ && \nu(k,\tau):=20\gamma \tau^{\tau} 4^{k}(k+1)k^k \end{eqnarray*} \vskip 5pt\noindent Now the following inequality is easily checked: \vskip 4pt\noindent \begin{eqnarray} \label{gammanu} \nu(k,\tau)= 20\gamma\tau^\tau (k+1)4^k k^k \leq 2 \gamma\tau^\tau (k+\tau+2)^{4(k+\tau+2)} \end{eqnarray} \vskip 5pt\noindent because $\tau\geq 2$, and therefore, by \eqref{3condep}, \eqref{condrho} we get: $\nu(k,\tau)\leq 1$. As a consequence we have \begin{equation} \Phi_{\ell,k}(\ell+1)^{2(k+\tau+2)} \end{equation} Moreover, since $\Phi_{j,k}\leq \Phi_{\ell,k}, j\leq \ell$, we get, by \eqref{gammanu}: \vskip 5pt\noindent \begin{eqnarray*} \prod_{m=1}^{\ell}\Phi^{2m}_{\ell+1-m,k} \leq [\Phi_{\ell,k}]^{\ell(\ell+1)}\leq (\ell+1)^{2(k+\tau+2)\ell(\ell+1)} \end{eqnarray*} \vskip 3pt\noindent Now using $\ell(\ell+1)\log{\ell+1}<4\times2^{\ell+1}$, $\forall\,\ell\in\Bbb N$, we get: $$ (\ell+1)^{2(k+\tau+2)\ell(\ell+1)} < [e^{8(k+\tau+2)}]^{2^{\ell+1}}. $$ The following estimate is thus established \begin{eqnarray} \label{stimapsi} && \prod_{m=0}^{\ell}\Phi^{2m}_{\ell -m,k} \leq [e^{8(k+\tau+2)}] ^{2^{\ell+1}}. \end{eqnarray} If we now define: \begin{eqnarray} \label{mu} && \mu :=e^{8(k+\tau+2)}, \end{eqnarray} then (\ref{rec3}) and (\ref{stimapsi}) yield: \vskip 6pt\noindent \begin{eqnarray} && \label{GVS} \|{\mathcal V}_{\ell+1,\varepsilon}\|_{\rho_{\ell+1,k}} \leq \left[\mu^{2^l}\|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}\right]^{2}\leq \left[\|{\mathcal V}\|_{\rho,k}\,\mu\right]^{2^{\ell+1}} \\ \label{GVSS} && \mbox{ and therefore }\nonumber\\ && \varepsilon_{\ell+1}\|{\mathcal V}_{\ell+1,\varepsilon}\|_{\rho_{\ell+1,k}} \leq \left[ \|{\mathcal V}\|_{\rho_\ell,k}\,\mu^{2^l}\varepsilon_\ell\right]^{2} \leq \left[ \|{\mathcal V}\|_{\rho,k}\,\mu\varepsilon\right]^{2^{\ell+1}} \end{eqnarray} \vskip 5pt\noindent \eqref{GVS} is exactly \eqref{rec2}. Let us now prove out of (\ref{GVS},\ref{GVSS}) that the condition (\ref{Hell}) preserves its validity also for $j=\ell+1$. We have indeed, by the inductive assumption (\ref{Hell}) and (\ref{GVS}): \begin{eqnarray*} \label{verifica} && |\varepsilon_{\ell+1}| \|{\mathcal V}_{\ell+1,\varepsilon}\|_{\rho_{\ell+1,k}} \leq \left[ \|{\mathcal V}\|_{\rho_\ell,k}\,\mu^{2^l}\varepsilon_\ell\right]^{2}\leq (k+2)^{-2\tau(\ell+1)}\varepsilon_\ell(\mu^{2^l})^2\|{\mathcal V}\|_{\rho_\ell,k} \\ && \leq (k+2)^{-2\tau(\ell+1)}\left[\varepsilon\mu^3\|{\mathcal V}\|_{\rho,k}\right]^{2^\ell}\leq (k+2)^{-2\tau(\ell+2)} \end{eqnarray*} provided \vskip 4pt\noindent \begin{equation} \label{epsast} |\varepsilon|< \frac{1}{\mu^3\|{\mathcal V}\|_{\rho,k}(k+2)^{2\tau}}= \frac{1}{e^{24(k+\tau+2)}\|{\mathcal V}\|_{\rho,k}(k+2)^{2\tau}}:=\varepsilon^\ast(\tau,k) \end{equation} \vskip 8pt\noindent where the last expression follows from (\ref{mu}). This proves (\ref{Hell}) for $j=\ell+1$, and concludes the proof of the Proposition. \end{proof} \noindent \begin{theorem}\label{final}[Final estimates of $W_\ell$, $N_\ell$, $V_\ell$] \newline Let ${\mathcal V}$ fulfill Assumption (H2-H4), and let \eqref{3condep} be verified. Then the following estimates hold, $\forall \ell\in\Bbb N$: \vskip 4pt\noindent \begin{eqnarray} \label{stimafw} \varepsilon_\ell \|W_{\ell,\varepsilon}\|_{\rho_{\ell+1},k}\leq (\ell+1)^{2(\tau+k)}\cdot(\mu \varepsilon \|{\mathcal V}\|_{\rho})^{2^{\ell}}. \end{eqnarray} \begin{eqnarray} \label{stimafn} \varepsilon_\ell \|N_{\ell,\varepsilon}\|_{\rho_\ell,k}\leq \varepsilon_\ell \|{\mathcal V}_{\ell,\varepsilon}\|_{\rho_\ell,k}\leq \left[ \|{\mathcal V}\|_{\rho}\,\varepsilon \mu\right]^{2^{\ell}}. \end{eqnarray} \begin{eqnarray} \label{stimafv} \varepsilon_{\ell+1} \|V_{\ell+1,\varepsilon}\|_{\rho_{\ell+1},k}\leq \left[ \|{\mathcal V}\|_{\rho}\,\varepsilon \mu\right]^{2^{\ell+1}}. \end{eqnarray} \end{theorem} \begin{proof} Since ${\mathcal V}$ does not depend on $\hbar$, obviously $ \|{\mathcal V}\|_{\rho,k}\equiv \|{\mathcal V}\|_{\rho}$. Then formula (\ref{Thm5.1}) yields, on account of (\ref{stimaAell}), (\ref{dentheta}), (\ref{3condep}), (\ref{GVS}), (\ref{GVSS}): \vskip 3pt\noindent \begin{eqnarray*} \nonumber && \label{stimaWfl} \varepsilon_\ell \|W_{\ell,\varepsilon}\|_{\rho_{\ell+1},k} \leq \gamma\tau^\tau(\ell+1)^{2(k+\tau)}k^k \|{\mathcal V}\ell,\varepsilon\|_{\rho_\ell,k}\leq \\ && \leq 2 \frac12 (\ell+1)^{2(k+\tau)}\cdot (\mu \varepsilon \|{\mathcal V}\|_{\rho})^{2^{\ell}}. \end{eqnarray*} \vskip 5pt\noindent This proves (\ref{stimafw}). Moreover, since ${\mathcal N}_{\ell,\varepsilon}=\overline{{\mathcal V}}_{\ell,\varepsilon}$, again by (\ref{GVS}), (\ref{GVSS}): \begin{equation}\nonumber \label{stiman} \varepsilon_\ell \|{\mathcal N}_{\ell,\varepsilon}\|_{\rho_\ell,k}= \varepsilon_\ell \|\overline{{\mathcal V}}_{\ell,\varepsilon}\|_{\rho_\ell,k}\leq \left[ \|{\mathcal V}\|_{\rho}\,\varepsilon \mu\right]^{2^{\ell}}. \end{equation} The remaining assertion follows once more from (\ref{GVSS}). This concludes the proof of the Theorem. \end{proof} \begin{remark} \label{verifica1} (\ref{stimafw}) yields: \vskip 4pt\noindent $$ \varepsilon_\ell \frac{\|W_{\ell,\varepsilon}\|_{\rho_{\ell+1},k}}{d_\ell}\leq 4\gamma\tau^\tau\varepsilon^{2^\ell}(\ell+1)^{2(k+\tau+1)}\|{\mathcal V}\|_\rho^{2^\ell} $$ This inequality in turn entails: $$ |\varepsilon|\left(\frac{\|W_{\ell,\varepsilon}\|_{\rho_{\ell+1},k}}{d_\ell}\right)^{2^{-\ell}}\leq [4\gamma\tau^\tau(\ell+1)^{2(k+\tau+1)}]^{2^{-\ell}}\|{\mathcal V}\|_{\rho}\to \|{\mathcal V}\|_\rho, \quad \ell\to\infty $$ so that (\ref{condepell}) is actually fulfilled for $\displaystyle |\varepsilon|< \frac1{\|{\mathcal V}\|_\rho}.$ \end{remark} \begin{corollary} \label{maincc} In the above assumptions set: \begin{equation} \label{Un} U_{n,\varepsilon}(\hbar):= \prod_{s=0}^ne^{i\varepsilon_{n-s}W_{n-s,\varepsilon}}, \quad n=0,1,\ldots. \end{equation} Then: \begin{enumerate} \item $U_{n,\varepsilon}(\hbar)$ is a unitary operator in $L^2(\T^l)$, with $$ U_{n,\varepsilon}(\hbar)^\ast=U_{n,\varepsilon}(\hbar)^{-1}=\prod_{s=0}^ne^{-i\varepsilon_{s}W_{s,\varepsilon}} $$ \item Let: \begin{equation} S_{n,\varepsilon}(\hbar):=U_{n,\varepsilon}(\hbar)({\mathcal L}_\omega+\varepsilon V)U_{n,\varepsilon}(\hbar)^{-1} \end{equation} Then: \begin{eqnarray} S_n&=&D_{n,\varepsilon}(\hbar)+\varepsilon_{n+1}V_{n+1,\varepsilon} \\ D_{n,\varepsilon}(\hbar)&=&L_\omega+\sum_{s=1}^n\varepsilon_sN_{s,\varepsilon} \end{eqnarray} The corresponding symbols are: \begin{eqnarray} && {\mathcal S}_n(\xi,x;\hbar)={\mathcal D}_{n,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)+\varepsilon_{n+1}V_{n+1,\varepsilon}({\mathcal L}_\omega(\xi),x;\hbar) \\ \label{sumD} && {\mathcal D}_{n,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)={\mathcal L}_\omega(\xi)+\sum_{s=1}^n \varepsilon_s{\mathcal N}_{s,\varepsilon}({\mathcal L}_\omega (\xi),\hbar). \end{eqnarray} Here the operators $W_{s,\varepsilon}$, $N_{s,\varepsilon}$, $V_{\ell+1,\varepsilon}$ and their symbols ${\mathcal W}_{s,\varepsilon}$, ${\mathcal N}_{s,\varepsilon}$, ${\mathcal V}_{\ell+1,\varepsilon}$ fulfill the above estimates. \item Let $\varepsilon^\ast$ be defined as in (\ref{condep}). Remark that $ \varepsilon^\ast(\cdot,k)> \varepsilon^\ast(\cdot,k+1), \,k=0,1,\ldots$. Then, if $|\varepsilon|<\varepsilon(k,\cdot)$: \begin{equation} \lim_{n\to\infty}{\mathcal D}_{n,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)={\mathcal D}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar) \end{equation} where in the convergence takes place in the $C^k([0,1];C^\omega (\rho/2))$ topology, namely \begin{equation} \label{limD} \lim_{n\to\infty}\|{\mathcal D}_{n,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)-{\mathcal D}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)\|_{\rho/2,k}=0. \end{equation} \end{enumerate} \end{corollary} \begin{proof} Since Assertions (1) and (2) are straightforward, we limit ourselves to the simple verification of Assertion (3). If $|\varepsilon|<\varepsilon^\ast(\tau,k)$ then $\displaystyle \|V\|_{\rho,k}\mu \varepsilon < \Lambda<1$. Recalling that $\|\cdot\|_{\rho,,k} \leq \|\cdot\|_{\rho^\prime,k}$ whenever $\rho\leq \rho^\prime$, and that $\rho_\ell <\rho/2$, $\forall\,\ell \in {\Bbb N}$, (\ref{stimafv}) yields: \begin{eqnarray*} && \varepsilon_{n+1}\|{\mathcal V}_{n+1,\varepsilon}\|_{\rho/{2},k}\leq \varepsilon_{n+1}\|{\mathcal V}_{n+1,\varepsilon}\|_{\rho_{n+1},k}\leq \\ && \left[\|V\|_{\rho,k}\mu \varepsilon\right]^{2^{n+1}}\to 0, \quad n\to\infty, \;k\;{\rm fixed}. \end{eqnarray*} In the same way, by (\ref{stimafn}): \begin{eqnarray*} && \|{\mathcal N}_{n,\varepsilon}\|_{\rho/{2},k}\leq \|{\mathcal N}_{n,\varepsilon}\|_{\rho_{n},k}= \|\overline{{\mathcal V}}_{n,\varepsilon}\|_{\rho_{n},k}\leq \|{\mathcal V}_{n,\varepsilon}\|_{\rho_{n},,k}\leq \\ && \left[\|V\|_{\rho,k}\mu \varepsilon\right]^{2^{n}}=\left[\|V\|_{\rho}\mu \varepsilon\right]^{2^{n}}\to 0, \quad n\to\infty, \;k\;{\rm fixed}. \ \end{eqnarray*} This concludes the proof of the Corollary. \end{proof} \vskip 1cm\noindent \section{Convergence of the iteration and of the normal form.} \label{iteration} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \setcounter{equation}{0 \setcounter{theorem}{0 Let us first prove the uniform convergence of the unitary transformation sequence as $n\to\infty$. Recall that $\varepsilon^\ast(\tau,k)> \varepsilon^\ast(\tau,k+1), \; k=0,1,\ldots$, and recall the abbreviation $\|\cdot\|_{\rho,0}:=\|\cdot\|_{\rho}$. Let $\varepsilon^\ast(\tau)$ be defined by \eqref{rast}. Then: \begin{lemma} \label{Wsequence} Let $\hbar$ be fixed, and $\displaystyle |\varepsilon|<\varepsilon^\ast(\tau)$. Consider the sequence $\displaystyle \{U_{n,\varepsilon}(\hbar)\}$ of unitary operators in $L^2(\T^l)$ defined by (\ref{Un}). Then there is a unitary operator $U_{\infty,\varepsilon}(\hbar)$ in $L^2(\T^l)$ such that $$ \lim_{n\to\infty}\|U_{n,\varepsilon}(\hbar)-U_{\infty,\varepsilon}(\hbar)\|_{L^2\to L^2}=0 $$ \end{lemma} \begin{proof} We have, for $p=1,2,\ldots$: \begin{eqnarray*} && U_{n+p,\varepsilon}-U_{n,\varepsilon}=\Delta_{n+p,\varepsilon}e^{i\varepsilon_n \frac{W_n}\hbar}\cdots e^{i\varepsilon \frac{W_1}\hbar}, \quad \Delta_{n+p,\varepsilon}:=(e^{i\varepsilon_{n+p}\frac{W_{n+p}}\hbar}\cdots e^{i\varepsilon_{n+1}\frac{W_{n+1}}\hbar}-I) \\ && \|U_{n+p,\varepsilon}-U_{n,\varepsilon}\|_{L^2\to L^2}\leq 2\|\Delta_{n+p,\varepsilon}\|_{L^2\to L^2} \end{eqnarray*} Now we apply the mean value theorem and obtain $$ e^{i\varepsilon_\ell \frac{W_{\ell,\varepsilon}}\hbar}=1+\beta_{\ell,\varepsilon} \quad \beta_{\ell,\varepsilon}:=i\varepsilon_\ell \frac{W_{\ell,\varepsilon}}\hbar \int_0^{\varepsilon_\ell}e^{i\varepsilon^\prime_\ell\frac{ W_{\ell,\varepsilon}}\hbar}\,d\varepsilon^\prime_\ell , $$ whence, by (\ref{stimafw}) in which we make $k=0$: \begin{equation} \label{stimaesp} \hbar\|\beta_{\ell,\varepsilon}\| \leq \varepsilon_\ell \|W_{\ell,\varepsilon}\|_{\rho_{\ell}} = \varepsilon_\ell \|{\mathcal W}_{\ell,\varepsilon}\|_{\rho_{\ell},0} \leq 4\gamma\tau^\tau (\ell+1)^{2\tau} \leq A^\ell \end{equation} for some $A<1$. Now: \begin{eqnarray*} && \Delta_{n+p,\varepsilon}=[(1+\beta_{n+p,\varepsilon}\varepsilon_{n+p})(1+\beta_{n+p-1,\varepsilon}\varepsilon_{n+p-1})\cdots (1+\beta_{n+1,\varepsilon}\varepsilon_{n+1})-1]=\sum_{1\leq j\leq p}\beta_{n+j,\varepsilon}\varepsilon_{n+j} \\ && +\sum_{1\leq j_1<j_2\leq p}\beta_{n+j_1,\varepsilon}\varepsilon_{n+j_1}\beta_{n+j_2,\varepsilon}\varepsilon_{n+j_2}+ \sum_{1\leq j_1<j_2<j_3\leq p}\beta_{n+j_1,\varepsilon}\varepsilon_{n+j_1}\beta_{n+j_2,\varepsilon}\varepsilon_{n+j_2}\beta_{n+j_3,\varepsilon}\varepsilon_{n+j_3} \\ && +\ldots +\beta_{n+1,\varepsilon}\cdots\beta_{n+p,\varepsilon}\varepsilon_{n+1}\cdots\varepsilon_{n+p} \end{eqnarray*} Therefore, by (\ref{stimaesp}): \begin{eqnarray*} && \|\Delta_{n+p,\varepsilon}\|_{L^2\to L^2}\leq \sum_{1\leq j\leq p}\frac{A^{n+j}}\hbar+\sum_{1\leq j_1<j_2\leq p}\frac{A^{n+j_1}A^{n+j_2}}{\hbar^2}+\sum_{1\leq j_1<j_2<j_3\leq p}\frac{A^{n+j_1}A^{n+j_2}A^{n+j_3}}{\hbar^3}+\ldots \\ && \leq \frac{A^{n}}\hbar\frac{A}{1-A}+\frac{A^{2n}}{\hbar^2}\left(\frac{A}{1-A}\right)^2+\ldots +\frac{A^{pn}}{\hbar^p}\left(\frac{A}{1-A}\right)^p \\ && \leq \frac{A^n}{\hbar(1-A)}\frac{1}{1-\frac{A^n}{\hbar(1-A)}}\ \ \ \mbox{ for } n>\frac{\log{(\hbar(1-A))}}{\log{A}}. \end{eqnarray*} Therfeore \[ \Delta_{n+p,\varepsilon}\to 0,\quad n\to\infty,\quad \forall\,p,\hbar>0. \] \vskip 5pt\noindent Hence $\{U_{n,\varepsilon}(\hbar)\}_{n\in{\Bbb N}}$ is a Cauchy sequence in the operator norm, uniformly with respect to $|\varepsilon|<\varepsilon^\ast_0$, and the Lemma is proved. \end{proof} We are now in position to prove existence and analyticity of the limit of the KAM iteration, whence the uniform convergence of the QNF. \vskip 0.3cm\noindent {\bf Proof of Theorems \ref{mainth} and \ref{regolarita}} \newline The operator family $H_\varepsilon$ is self-adjoint in $L^2(T^l)$ with pure point spectrum $\forall\,\varepsilon\in{\mathcal R}$ because $V$ is a continuous operator. By Corollary \ref{maincc}, the operator sequence $\{D_{n,\varepsilon}(\hbar)\}_{n\in {\Bbb N}}$ admits for $|\varepsilon|<\varepsilon^\ast_0$ the uniform norm limit $$ D_{\infty,\hbar}(L_\omega,\hbar)=L_\omega+\sum_{m=0}^\infty\varepsilon^{2^m}N_{m,\varepsilon}(L_\omega,\hbar) $$ of symbol ${\mathcal D}_{\infty,\hbar}({\mathcal L}_\omega(\xi))$. The series is norm-convergent by (\ref{stimafn}). By Lemma (\ref{Wsequence}), $D_{\infty,\hbar}(L_\omega,\hbar)$ is unitarily equivalent to $H_\varepsilon$. The operator family $\varepsilon\mapsto D_{\infty,\varepsilon}(\hbar)$ is holomorphic for $|\varepsilon|<\varepsilon^\ast_0$, uniformly with respect to $\hbar\in[0,1]$. As a consequence, $D_{\infty,\varepsilon}(\hbar)$ admits the norm-convergent expansion: $$ D_{\infty,\varepsilon}(L_\omega,\hbar)=L_\omega+\sum_{s=1}^\infty B_s(L_\omega,\hbar)\varepsilon^s, \quad |\varepsilon|<\varepsilon^\ast(\tau) $$ which is the convergent quantum normal form. On the other hand, (\ref{limD}) entails that the symbol ${\mathcal D}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)$ is a $\J(\rho/2)$-valued holomorphic function of $\varepsilon$, $|\varepsilon|<\varepsilon^\ast (\tau)$, continuous with respect to $\hbar\in [0,1]$. Therefore it admits the expansion \begin{equation} \label{fnormale} {\mathcal D}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)={\mathcal L}_\omega(\xi)+\sum_{s=1}^\infty{\mathcal B}_s({\mathcal L}_\omega(\xi),\hbar)\varepsilon^s, \quad |\varepsilon|<\varepsilon^\ast(\tau) \end{equation} convergent in the $\|\cdot\|_{\rho/2}$-norm, with radius of convergence $\varepsilon^\ast(\tau)$. Hence, in the notation of Theorem \ref{mainth}, ${\mathcal D}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)\equiv {\mathcal B}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)$. By construction, ${\mathcal B}_s({\mathcal L}_\omega(\xi),\hbar)$ is the symbol of $B_s(L_\omega,\hbar)$. ${\mathcal B}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)$ is the symbol yielding the quantum normal form via Weyl's quan\-ti\-za\-tion. Likewise, the symbol ${\mathcal W}_{\infty,\varepsilon}(\xi,x,\hbar)$ is a $\J(\rho/2)$-valued holomorphic function of $\varepsilon$, $|\varepsilon|<\varepsilon^\ast(\tau)$, continuous with respect to $\hbar\in [0,1]$, and admits the expansion: \begin{equation} \label{fgen} {\mathcal W}_{\infty,\varepsilon}(\xi,x,\hbar)=\langle\xi,x\rangle+\sum_{s=1}^\infty{\mathcal W}_s(\xi,x,\hbar)\varepsilon^s, \quad |\varepsilon|<\varepsilon^\ast \end{equation} convergent in the $\|\cdot\|_{\rho/2}$-norm, once more with radius of convergence $\varepsilon^\ast(\tau)$. Since Since $\|{\mathcal B}_s\|_1 \leq \|{\mathcal B}_s\|_{\rho/2}$, $\|{\mathcal W}_s \|_1\leq \|{\mathcal W}_s\|_{\rho/2} $ $\forall\,\rho>0$. By construction, ${\mathcal B}_{\infty,\varepsilon}(\xi,x,\hbar)={\mathcal B}_{\infty,\varepsilon}(t,x,\hbar)|_{t={\mathcal L}_\omega(\xi)}$. Theorem \ref{mainth} is proved. Remark furthermore that the principal symbol of ${\mathcal B}_{\infty,\varepsilon}({\mathcal L}_\omega(\xi),\hbar)$ is just the convergent Birkhoff normal form: $$ {\mathcal B}_{\infty,\varepsilon}={\mathcal L}_\omega(\xi)+\sum_{s=1}^\infty{\mathcal B}_s({\mathcal L}_\omega(\xi))\varepsilon^s, \quad |\varepsilon|<\varepsilon^\ast(\tau) $$ \vskip 4pt Theorem (\ref{regolarita}) is a direct consequence of (\ref{limD}) on account of the fact that $$ \sum_{\gamma=0}^r\max_{\hbar\in [0,1]} \|\partial^\gamma_\hbar {\mathcal B}_\infty(t;\varepsilon,\hbar) \|_{\rho/2}\leq \|{\mathcal B}_\infty\|_{\rho/2,k} $$ Remark indeed that by (\ref{limD}) the series (\ref{fnormale}) converges in the $\|\cdot\|_{\rho/2,r}$ norm if $|\varepsilon|<\varepsilon^\ast(\tau,r)$. Therefore ${\mathcal B}_s(t,\hbar)\in C^r([0,1];C^\omega(\{t\in\Bbb C\,|\,|\Im t|<\rho/2\})$ and the formula (\ref{EQF}) follows from (\ref{fnormale}) upon Weyl quantization. This concludes the proof of the Theorem. \vskip 1.0cm\noindent \begin{appendix} \section{The quantum normal form} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \setcounter{equation}{0 \setcounter{theorem}{0 \noindent The quantum normal form in the framework of semiclassical analysis has been introduced by Sj\"ostrand \cite{Sj}. We follow here the presentation of \cite{BGP}. \vskip 6pt\noindent {\bf 1. The formal construction} Given the operator family $\varepsilon\mapsto H_\varepsilon=L_\omega+\varepsilon V$, look for a unitary transformation $\displaystyle U(\omega,\varepsilon,\hbar)=e^{i W(\varepsilon)/\hbar}: L^2(\T^l)\leftrightarrow L^2(\T^l)$, $W(\varepsilon)=W^\ast(\varepsilon)$, such that: \begin{equation} \label{A1} S(\varepsilon):=UH_\varepsilon U^{-1}=L(\omega)+\varepsilon B_1+\varepsilon^2 B_2+\ldots+ \varepsilon^k R_k(\varepsilon) \end{equation} where $[B_p,L_0]=0$, $p=1,\ldots,k-1$. Recall the formal commutator expansion: \begin{equation} S(\varepsilon)=e^{it W(\varepsilon)/\hbar}He^{-it W(\varepsilon)/\hbar}=\sum_{l=0}^\infty t^lH_l,\quad H_0:=H,\quad H_l:=\frac{[W,H_{l-1}]}{i\hbar l}, \;l\geq 1 \label{A2} \end{equation} and look for $W(\varepsilon)$ under the form of a power series: $W(\varepsilon)=\varepsilon W_1+\varepsilon^2W_2+\ldots$. Then (\ref{A2}) becomes: \begin{equation} \label{A3} S(\varepsilon)=\sum_{s=0}^{k-1}\varepsilon^s P_s +\varepsilon^{k}{R}^{(k)} \end{equation} where \begin{equation} \label{A4} P_0=L_\omega;\quad {P}_s:=\frac{[W_s,H_0]}{i\hbar}+V_s,\quad s\geq 1, \;V_1\equiv V \end{equation} \begin{eqnarray*} V_s =\sum_{r=2}^s\frac{1}{r!}\sum_{{j_1+\ldots+j_r=s}\atop {j_l\geq 1}}\frac{[W_{j_1},[W_{j_2},\ldots,[W_{j_r},H_0]\ldots]}{(i\hbar)^r} +\sum_{r=2}^{s-1}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=s-1}\atop {j_l\geq 1}}\frac{[W_{j_1},[W_{j_2},\ldots,[W_{j_r},V]\ldots]}{(i\hbar)^r} \end{eqnarray*} \begin{eqnarray*} {R}^{(k)}=\sum_{r=k}^\infty\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k}\atop {j_l\geq 1}}\frac{[W_{j_1},[W_{j_2},\ldots,[W_{j_r},L_\omega]\ldots]}{(i\hbar)^r} +\sum_{r=k-1}^{\infty}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k-1}\atop {j_l\geq 1}}\frac{[W_{j_1},[W_{j_2},\ldots,[W_{j_r},V]\ldots]}{(i\hbar)^r} \end{eqnarray*} Since $V_s$ depends on $W_1,\ldots,W_{s-1}$, (A1) and (A3) yield the recursive homological equations: \begin{equation} \label{A5} \frac{[W_s,P_0]}{i\hbar} +V_s=B_s, \qquad [L_0,B_s]=0 \end{equation} To solve for $S$, $W_s$, $B_s$, we can equivalently look for their symbols. The equations (\ref{A2}), (\ref{A3}), (\ref{A4}) become, once written for the symbols: \begin{equation} \label{A6} \Sigma(\varepsilon)=\sum_{l=0}^\infty {{\mathcal H}}_l,\quad {{\mathcal H}}_0:={\mathcal L}_\omega+\varepsilon {\mathcal V},\quad {{\mathcal H}}_l:=\frac{\{w,{{\mathcal H}}_{l-1}\}_M}{ l}, \;l\geq 1\end{equation} \begin{equation} \label{A7} \Sigma(\varepsilon)=\sum_{s=0}^{k}\varepsilon^s {\mathcal P}_s +\varepsilon^{k+1}{\mathcal R}^{(k+1)} \end{equation} where \begin{equation} \label{A8} {\mathcal P}_0={\mathcal L}_\omega;\qquad {\mathcal P}_s :=\{{\mathcal W}_s,{\mathcal P}_0 \}_M+{\mathcal V}_s,\quad s=1, \ldots,\qquad {\mathcal V}_1\equiv {\mathcal V}_0={\mathcal V} \end{equation} \begin{eqnarray*} && {\mathcal V}_s :=\sum_{r=2}^s\frac{1}{r!}\sum_{{j_1+\ldots+j_r=s}\atop {j_l\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal L}_\omega\}_M\ldots\}_M + \\ && +\sum_{r=1}^{s-1}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=s-1}\atop {j_l\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal V}\}_M\ldots\}_M, \quad s>1 \end{eqnarray*} \begin{eqnarray*} && {\mathcal R}^{(k)}=\sum_{r=k}^\infty\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k}\atop {j_l\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal L}_\omega\}_M\ldots\}_M+ \\ && \sum_{r=k-1}^{\infty}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k-1}\atop {j_l\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal V}\}_M\ldots\}_M\end{eqnarray*} In turn, the recursive homological equations become: \begin{equation} \label{A9} \{{\mathcal W}_s,{\mathcal L}_{\omega}\}_M +{\mathcal V}_s={\mathcal B}_s, \qquad \{{\mathcal L}_{\omega},{\mathcal B}_s\}_M =0 \end{equation} \vskip 6pt\noindent {\bf 2. Solution of the homological equation and estimates of the solution} \vskip 3pt\noindent The key remark is that $\{{\mathcal A},{\mathcal L}_\omega\}_M=\{{\mathcal A},{\mathcal L}_\omega\}$ for any smooth symbol ${\mathcal A}(\xi;x;\hbar)$ because ${\mathcal L}_\omega$ is linear in $\xi$. The homological equation (A.9) becomes therefore \begin{equation} \label{A10} \{{\mathcal W}_s,{\mathcal L}_\omega\} +{\mathcal V}_s={\mathcal B}_s, \qquad \{{\mathcal L}_\omega,{\mathcal B}_s\} =0 \end{equation} We then have: \begin{proposition} Let ${\mathcal V}_s(\xi,x;\hbar)\in\J(\rho_s)$. Then the equation \begin{equation} \label{A11} \{{\mathcal W}_s,{\mathcal L}_\omega\} +{\mathcal V}_s={\mathcal B}_s, \qquad \{{\mathcal L}_\omega,{\mathcal B}_s\} =0 \end{equation} admits $\forall\,0<d_s<\rho_s$ the solutions ${\mathcal B}_s({\mathcal L}_\omega(\xi;)\hbar)\in \J(\rho_s)$, ${\mathcal W}\in\J(\rho-d_s)$ given by: \begin{equation} \label{A12} {\mathcal B}_s(\xi;\hbar)=\overline{{\mathcal V}_s}; \quad {\mathcal W}_s(\xi,x;\hbar)={\mathcal L}_\omega^{-1}{\mathcal V}_s, \quad {\mathcal L}_\omega^{-1}{\mathcal V}_s:=\sum_{0\neq\in\Z^l }\frac{{\mathcal V}_{s.q}({\mathcal L}_\omega(\xi))}{i\langle\omega,q\rangle}e^{i\langle q,x\rangle}. \end{equation} Moreover: \begin{equation} \label{stimaWs} \|{\mathcal B}_s\|_{\rho_s}\leq \|{\mathcal V}_s\|_{\rho_s}; \qquad \|{\mathcal W}_s\|_{\rho_s-d_s} \leq \gamma \left(\frac{\tau}{d_s}\right)^\tau \|{\mathcal V}_s\|_{\rho_s}. \end{equation} \end{proposition} \begin{proof} ${\mathcal B}_s$ and ${\mathcal W}_s$ defined by (A.12) clearly solve the homological equation (A.11). The estimate for ${\mathcal B}_s$ is obvious, and the estimate for ${\mathcal W}_s$ follows once more by the small denominator inequality (\ref{DC}). \end{proof} By definition of $\|\cdot\|_{\rho}$ norm: \begin{equation} \label{A13} \|B_s\|_{L^2\to L^2}\leq \|B_s\|_{\rho} \leq \|{\mathcal V}_s\|_{\rho_s}; \quad \|B_s\|_{L^2\to{\mathcal L}^2}\leq \|B_s\|_{\rho} \leq \|{\mathcal V}_s\|_{\rho_s} \end{equation} Hence all terms of the quantum normal form and the remainder can be recursively estimated in terms of $\|{\mathcal V}\|_{\rho}$ by Corollary 3.11. Setting now, for $s\geq 1$: \begin{eqnarray*} && \rho_s:=\rho- s d_s, \quad d_s< \frac{\rho}{s+1}; \qquad \rho_0:=\rho \\ && \mu_s:=8\gamma \tau^\tau \frac{E}{d_s^\tau\delta_s^2}, \quad E:=\|{\mathcal V}\|_{\rho}. \end{eqnarray*} we actually have, applying without modification the argument of \cite{BGP}, Proposition 3.2: \begin{proposition} Let $\mu_s<1/2, s=1,\ldots,k$. Set: $$ K:=\frac{8\cdot 2^{\tau+5}\gamma\tau^\tau}{\rho^{2+\tau}}. $$ Then the following estimates hold for the quantum normal form: \begin{eqnarray*} && \sum_{s=1}^k \|B_s\|_{\rho/2}\varepsilon^s \leq \sum_{s=1}^k \|{\mathcal B}_s\|_{\rho/2}\varepsilon^s\leq \sum_{s=1}^k E^sK^s s^{(\tau+2)s}\varepsilon^s \\ && {} \\ && \|R_{k+1}\|_{\rho/2}\leq \|{\mathcal R}_{k+1}\|_{\rho/2}\leq (EK)^{k+1}(k+1)^{(\tau+2)(k+1)}\varepsilon^{k+1} \end{eqnarray*} \end{proposition} \vskip 1cm\noindent \end{appendix} \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Assouad spectrum and quasi-Assouad dimension} The Assouad dimension is an important notion of dimension designed to capture extreme local scaling properties of a given metric space. Its distance from the upper box dimension, which measures average global scaling, can be interpreted as a quantifiable measure of inhomogeneity. Motivated by this idea, Fraser and Yu introduced the Assouad spectrum which is designed to interpolate between the upper box dimension and Assouad dimension, and thus reveal more precise geometric information about the set, see \cite{Spectraa}. Here we recall the basic definitions and, for concreteness, we consider non-empty compact sets $F \subseteq \mathbb{R}^d$, although the general theory extends beyond this setting. For a bounded set $E \subseteq \mathbb{R}^d$ and a scale $r>0$ we let $N(E,r)$ be the minimum number of sets of diameter $r$ required to cover $E$. The Assouad dimension of $F$ is defined by \[ \dim_\text{A} F \ = \ \inf \Bigg\{ \alpha \ : \ ( \exists \, C) \, (\forall \, 0<r<R<1) \, (\forall x \in F) \ N\big( B(x,R) \cap F, r \big) \ \leq \ C \bigg(\frac{R}{r}\bigg)^\alpha \Bigg\}. \] The Assouad spectrum is the function defined by \[ \theta \ \mapsto \ \dim_{\mathrm{A}}^\theta F \ = \ \inf \bigg\{ \alpha \ : \ (\exists C>0) \, (\forall 0<R<1) \, (\forall x \in F) \ N \big( B(x,R) \cap F ,R^{1/\theta} \big) \ \leq \ C \left(\frac{R}{R^{1/\theta}}\right)^\alpha \bigg\} \] where $\theta$ varies over $(0,1)$. The related quasi-Assouad dimension is defined by \[ \dim_{\mathrm{qA}} F \ = \ \lim_{\theta \to 1} \inf \bigg\{ \alpha \ : \ (\exists C>0) \, (\forall 0<r \leq R^{1/\theta} \leq R< 1) \, (\forall x \in F) \ N \big( B(x,R) \cap F ,r \big) \ \leq \ C \left(\frac{R}{r}\right)^\alpha \bigg\} \] and the upper box dimension is defined by \[ \overline{\dim}_\mathrm{B} F \ = \ \inf \Bigg\{ \alpha \ : \ ( \exists \, C) \, (\forall \, 0<R<1) \ N\big( F, r \big) \ \leq \ C \bigg(\frac{1}{r}\bigg)^\alpha \Bigg\}. \] These dimensions are all related, but their relative differences can be subtle. We summarise some important facts to close this section. For any $\theta \in (0,1)$ we have \[ \overline{\dim}_\text{B} F \leq \dim_{\mathrm{A}}^\theta F \leq \dim_{\mathrm{qA}} F \leq \dim_{\mathrm{A}} F \] and any of these inequalities can be strict. Moreover, the Assouad spectrum is a continuous function of $\theta$ and also satisfies \begin{equation} \label{spectrumbound} \dim_{\mathrm{A}}^\theta F \leq \frac{\overline{\dim}_\text{B} F}{1-\theta}. \end{equation} We also note that for a given $\theta$ it is not necessarily true that the Assouad spectrum is given by the expression after the limit in the definition of the quasi-Assouad dimension: this notion is by definition monotonic in $\theta$ but the spectrum is not necessarily monotonic \cite[Section 8]{Spectraa}. However, it has recently been shown in \cite{ScottishCanadian} that $\dim_{\mathrm{qA}} F = \lim_{\theta \to 1} \dim_{\mathrm{A}}^\theta F$ and, combining this with (\ref{spectrumbound}), we see that the Assouad spectrum necessarily interpolates between the upper box dimension and the quasi-Assouad dimension. For more information, including basic properties, concerning the upper box dimension, see ~\cite[Chapters 2-3]{Falconer}. For the Assouad dimension, see \cite{Fraser14, Luukkainen, Robinson}, the quasi-Assouad dimension, see \cite{Hare}, and for the Assouad spectrum, see \cite{Spectraa, Spectrab, ScottishCanadian}. \section{Self-affine carpets: random and deterministic} In this paper we consider random self-affine carpets. More specifically, random 1-variable analogues of the self-affine sets introduced by Bedford and McMullen in the 1980s. In the deterministic setting, the box dimensions were computed independently by Bedford and McMullen \cite{Bedford, McMullen} and the Assouad dimension was computed by Mackay \cite{Mackay}. The Assouad spectrum was computed by Fraser and Yu \cite{Spectrab}, and these results also demonstrated that the quasi-Assouad and Assouad dimensions coincide by virtue of the spectrum reaching the Assouad dimension. In the random setting, the (almost sure) box dimensions were first computed by Gui and Li \cite{GuiLi} for fixed subdivisions and by Troscheit~\cite{Troscheit18} in the most general setting that we are aware of. The (almost sure) Assouad dimension was computed by Fraser, Miao and Troscheit \cite{FraserMiaoTroscheit}. In this article we compute the quasi-Assouad dimension and the Assouad spectrum in the random setting. Unlike in the deterministic case, we find that the quasi-Assouad dimension and Assouad dimension are usually almost surely distinct. Further, the quasi-Assouad dimension is in general also distinct from the box dimension. This is in stark contrast to the conformal setting, where it was shown that the quasi-Assouad dimension (and thus Assouad spectrum) is almost surely equal to the upper box dimension (and distinct from the Assouad dimension), see \cite{Troscheit17}. We close this section by describing our model. Let $\Lambda = \{ 1, \dots, |\Lambda |\}$ be a finite index set and for each $i \in \Lambda$ fix integers $n_i > m_i \geq 2$ and divide the unit square $[0,1]^2$ into a uniform $m_i \times n_i$ grid. For each $i \in \Lambda$ let $\mathcal{I}_i$ be a non-empty subset of the set of $m_i^{-1} \times n_i^{-1}$ rectangles in the grid and let $N_i = |\mathcal{I}_i|$. Let $B_i$ be the number of distinct columns which contain rectangles from $\mathcal{I}_i$, and $C_i$ be the maximum number of rectangles in $\mathcal{I}_i$ which are contained in a particular column. Note that $1 \leq B_i \leq m_i$, $1 \leq C_i \leq n_i$ and $N_i \leq B_i C_i$. For each rectangle $j \in \mathcal{I}_i$, let $S_j$ be the unique orientation preserving affine map which maps the unit square $[0,1]^2$ onto the rectangle $j$. Let $\Omega = \Lambda^\mathbb{N}$ and for each $\omega = (\omega_1, \omega_2, \dots) \in \Omega$, we are interested in the corresponding attractor \[ F_\omega \, = \, \bigcap_{k \geq 1}\ \bigcup_{j_1\in \mathcal{I}_{\omega_1}, \dots, j_k \in\mathcal{I}_{\omega_k}} S_{j_1} \circ \dots \circ S_{j_k}\left( [0,1]^2 \right). \] By randomly choosing $\omega \in \Omega$, we randomly choose an attractor $F_\omega$ and we wish to make statements about the generic nature of $F_\omega$. For this, we need a measure on $\Omega$. Let $\{p_i\}_{i \in \Lambda}$ be a set of probability weights, that is, for each $i \in \Lambda$, $0<p_i<1$ and $\sum_{i \in \Lambda} p_i = 1$. We extend these basic probabilities to a Borel measure $\mathbb{P}$ on $\Omega$ in the natural way, which can be expressed as the infinite product measure \[ \mathbb{P} = \prod_{k \in \mathbb{N}} \sum_{i \in \Lambda} p_i \delta_{i}\,\,, \] where $\Omega$ is endowed with the product topology and $\delta_i$ is a unit mass concentrated at $i \in \Lambda$. Note that the deterministic model can be recovered if $|\Lambda | = 1$, that is, there is only one ``pattern" available, which is therefore chosen at every stage in the process. In this case, the deterministic attractor is the unique non-empty set $F \subseteq [0,1]^2$ satisfying \[ F = \bigcup_{j \in \mathcal{I}_1} S_j(F). \] \section{Results} Our main result is an explicit formula which gives the Assouad spectrum of our random self-affine sets almost surely. For simplicity we suppress summation over $i \in \Lambda$ to simple summation over $i$ throughout this section. \begin{thm} \label{CarpetsSpectra} For $\mathbb{P}$ almost all $\omega \in \Omega$, we have \[ \dim_{\mathrm{A}}^\theta F_\omega \ = \ \left\{ \begin{array}{ccc} \dfrac{1}{1- \theta} \left( \dfrac{\sum_{i } p_i \log \left( B_iC_i^\theta N_i^{-\theta} \right) }{ \sum_{i } p_i\log m_i} + \dfrac{\sum_{i } p_i \log \left(N_iB^{-1}_iC_i^{-\theta}\right)}{\sum_{i } p_i \log n_i} \right) & \colon & 0< \theta \leq \dfrac{\sum_{i } p_i \log m_i}{\sum_{i } p_i \log n_i} \\ \\ \dfrac{\sum_{i } p_i\log B_i }{\sum_{i } p_i\log m_i} + \dfrac{\sum_{i } p_i\log C_i}{\sum_{i } p_i\log n_i} & \colon & \dfrac{\sum_{i } p_i \log m_i}{\sum_{i } p_i \log n_i}< \theta <1\,, \end{array} \right. \] where $F_\omega$ is the $1$-variable random Bedford-McMullen carpet associated with $\omega \in \Omega$. \end{thm} As an immediate consequence of Theorem \ref{CarpetsSpectra} we obtain a formula for the quasi-Assouad dimension which holds almost surely. \begin{cor} \label{CarpetsQuasi} For $\mathbb{P}$ almost all $\omega \in \Omega$, we have \[ \dim_{\mathrm{qA}}F_\omega \ = \ \frac{\sum_{i } p_i\log B_i }{\sum_{i } p_i\log m_i} + \frac{\sum_{i } p_i\log C_i}{\sum_{i } p_i\log n_i}\,, \] where $F_\omega$ is the $1$-variable random Bedford-McMullen carpet associated with $\omega \in \Omega$. \end{cor} \begin{proof} This follows immediately from Theorem \ref{CarpetsSpectra} and the fact that $\dim_{\mathrm{A}}^\theta E \to \dim_{\mathrm{qA}} E $ as $\theta \to 1$ for any set $E\subseteq{\R^d}$, see \cite[Corollary 2.2]{ScottishCanadian}. \end{proof} Note that the result in \cite{FraserMiaoTroscheit} states that for $\mathbb{P}$ almost all $\omega \in \Omega$, we have \[ \dim_\textup{A} F_\omega \ = \ \max_{i\in\Lambda} \, \frac{\log B_i}{\log m_i} \, + \, \max_{i\in\Lambda} \, \frac{\log C_i}{\log n_i}. \] Therefore Corollary \ref{CarpetsQuasi} demonstrates the striking difference between the Assouad and quasi-Assouad dimensions in the random setting. In particular the almost sure value of the Assouad dimension does not depend on the weights $\{p_i\}_{i \in \Lambda}$, but the almost sure value of the quasi-Assouad dimension depends heavily on the weights. The almost sure value of the Assouad dimension is also extremal in the sense that it is the maximum over all realisations $\omega \in \Omega$, whereas the quasi-Assouad dimension is an average. Recall that the quasi-Assouad and Assouad dimensions always coincide for deterministic self-affine carpets, see \cite{Spectrab}. It is worth noting that the Assouad dimension of the random attractor is at least the maximal Assouad dimension of the deterministic attractors, whereas the quasi-Assouad dimension is bounded above by the dimension of the individual attractors. That is, letting $\underline{i}=(i,i,i,\dots)$, \[ \dim_{\mathrm{A}} F_\omega \geq \max_{i\in\Lambda}\dim_{\mathrm{A}} F_{\underline i}\;\;(a.s.)\quad\text{and}\quad \dim_{\mathrm{qA}} F_\omega \leq \max_{i\in\Lambda}\dim_{\mathrm{qA}} F_{\underline i}\;\;(a.s.). \] Typically these inequalities are strict and it is further possible that $\dim_{\mathrm{qA}} F_\omega < \min_{i\in\Lambda}\dim_{\mathrm{qA}} F_{\underline i}$\,, almost surely, see Figure~\ref{fig:spectrum} and the example in Section~\ref{sect:extremeExample}. Finally, note that the almost sure values of the Assouad and quasi-Assouad dimensions coincide if and only if there exists $\alpha, \beta \in (0,1]$ such that for all $i \in \Lambda$ we have $\frac{\log B_i}{\log m_i} = \alpha$ and $\frac{\log C_i}{\log n_i} = \beta$. This follows by considering `weighted mediants'. In particular, the two terms giving the quasi-Assouad dimension are weighted mediants of the fractions $\frac{\log B_i}{\log m_i}$ and $\frac{\log C_i}{\log n_i}$ respectively. It is well-known that weighted mediants are equal to the maximum if and only if all the fractions coincide. In particular, coincidence of all of the deterministic Assouad (and quasi-Assouad) dimensions is \emph{not} sufficient to ensure almost sure coincidence of the Assouad and quasi-Assouad dimensions in the random case. Simple algebraic manipulation yields the following random analogue of \cite[Corollary 3.5]{Spectrab}. In particular, the random variable $\dim_{\mathrm{A}}^\theta F_\omega$ can be expressed in terms of the random variables $ \overline{\dim}_\mathrm{B} F_\omega $ and $\dim_\mathrm{qA} F_\omega $. \begin{cor} For $\mathbb{P}$ almost all $\omega \in \Omega$, we have \[ \dim_{\mathrm{A}}^\theta F_\omega \ = \ \min \left\{ \frac{ \overline{\dim}_\mathrm{B} F_\omega \ - \ \theta \, \left(\dim_\mathrm{qA} F_\omega - \left( \dim_\mathrm{qA} F_\omega - \overline{\dim}_\mathrm{B} F_\omega\right) \frac{\sum_{i } p_i \log n_i}{\sum_{i } p_i \log m_i} \right) }{1- \theta} , \ \dim_\mathrm{qA} F \right\} \] where $F_\omega$ is the $1$-variable random Bedford-McMullen carpet associated with $\omega \in \Omega$. \end{cor} Note that \cite[Corollary 3.5]{Spectrab} is formulated using the Assouad dimension instead of the quasi-Assouad dimension (although they are equal in the deterministic case). Our result shows that the quasi-Assouad dimension is really the `correct' notion to use here. \subsection{Generic Example} \label{sect:genericExample} For illustrative purposes, we exhibit a representative example and provide pictures of the random and deterministic carpets along with their spectra. Let $\Lambda=\{1,2\}$ and $\mathbb{P}$ be the $1/2$--$1/2$ Bernoulli probability measure on $\Omega=\Lambda^{\N}$. That is, we consider two iterated function systems that we choose with equal probability. The first iterated function system consists of $N_1=20$ maps, where the unit square is divided into $m_1=19$ by $n_1=21$ rectangles. There are $B_1=10$ columns containing at least one rectangle and the maximal number of rectangles in a particular column is $C_1=8$. For the attractor of this deterministic Bedford-McMullen carpet we obtain: \[ \dim_{\mathrm{A}}F_{\underline{1}}=\dim_{\mathrm{qA}}F_{\underline{1}}=\frac{\log10}{\log19}+\frac{\log8}{\log21}\approx1.465 \quad\text{and}\quad \overline{\dim}_{\mathrm{B}}F_{\underline{1}}=\frac{\log10}{\log19}+\frac{\log2}{\log21}\approx1.010 \] and the spectrum interpolates between these two values with a phase transition at $\log19 / \log21 \approx 0.967$. The spectrum is plotted in Figure~\ref{fig:spectrum} and the attractor is shown in Figure~\ref{FigureDet1}. \begin{figure}[tb] \begin{center} \begin{overpic}[width=35em]{Spectrums} \put(26,30){$\dim_{\mathrm{A}}^\theta F_\omega$} \put(7,53){$\dim_{\mathrm{A}}^\theta F_{\underline{2}}$} \put(74,20){$\dim_{\mathrm{A}}^\theta F_{\underline{1}}$} \end{overpic} \end{center} \caption{The Assouad spectra of the sets in the example of Section~\ref{sect:genericExample}. The deterministic spectra are shown in dashed lines and the almost sure spectrum in the random case is given by a solid line.} \label{fig:spectrum} \end{figure} {\setlength{\fboxsep}{0pt}\setlength{\fboxrule}{0.5pt} \begin{figure}[tb] \begin{center} \fbox{\includegraphics[width=14em]{BMCarpetsFirstMap}} \fbox{\includegraphics[width=14em]{BMCarpetsSecondMap}} \fbox{\includegraphics[width=14em]{BMCarpetsRandomMap}} \end{center} \caption{The attractors $F_{\underline{1}}$, $F_{\underline{2}}$, and $F_{\omega}$ for $\omega=(2,1,1,2,1,\dots)$ as used in the example in Section~\ref{sect:genericExample}.} \label{FigureDet1} \end{figure} } The second iterated function system consists of $N_2=5$ maps, where the unit square is divided into $m_2=2$ by $n_2=15$ rectangles. There are $B_2=2$ columns containing at least one rectangle and the maximal number of rectangles in a particular column is $C_2=4$. For the attractor of this deterministic Bedford-McMullen carpet we obtain: \[ \dim_{\mathrm{A}}F_{\underline{2}}=\dim_{\mathrm{qA}}F_{\underline{2}}=1+\frac{\log4}{\log15}\approx1.512 \quad\text{and}\quad \overline{\dim}_{\mathrm{B}}F_{\underline{2}}=1+\frac{\log5/2}{\log15}\approx1.338 \] and the spectrum interpolates between these two values with a phase transition at $\log2 / \log15 \approx 0.256$. The spectrum is plotted in Figure~\ref{fig:spectrum} and the attractor is shown in Figure~\ref{FigureDet1}. Our results now give the following values for almost every $\omega\in\Omega$: \[ \dim_{\mathrm{A}}F_\omega=1+\frac{\log8}{\log21}\approx 1.683 ,\quad \dim_{\mathrm{qA}}F_\omega=\frac{\log20}{\log38}+\frac{\log32}{\log315}\approx1.426, \] \[ \text{and}\quad \overline{\dim}_{\mathrm{B}}F_\omega=\frac{\log20}{\log38}+\frac{\log5}{\log315}\approx1.103. \] We note that in this example the almost sure value of the Assouad dimension exceeds that of the individual attractors, the almost sure quasi-Assouad dimension is less than the quasi-Assouad dimensions of the individual attractors, and that the phase transition in the spectrum occurs at $\log38/\log315\approx0.632$. \subsection{An extreme example} \label{sect:extremeExample} By constructing explicit examples, we demonstrate the following interesting phenomenon, which highlights the subtle difference between the quasi-Assouad and Assouad dimensions. For all $\varepsilon \in (0,1)$, there exist two deterministic self-affine carpets $E,F$ with $\overline{\dim}_\textup{B} E = \overline{\dim}_\textup{B} F = \dim_\textup{qA} E = \dim_\textup{qA} F = \dim_\textup{A} E = \dim_\textup{A} F = 1$ such that when one mixes the two constructions by randomising as above, one finds that almost surely \[ \dim_\textup{qA} F_\omega \leq \varepsilon< 2 = \dim_\textup{A} F_\omega. \] Let $\varepsilon \in (0,1)$, $\Lambda = \{1,2\}$, $m_1=2$, $n_1 = n$, $m_2=m$, $n_2 = m+1$, where $m,n$ are large integers which will be chosen later depending only on $\varepsilon$. Let $\mathcal{I}_1$ consist of both rectangles from a particular row in the first grid and $\mathcal{I}_2$ consist of all $(m+1)$ rectangles in a particular column of the second grid. The deterministic carpets associated with these systems are both unit line segments: a horizontal line in the first case and a vertical line in the second. Therefore both have all the dimensions we consider being equal to 1. Let $p_1=p_2=1/2$, although the precise choice of weights is not particularly important. It follows that for $\mathbb{P}$ almost all $\omega \in \Omega$ we have \[ \dim_{\mathrm{qA}}F_\omega \ = \ \frac{(1/2) \log 2 + (1/2) \log 1 }{(1/2)\log 2+ (1/2) \log m} + \frac{(1/2) \log 1 + (1/2) \log ( m+1)}{(1/2)\log n+ (1/2) \log (m+1)} = \frac{\log 2 }{\log (2 m)} + \frac{\log (m+1)}{\log n(m+1)}. \] Choose $m$ sufficiently large to ensure that $ \frac{\log 2 }{\log (2 m)} \leq \varepsilon/2$ and, now that $m$ is fixed, choose $n$ sufficiently large to ensure that $\frac{\log (m+1)}{\log n(m+1)} \leq \varepsilon/2$. The main result in \cite[Theorem 3.2]{FraserMiaoTroscheit} gives that for any choice of $m,n \geq 2$, $\dim_{\mathrm{A}}F_\omega = 2$ almost surely, and therefore the desired result follows. \section{Proofs} \subsection{Approximate squares} In this section we introduce (random) approximate squares, which are a common object in the study of self-affine carpets. Fix $\omega = (\omega_1, \omega_2, \dots) \in \Omega$, $R \in (0,1)$ and let $k_1^\omega(R)$ and $k_2^\omega(R)$ be the unique positive integers satisfying \begin{equation} \label{k1def} \prod_{l=1}^{k_1^\omega(R)} m_{\omega_l}^{-1} \, \leq \, R \, < \, \prod_{l=1}^{k_1^\omega(R)-1} m_{\omega_l}^{-1} \end{equation} and \begin{equation} \label{k2def} \prod_{l=1}^{k_2^\omega(R)} n_{\omega_l}^{-1} \, \leq \, R \, < \, \prod_{l=1}^{k_2^\omega(R)-1} n_{\omega_l}^{-1}, \end{equation} respectively. Also let \[ m_{\max} = \max_{i \in \Lambda} \, m_i \qquad \text{and} \qquad n_{\max} = \max_{i \in \Lambda} \, n_i. \] A rectangle $[a,b] \times [c,d] \subseteq [0,1]^2$ is called an \emph{approximate $R$-square} if it is of the form \[ S \big( [0,1]^2 \big) \cap \Big( \pi_1 \big( T\big( [0,1]^2 \big) \big) \times [0,1] \Big), \] where $\pi_1 : (x,y) \mapsto x$ is the projection onto the first coordinate and \[ S \ = \ S_{i_1} \circ \cdots \circ S_{ i_{k_2^\omega(R)}} \] and \[ T \ = \ S_{ i_1} \circ \cdots \circ S_{ i_{k_1^\omega(R)}}, \] for some common sequence $i_1, i_2, \dots $ with $i_j \in \mathcal{I}_{\omega_j}$ for all $j$. Here we say $Q$ is \emph{associated with the sequence} $i_1, i_2, \dots $, noting that the entries $i_1, i_2, \dots , i_{k_1^\omega(R)}$ determine $Q$. In particular, the base \[ b-a \ = \ \prod_{i=1}^{k_1^\omega(R)} m_{\omega_i}^{-1} \ \in \ (m_{\max}^{-1}R , R] \qquad \qquad \text{by (\ref{k1def})} \] and the height \[ d-c \ = \ \prod_{i=1}^{k_2^\omega(R)} n_{\omega_i}^{-1} \ \in \ (n_{\max}^{-1}R , R] \qquad \qquad \text{by (\ref{k2def})}\,, \] and so approximate $R$-squares are indeed approximately squares with base and height uniformly comparable to $R$, and therefore each other. \subsection{Proof strategy and notation} \label{sect:proofStrategy} In order to simplify the exposition of our proofs, we define the following weighted geometric averages of the important parameters: \[ \overline{N} = \prod_{i \in \Lambda} N_i^{p_i} \qquad \overline{B} = \prod_{i \in \Lambda} B_i^{p_i} \qquad \overline{C} = \prod_{i \in \Lambda} C_i^{p_i} \qquad \overline{m} = \prod_{i \in \Lambda} m_i^{p_i}\qquad \overline{n} = \prod_{i \in \Lambda} n_i^{p_i}. \] Using this notation, in order to prove our result it is sufficient to prove the following two statements: \begin{enumerate} \item[(1)] For all $\log \overline{m}/\log \overline{n}< \theta <1 $ we have that for $\mathbb{P}$ almost all $\omega \in \Omega$ \[ \dim_{\mathrm{A}}^\theta F_\omega \ \leq \ \frac{\log \overline{B} }{\log \overline{m}} + \frac{\log \overline{C}}{\log \overline{n}}. \] \item[(2)] For all $0<\theta < \log \overline{m}/\log \overline{n} $ we have that for $\mathbb{P}$ almost all $\omega \in \Omega$ \[ \dim_{\mathrm{A}}^\theta F_\omega \ = \ \frac{ 1}{1- \theta} \left( \frac{\log \overline{B} }{\log \overline{m}} + \frac{\log \overline{N}/\overline{B}}{\log \overline{n}} \right) \ - \ \frac{ \theta}{1- \theta} \left( \frac{\log (\overline{N}/\overline{C})}{\log \overline{m}} \, + \, \frac{\log \overline{C} }{\log \overline{n}}\right) . \] \end{enumerate} To see why this is sufficient, first note that since the Assouad spectrum is a continuous function in $\theta$, see \cite[Corollary 3.5]{Spectraa}, it is determined by its values on a countable dense set and so the above statements imply the \emph{a priori} stronger statements that for $\mathbb{P}$ almost all $\omega \in \Omega$, we have the given estimates for \emph{all} $\theta$. Secondly, since the Assouad spectrum necessarily approaches the quasi-Assouad dimension as $\theta \to 1$, (1) demonstrates that the quasi-Assouad dimension is at most \[ \frac{\log \overline{B} }{\log \overline{m}} + \frac{\log \overline{C}}{\log \overline{n}}. \] and since (2) demonstrates that the Assouad spectrum attains this value at $ \theta = \log \overline{m}/\log \overline{n} $, it follows from \cite[Corollary 3.6]{Spectraa} that it is constant in the interval $[ \log \overline{m}/\log \overline{n} ,1)$. Technically speaking \cite[Corollary 3.6]{Spectraa} proves that if the Assouad spectrum is equal to the \emph{Assouad} dimension at some $\theta' \in (0,1)$, then it is constant in the interval $[\theta',1)$, but the same proof allows one to replace the Assouad dimension with the quasi-Assouad dimension in this statement. Finally, note that to establish estimates for $\dim_{\mathrm{A}}^\theta F_\omega$, it suffices to replace balls of radius $R$ with approximate $R$-squares in the definition. That is, to estimate $N\left(Q \cap F_\omega, R^{1/\theta} \right) $ where $Q$ is associated to $i_1, i_2, \dots $ with $i_j \in \mathcal{I}_{\omega_j}$ for all $j$ instead of $N\left(B(x,R) \cap F_\omega, R^{1/\theta} \right) $ for $x \in F_\omega$. This is because balls and approximate squares are comparable and one can pass covering estimates concerning one to covering estimates concerning the other up to constant factors. This duality is standard and we do not go into the details. \subsection{Covering estimates} Let $\omega \in \Omega$, $\theta \in (0,1)$, $R \in (0,1)$ and $Q$ be an approximate $R$-square associated with the sequence $i_1, i_2, \dots $ with $i_j \in \mathcal{I}_{\omega_j}$ for all $j$. In what follows we describe sets of the form $S_{j_1} \circ \cdots \circ S_{ j_{l}}(F_\omega)$ as level $l$ \emph{cylinders} and level $(l+1)$ cylinders lying inside a particular level $l$ cylinder will be referred to as \emph{children}. Moreover, \emph{iteration} will refer to moving attention from a particular cylinder, or collection of cylinders, to the cylinders at the next level. We wish to estimate $N\left(Q \cap F_\omega, R^{1/\theta} \right) $ and to do this we decompose $Q \cap F_\omega$ into cylinders at level $k^\omega_2\left(R^{1/\theta}\right)$ and cover each cylinder independently. Therefore we first need to count how many level $k^\omega_2\left(R^{1/\theta}\right)$ cylinders lie inside $Q$. There are two cases, which we describe separately. \\ \\ \emph{Case (i)}: $k^\omega_1(R) < k^\omega_2\left(R^{1/\theta}\right)$.\\ We start by noting that $Q$ lies inside a (unique) level $k^\omega_2(R) $ cylinder. As we move to the next level only the children of this cylinder lying in a particular `column' will also intersect $Q$. Iterating inside cylinders intersecting $Q$ until level $k^\omega_1(R)$ yields a decomposition of $Q$ into several cylinders arranged in a single column each of which has base the same length as that of $Q$. The number of these cylinders is at most \[ \prod_{l=k_2^\omega(R)+1}^{k^\omega_1(R)} C_{\omega_l}\,, \] since each iteration from the $(l-1)$th level to the $l$th multiplies the number of cylinders intersecting $Q$ at the previous level by the number of rectangles in a particular column of $\mathcal{I}_{\omega_l}$ system, which is, in particular, bounded above by $C_{\omega_l}$. The situation is simpler from this point on. We continue to iterate inside each of the level $k^\omega_1(R)$ cylinders until level $k^\omega_2\left(R^{1/\theta}\right)$, but this time all of the children remain inside $Q$ at every iteration. Therefore we find precisely \[ \prod_{l=k^\omega_1(R)+1}^{k^\omega_2\left(R^{1/\theta}\right)} N_{\omega_l} \] level $k^\omega_2\left(R^{1/\theta}\right)$ cylinders inside each level $k^\omega_1(R)$ cylinder. As mentioned above, we now cover each of these cylinders individually. To do this, we further iterate inside each such cylinder until level $k_1^\omega\left(R^{1/\theta}\right)$ and group together cylinders at this level which lie in the same column. This decomposes the level $k_1^\omega\left(R^{1/\theta}\right)$ cylinders into approximate $R^{1/\theta}$ squares, each of which can be covered by 4 balls of diameter $R^{1/\theta}$. Therefore it only remains to count the number of distinct level $k_1^\omega\left(R^{1/\theta}\right)$ columns inside a level $k^\omega_2\left(R^{1/\theta}\right)$ cylinder. Iterating from the $(l-1)$th level to the $l$th level multiplies the number of columns by $B_{\omega_l}$ and therefore the number is \[ \prod_{l=k^\omega_2\left(R^{1/\theta}\right)+1}^{k_1^\omega\left(R^{1/\theta}\right)} B_{\omega_l} . \] Combining the three counting arguments from above yields \begin{equation}\label{eq:upperCounting1} N\left(Q \cap F_\omega, R^{1/\theta} \right) \ \leq \ 4 \left(\prod_{l=k_2^\omega(R)+1}^{k^\omega_1(R)} C_{\omega_l} \right) \ \left(\prod_{l=k^\omega_1(R)+1}^{k^\omega_2\left(R^{1/\theta}\right)} N_{\omega_l} \right) \ \left( \prod_{l=k^\omega_2\left(R^{1/\theta}\right)+1}^{k_1^\omega\left(R^{1/\theta}\right)} B_{\omega_l} \right) . \end{equation} Moreover, this estimate is sharp in the sense that we can always find a particular approximate $R$-square $Q$ such that \begin{equation}\label{eq:lowerCounting1} N\left(Q \cap F_\omega, R^{1/\theta} \right) \ \geq \ K \left(\prod_{l=k_2^\omega(R)+1}^{k^\omega_1(R)} C_{\omega_l} \right) \ \left(\prod_{l=k^\omega_1(R)+1}^{k^\omega_2\left(R^{1/\theta}\right)} N_{\omega_l} \right) \ \left( \prod_{l=k^\omega_2\left(R^{1/\theta}\right)+1}^{k_1^\omega\left(R^{1/\theta}\right)} B_{\omega_l} \right)\,, \end{equation} for some constant $K>0$ depending on $m_{\max}$ and $n_{\max}$. Such a $Q$ is provided by any approximate $R$-square where $T = S_{ i_1} \circ \cdots \circ S_{ i_{k_1^\omega(R)}}$ is chosen such that each map $i_j$ lies in a maximal column of $\mathcal{I}_j$, that is a column consisting of $C_j$ rectangles. Finally, the small constant $K$ in the lower bound appears since a single ball of diameter $R^{1/\theta}$ can only intersect at most a constant number of the approximate $R^{1/\theta}$ squares found above and therefore counting approximate $R^{1/\theta}$ squares is still comparable to counting optimal $R^{1/\theta}$ covers. \\ \\ \emph{Case (ii)}: $k^\omega_1(R) \geq k^\omega_2\left(R^{1/\theta}\right)$.\\ The distinctive feature of this case is that when one iterates inside the level $k^\omega_2(R) $ cylinder containing $Q$, one reaches the situation where the height of the cylinders is roughly $R^{1/\theta}$ (level $k^\omega_2\left(R^{1/\theta}\right) $) \emph{before} the cylinders lie completely inside $Q$ (level $k^\omega_1(R)$). This means that the middle term in the above product no longer appears. The rest of the argument is similar, however, and we end up with \begin{equation}\label{eq:upperCounting2} N\left(Q \cap F_\omega, R^{1/\theta} \right) \ \leq \ 4 \left(\prod_{l=k_2^\omega(R)+1}^{k^\omega_2\left(R^{1/\theta}\right)} C_{\omega_l} \right) \ \left( \prod_{l=k^\omega_1\left(R\right)+1}^{k_1^\omega\left(R^{1/\theta}\right)} B_{\omega_l} \right) . \end{equation} One subtle feature of this estimate is that we appear to skip from level $k^\omega_2\left(R^{1/\theta}\right)$ to level $k^\omega_1\left(R\right)$. This is to avoid over-counting due to the fact that, inside a level $k^\omega_2\left(R^{1/\theta}\right)$ cylinder intersecting $Q$, only a single level $k^\omega_1\left(R\right)$ column actually lies inside $Q$, and can thus contribute to the covering number. This column comprises of several $k^\omega_1\left(R\right)$ cylinders and, since the height of this column is comparable to $R^{1/\theta}$, to cover this column efficiently one only needs to count the number of level $k_1^\omega\left(R^{1/\theta}\right)$ columns inside a single level $k^\omega_1\left(R\right)$ cylinder. This gives the second multiplicative term in the estimate, which concerns the terms $B_{\omega_l}$. Once again, this bound is sharp in the sense that there exists an approximate $R$-square $Q$ such that \[ N\left(Q \cap F_\omega, R^{1/\theta} \right) \ \geq \ K \left(\prod_{l=k_2^\omega(R)+1}^{k^\omega_2\left(R^{1/\theta}\right)} C_{\omega_l} \right) \ \left( \prod_{l=k^\omega_1\left(R\right)+1}^{k_1^\omega\left(R^{1/\theta}\right)} B_{\omega_l} \right). \] \subsection{Proof of the Main Theorem} We start our proof with this lemma, which is a simple variant of a Chernoff bound for stopped sums of random variables. We write $\mathbb{P}\{a \geq b\}$ to denote $\mathbb{P}\left( \left\{ \omega \in \Omega : a \geq b\right\} \right)$ and write $\mathbb{E}(\cdot)$ for the expectation of a random variable with respect to $\mathbb{P}$. \begin{lma}\label{lma:randomUpperLemma} Let $\{X_i\}$ be a sequence of non-negative discrete i.i.d.\ random variables with finite expectation $0<\overline{X}=\E(X)<\infty$. Let $\widehat{k}\in\N$ and let $k\leq \widehat{k}$ be a random variable. Let $\tau>\widehat{k}$ be a stopping time with finite expectation. Then, for all $\varepsilon,t>0$, \begin{equation}\label{eq:firstUpperEstimate} \mathbb{P}\left\{ \sum_{i=k}^{\tau}X_i \geq (1+\varepsilon)(\tau-k+1)\overline{X} \right\} \leq \E\left( \E\left( e^{t(X-(1+\varepsilon)\overline{X})} \right)^{\tau-k} \right) \end{equation} and \begin{equation} \mathbb{P}\left\{ \sum_{i=k}^{\tau}X_i \leq (1-\varepsilon)(\tau-k+1)\overline{X} \right\} \leq \E\left( \E\left( e^{t(X-(1+\varepsilon)\overline{X})} \right)^{\tau-k} \right). \end{equation} Further, if $\tau-k\geq l$ for some $l\in\N$, then there exists $0<\gamma<1$ not depending on $\tau,k,l$ such that \begin{equation}\label{eq:secondUpperEstimate} \mathbb{P}\left\{ \sum_{i=k}^{\tau}X_i \geq (1+\varepsilon)(\tau-k+1)\overline{X} \right\} \leq \gamma^{l} \end{equation} and \begin{equation}\nonumber \mathbb{P}\left\{ \sum_{i=k}^{\tau}X_i \leq (1-\varepsilon)(\tau-k+1)\overline{X} \right\} \leq \gamma^{l}. \end{equation} \end{lma} \begin{proof} In what follows we write $\{\mathcal{F}_s\}_{s \geq 0}$ for the natural filtration of our event space. We prove \eqref{eq:firstUpperEstimate} and \eqref{eq:secondUpperEstimate}. The remaining estimates are proved similarly and we omit the details. We rearrange the left hand side of \eqref{eq:firstUpperEstimate} and multiply by $t>0$ to obtain \begin{align*} \mathbb{P}\left\{ \sum_{i=k}^{\tau}X_i \geq (1+\varepsilon)(\tau-k+1)\overline{X} \right\} &= \mathbb{P}\left\{\sum_{i=k}^{\tau}t(X_i-(1+\varepsilon)\overline{X}) \geq 0 \right\}\\ &= \mathbb{P}\left\{ \exp\left[ \sum_{i=k}^{\tau}Y_i \right]\geq 1 \right\}, \intertext{with $Y_i=t X_i- t(1+\varepsilon)\overline{X}$. Using Markov's inequality and continuing,} % &\leq \E\left(\exp\left[\sum_{i=k}^{\tau}Y_i\right] \right)\\ % &= \E\left( \E\left( \exp \left[ \sum_{i=k}^\tau Y_i \right] \Big| \mathcal{F}_{\tau-1}\right) \right)\\ % & = \E\left( \E\left( \exp Y_\tau \mid \mathcal{F}_{\tau-1}\right)\E\left( \exp\left[ \sum_{i=k}^{\tau-1}Y_i \right] \Big| \mathcal{F}_{\tau-1} \right) \right)\\ % & = \E \left( \E\left( \exp Y_0 \right)\E \left( \E\left( \exp\left[ \sum_{i=k}^{\tau-1}Y_i \right]\Big| \mathcal{F}_{\tau-2} \right)\Big|\mathcal{F}_{\tau-1} \right) \right)\\ % & = \E \left( \E\left( \exp Y_0 \right)^2 \E \left( \E\left( \exp\left[ \sum_{i=k}^{\tau-2}Y_i \right]\Big| \mathcal{F}_{\tau-2} \right)\Big|\mathcal{F}_{\tau-1} \right) \right)\\ % & \vdots\\ % & = \E \left( \E\left( \exp Y_0 \right)^{\tau-k} \right)\\ % & = \E\left( \E\left( e^{t(X-(1+\varepsilon)\overline{X})} \right)^{\tau-k} \right)\,, \end{align*} as required. To prove \eqref{eq:secondUpperEstimate} we consider \begin{equation*} \gamma_t = \E\left( e^{t\left( X_0-(1+\varepsilon)\overline{X} \right)} \right). \end{equation*} Since $X$ is discrete, we can differentiate with respect to $t$ for all $t\in\R$, and get \begin{align*} \frac{d}{dt}\E\left( e^{t\left( X_0-(1+\varepsilon)\overline{X} \right)} \right)\Big\rvert_{t=0} &= \E\left( \frac{d}{dt} e^{t\left( X_0-(1+\varepsilon)\overline{X} \right)} \right)\Big\rvert_{t=0}\\ \\ &= \E\left( \left( X_0-(1+\varepsilon)\overline{X} \right) e^{t\left( X_0-(1+\varepsilon)\overline{X} \right)} \right)\Big\rvert_{t=0}\\ & = \E\left( X_0-(1+\varepsilon)\overline{X} \right)=-\varepsilon \overline{X}<0. \end{align*} Thus, since $\gamma_0=1$, there exists $t>0$ such that $0<\gamma_t<1$. Note that $t$ (and thus $\gamma_t$) does not depend on on $\tau,k,l$ and we can now use \eqref{eq:firstUpperEstimate} together with the assumption that $\tau-k \geq l$ to obtain \eqref{eq:secondUpperEstimate}, where $\gamma=\gamma_t$. \end{proof} Note that from the definitions of $k_1^\omega(R)$ and $k_2^\omega(R)$ we can conclude that there exist constants $c_1,c_\theta>1$ such that for sufficiently small $R$, \[ k_1^\omega(R) \geq c_1 k_2^\omega(R), \quad k_1^\omega(R^{1/\theta})\geq c_\theta k_1^\omega (R), \quad\text{ and }\quad k_2^\omega(R^{1/\theta}) \geq c_\theta k_2^\omega(R). \] The relationship between $k_1^\omega(R)$ and $k_2^\omega(R^{1/\theta})$ is more complicated and depends heavily on $\omega$ and $R$. However, probabilistically we can say more. Let $\varepsilon>0$ and $q\in\N$. Note that, taking logarithms, \[ \mathbb{P}\left\{ \prod_{i=1}^q n_{\omega_i}^{-1} \leq (\overline{n})^{-(1+\varepsilon)q} \right\} =\mathbb{P} \left\{ \sum_{i=1}^q \log n_{\omega_i} \geq (1+\varepsilon)q\log \overline{n} \right\} \] and therefore, by Lemma~\ref{lma:randomUpperLemma}, there exists $0<\gamma<1$ such that \[ \mathbb{P}\left\{ \prod_{i=1}^q n_{\omega_i}^{-1} \leq (\overline{n})^{-(1+\varepsilon)q} \right\}\leq \gamma^{q-1}. \] Now, summing over $q$, we obtain \[ \sum_{q=1}^\infty \mathbb{P}\left\{ \prod_{i=1}^q n_{\omega_i}^{-1} \leq (\overline{n})^{-(1+\varepsilon)q} \right\}\leq \sum_{q=1}^\infty \gamma^{q-1} < \infty. \] Thus, by the Borel-Cantelli Lemma, almost surely there are at most finitely many $q$ such these events occur. We can similarly argue for a lower bound and conclude that for almost all $\omega\in\Omega$ there exists $q_\omega$ such that, \begin{equation}\label{eq:nproduct} (\overline{n})^{-(1+\varepsilon) q}\leq \prod_{i=1}^q n_{\omega_i}^{-1} \leq (\overline{n})^{-(1-\varepsilon) q}, \end{equation} for all $q\geq q_\omega$. Analogously, \begin{equation}\label{eq:mproduct} (\overline{m})^{-(1+\varepsilon) q}\leq \prod_{i=1}^q m_{\omega_i}^{-1} \leq (\overline{m})^{-(1-\varepsilon) q}, \end{equation} almost surely for all $q$ large enough. Without loss of generality we can assume $q_\omega$ to be identical for both products. Since $k_2^\omega(R)\geq -c\log R$ for some $c>0$ not depending on $\omega,R$, we see that there almost surely also exists an $R_\omega$ such that \eqref{eq:nproduct} and \eqref{eq:mproduct} hold for all $q\geq k_2^\omega(R)$, where $0<R\leq R_\omega$. Given these bounds we can determine the probabilistic relationship between $k_1^\omega(R)$ and $k_2^\omega(R^{1/\theta})$. Let $R_\omega$ be as above. Then by the definitions of $k_1^\omega(R)$ and $k_2^\omega(R^{1/\theta})$ we get, for all $R\leq R_\omega$, \[ (\overline{m})^{-(1+\varepsilon)k_1^\omega(R)}\leq\prod_{i=1}^{k_1^\omega(R)} m_{\omega_i}^{-1}\leq R < n_{\max}^\theta\left( \prod_{i=1}^{k_2^\omega(R^{1/\theta})} n_{\omega_i}^{-1} \right)^{\theta} \leq n_{\max}^\theta (\overline{n})^{-\theta(1-\varepsilon) k_2^\omega(R^{1/\theta})} \] and, after rearranging, \begin{equation}\label{eq:firstkRatio} \frac{k_1^\omega(R)}{k_2^\omega(R^{1/\theta})} > \theta \frac{1-\varepsilon}{1+\varepsilon} \frac{\log\overline{n}}{\log\overline{m}}-\theta\frac{\log n_{\max}}{(1+\varepsilon)\,k_2^\omega(R^{1/\theta})\log\overline{m}}. \end{equation} Similarly, by considering the complementary inequalities \[ \left(\prod_{i=1}^{k_2^\omega(R^{1/\theta})} n_{\omega_i}^{-1}\right)^\theta \leq R < m_{\max} \prod_{i=1}^{k_1^\omega(R)} m_{\omega_i}^{-1} \,, \] we obtain \begin{equation}\label{eq:secondkRatio} \frac{k_1^\omega(R)}{k_2^\omega(R^{1/\theta})}< \theta \frac{1+\varepsilon}{1-\varepsilon} \frac{\log\overline{n}}{\log\overline{m}}-\frac{\log m_{\max}}{(1-\varepsilon)\,k_2^\omega(R^{1/\theta})\log\overline{m}}. \end{equation} Now $\varepsilon>0$ was arbitrary and the last term in \eqref{eq:firstkRatio} and \eqref{eq:secondkRatio} vanishes as $R_\omega$ decreases. Therefore, for all $\delta>0$ and almost all $\omega\in\Omega$, there exists sufficiently small $R_\omega>0$ such that \begin{equation}\label{eq:limitRatio1} (1-\delta)\frac{\theta\log\overline{n}}{\log\overline{m}} \leq \frac{k_1^\omega(R)}{k_2^\omega(R^{1/\theta})}\leq (1+\delta)\frac{\theta\log\overline{n}}{\log\overline{m}}\,, \end{equation} for all $R<R_\omega$. Moreover, using the much simpler relationships derived above, we can assume without loss of generality that $R_\omega$ is small enough such that \begin{equation}\label{eq:limitRatio2} (1-\delta)\frac{\log\overline{n}}{\log\overline{m}}\leq\frac{k_1^\omega(R)}{k_2^\omega(R)}\leq (1+\delta)\frac{\log\overline{n}}{\log\overline{m}}, \end{equation} \begin{equation}\label{eq:limitRatio3} (1-\delta)\theta \leq \frac{k_1^\omega(R)}{k_1^\omega(R^{1/\theta})} \leq (1+\delta)\theta \quad\text{and}\quad (1-\delta)\theta \leq \frac{k_2^\omega(R)}{k_2^\omega(R^{1/\theta})} \leq (1+\delta)\theta \end{equation} all hold simultaneously for all $R<R_\omega$. \subsubsection{The upper bound for $\theta< \log\overline{m}/\log\overline{n}$} We assume throughout that $R_\omega$ is small enough for all inequalities in the last section to hold simultaneously (almost surely). Also, let $\delta>0$ be small enough such that the inequalities at the end of the previous section are all bounded away from $1$. That is we choose $\delta>0$ such that $(1+\delta)\theta<1$, $(1-\delta)\frac{\log\overline{n}}{\log\overline{m}}>1$ and, especially relevent to this section, \eqref{eq:limitRatio1} and $\theta< \log\overline{m}/\log\overline{n}$ imply that we can choose $\delta>0$ sufficiently small such that $k_1^\omega(R)<k_2^\omega(R^{1/\theta})$ almost surely for all $R<R_\omega$. Let $\varepsilon>0$ and consider the geometric average given by \begin{equation}\label{eq:firstAverage} \left(\overline{C}^{k_1^\omega(R)-k_2^\omega(R)}\;\overline{N}^{k_2^\omega(R^{1/\theta})-k_1^\omega(R)} \;\overline{B}^{k_1^\omega(R^{1/\theta})-k_2^\omega(R^{1/\theta})}\right)^{1+\varepsilon}. \end{equation} We want to determine the probability that there exists an approximate $R$ square at a given level such that we need more than the estimate in \eqref{eq:firstAverage} many $R^{1/\theta}$ squares to cover it. Note that for \eqref{eq:upperCounting1} to be larger than \eqref{eq:firstAverage}, at least one of the products must exceed the corresponding power of the average. Therefore, \begin{multline} \mathbb{P}\left\{ N\left( Q\cap F_\omega, R^{1/\theta} \right) \geq 4 \left(\overline{C}^{k_1^\omega(R)-k_2^\omega(R)}\;\overline{N}^{k_2^\omega(R^{1/\theta})-k_1^\omega(R)} \;\overline{B}^{k_1^\omega(R^{1/\theta})-k_2^\omega(R^{1/\theta})}\right)^{1+\varepsilon} \right\} \\ \leq \mathbb{P}\left\{ \left(\prod_{l=k_2^\omega(R)+1}^{k^\omega_1(R)} C_{\omega_l} \right)\geq \overline{C}^{(1+\varepsilon)(k_1^\omega(R)-k_2^\omega(R))}\right\} + \mathbb{P}\left\{ \left(\prod_{l=k^\omega_1(R)+1}^{k^\omega_2\left(R^{1/\theta}\right)} N_{\omega_l} \right) \geq \overline{N}^{(1+\varepsilon)(k_2^\omega(R^{1/\theta})-k_1^\omega(R))}\right\} \\+ \mathbb{P}\left\{\left( \prod_{l=k^\omega_2\left(R^{1/\theta}\right)+1}^{k_1^\omega\left(R^{1/\theta}\right)} B_{\omega_l} \right) \geq \overline{B}^{(1+\varepsilon)(k_1^\omega(R^{1/\theta})-k_2^\omega(R^{1/\theta}))}\right\}. \end{multline} Let us start by analysing the event involving $C_{\omega_l}$. We want to show that the product can only exceed the average behaviour at most finitely many times almost surely. That is, given $q\in\N$, we want to estimate \begin{equation} \label{event111} \mathbb{P}\left\{ \prod_{l=k_2^\omega(R)+1}^{k^\omega_1(R)} C_{\omega_l} \geq \overline{C}^{(1+\varepsilon)(k_1^\omega(R)-k_2^\omega(R))} \text{ for some $R \in (0,R_\omega)$ such that } k_2^\omega(R) = q \right\}. \end{equation} Notice that $k_1^\omega(R)$ is a stopping time and, by \eqref{eq:limitRatio2} and our assumption that $R_\omega$ and $\delta$ are chosen sufficiently small, $k_1^\omega(R) \geq (1-\delta)\log\overline{n}/\log\overline{m} \ k_2^\omega(R)$ and $c:=(1-\delta)\log\overline{n}/\log\overline{m} -1 >0$. Using Lemma~\ref{lma:randomUpperLemma}, we can bound \eqref{event111} above by \begin{align*} \mathbb{P}\left\{ \bigcup_{\substack{q': \exists R \in (0,R_\omega) \\ k_2^\omega(R) = q \text{ and } k_1^\omega(R) = q'}} \left\{ \sum_{l=q+1}^{q'} \log C_{\omega_l} \geq (1+\varepsilon)(q'-q)\log\overline{C} \right\} \right\} \leq L\gamma^{c(q-1)}\,, \end{align*} for some $0<\gamma<1$ where $L>0$ is a deterministic constant corresponding to the number of possible values for $k_1^\omega(R)$, given $k_2^\omega(R)$. Since \[ \sum_{q=1}^\infty L \gamma^{c(q-1)} < \infty\,, \] the Borel-Cantelli lemma implies that the product can exceed the average behaviour only finitely many times almost surely. The argument for $N_{\omega_l}$ and $B_{\omega_l}$ is identical due to the ratios given in \eqref{eq:limitRatio1}, \eqref{eq:limitRatio2}, and \eqref{eq:limitRatio3}. Therefore, there almost surely exists $q$ large enough---and hence $R_\omega'$ small enough---such that \[ N\left( Q\cap F_\omega, R^{1/\theta} \right) \leq 4 \left(\overline{C}^{k_1^\omega(R)-k_2^\omega(R)}\;\overline{N}^{k_2^\omega(R^{1/\theta})-k_1^\omega(R)} \;\overline{B}^{k_1^\omega(R^{1/\theta})-k_2^\omega(R^{1/\theta})}\right)^{1+\varepsilon}\,, \] for all $0<R<R_\omega'$. Using \eqref{eq:limitRatio1}, \eqref{eq:limitRatio2}, and \eqref{eq:limitRatio3} again, we obtain \[ k_1^\omega(R)-k_2^\omega(R)\leq \left( (1+\delta)\frac{\log\overline{n}}{\log\overline{m}}-1\right)k_2^\omega(R) \] \[ k_2^\omega(R^{1/\theta})-k_1^\omega(R) \leq \left(\frac{1+\delta}{1-\delta}\theta^{-1} - (1+\delta)\frac{\log\overline{n}}{\log\overline{m}} \right)k_2^\omega(R) \] and \[ k_1^\omega(R^{1/\theta})-k_2^\omega(R^{1/\theta}) \leq \left( \frac{1+\delta}{1-\delta}\frac{\log\overline{n}}{\log\overline{m}} - (1-\delta)^{-1} \right)\theta^{-1}k_2^\omega(R). \] Now, using $k_2^\omega(R) \leq -(1+\delta)\log R / \log\overline{n}$, we rearrange, \[ \overline{C}^{k_1^\omega(R)-k_2^\omega(R)}\leq \overline{C}^{-(1+\delta)^2\log R / \log\overline{m} - (-(1+\delta)\log R /\log\overline{n})} = R^{(1-1/\theta)s_c}, \] where \[ s_c = (1+\delta)^2\frac{\log\overline{C}}{\log\overline{m}}\frac{\theta}{1-\theta}-(1+\delta)\frac{\log\overline{C}}{\log\overline{n}}\frac{\theta}{1-\theta} \quad \to \quad s_C := \frac{\theta}{1-\theta}\left(\frac{\log\overline{C}}{\log\overline{m}} - \frac{\log\overline{C}}{\log\overline{n}} \right),\quad\text{as}\quad \delta\to0. \] We rearrange the other terms similarly to obtain \[ N\left( Q\cap F_\omega, R^{1/\theta} \right) \leq 4 R^{(1-1/\theta)(1+\varepsilon)(s_c+s_n+s_b)}, \] where \[ s_n = -(1+\delta)^2 \frac{\log\overline{N}}{\log\overline{m}}\frac{\theta}{1-\theta} + \frac{(1+\delta)^2}{1-\delta}\frac{\log\overline{N}}{\log\overline{n}}\frac{1}{1-\theta} \quad \to \quad s_N:=\frac{1}{1-\theta}\left(\frac{\log\overline{N}}{\log\overline{n}}-\theta \frac{\log\overline{N}}{\log\overline{m}} \right),\quad\text{as}\quad \delta\to0, \] and \[ s_b = -\frac{1+\delta}{1-\delta}\frac{\log\overline{B}}{\log\overline{n}}\frac{1}{1-\theta}+ \frac{(1+\delta)^2}{1-\delta} \frac{\log\overline{B}}{\log\overline{m}}\frac{1}{1-\theta} \quad \to \quad s_B:=\frac{1}{1-\theta}\left( \frac{\log\overline{B}}{\log\overline{m}}-\frac{\log\overline{B}}{\log\overline{n}} \right),\quad\text{as}\quad \delta\to0. \] For arbitrary $\varepsilon'>0$ we may assume $\delta>0$ is small enough such that $s_c+s_n+s_b \leq (1+\varepsilon')(s_C+s_N+s_B)$. Note that \begin{equation}\label{eq:dimensionCase1} s:=s_C+s_N+s_B = \frac{1}{1-\theta}\left[ \left( \frac{\log\overline{B}}{\log\overline{m}}+\frac{\log\overline{N}/\overline{B}}{\log\overline{n}} \right) - \theta\left( \frac{\log\overline{N}/\overline{C}}{\log\overline{m}}+ \frac{\log\overline{C}}{\log\overline{n}} \right) \right]. \end{equation} We can therefore conclude that, almost surely, every approximate square of length $R<R_\omega$ can be covered by fewer than \begin{equation}\nonumber 4 R^{(1-1/\theta)(1+\varepsilon)(1+\varepsilon')s} \end{equation} sets of diameter $R^{1/\theta}$. Thus the Assouad spectrum is bounded above by $(1+\varepsilon)(1+\varepsilon')s$ and by the arbitrariness of $\varepsilon,\varepsilon'$, also by $s$. \qed \subsubsection{The upper bound for $\theta> \log\overline{m}/\log\overline{n}$} The proof for this case follows along the same lines as $\theta < \log\overline{m}/\log\overline{n}$ and we will only sketch their differences. First note that $\theta> \log\overline{m}/\log\overline{n}$ implies the almost sure existence of a small enough $R_\omega$ such that \[ k_1^\omega(R)\geq (1-\delta)\theta \frac{\log\overline{n}}{\log\overline{m}} k_2^\omega(R^{1/\theta})\,, \] for all $R<R_\omega$. Again we assume without loss of generality that $R_\omega$ is chosen such that \eqref{eq:limitRatio1}, \eqref{eq:limitRatio2}, and \eqref{eq:limitRatio3} are satisfied for a given $\delta>0$. We also choose $\delta>0$ small enough to ensure that \[ (1-\delta)\theta \frac{\log\overline{n}}{\log\overline{m}} > 1. \] Let $\varepsilon>0$ and consider the geometric average \[ \left( \overline{C}^{k_2^\omega(R^{1/ \theta})-k_2^\omega(R)}\overline{B}^{k_1^\omega(R^{1/\theta})-k_1^\omega(R)} \right)^{1+\varepsilon}. \] We compare the upper bound given in \eqref{eq:upperCounting2} with the average above and obtain \begin{multline} \mathbb{P}\left\{ N\left( Q\cap F_\omega, R^{1/\theta} \right) \geq 4 \left(\overline{C}^{k_2^\omega(R^{1/\theta})-k_2^\omega(R)} \;\overline{B}^{k_1^\omega(R^{1/\theta})-k_1^\omega(R)}\right)^{1+\varepsilon} \right\} \\ \leq \mathbb{P}\left\{ \left(\prod_{l=k_2^\omega(R)+1}^{k^\omega_2(R^{1/\theta})} C_{\omega_l} \right)\geq \overline{C}^{(1+\varepsilon)(k_2^\omega(R^{1/\theta})-k_2^\omega(R))}\right\} + \mathbb{P}\left\{\left( \prod_{l=k^\omega_1\left(R\right)+1}^{k_1^\omega\left(R^{1/\theta}\right)} B_{\omega_l} \right) \geq \overline{B}^{(1+\varepsilon)(k_1^\omega(R^{1/\theta})-k_1^\omega(R))}\right\}. \nonumber \end{multline} Now using the same ideas as before, noting that $k_1^\omega(\cdot)$ and $k_2^\omega(\cdot)$ are stopping times, we can conclude that for almost every $\omega\in\Omega$ there exists $R_\omega$ such that \[ N\left( Q\cap F_\omega, R^{1/\theta} \right) \leq 4 \left( \overline{C}^{k_2^\omega(R^{1/ \theta})-k_2^\omega(R)}\overline{B}^{k_1^\omega(R^{1/\theta})-k_1^\omega(R)} \right)^{1+\varepsilon}\,, \] for all $R<R_\omega$. Using the estimates for $k_1^\omega(R^{1/\theta})/k_1^\omega(R)$ and $k_2^\omega(R^{1/\theta})/k_2^\omega(R)$ in \eqref{eq:limitRatio3} we see that there exists $\varepsilon'>0$ such that for sufficiently small $R$, \[ \left( \overline{C}^{k_2^\omega(R^{1/ \theta})-k_2^\omega(R)}\overline{B}^{k_1^\omega(R^{1/\theta})-k_1^\omega(R)} \right)^{1+\varepsilon} \leq R^{(1-1/\theta)(1+\varepsilon)(1+\varepsilon')s}, \] where \[ s=\frac{\log\overline{B}}{\log\overline{m}}+\frac{\log\overline{C}}{\log\overline{n}}. \] As before, this is sufficient to prove that for $\theta>\log\overline{m}/\log\overline{n}$, there almost surely exists $R_\omega$ such that all approximate $R$-squares with $R< R_\omega$ can be covered by fewer than \[ 4 R^{(1-1/\theta)(1+\varepsilon)(1+\varepsilon')s} \] sets of diameter $R^{1/\theta}$. This proves that $\dim_\textup{A}^\theta F_\omega \leq (1+\varepsilon)(1+\varepsilon')s$ almost surely, and hence, by arbitrariness of $\varepsilon,\varepsilon'>0$ that $\dim_\textup{A}^\theta F_\omega \leq s$ almost surely, as required. \qed \subsubsection{The lower bound for $\theta< \log\overline{m}/\log\overline{n}$} To prove almost sure lower bounds for $\dim_\textup{A}^\theta F_\omega$ we need to show that almost surely there exists a sequence $R_i \to 0$ such that for each $i$ there is an approximate $R_i$-square which requires at least a certain number of sets of diameter $R_i^{1/\theta}$ to cover it. Let $\theta< \log\overline{m}/\log\overline{n}$ and, as before, we choose $\delta>0$ small enough such that $k_1^\omega(R)<k_2^\omega(R^{1/\theta})$ almost surely for all small enough $R$. Let $\varepsilon>0$, and given $q\in\N$ and $\omega \in \Omega$, let \[ R_q = \prod_{l=1}^{q} n_{\omega_l}^{-1}\,, \] noting that $k_2^\omega(R_q)=q$ and $R_q \to 0$ as $q \to \infty$. We have \begin{multline}\label{eq:lowerProbBound} \mathbb{P}\Bigg\{ N\left( Q\cap F_\omega, R_q^{1/\theta} \right) \geq K \left(\overline{C}^{k_1^\omega(R_q)-k_2^\omega(R_q)}\;\overline{N}^{k_2^\omega(R_q^{1/\theta})-k_1^\omega(R_q)} \;\overline{B}^{k_1^\omega(R_q^{1/\theta})-k_2^\omega(R_q^{1/\theta})}\right)^{1-\varepsilon} \Bigg\} \\ \geq 1- \mathbb{P}\left\{ \prod_{l=k_2^\omega(R_q)+1}^{k^\omega_1(R_q)} C_{\omega_l} \leq \overline{C}^{(1-\varepsilon)(k_1^\omega(R_q)-k_2^\omega(R_q))}\quad\text{ or }\quad \prod_{l=k^\omega_1(R_q)-1}^{k^\omega_2\left(R_q^{1/\theta}\right)} N_{\omega_l} \leq \overline{N}^{(1-\varepsilon)(k_2^\omega(R_q^{1/\theta})-k_1^\omega(R_q))}\right. \\ \left. \text{or}\quad \prod_{l=k^\omega_2\left(R_q^{1/\theta}\right)+1}^{k_1^\omega\left(R_q^{1/\theta}\right)} B_{\omega_l} \leq \overline{B}^{(1-\varepsilon)(k_1^\omega(R_q^{1/\theta})-k_2^\omega(R_q^{1/\theta}))} \right\}. \end{multline} The last term is bounded above by \begin{multline}\nonumber \mathbb{P}\left\{ \prod_{l=k_2^\omega(R_q)+1}^{k^\omega_1(R_q)} C_{\omega_l} \leq \overline{C}^{(1-\varepsilon)(k_1^\omega(R_q)-k_2^\omega(R_q))}\right\}\\ +\mathbb{P}\left\{ \prod_{l=k^\omega_1(R_q)-1}^{k^\omega_2\left(R_q^{1/\theta}\right)} N_{\omega_l} \leq \overline{N}^{(1-\varepsilon)(k_2^\omega(R_q^{1/\theta})-k_1^\omega(R_q))}\right\}\\ +\mathbb{P}\left\{ \prod_{l=k^\omega_2\left(R_q^{1/\theta}\right)+1}^{k_1^\omega\left(R_q^{1/\theta}\right)} B_{\omega_l} \leq \overline{B}^{(1-\varepsilon)(k_1^\omega(R_q^{1/\theta})-k_2^\omega(R_q^{1/\theta}))}\right\} \end{multline} and by Lemma~\ref{lma:randomUpperLemma} and the union estimate used above each probability is bounded above by $L' \gamma^{c'q}$ for some constants $L', c'>0$ and $\gamma \in (0,1)$. Thus, there exists $q_0$ such that each term in the sum is bounded by $1/6$ for $q\geq q_0$ and thus the probability on the left hand side of \eqref{eq:lowerProbBound} is bounded below by $1/2$ for $q\geq q_0$. Denote the event on the left hand side of \eqref{eq:lowerProbBound} by $E_q$. Observe that the event only depends on the values of $\omega_i$ for $i$ satisfying $q= k_2^\omega(R_q) \leq i \leq k_1^\omega(R_q^{1/\theta})$ as the latter bound is a stopping time. By virtue of construction, there exists an integer $d \geq 1$ such that $k_1^\omega\left(R_{d^iq}^{1/\theta}\right)< d^{i+1} q$ for all $q$. Therefore the events $\{E_q, E_{dq}, E_{d^2 q},\dots\}$ are pairwise independent. Further, by the above argument, \[ \sum_{i=0}^{\infty}\mathbb{P}(E_{d^i q_0})\geq \sum_{i=0}^{\infty} 1/2 =\infty \] and so by the Borel-Cantelli Lemmas $E_q$ happens infinitely often. Therefore, adapting the argument involving $s_c, s_b$ and $s_b$ from above, we have proved that for all $\varepsilon'>0$ there almost surely exist infinitely many $q \in \mathbb{N}$ such that there exists an approximate $R_q$-square $Q$ such that \[ N\left( Q\cap F_\omega, R_q^{1/\theta} \right) \geq K R_q^{(1-1/\theta)(1-\varepsilon') s}, \] where $s=s_C+s_B+s_N$ is the target lower bound for the spectrum. This completes the proof. \qed \section*{Acknowledgements} This work was started while both authors were resident at the Institut Mittag-Leffler during the 2017 semester programme \emph{Fractal Geometry and Dynamics}. They are grateful for the stimulating environment. Much of the work was subsequently carried out whilst JMF visited the University of Waterloo in March 2018. He is grateful for the financial support, hospitality, and inspiring research atmosphere. \\
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The development of efficient techniques for computing scattering amplitudes in gauge theories has led to the discovery of new unexpected properties in the on--shell sector of these theories. In four dimensions, the use of stringy--inspired methods \cite{dixon}, twistor string theory \cite{witten} and AdS/CFT correspondence \cite{maldacena} has allowed to dig out hidden symmetries for the planar sector of the maximally supersymmetric Yang--Mills theory. In particular, planar ${\cal N}=4$ SYM has been proved to be integrable \cite{integrability} and the related Yangian symmetry \cite{Drummond:2009fd} to be responsible for a duality between scattering amplitudes and Wilson loops (WL). This duality has been checked perturbatively in many cases \cite{Drummond:2006rz}--\cite{Belitsky:2011zm}, while at strong coupling it relies on the self--duality of type IIB string on ${\rm AdS}_5 \times {\rm S}_5$ under a suitable combination of bosonic and fermionic T--dualities \cite{AM, BM}. Another duality has been found at weak coupling which involves WL and correlation functions of BPS operators \cite{AEKMS}--\cite{Adamo:2011dq}. Since AdS/CFT has been playing a fundamental role in the discovery of these new hidden properties and at the same time their perturbative confirmation represents a non--trivial test of the correspondence, it is mandatory to investigate whether similar properties emerge in other classes of theories for which a string dual description is known. We are interested in the class of three dimensional ${\cal N}=6$ ABJM theories \cite{ABJM} which are dual to type IIA string theory on ${\rm AdS}_4 \times {\rm CP}_3$. A distinguished feature of these models compared to the more famous ${\cal N}=4$ SYM in four dimensions, is that they are not maximally supersymmetric. Moreover, the proof of the amplitudes/WL duality in type IIA string on ${\rm AdS}_4 \times {\rm CP}_3$ is complicated by the emergence of singularities in the fermionic T--transformations \cite{ADO}--\cite{DO}. Therefore, a priori, it is not totally obvious that we should expect dualities and hidden symmetries to be realized in ABJM models exactly in the same way as in their four--dimensional counterpart. Preliminary results can be found in literature, concerning integrability \cite{MZ, GV, LMMSSST,MOS} and related Yangian symmetry \cite{BLM, Lee}. Perturbative investigation of these properties have been performed. At tree level, scattering amplitudes are invariant under dual superconformal symmetry \cite{HL, GHKLL} whose generators are the level--one generators of a Yangian symmetry \cite{BLM}. A first indication of the duality between scattering amplitudes and WL comes from the fact that at one--loop, both the four--point amplitude \cite{ABM} and the light--like four--polygon WL \cite{HPW, BLMPRS} vanish. Recently, $n$--point correlators of BPS scalar operators have been proved to vanish at one--loop \cite{BLMPRS}, so providing first evidence of a triad correlation functions/WL/amplitudes duality in three dimensions. However, non--trivial perturbative support to these dualities can come only at orders where these quantities do not vanish. At two loops, for the ABJM model the light--like four--polygon WL has been computed in the planar limit \cite{HPW}. In dimensional regularization, taking the light--like limit $(x_i - x_{i+1})^2 \equiv x_{i,i+1}^2 \to 0$, the result is non--vanishing and given by \footnote{This result differs from the one in the published version of Ref. \cite{HPW} by an overall minus sign and a different constant $K$. The authors of Ref. \cite{HPW} agree with us on the correctness of result (\ref{WL}).} \begin{equation} \langle W_4 \rangle^{(2)}= \lambda^2 \left[ - \frac{(-\mu_{WL}^2 x_{13}^2)^{2\epsilon}}{(2\epsilon)^2} - \frac{(-\mu_{WL}^2 x_{24}^2)^{2\epsilon}}{(2\epsilon)^2} + \frac12 \ln^2 {\left( \frac{x_{13}^2}{x_{24}^2} \right)} + C + {\cal O}(\epsilon) \right] \label{WL} \end{equation} where $\mu_{WL}$ is the (properly rescaled) UV mass scale of dimensional regularization, $\lambda \equiv N/K$ is the ABJM coupling constant and $C = \pi^2/2 + 2 \ln{2} + 5\ln^2{2} -a_6/4$, with $a_6$ a numerical constant. In this paper, using ${\cal N}=2$ superspace description and a direct Feynman diagram approach, we evaluate the two--loop contribution to the planar scattering superamplitude of four chiral superfields which, in components, gives rise to the amplitude for two scalars and two chiral fermions. The amplitude involves two external particles in the bifundamental representation of the $U(N) \times U(N)$ gauge group and two particles in the antibifundamental. This is what mostly resembles a MHV amplitude in four dimensions. Defining ${\cal M}_4$ to be the superamplitude divided by its tree level contribution, we find \begin{equation}\label{amplitude} \mathcal{M}^{(2)} \equiv \frac{\mathcal{A}_4^{(2 \, loops)}}{\mathcal{A}^{tree}_4} = \lambda^2\, \left[-\frac{( s/\mu'^2)^{-2\epsilon}}{(2\, \epsilon)^2}-\frac{(t/\mu'^2)^{-2\epsilon}}{(2\, \epsilon)^2}+\frac12\,\ln^2 \left(\frac{s}{t}\right)+{\cal C}+ \mathcal{O}(\epsilon)\right] \end{equation} where $\mu'$ is the (conveniently redefined) IR scale of dimensional regularization and ${\cal C}=4\zeta_2+3\ln^2 2$ is a numerical constant. This result has a number of remarkable properties. First of all, as in the ${\cal N}=4$ SYM case, the two--loop amplitude is proportional to the tree level contribution times a function of the kinematic invariants. We find that, up to an additive, scheme dependent constant, this function matches exactly the result (\ref{WL}) once the IR regularization is formally identified with the UV one and the particle momenta are expressed in terms of dual coordinates, $p_i = x_{i,i+1}$ (note also that the invariants in (\ref{WL}) differ from those in (\ref{amplitude}) by a sign, since the former was worked out in Minkowskian signature, whereas our result has been derived using the Euclidean metric). Therefore, at least for the four--point amplitude, we find evidence for the following identity \begin{equation} \ln{ {\cal M}_4} = \ln \langle W_4 \rangle + {\rm const.} \end{equation} that should hold order by order in the perturbative expansion of the two objects. Quite remarkably, the two--loop result we have found has the same functional structure as the one--loop correction to the four--point scattering amplitude for the four dimensional ${\cal N}=4$ SYM theory. As proved in \cite{Drummond:2007aua}, for ${\cal N}=4$ SYM all momentum integrals up to four loops are dual to four dimensional true conformally invariant integrals, well defined off--shell. As a consequence, the four--point amplitude satisfies anomalous Ward identities associated to dual conformal transformations \cite{Drummond:2007au}, as dual conformal invariance is broken in the on--shell limit by the appearance of IR divergences which require introducing a mass regulator. A natural question arises whether the same pattern is present in three dimensional ABJM theories. We briefly discuss dual conformal invariance of the momentum integrals that occur in our two--loop diagrammatic calculation, which does not assume dual conformal invariance a priori. At the level of the integrands every single diagram does not appear to be invariant under dual conformal invariance, since they transform non--trivially under inversion. Nevertheless we still expect this symmetry to be present in the on--shell amplitude, since our result matches the Wilson loop computation, which possesses the standard conformal invariance of the ABJM theory (eventually broken anomalously by UV divergences). This means that, on--shell, it should be possible to rewrite the amplitude as a linear combination of scalar integrals which are dual invariant in three dimensions. In the ${\cal N}=4$ SYM case, an ansatz for all--loop $n$--point MHV amplitudes has been proposed \cite{BDS, Bern:2008ap}, where the all--loop amplitudes exponentiate and turn out to be determined by the one--loop result times the perturbative expansion of the scaling function $f_{{\cal N}=4}(\lambda)$ as a function of the 't Hooft coupling. Remarkably, we find that the two--loop four--point function for the ABJM model can be obtained from the second order expansion of the same BDS--like ansatz where the four dimensional scaling function is substituted by the three dimensional one, $f_{CS}(\lambda)$ as obtained from the conjectured asymptotic Bethe equations \cite{GV}. Therefore, we make the conjecture that the all loop four--point amplitude is given by \begin{equation} \frac{\mathcal{A}_4}{\mathcal{A}_4^{tree}} = e^{Div + \frac{f_{CS}(\lambda)}{8}\left(\ln^2\left(\frac{s}{t}\right)+ \frac{4 \pi^2}{3} + 6 \ln^2 2\right) + C(\lambda) } \end{equation} where now $\lambda$ is the ABJM coupling and $C(\lambda)$ is a scheme--dependent constant. Since $f_{CS}(\lambda)$ is known up to order $\lambda^4$ \cite{LMMSSST,MOS}, we may predict the exact four--loop contribution to the four--point function (see eq. (\ref{conjecture})).\\ NOTE: When this work was already completed, a paper \cite{Chen:2011vv} appeared, which has significant overlap. Although we draw the same conclusions, we stress that our computation, being based on a direct Feynman diagram approach is completely independent of that in \cite{Chen:2011vv}, which makes use of generalized unitarity methods. \section{ABJM model in ${\cal N}=2$ superspace} An on--shell realization of ${\cal N}=6$ supersymmetric ABJM models can be given in terms of ${\cal N}=2$ three dimensional superspace \cite{Klebanov}. For $U(N) \times U(N)$ gauge group, the physical field content is organized into two vector multiplets $(V,\hat{V})$ in the adjoint representation of the first and the second $U(N)$'s, coupled to chiral multiplets $A^i$ and $B_i$ carrying a fundamental index $i=1,2$ of a global $SU(2)_A \times SU(2)_B$ and in the bifundamental and antibifundamental representations of the gauge group, respectively. The ${\cal N}=6$ supersymmetric action reads \begin{equation} {\cal S} = {\cal S}_{\mathrm{CS}} + {\cal S}_{\mathrm{mat}} \label{eqn:action} \end{equation} with \begin{eqnarray} \label{action} && {\cal S}_{\mathrm{CS}} = \frac{K}{4\pi} \, \int d^3x\,d^4\theta \int_0^1 dt\: \Big\{ \textstyle{Tr} \Big[ V \overline{D}^\alpha \left( e^{-t V} D_\alpha e^{t V} \right) \Big] - \textstyle{Tr} \Big[ \hat{V} \overline{D}^\alpha \left( e^{-t \hat{V}} D_\alpha e^{t \hat{V}} \right) \Big] \Big\} \non \\ \non \\ && {\cal S}_{\mathrm{mat}} = \int d^3x\,d^4\theta\: \textstyle{Tr} \left( \bar{A}_i e^V A^i e^{- \hat{V}} + \bar{B}^i e^{\hat V} B_i e^{-V} \right) \non \\ &~& ~ + \frac{2\pi i}{K} \int d^3x\,d^2\theta\: \epsilon_{ik} \epsilon^{jl} \, {\textstyle{Tr} (A^i B_j A^k B_l)} + \frac{2\pi i}{K} \int d^3x\,d^2\bar{\theta}\: \epsilon^{ik} \epsilon_{jl} \, {\textstyle{Tr} (\bar{A}_i \bar{B}^j \bar{A}_k\bar{B}^l} ) \non\\ \end{eqnarray} Here $K$ is an integer, as required by gauge invariance of the effective action. In the perturbative regime we take $\lambda \equiv \frac{N}{K} \ll 1$. The quantization of the theory can be easily carried on in superspace after performing gauge fixing (for details, see for instance \cite{BPS}). In momentum space and using Landau gauge, this leads to gauge propagators \begin{eqnarray} && \langle V^a_{\, b}(1) \, V^c_{\, d}(2) \rangle = \frac{4\pi}{K} \, \frac{1}{p^2} \, \, \delta^a_d \, \delta^c_b \times \overline{D}^\alpha D_\alpha \, \delta^4(\theta_1-\theta_2) \nonumber \\ && \langle \hat V^{\bar{a}}_{\bar{b}} (1) \, \hat V^{\bar{c}}_{\bar{d}}(2) \rangle = - \frac{4\pi}{K} \, \frac{1}{p^2} \, \, \delta^{\bar{a}}_{\bar{d}} \, \delta^{\bar{c}}_{\bar{b}} \times \overline{D}^\alpha D_\alpha \, \delta^4(\theta_1-\theta_2) \label{gaugeprop} \end{eqnarray} whereas the matter propagators are \begin{eqnarray} &&\langle \bar A^{\bar{a}}_{\ a}(1) \, A^b_{\ \bar{b}}(2) \rangle = \frac{1}{p^2} \,\, \delta^{\bar{a}}_{\ \bar{b}} \, \delta^{\ b}_{a} \times \delta^4(\theta_1 - \theta_2) \nonumber \\ && \langle \bar B^a_{\ \bar{a}}(1) \, B^{\bar{b}}_{\ b}(2) \rangle = \frac{1}{p^2} \,\, \delta^a_{\ b} \, \delta^{\ \bar{b}}_{\bar{a}} \times \delta^4(\theta_1 - \theta_2) \label{scalarprop} \end{eqnarray} where $a,b$ and $\bar{a}, \bar{b}$ are indices of the fundamental representation of the first and the second gauge groups, respectively. The vertices employed in our two--loop calculation can be easily read from the action (\ref{action}) and they are given by \begin{eqnarray} \label{vertices} && \int d^3x\,d^4\theta\: \left[ \textstyle{Tr} ( \bar{A}_i V A^i) - \textstyle{Tr} ( B_i V \bar{B}^i ) + \textstyle{Tr} ( \bar{B}^i {\hat V} B_i ) - \textstyle{Tr} ( A^i {\hat{V}} \bar{A}_i ) \right] \non \\ && \qquad + \frac{4\pi i}{K} \int d^3x\,d^2\theta\: \, \Big[ \textstyle{Tr} (A^1 B_1 A^2 B_2) - \textstyle{Tr} (A^1 B_2 A^2 B_1)\Big] ~+~ {\rm h.c.} \end{eqnarray} We work in euclidean superspace with the effective action defined as $e^{\Gamma} = \int e^S$. Since the vector fields are not propagating, the only non--trivial amplitudes of the theory are the ones involving matter external particles. In ${\cal N}=2$ superspace language this means having $A$, $B$ and their complex conjugates as external superfields. Given the structure of the vertices, it is straightforward to see that only amplitudes with an even number of external legs are non--vanishing. This is consistent with the requirement for the amplitudes to be Lorentz and dilatation invariant \cite{HL}. Each external scalar particle carries an on--shell momentum $p_{\alpha\beta}$ ($p^2 =0$), an $SU(2)$ index and color indices corresponding to the two gauge groups. We classify as particles the ones carrying $(N, \bar{N})$ indices and antiparticles the ones carrying $(\bar{N}, N)$ indices. Therefore, $(A^i, \bar{B}^j)$ are {\em particles}, whereas $(B_i, \bar{A}_j)$ are {\em antiparticles}. We are interested in the simplest non--trivial amplitudes, that is four--point amplitudes. Without loosing generality we consider the $(A B A B)$ superamplitude. All the other superamplitudes can be obtained from this one by $SU(4)$ R--symmetry transformations. The color indices can be stripped out, as we can write \begin{equation} {\cal A}_4 \left( A^{a_1}_{\, \bar{a}_1}\, B^{\bar{b}_2}_{\, b_2} \, A^{a_3}_{\, \bar{a}_3} B^{\bar{b}_4}_{\, b_4} \right) = \sum_{\sigma} {\cal A}_4(\sigma(1), \cdots , \sigma(4) ) \; \delta^{a_{\sigma(1)}}_{b_{\sigma(2)}} \, \delta^{\bar{b}_{\sigma(2)}}_{\bar{a}_{\sigma(3)}} \, \delta^{a_{\sigma(3)}}_{b_{\sigma(4)}} \delta^{\bar{b}_{\sigma(4)}}_{\bar{a}_{\sigma(1)}} \end{equation} where the sum is over exchanges of even or odd sites between themselves. \section{The four point amplitude at two loops} We study four--point scattering amplitudes of the type $(A^i B_j A^k B_l)$, where the external $A,B$ particles carry outgoing momenta $p_1,\dots,p_4$ ($p_i^2=0$). As usual, Mandelstam variables are defined by $s=(p_1+p_2)^2, t=(p_1+p_4)^2, u=(p_1+p_3)^2$. At tree level the amplitude is simply given by the diagram in Fig. \ref{0set} (a) associated to the classical superpotential in (\ref{vertices}). Its explicit expression is \begin{equation} {\cal A}_4^{tree}(A^i(p_1), B_j(p_2), A^k(p_3), B_l (p_4)) = \frac{2\pi i}{K} \epsilon^{ik} \epsilon_{jl} \end{equation} At one loop it has been proved to vanish \cite{ABM}. In ${\cal N}=2$ superspace language a symmetry argument shows that the only diagram that can be constructed (Fig. \ref{0set}$b$) leads to a vanishing contribution both off--shell and on--shell \cite{AndreaMati}. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{fig1.eps} \caption{Diagrams contributing to the tree level and 1--loop four--point scattering amplitude. } \label{0set} \end{figure} At two loops, in the planar sector, the amplitude can be read from the single trace part of the two--loop effective superpotential \begin{eqnarray} \label {effepotential} &&\Gamma^{(2)}[A,B]=\int d^2\theta d^3p_1\dots d^3p_4\,(2\pi)^3\,\delta^{(3)}({\sum}_i p_i)\times\\ &&\frac{2\pi i}{K}\epsilon_{ik}\epsilon^{jl}\,\mathrm{tr}\left(A^i(p_1) B_j(p_2)A^k(p_3)B_l(p_4)\right)\sum\limits_{X=a}^{g}\mathcal{M}^{(X)}(p_1,\dots,p_4) \end{eqnarray} where the sum runs over the first six diagrams in Fig. \ref{1set}, plus the contribution from the 1P--reducible (1PR) graph in Fig. \ref{1set}$g$ where the bubble indicates the two--loop correction to the chiral propagator. In (\ref{effepotential}) we have factorized the tree level expression, so that $ \mathcal{M}^{(X)}(p_1,\dots,p_4)$ are contributions to ${\cal A}_4^{(2 \, loops)}/{\cal A}_4^{tree}$. In order to evaluate the diagrams we fix the convention for the upper--left leg to carry momentum $p_1$ and name the other legs counterclockwise. The total contribution from every single graph is then given by summing over all possible permutations of the external legs accounting for the different scattering channels. The momentum--dependent contributions in (\ref{effepotential}) are the product of a combinatorial factor times a sum of ordinary Feynman momentum integrals arising after performing D--algebra on each supergraph (details can be found in \cite{AndreaMati, BLMPS2}). Massless scattering amplitudes are affected by IR divergences. We deal with them by dimensional regularization, $d = 3 -2\epsilon$, $\epsilon<0$. \begin{figure}[h!] \centering \includegraphics[width=0.7\textwidth]{fig2.eps} \caption{Diagrams contributing to the two--loop four--point scattering amplitude. The dark--gray blob represents one--loop corrections and the light--gray blob two--loop ones.} \label{1set} \end{figure} We begin by evaluating the simplest graph 2$a$. After performing D--algebra, its $s$--channel contribution shown in Fig. \ref{1set} is given by a two--loop factorized Feynman integral \begin{equation} \mathcal{D}^s_a=\mu^{4\epsilon}\int\frac{d^d k}{(2\pi)^d}\frac{d^d l}{(2\pi)^d}\frac{-(p_1+p_2)^2}{k^2\,(k+p_1+p_2)^2\, l^2\,(l-p_3-p_4)^2}= -G[1,1]^2\left(\frac{\mu^2}{s}\right)^{2\epsilon} \end{equation} where $\mu$ is the mass scale of dimensional regularization and the $G$ function is defined by \begin{equation} G[a,b]=\frac{\Gamma(a+b-d/2)\Gamma(d/2-a)\Gamma(d/2-b)} {(4\pi)^{d/2}\Gamma(a)\Gamma(b)\Gamma(d-a-b)} \end{equation} Taking into account all contributions of this type with color/flavor factors we obtain \begin{equation} \mathcal{M}^{(a)}=-(4\pi\lambda)^2 G[1,1]^2\left(\left(\frac{\mu^2}{s}\right)^{2\epsilon}+\left(\frac{\mu^2}{t}\right)^{2\epsilon}\right)=-3\zeta_2\lambda^2+\mathcal{O}(\epsilon) \end{equation} The contribution from diagram 2$b$, after D--algebra and with the particular assignment of momenta as in figure, is given by \begin{equation} \mathcal{D}^{s_1}_b=\mu^{4\epsilon}\int\frac{d^d k}{(2\pi)^d}\frac{d^d l}{(2\pi)^d}\frac{2(p_3+p_4)^2}{l^2\,(l+k)^2\,(k-p_4)^2\,(k+p_3)^2}= \frac{2 G[1,1] \Gamma(1+2\epsilon)\Gamma^2(-2\epsilon)}{(4\pi)^{d/2}\Gamma(1/2-3\epsilon)\,( s/\mu^2)^{2\epsilon}} \end{equation} Therefore, summing over all four contributions we get \begin{equation} \mathcal{M}^{(b)}=(4\pi\lambda)^2 \frac{ G[1,1] \Gamma(1+2\epsilon)\Gamma^2(-2\epsilon)}{(4\pi)^{d/2}\Gamma(1/2-3\epsilon)}\left(\left(\frac{\mu^2}{s}\right)^{2\epsilon}+\left(\frac{\mu^2}{t}\right)^{2\epsilon}\right) \end{equation} which is infrared divergent. Diagram 2$c$, in contrast with the previous ones, is infrared divergent even when considered off--shell. This unphysical infrared divergence is cured by adding the 1PR diagram corresponding to two--loop self--energy corrections to the superpotential, depicted in Fig. \ref{1set}$g$. In fact, the contribution from this diagram, when the correction is on the $p_4$ leg, yields \begin{equation}\label{gamba} \mathcal{D}^{4}_g=-3\,G[1,1]\,G[1,3/2+\epsilon]\,(p_4^2)^{-2\epsilon}+2G[1,1]^2\,(p_4^2)^{-2\epsilon} \end{equation} The first term of this expression is infrared divergent even off--shell, but precisely cancels the infrared divergence of diagram 2$c$. The second term in (\ref{gamba}) comes from a double factorized bubble and is finite when $d\to 3$, but since we take the momenta to be on--shell before expanding in $\epsilon$, this piece vanishes on--shell. It turns out that after this cancelation between diagrams 2$c$ and 2$g$ the remainder is proportional to the integral corresponding to diagram 2$b$. Precisely, we have \begin{equation} \mathcal{M}^{(c)}+\mathcal{M}^{(g)}=-3\mathcal{M}^{(b)} \end{equation} Diagrams of type 2$d$ can be evaluated by using Mellin--Barnes techniques. Specifically, with the momenta assignment as in figure, the D--algebra gives \begin{align} \mathcal{D}^{s1}_d &=\mu^{4\epsilon}\int \frac{d^d k}{(2\pi)^d}\frac{d^d l}{(2\pi)^d} \frac{Tr(\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma})\, p_4^{\mu}\,(p_3+p_4)^{\nu}\,(k+p_4)^{\rho}\,(l-p_4)^{\sigma}}{(k+p_4)^2\,(k-p_3)^2\,(k+l)^2\,(l-p_4)^2\,l^2}\\ &=-\frac{\Gamma^3(1/2-\epsilon)\Gamma(1+2\epsilon)\Gamma^2(-2\epsilon)} {(4\pi)^{d}\Gamma^2(1-2\epsilon)\Gamma(1/2-3\epsilon)\,(s/\mu^2)^{2\epsilon}} \end{align} and summing over the eight permutations multiplied by the corresponding flavor/color factors we obtain \begin{equation} \mathcal{M}^{(d)}= -(4\pi\lambda)^2\frac{2\Gamma^3(1/2-\epsilon)\Gamma(1+2\epsilon)\Gamma^2(-2\epsilon)} {(4\pi)^{d}\Gamma^2(1-2\epsilon)\Gamma(1/2-3\epsilon)}\left(\left(\frac{\mu^2}{s}\right)^{2\epsilon}+\left(\frac{\mu^2}{t}\right)^{2\epsilon}\right) \end{equation} Using the identities derived in \cite{AndreaMati} it is possible to write diagram 2$e$ as a combination of diagrams 2$b$ and 2$d$ plus a double factorized bubble which can be dropped when working on--shell. We find \begin{equation} \mathcal{M}^{(e)}=2\mathcal{M}^{(d)}+4\mathcal{M}^{(b)} \end{equation} The most complicated contribution comes from diagram 2$f$, which involves a nontrivial function of the ratio $s/t$ of kinematic invariants. Surprisingly, after some cancelations it turns out to be finite. The D--algebra for the specific choice of the external momenta as in figure results in the Feynman integral \begin{equation} \mathcal{D}^{234}_{f}=\mu^{4\epsilon}\int \frac{d^d k}{(2\pi)^d}\frac{d^d l}{(2\pi)^d} \frac{-Tr(\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma})\, p_4^{\mu}\,p_2^{\nu}\,k^{\rho}\,l^{\sigma}}{k^2\,(k-p_2)^2\,(k+l+p_3)^2\,(l-p_4)^2\,l^2} \end{equation} which after taking the on--shell limit can be expressed exactly as a single one--fold Mellin--Barnes integral which is finite in the limit $\epsilon\to 0$ \begin{align} &\mathcal{D}^{234}_{f}=\frac{(1+s/t)\Gamma^3(1/2-\epsilon)}{(4\pi)^d\Gamma^2(1-2\epsilon)\Gamma(1/2-3\epsilon)(t/\mu^2)^{2\epsilon}}\times\\ \times\int\limits^{+i\infty}_{-i\infty} \frac{d{\bf v}}{2\pi i}\Gamma(-{\bf v}) & \Gamma(-2\epsilon -{\bf v}) \Gamma^{*}(-1-2\epsilon-{\bf v})\Gamma^2(1+{\bf v})\Gamma(2+2\epsilon+{\bf v})\left(\frac{s}{t}\right)^{{\bf v}} \end{align} Taking into account the four permutations, flavor/color factors and expanding in $\epsilon$ we get \begin{equation} \mathcal{M}^{(f)}=\lambda^2\left(\tfrac{1}{2}\ln^2(s/t)+3\zeta_2\right)+\mathcal{O}(\epsilon) \end{equation} Collecting all the partial results, after some algebra we may reduce the result to the following compact form \begin{equation} \label{result} \mathcal{M}^{(2)} \equiv \frac{\mathcal{A}_4^{(2 \, loops)}}{\mathcal{A}^{tree}_4} = \lambda^2\, \left[-\frac{( s/\mu'^2)^{-2\epsilon}}{(2\, \epsilon)^2}-\frac{(t/\mu'^2)^{-2\epsilon}}{(2\, \epsilon)^2}+\frac12\,\ln^2 \left(\frac{s}{t}\right)+{\cal C}+ \mathcal{O}(\epsilon)\right] \end{equation} where $\mu'^2=8\pi e^{-\gamma}\,\mu^2$, and ${\cal C}$ is a constant given by ${\cal C}=4\zeta_2+3\ln^2 2$.\footnote{We note that the analytical value of the constant term matches the numerical result of \cite{Chen:2011vv}.} If we rotate to Minkowski spacetime with mostly minus signature and write the Mandelstam variables in terms of the dual ones, $s = -x_{13}^2, t=-x_{24}^2$, up to a (scheme--dependent) constant, our result matches the expression (\ref{WL}) for the two--loop expansion of a light--like Wilson loop, once we have identified the UV and IR rescaled regulators of the Wilson loop and scattering amplitude, as $1/\mu_{WL}^2 = \mu'^2$. \section{Dual conformal invariance} The two--loop result (\ref{result}) for the four--point amplitude in ABJM theories has the same functional structure as the one--loop correction to the four--point amplitude in four dimensional ${\cal N}=4$ SYM theory \cite{BDS, Drummond:2007aua}, provided that we rescale $\epsilon \to 2\epsilon$ there. In the ${\cal N}=4$ SYM case, the perturbative results for planar MHV scattering amplitudes can be expressed as linear combinations of scalar integrals that are off--shell finite in four dimensions and dual conformal invariant \cite{Drummond:2007aua}. Precisely, once written in terms of dual variables, $p_i = x_{i+1} - x_i$, the integrands times the measure are invariant under translations, rotations, dilatations and special conformal transformations. In particular, invariance under inversion, $x^{\mu} \to x^{\mu}/x^2$, rules out bubbles and triangles and up to two loops, only square--type diagrams appear. Dual conformal invariance is broken on--shell by IR divergences that require introducing a mass regulator. Therefore, conformal Ward identities acquire an anomalous contribution \cite{Drummond:2007au}. A natural question which arises is whether the two--loop result (\ref{result}) for three dimensional ABJM models exhibits dual conformal invariance. In order to answer this question, we concentrate on the momentum integrals associated to the four diagrams in Fig. \ref{1set} which are the ones that eventually combine to lead to the final result (\ref{result}). We study their behavior under dual conformal transformations when evaluated off--shell and in three dimensions. We first rewrite their expressions in terms of dual variables and then perform conformal transformations, the only non--trivial one being the inversion. Since under inversion $x_{ij}^2 \to \frac{x_{ij}^2}{x_i^2 x_j^2}$ and $d^dx_i \to \frac{d^dx_i}{(x_i^2)^d}$, it is easy to realize that, while in four dimensions the elementary invariant building block integrands are squares, in three dimensions they should be triangles. Therefore, it is immediate to conclude that the integrands associated to diagrams $\ref{1set}a-\ref{1set}b$ cannot be invariant, since they contain bubbles. On the other hand, diagrams $\ref{1set}d-\ref{1set}f$ contain triangles but also non--trivial numerators which concur to make the integrand non--invariant under inversion. Despite dual conformal invariance seems not to be a symmetry of the integrals arising from our Feynman diagram approach, in the previous Section we have showed that the on--shell amplitude, when written in dual space, has the same functional form of the light--like Wilson loop. As a consequence, on--shell the amplitude should possess dual conformal invariance, since Wilson loops inherit the ordinary conformal invariance of the ABJM theory, even though anomalously broken by UV divergences. As a consequence, it should be possible to rewrite expression (\ref{result}) for the on--shell amplitude as a linear combination of scalar integrals which are off--shell finite in three dimensions and manifestly dual conformal invariant at the level of the integrands\footnote{Note added: This task has been actually accomplished in \cite{Chen:2011vv}, where an explicit basis of dual conformal integrals has been determined on which the amplitude can be expanded.}. \section{A conjecture for the all--loop four--point amplitude} Our result in (\ref{result}) provides the first non--trivial contribution to the four--point scattering amplitude in the ABJM theory. The analogue quantity in four dimensional $\mathcal{N}=4$ SYM has been extensively studied and an all--loop iteration conjecture for it has been given in \cite{BDS, Bern:2008ap}. The result may be schematically written as \\ \begin{equation}\label{BDScon} \frac{\mathcal{A}}{\mathcal{A}^{tree}} = e^{Div + \frac{f_{\mathcal{N}=4}(\lambda)}{8}\left(\ln^2\left(\frac{s}{t}\right)+ \frac{4 \pi^2}{3}\right) + C(\lambda) } \end{equation} where $f_{\mathcal{N}=4}(\lambda)$ is the scaling function of $\mathcal{N}=4$ SYM in terms of the 't Hooft coupling $\lambda=g^2 N$, the constant $C(\lambda)$ is independent of the kinematic variables and the IR divergent contributions are grouped in the first term. It would be interesting to check whether a similar resummed expression may hold for scattering amplitudes in the three--dimensional case. Although we only computed the first non--trivial perturbative order for the amplitude, still we have some indications that this could be the case. At first, comparing the conjectured form of the asymptotic all loop Bethe equations for $\mathcal{N}=4$ SYM and ABJM theory, Gromov and Vieira noticed \cite{GV} that the scaling functions of the two theories should be related as \begin{equation}\label{3d4d} f_{CS}(\lambda)= \left. \frac{1}{2}f_{\mathcal{N}=4} (\lambda)\right|_{\frac{\sqrt{\lambda}}{4 \pi}\rightarrow h(\lambda)} \end{equation} where $h(\lambda)$ is the interpolating function of the magnon energy dispersion relation. The first perturbative orders of $h(\lambda)$ have been computed at both weak \cite{LMMSSST,MOS} and strong coupling \cite{Gaiotto:2008cg}--\cite{Astolfi:2011ju}. The weak coupling expansion \begin{equation} \label{hfun} h^2(\lambda) = \lambda^2 - 4 \zeta_2 \,\lambda^4 + \mathcal{O}(\lambda^6) \hspace{1.8cm} \lambda \ll 1 \end{equation} can be combined, using (\ref{3d4d}), with the known expansion of the 4d scaling function $f_{\mathcal{N}=4}(\lambda)= \lambda/2\pi^2 -1/96\pi^2 \lambda^2 + O(\lambda^3)$ . We are then able to write explicitly the 3d scaling function up to order $\lambda^4$ \begin{equation} \label{3df} f_{CS}(\lambda) = 4 \lambda^2 - 4 \pi^2 \lambda^4 + O(\lambda^6) \end{equation} Assuming (\ref{BDScon}) to hold also in the three dimensional case with the very same constant coefficients and plugging (\ref{3df}) in it, after expanding at order $\lambda^2$, we curiously find an exact correspondence with the result we explicitly computed in (\ref{result}). This suggests that for the three dimensional case, provided we use the correct scaling function, a completely analogous resummation may take place to give an expression for the amplitude of the form \begin{equation}\label{con} \frac{\mathcal{A}_4}{\mathcal{A}^{tree}_4} = e^{Div + \frac{f_{CS}(\lambda)}{8}\left(\ln^2\left(\frac{s}{t}\right)+ 8\zeta_2 + 6\,\ln^2 2\right) + C(\lambda) } \end{equation} If this is the case, using (\ref{3df}), we may predict the next non--trivial order for the finite remainder $F_4^{(4)}$ (in the notation of \cite{BDS}) of the four--point scattering amplitude \begin{equation} \label{conjecture} F_4^{(4)} = \frac{\lambda^4}{8} \ln^4 \left(\frac{s}{t}\right) + \lambda^4 \left(\frac{3}{2}\ln^2 2- \zeta_2\right) \ln^2\left(\frac{s}{t}\right) + {\rm Consts} \end{equation} A direct check of this prediction, either with a 4--loop scattering amplitude computation or using the duality with Wilson loops, could confirm the conjectured exact expression in (\ref{con}). \section{Conclusions} We briefly summarize the main results of this paper and discuss future developments. For three dimensional ABJM superconformal models, in a ${\cal N}=2$ superspace setup, we have computed the planar, two--loop corrections to the chiral $(ABAB)$ four--point superamplitude. We performed the calculation by a direct Feynman diagram approach, in a manifestly supersymmetric formalism. We have found a non--vanishing result which perfectly agrees with the two--loop result for a light--like four--polygon Wilson loop. This result represents the first non--trivial evidence of an amplitude/WL duality working in three dimensional superconformal theories and confirms the conjectured duality which seemed to arise trivially at one loop. Its functional structure resembles the one--loop planar four--point amplitude for ${\cal N}=4$ SYM theory in four dimensions. As in that case, it can be obtained from a BDS--like ansatz for the all--loop amplitude where the scaling function of four dimensions is substituted by the three dimensional one, as predicted by the conjectured Bethe equations. For ${\cal N}=4$ SYM theory the structure of the four--point BDS ansatz has been verified also at strong coupling \cite{AM}. It would be interesting to check whether applying the recipe of \cite{AM} for computing scattering amplitudes at strong coupling to the ABJM case, the result agrees with a three dimensional version of the BDS ansatz. From our weak coupling computation we expect this to be the case, at least at four points. A preliminary discussion will be given in \cite{BLMPS2}. An important question to be addressed is whether and how dual conformal invariance plays a role in three dimensional models. By explicit calculations, which do not assume dual conformal invariance a priori, we showed that the four--point on--shell amplitude is dual to the four--cusps light--like Wilson loop. This hints at the invariance of the result under dual conformal invariance, even though this symmetry is not manifest in our Feynman diagram approach. We will report on this issue more extensively in \cite{BLMPS2}. \vfill \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{\bf Introduction} Principal Component Analysis (PCA) is a method for decorrelation of multivariate data by finding the most optimal basis for a given problem and thus reducing its dimensionality. PCA is widely applied in industry, in particular, for image compression, classification and recognition tasks \cite{eigenfaces1991}, and many branches of science, see a short overview, for example, in \cite{PCA_overview_2016}. It was suggested to apply PCA to heavy-ion collisions data to bring out substructures from two-particle azimuthal correlations \cite{Ollitrault_2015}. In this article, PCA is applied directly to single-particle distributions in A--A collisions, namely, to azimuthal ($\varphi$) distribution, distribution in pseudorapidity ($\eta$) and to two-dimensional distribution $\eta$-$\varphi$. Mathematically, this means that we take distribution of particles in $M$ bins in each out of $N$ events, normalize with a number of particles in a given event, subtract in each bin an event-averaged value (in order to have zero mean in each bin) and apply PCA to the obtained $N$$\times$$M$ matrix (PCA is most often done through the singular value decomposition). As an output from PCA, we have a set of orthonormal eigenvectors (${\mathbf e}_i, i=1,...,M$), each of the length $M$ itself, which are ordered in such a way that corresponding variances ($\sigma_i, i=1,...,M$) descend from the largest to the smallest values. We get also coefficients $c_i^k (k=1,...,N$) of PCA decomposition so that the particle distribution in $k$-th event (denote it as ${\mathbf x}^{(k)}$ that is a vector with $M$ elements) can be written as \begin{equation} {\mathbf x}^{(k)} = \sum_{i=1}^M c_i^{(k)} \sigma_i {\mathbf e}_i = \sum_{i=1}^M a_i^{(k)} {\mathbf e}_i\, , \end{equation} where in the last equality the variances are absorbed into coefficients: $a_i^{(k)} \equiv c_i^{(k)} \sigma_i$. So, the first benefit of PCA is that the data matrix $N$$\times$$M$ is projected on a set of eigenvectors ${\mathbf e}_i$ that are the {\it most optimal basis} for given data. As the second benefit, we can keep only the first $K$ components ($K$$<$$M$) in order to have a good approximation for the data. An exact value of $K$ can be understood after a closer look at the PCA output. Single particle distribution, denoted by ${\mathbf x}^{(k)}$, can actually be $\varphi$, $\eta$ or $\eta$-$\varphi$ distributions -- results of the PCA applied in all the three cases are discussed below. \vspace{-0.2cm} \section{\bf Application of PCA to azimuthal distributions} \label{sec:PCA_to_single_particle_distr} \begin{figure}[b] \centering \begin{overpic}[width=0.995\textwidth, trim={3.9cm 0.1cm 4.9cm 1.2cm},clip] {figs/eigenvectors_for_phi_48_from_AMPT.pdf} \put(92,5.1){ \scriptsize \color{darkgray} AMPT } \end{overpic} \vspace{-3mm} \caption{ Eigenvectors from PCA of azimuthal distributions (48 $\varphi$-bins) from AMPT events (centrality class 10-70\%). First 8 eigenvectors are identified as Fourier harmonics of orders $n=2,3,1,4$ and are grouped in pairs in four panels. Ordering in $n$ reflects importance of a given harmonic (i.e. fraction of explaned variance, see text). Lines correspond to fit with a cosine function. } \label{eigenvectors_for_phi_48_from_AMPT} \end{figure} PCA was applied to 1.5 mln Pb-Pb events at $\sNN$=5 TeV simulated in the AMPT event generator \cite{AMPT}. Event-by-event $\varphi$-distributions in $M$=48 bins were taken for particles within $|\eta| < 0.8$ and transverse momentum range $0.2 < p_{\rm T} < 5.0$ GeV/$c$. The first eight eigenvectors are shown in Figure \ref{eigenvectors_for_phi_48_from_AMPT}, and one may immediately notice that they correspond to pairs of the cosine and sine functions, i.e. the Fourier basis (with arbitrary common phase shifts with respect to 0). In order to demonstrate this better, the eigenvectors are fitted with a cosine function (shown as lines) -- the phase shift between the pairs of the functions in each panel equals $\pi$/2 with 0.01\% precision. Fractions of explained variances associated to obtained eigenvectors are shown in Fig.\ref{fract_of_expl_var_phiBins_24_48}. Pairing of variances for eigenvectors again confirms the validity of association of eigenvectors with the Fourier basis. Eigenvectors with $i\gtrsim 10$ are just a statistical noise. It should be noted here that similar PCA analysis was performed recently in \cite{Liu_et_al:2019}, where eigenvectors resembles Fourier harmonics but shapes of them are somehow distorted\footnote{Possible explanations for such a distortion of the eigenvectors in \cite{Liu_et_al:2019} could be a small number of events ($N$=$2000$) used for PCA or some peculiarities in event simulation process.}. The PCA reveals the Fourier basis from event-by-event $\varphi$-distributions independently of centrality class and number of bins $M$. The explanation why PCA finds this basis as the optimal one is in the fact that a set of sine and cosine functions is a natural basis for periodic or rotationally invariant problems: events with similar characteristic structures like elliptic flow or jets may appear at various event plane angles, and the Fourier basis allows ``capturing'' this information in the most optimal way. \begin{figure}[t] \centering \begin{overpic}[width=0.44\textwidth, trim={0.1cm 0.cm 1.9cm 1.75cm},clip] {figs/fract_of_expl_var_phiBins_24_48.pdf} \put(77,31){ \footnotesize \color{darkgray} AMPT } \end{overpic}\vspace{-3mm} \caption{ Fractions of explained variances ($\sigma_i$/$\sum_{j=0}^M \sigma_j$) for corresponding eigenvectors. Results are shown for two numbers of $\varphi$-bins: $M=48$ (empty markers) and 24 (full markers). Labels $n$=2,3,1,4 for pairs of eigenvectors associated with sine and cosine functions are placed. } \label{fract_of_expl_var_phiBins_24_48} \end{figure} \section{\bf Flow coefficients from PCA and correction for statistical noise } After the basic functions are established and interpreted, coefficients of PCA decomposition also gain a definite meaning. Recall that flow phenomenon in heavy-ion collisions is usually studied using expansion of particle azimuthal probability density in a series: \begin{align} \begin{split} \label{flow} f(\varphi) = \frac{1}{2\pi} \big[ 1 + 2 \sum_{n=1}^\infty v_n \cos\big(n(\varphi-\Psi_n)\big) \big], \end{split} \end{align} where $v_n$ are the flow coefficients. If the decomposition \eqref{flow} is applied event-by-event, values of $v_n$ observed in the $k$-th event are related to the PCA coefficients (associated to eigenvectors shown above) as follows: ${v_2^{\rm obs}}^{(k)} = \sqrt{M\over 2} \sqrt{ {a_1^k}^2+{a_2^k}^2}$, ${v_3^{\rm obs}}^{(k)} = \sqrt{M\over 2} \sqrt{ {a_3^k}^2+{a_4^k}^2}$, and so on for $v_1$ and $v_4$. However, since ${v_n^{\rm obs}}$ coefficients are extracted event-by-event and a number of particles in each event is finite, they contain statistical fluctuations inside, while the task is to extract ``true'' ${v_n}$ averaged over dataset. It can be shown that these fluctuations can be subtracted in the following way: \begin{equation} \label{correction_for_noise} \big<(v_n^{\rm corr})^2\big> = \big<(v_n^{\rm obs})^2\big> - \big<(v_n^{\rm rand})^2\big> , \end{equation} where $v_n^{\rm rand}$ corresponds to Fourier coefficients extracted by applying PCA to events with randomized $\varphi$-angles\footnote{ This approach is used, for instance, in \cite{Jia:2015jga} and \cite{He_Qian_Huo:2017}.}. In case of small flow fluctuations and absence of non-flow effects, the true $v_n$ can thus be estimated as $\sqrt{\big<(v_n^{\rm corr})^2\big>}$. \begin{figure}[!b] \centering \begin{overpic}[width=0.67\textwidth, trim={1.58cm 1.45cm 2.65cm 2.92cm},clip] {figs/vn_obs_rand_corrected_USING_CALCULATED_HEIGHT_COEFF.pdf} \put(24,71){\small \bf \color{gray} MC model } \end{overpic}\vspace{-3mm} \caption{ Values of $v_n$ ($n$=2,3,1,4) before (solid blue squares) and after correction for statistical noise (red circles) as a function of number of $\varphi$-bins used in PCA (500$k$ toy events are used in each case). True $v_n$ are denoted by horizontal dashed lines. Open squares show $v_n$ in events with randomized $\eta$ and $\varphi$ of tracks. Number of particles is distributed by Gauss ($\mu$=1000, $\sigma$=40). } \label{vn_obs_rand_corrected} \end{figure} Performance of the correction procedure \eqref{correction_for_noise} is tested using a toy model with flow, where particles are distributed according to \eqref{flow} with some ``typical'' values of $v_n$. Analysis is done with different number of $\varphi$ bins, results are shown in Fig. \ref{vn_obs_rand_corrected}. Different $\varphi$-binnings allow one to investigate when PCA results become reliable for various harmonic orders $n$. It can be seen that corrected values (red circles) stabilize at true values at $n_{\varphi}\gtrsim$ 30 for $v_2$, $v_3$ and $v_4$, while the true value of $v_1$ is reached somewhat earlier (since $v_1$ measures just an overall shift of the event in azimuthal dimension that is ``captured'' already with a very few $\varphi$-bins). \begin{figure}[!t] \centering \begin{overpic}[width=0.995\textwidth, trim={2.9cm 0.75cm 3.65cm 1.25cm},clip] {figs/AMPT_v234_obs_rand_corrected.pdf} \end{overpic}\vspace{-3mm} \caption{ Values of $v_n$ ($n$=2,3,4) extracted by PCA -- before correction (blue crosses) and after correction \eqref{correction_for_noise} for statistical noise (red open circles) -- as a function of centrality of Pb-Pb collisions in AMPT. Number of $\varphi$-bins used in PCA is 48. Open squares show $v_n$ in events with randomized $\varphi$ of tracks. Results obtained with the two-particle cumulant method are shown as green close circles. Centrality classes are determined using percentiles of multiplicity distribution in forward acceptance that corresponds to VZERO detector in ALICE experiment. } \label{v234_AMPT} \end{figure} In order to test robustness of $v_n$ extracted with PCA, this analysis was applied to Pb-Pb events at $\sNN$=5 TeV simulated in AMPT event generator (Fig.\ref{v234_AMPT}, corrected PCA results for $v_2$, $v_3$ and $v_4$ are shown as open circles) and compared to calculations with the traditional two-particle cumulant method ($v_n\{2\}$, full circles). Correspondence between the values justifies again the possibility to extract $v_n$ with PCA. It is important to note also that other conventional analyses, like symmetric cumulants and event-plane correlations, are also possible with the azimuthal PCA. \newpage \vspace{-0.1cm} \section{\bf Longitudinal harmonics from PCA } \vspace{-0.1cm} \begin{figure}[b] \centering \begin{overpic}[width=0.42\textwidth, trim={0.1cm 0.05cm 1.3cm 1.6cm},clip] {figs/eta_toy_test_Legendre_makeup.pdf} \end{overpic} \hspace{0.2cm} \begin{overpic}[width=0.42\textwidth, trim={0.1cm 0.05cm 1.3cm 1.6cm},clip] {figs/eta_AMPT_test_first_components_makeup.pdf} \end{overpic} \vspace{-3mm} \caption{ The first three PCA eigenvectors obtained for the ``random parabola'' toy model (left panel) and from AMPT for tracks with $|\eta|$<2.4 (right panel). In the left panel, the first two Legendre polynomials are drawn as lines. In both panels, the third eigenvectors are consistent with the statistical noise. } \label{toy_walking_parabola_components} \end{figure} While the Fourier basis as the best option for azimuthal distributions was somewhat expected, it is not so obvious which basis is optimal for longitudinal ($\eta$) dimension. It was suggested to quantify longitudinal structure of events using Chebyshev \cite{Bzdak:2012tp} or Legendre polynomials \cite{Jia:2015jga} in some pseudorapidity range $[-Y ,Y ]$, without a strong motivation for a particular choice. The question of a proper basis for $\eta$-dimension can be addressed using PCA. First of all, when does this or that polynomial basis appear in PCA? To answer that, let us take a toy model of ``random parabola'', where the particle $\eta$-density in each event is sampled according to expression $\rho(\eta) \sim 1+A(\eta-B)^2$ with $A$ and $B$ being random numbers. It turns out that PCA reveals the basis of $P_1(\eta)=\eta$ and $P_2(\eta)={1\over 2} (3\eta^2-1)$, which are the first two Legendre polynomials. This is demonstrated in the left panel in Figure \ref{toy_walking_parabola_components}. However, in a more realistic case of AMPT events eigenvectors from PCA have different shapes (right panel in Fig.\ref{toy_walking_parabola_components}). Mathematically, this indicates that a set of these orthonormal polynomials has its own unique weight function (recall, that for the Legendre polynomials the weight function equals 1). Moreover, it can be shown that, unlike azimuthal case, PCA basis in $\eta$-dimension depends on kinematic cuts ($\eta$- and $p_{\rm T}$-ranges) and physics of the collisions (for example, results differ between AMPT and HIJING event generators). It is interesting to get PCA eigenvectors from real A--A events and compare with model results. \begin{figure}[b] \centering \begin{overpic}[width=0.99\textwidth] {figs/canv_surf1_AMPT_12_components_eta_24.png} \end{overpic}\vspace{-4.5mm} \caption{ First 12 eigenvectors from PCA applied to $\eta$-$\varphi$ distributions in AMPT Pb-Pb events (centrality class 20-30\%). Ordering is according to decrease of explained variance values. } \label{PCA_2D} \end{figure} \vspace{-0.1cm} \section{\bf Two-dimensional case } \vspace{-0.1cm} Finally, the PCA can be straightforwardly applied for single-particle densities in two dimensions, in particular, to $\eta$-$\varphi$ distributions. Eigenvectors for AMPT semi-central collisions (centrality 20-30\%) are shown in Figure \ref{PCA_2D} for the case of $M$=$\eta$$\times$$\varphi$=10$\times$48=480 bins. We may note the pairs of ``azimuthal'' harmonics (1,2), (3,4), etc. that are nearly uniform in $\eta$: it was checked that corresponding event-averaged $v_n$ values agree with purely azimuthal PCA presented above. Longitudinal eigenvectors 7 and 10 are uniform in $\varphi$, their shapes are the same as in the right panel in Fig.\ref{toy_walking_parabola_components}. Finally, ``mixed'' (or ``twisted'') $\eta$-$\varphi$ harmonics appear, namely, pairs (5,6), (8,9), (11,12). A closer look shows that these mixed eigenvectors can be factorized into $\varphi$-- and $\eta$--parts (since PCA components must be able to capture different structures in $\eta$ at any azimuthal rotation). Thus, event-by-event particle densities can be decomposed according to \begin{equation} \rho(\eta, \varphi) = {1\over 2\pi} \sum_{k=0}^{K_{\varphi} } \sum_{l=0}^{K_{\eta} } a_{k,l} \Phi_k (\varphi) {\rm H}_l(\eta) \hspace{0.1cm} , \end{equation} where $\Phi_k (\varphi)$ denotes the azimuthal part (it can be written as $2\cos \big[ k(\varphi-\Psi_k)\big]$), the longitudinal part is denoted as ${\rm H}_l(\eta)$, $a_{k,l}$ are the decomposition coefficients, $K_{\varphi}$ and $K_{\eta}$ stand for cut-off numbers of harmonics to consider. This decomposition could be used, for instance, in studies of the longitudinal decorrelation of harmonic flow as an alternative to other methods like \cite{Jia:2014vja, BozekQM2018}. Another possible application of this 2D-analysis is the study of rapidity dependence of the directed flow. Detailed discussion of the two-dimensional PCA is out of scope of the present paper. \newpage \vspace{-0.3cm} \section*{\bf Conclusion } \vspace{-0.2cm} Application of PCA to single-particle distributions in A-A collisions gives a $hint$ how a proper (most optimal) basis should look like. It was shown how PCA coefficients could be corrected for statistical noise. For azimuthal dimension, PCA confirms that the basis of Fourier harmonics is a proper choice, since it is natural for rotationally invariant problems. In case of longitudinal dimension, a set of PCA eigenvectors is not a "standard" one -- the most optimal basis of orthogonal functions depends on given data (collision system, energy, acceptance). Finally, PCA was applied to two-dimensional $\eta$-$\varphi$ distributions, where ``twisted'' harmonics are revealed. This approach may be of practical use in studies of longitudinal decorrelation of collective flow. \section*{\bf Acknowledgements } This study is supported by Russian Science Foundation, grant 17-72-20045. Author thanks Andrey Erokhin and Evgeny Andronov for discussions and interest in this work.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Semileptonic decays of pseudoscalar mesons allow to evaluate the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which are fundamental parameters of the Standard Model. The decay $K\to\pi e\nu$ provides the most accurate determination of $V_{us}$, the semileptonic decays of D and B mesons, $D\to K(K^*) l\nu$, $B\to D(D^*)l\nu$ and $B\to \pi(\rho)l\nu$, can be used to determine $|V_{cs}|$, $|V_{cb}|$ and $|V_{ub}|$, respectively. The effects of strong interactions in these processes can be expressed in terms of form factors, which depend on $q^2$, the squared momentum transferred to the leptonic pair. Information on the form factors are obtained by measuring the distributions of $q^2$ and decay angles. The decays of heavy D and B mesons are of particular interest due to the spin-flavor symmetry observed for infinite quark masses \cite{IW}. This symmetry allows to reduce the number of form factors and express them in terms of the universal Isgur-Wise function \cite{IWf}. Also the scaling laws derived for some physical observables can be, in principle, tested experimentally. Since the Isgur-Wise function cannot be calculated from first principles, many models and nonperturbative approaches, which exhibit the heavy quark symmetry, have been employed to describe relevant phenomena. However, it was found out, that the finite mass corrections are very important, especially, in the charm sector. It appears that in some sense a step back should be done from using the heavy quark symmetry as a guide under model building to the straightforward calculations with full quark propagators. Then one has to check the consistency of the results with the spin-flavor symmetry in the heavy quark limit. In this paper we employ the relativistic constituent quark model (RCQM) \cite{RCQM} for the simultaneous description of both light and heavy flavored meson leptonic and semileptonic decays. This model is based on the effective Lagrangian describing the coupling of mesons with their quark constituents, and the compositeness condition. The physical processes are described by the one-loop quark diagrams with free constituent propagators and meson-quark vertices related to the Bethe-Salpeter amplitudes. The masses of lower-lying pseudoscalar (PS) mesons should be less than the sum of quark constituent masses to provide the absence of imaginary parts corresponding to quark production. The adjustable parameters, the widths of Bethe-Salpeter amplitudes in momentum space and constituent quark masses, are determined from the best fit of available experimental data and some lattice determinations. We found that our results are in good agreement with experimental data and other approaches. Also we reproduce the results of spin-flavor symmetry for leptonic decay constants and semileptonic form factors in the heavy quark limit. The shapes of vertex functions and quark propagators should be found from the Bethe-Salpeter and Dyson-Schwinger equations, respectively. This is provided by the Dyson-Schwinger Equation (DSE) \cite{DSE} studies. A DSE-approach has been employed to provide a unified and uniformly accurate description of light- and heavy-meson observables \cite{DSEH1,DSEH2}. A similar approach, based on the effective heavy meson Lagrangian, has been described in \cite{Gatto} in terms of a model based on meson-quark interactions, where mesonic transition amplitudes are represented by diagrams with heavy mesons attached to quark loops. The free propagator has been used for light quarks. However, the quark propagator obtained in the heavy quark limit has been employed for heavy quarks. \section{The model} We employ an approach \cite{RCQM} based on the effective interaction Lagrangian which describes the transition of hadron into quarks. For example, the transition of the meson $H$ into its constituents $q_1$ and $q_2$ is given by the Lagrangian \begin{equation} \label{lag} {\cal L}_{{\rm int}} (x)=g_H H(x) \int\!\! dx_1 \!\!\int\!\! dx_2 \Phi_H (x;x_1,x_2) \bar q(x_1) \Gamma_H \lambda_H q(x_2)\,. \end{equation} Here, $\lambda_H$ and $\Gamma_H$ are the Gell-Mann and Dirac matrices, respectively, which provide the flavor and spin numbers of the meson $H$. The function $\Phi_H$ is related to the scalar part of Bethe-Salpeter amplitude. For instance, the separable form $\Phi_H(x;x_1,x_2)=\delta(x-(x_1+x_2)/2) f((x_1-x_2)^2)$ has been used in \cite{RCQM} for pions. The coupling constants $g_H$ is given by the so called {\it compositeness condition} proposed in \cite{SW} and extensively used in \cite{EI}. That condition means that the renormalization constant of the meson field is equal to zero: \begin{equation} \label{comp} Z_H=1-\frac{3g^2_H}{4\pi^2}\tilde\Pi^\prime_H(m^2_H)=0\,, \end{equation} where $\tilde\Pi^\prime_H$ is the derivative of the meson mass operator defined by \begin{equation} \label{mass} \tilde\Pi_H(p^2)=\int\!\!{d^4k\over 4\pi^2i}\phi_H^2(-k^2) {\rm tr}\biggl[\Gamma_H S_2(\not\! k) \Gamma_H S_1(\not\! k+\not\! p)\biggr]. \end{equation} The invariant amplitudes describing the leptonic $H(p)\to l\nu$ and semileptonic $H(p)\to H'(p^\prime) l\nu$ decays are written down \begin{eqnarray} A(H(p) \to e \nu)&=& { G_F \over \sqrt{2} } V_{qq'} (\bar e O_{\mu}\nu) M_H^\mu(p) \label{matlep}\\ &&\nonumber\\ A(H(p)\to H'(p') e\nu)&=&{G_F\over \sqrt{2}}V_{qq'}(\bar e O_{\mu}\nu) M^\mu_{HH'}(p,p^\prime), \label{matsem} \end{eqnarray} where $G_F$ is the Fermi weak-decay constant, $V_{qq'}$ is the appropriate element of the CKM matrix. The matrix elements of the hadronic currents are given by \begin{eqnarray} M_H^\mu(p)&=& {3\over 4\pi^2}g_H\int\!\!{d^4k\over 4\pi^2i}\phi_H(-k^2) {\rm tr}\biggl[\gamma^5 S_2(\not\! k)O^\mu S_1(\not\! k+\not\! p)\biggr]= f_H p^\mu \label{curlep}\\ &&\nonumber\\ M^{\mu}_{HH'}(p,p^\prime)&=& {3\over 4\pi^2}g_Hg_{H'}\!\!\int\!\!{d^4k\over 4\pi^2i} \phi_H(-k^2)\phi_{H'}(-k^2) \label{cursem}\\ &&\times {\rm tr}\biggl[\gamma^5 S_3(\not\! k)\gamma^5 S_2(\not\! k+\not\! p^\prime)O^\mu S_1(\not\! k+\not\! p) \biggr] \nonumber\\ &&\nonumber\\ &&=f_+(q^2)(p+p^\prime)^\mu + f_-(q^2)(p-p^\prime)^\mu\\ \nonumber \end{eqnarray} where $\phi_H(-k^2)$ is related to the BS-amplitude in momentum space, and \begin{equation}\label{prop} S_i(\not\! k)=\frac{1}{m_i-\not\! k} \end{equation} is the propagator of the constituent quark with mass $m_i$. As discussed before, to avoid the appearance of imaginary parts in Eqs.~(\ref{curlep}) and (\ref{cursem}), we assume that $m_H<m_{q_1}+m_{q_2}$ which is a reliable approximation for the lower-lying mesons considered here. To evaluate the integral in Eq.~(\ref{cursem}) \begin{equation}\label{int} I_{HH'}(p,p^\prime)=\int \frac{d^4k}{4\pi^2i} {\cal F}(-k^2) {\rm tr} \biggl \{\gamma^5 S_3(\not\! k)\gamma^5 S_2(\not\! k+\not\! p^\prime)\gamma^\mu S_1(\not\! k+\not\! p) \biggr\}\,, \end{equation} where ${\cal F}(-k^2)=\phi_H(-k^2)\cdot\phi_{H^\prime}(-k^2)$, we need to calculate the following integrals: \begin{equation}\label{int1} J^{(0,\mu,\mu\nu,\mu\nu\delta)}=\int \frac{d^4k}{\pi^2i} \frac{(1,k^\mu,k^\mu k^\nu,k^\mu k^\nu k^\delta){\cal F}(-k^2)} {[m_1^2-(k+p)^2][m_2^2-(k+p^\prime)^2][m_3^2-k^2]}\,. \end{equation} Using the Cauchy representation for the function ${\cal F}(-k^2)$ and then the standard techniques of the Feynman $\alpha-$parametrization one finds (${\cal F'}(z)=d{\cal F}(z)/dz$) \begin{eqnarray} J^{0}&=& \int\limits_0^\infty dt \biggl(\frac{t}{1+t}\biggr)^2 \int\! d^3\alpha\,\delta\biggl(1-\sum\limits_{i=1}^3\alpha_i\biggr) \biggl(-{\cal F'}(z_I)\biggr) \label{scalar}\\ &&\nonumber\\ &&\nonumber\\ J^{\mu}&=& -\int\limits_0^\infty dt \biggl(\frac{t}{1+t}\biggr)^3 \int\! d^3\alpha\,\delta\biggl(1-\sum\limits_{i=1}^3\alpha_i\biggr) P_\alpha^\mu \biggl(-{\cal F'}(z_I)\biggr) \label{vector}\\ &&\nonumber\\ &&\nonumber\\ J^{\mu\nu}&=& \int\limits_0^\infty dt \biggl(\frac{t}{1+t}\biggr)^2 \int\! d^3\alpha\, \delta\biggl(1-\sum\limits_{i=1}^3\alpha_i\biggr) \label{tensor2}\\ &&\nonumber\\ &&\times\biggl\{ -\frac{1}{2}g^{\mu\nu}\frac{1}{1+t}{\cal F}(z_I) -P_\alpha^\mu P_\alpha^\nu \biggl(\frac{t}{1+t}\biggr)^2 {\cal F'}(z_I) \biggr\} \nonumber\\ &&\nonumber\\ &&\nonumber\\ J^{\mu\nu\delta}&=& \int\limits_0^\infty dt \biggl(\frac{t}{1+t}\biggr)^2 \int\! d^3\alpha\, \delta\biggl(1-\sum\limits_{i=1}^3\alpha_i\biggr) \label{tensor3}\\ &&\nonumber\\ &&\times\biggl\{ \frac{1}{2}\biggl[g^{\mu\nu}P_\alpha^\delta+ g^{\mu\delta}P_\alpha^\nu+ g^{\nu\delta}P_\alpha^\mu \biggr] \frac{t}{(1+t)^2}{\cal F}(z_I) \nonumber\\ &&\nonumber\\ &&+P_\alpha^\mu P_\alpha^\nu P_\alpha^\delta \biggl(\frac{t}{1+t}\biggr)^3 {\cal F'}(z_I) \biggr\} \nonumber \end{eqnarray} where $q=p-p^\prime$, $P_\alpha=\alpha_1 p+\alpha_2 p^\prime$, $D_3=\alpha_1\alpha_3 p^2+\alpha_2\alpha_3 p^{\prime 2}+\alpha_1\alpha_2 q^2 $, and $z_I=t[\sum_{i=1}^3\alpha_i m^2_i-D_3]-P_\alpha^2t/(1+t)$. Finally, Eq.~(\ref{int}) becomes $$ I_{HH'}^\mu(p,p^\prime)= (p+p^\prime)^\mu\, I_+(p^2,p^{\prime 2},q^2)+ (p-p^\prime)^\mu\, I_-(p^2,p^{\prime 2},q^2) $$ with \begin{eqnarray} I_+(p^2,p^{\prime 2},q^2)&=& \frac{1}{2}\int\limits_0^\infty dt \biggl(\frac{t}{1+t}\biggr)^2 \int\! d^3\alpha\, \delta\biggl(1-\sum\limits_{i=1}^3\alpha_i\biggr) \label{intfin}\\ &&\nonumber\\ &\times& \biggr\{ {\cal F}(z_I)\frac{1}{1+t}\biggl[4-3(\alpha_1+\alpha_2)\frac{t}{1+t}\biggr] \nonumber\\ &&\nonumber\\ && -{\cal F'}(z_I) \biggl[(m_1+m_2)m_3 \nonumber\\ &&\nonumber\\ && +\frac{t}{1+t}\biggl(-(\alpha_1+\alpha_2)(m_1m_3+m_2m_3-m_1m_2) \nonumber\\ &&\nonumber\\ && +\alpha_1 p^2+\alpha_2 p^{\prime 2}\biggr) -P_\alpha^2 \biggl(\frac{t}{1+t}\biggr)^2 \biggl(2-(\alpha_1+\alpha_2)\frac{t}{1+t}\biggr) \biggr] \biggr\}\,. \nonumber \end{eqnarray} The normalization condition is written in the form \begin{equation} \label{normfin} \frac{3g_H^2}{4\pi^2}I_+(p^2,p^2,0)=1 \end{equation} with $m_1=m_2\equiv m$. The integrals corresponding to the matrix element of the leptonic decay $H(p)\to l\nu$ and radiative decay of neutral meson $H(p)\to\gamma(q_1)+\gamma(q_2)$ are calculated following the same procedure. We have \begin{eqnarray} Y^\mu(p)&=& \int \frac{d^4k}{4\pi^2i}\phi(-k^2) {\rm tr}\biggl\{\gamma^5 S_2(\not\! k)\gamma^\mu(I-\gamma^5) S_1(\not\! k+\not\! p)\biggr\}=p^\mu Y(p^2) \nonumber\\ &&\nonumber\\ Y(p^2)&=& \int\limits_0^\infty dt \frac{t}{(1+t)^2} \int\limits_0^1 d\alpha \biggl[m_2+(m_1-m_2)\frac{\alpha t}{1+t}\biggr]\phi(z_Y) \label{intlep}\\ &&\nonumber\\ &&\nonumber\\ K^{\mu\nu}(q_1,q_2)&=& \int \frac{d^4k}{4\pi^2i} \phi(-k^2) {\rm tr} \biggl\{\gamma^5 S(\not\! k-\not\! q_2)\gamma^\mu S(\not\! k) \gamma^\nu S(\not\! k+\not\! q_1)\biggr\} \nonumber\\ &&\nonumber\\ &=& i\varepsilon^{\mu\nu\alpha\beta}q_1^\alpha q_2^\beta K(p^2) \nonumber\\ &&\nonumber\\ K(p^2)&=& m\int\limits_0^\infty dt \biggl(\frac{t}{1+t}\biggr)^2 \int\limits_0^1 d\alpha_1 \int\limits_0^{1-\alpha_1} d\alpha_2 \biggl(-\phi'(z_K)\biggr) \label{intrad} \end{eqnarray} where $z_Y=t[\alpha m_1^2+(1-\alpha)m_2^2-\alpha p^2+\alpha^2 p^2t /(1+t)]$ and $z_K=t[ m_1^2-\alpha_1\alpha_2 p^2]+\alpha_1\alpha_2 p^2 t/(1+t)$. The physical observables are expressed in terms of the structural integrals written in Eqs.~(\ref{intfin}), (\ref{intlep}) and (\ref{intrad}): \begin{eqnarray} g_{P\gamma\gamma}=\frac{g_P}{2\sqrt{2}\pi^2}K(m^2_P), && \hspace{1cm} \Gamma(P\to \gamma\gamma)=\frac{\pi}{4}\alpha^2 m_P^3 g_{P\gamma\gamma}^2, \label{rad}\\ &&\nonumber\\ &&\nonumber\\ f_P = \frac{3}{4\pi^2}\ g_P\ Y(m^2_P), && \Gamma(P\to l\nu)=|V_{qq'}|^2\frac{G_F^2 f_P^2}{8\pi} m_P m_l^2 \biggl[1-\frac{m_l^2}{m_P^2}\biggr]^2, \label{lep}\\ &&\nonumber\\ &&\nonumber\\ f_+(q^2)&=&\frac{3}{4\pi^2}\ g_Pg_{P'}\ I_+(m^2_P,m^2_{P'},q^2), \label{sem}\\ &&\nonumber\\ \Gamma(P\to P^\prime l\nu)&=&|V_{qq'}|^2\frac{G_F^2}{192\pi^3 m_P^3} \int\limits_0^{t_-} dt |f_+(t)|^2 \biggl[(t_+-t)(t_--t)\biggr]^{3/2}, \nonumber \end{eqnarray} with $t_\pm=(m_P\pm m_{P^\prime})^2$ (the extra factor 1/2 appears for $\pi^0$ in the final state). \subsection{Heavy quark limit} The leptonic heavy decay constants and semileptonic heavy to heavy form factors acquire a simple form in the heavy quark limit, {\it i. e.} when $m_1\equiv M\to\infty$, $m_2\equiv M'\to\infty$ and $p^2=(M+E)^2$, $p^{\prime 2}=(M'+E)^2$ with $E$ being a constant value. From Eq.~(\ref{intfin}) by replacing the variables $\alpha_1\to\alpha_1/M$ and $\alpha_2\to\alpha_2/M'$, one obtains \begin{eqnarray} I_+ & \rightarrow & \frac{M+M'}{2MM'}\cdot \int\limits_0^\infty dt \biggl(\frac{t}{1+t}\biggr)^2 \int\limits_0^1 d\alpha\alpha \int\limits_0^1 d\tau \biggl({-\cal F'}(z)\biggr) \biggl[m+\frac{\alpha t}{1+t}\biggr] \nonumber\\ &&\nonumber\\ &=& \frac{M+M'}{2MM'}\cdot \frac{1}{2} \int\limits_0^1 \frac{d\tau}{W} \int\limits_0^\infty du {\cal F}(\tilde z)\frac{m+\sqrt{u}}{m^2+\tilde z} \label{inthql} \end{eqnarray} where $\tilde z=u-2E\sqrt{u/W}$, $W=1+2\tau(1-\tau)(w-1)$ and $w=(M^2+M^{\prime 2}-2MM' q^2)/(2MM')$. The normalization condition can be obtained from Eq.~(\ref{inthql}) by putting $w=1$ and $M'=M$. We have \begin{equation}\label{normhql} \frac{3g_H^2}{4\pi^2}\cdot I_+^{(0)}=1, \hspace{0.8cm} I_+^{(0)}=\frac{1}{2M}I_N, \hspace{0.8cm} I_N= \int\limits_0^\infty du \phi_H^2(\tilde z_0) \frac{m+\sqrt{u}}{m^2+\tilde z_0} \end{equation} where $\tilde z_0=u-2E\sqrt{u}$. Then the leptonic decay constant and semileptonic form factors are written as \begin{eqnarray} f_P & \rightarrow & \frac{1}{\sqrt{M}}\cdot \sqrt{\frac{3}{2\pi^2 I_N}} \int\limits_0^\infty du [\sqrt{u}-E]\phi_H(\tilde z_0) \frac{m+\sqrt{u}/2}{m^2+\tilde z_0} \label{lephql}\\ &&\nonumber\\ f_{\pm}& \rightarrow & \frac{M'\pm M}{2\sqrt{MM'}}\xi(w) \hspace{1cm} \xi(w)=\frac{1}{I_N}\int\limits_0^1 \frac{d\tau}{W} \int\limits_0^\infty du \phi_H^2(\tilde z) \frac{m+\sqrt{u}}{m^2+\tilde z}\,. \label{semhql} \end{eqnarray} It is readily seen that we reproduce the scaling law for both leptonic decay constants and form factors, and obtain the explicit expression for the Isgur-Wise function \cite{IW,IWf}. \section{Results and discussion} The expressions obtained in the previous section for physical observables are valid for any vertex function $\phi_H(-k^2)$. Here, we choose a Gaussian form $\phi(-k^2)=\exp\{k^2/\Lambda_H^2\}$ in Minkowski space. The magnitude of $\Lambda_H$ characterizes the size of the BS-amplitude and is an adjustable parameter in our approach. Thus, we have six $\Lambda$-parameters plus the four quark masses, all of which are fixed via the least-squares fit to the observables measured experimentally or taken from lattice simulations (see, Table~\ref{t1}). \begin{table}[t] \caption{Calculated values of a range of observables ($g_{\pi\gamma\gamma}$ in GeV$^{-1}$, leptonic decay constants in GeV, form factors and ratios are dimensionless). The ``Obs." are extracted from Refs.~\protect \cite{PDG,CLEO-BD,CLEO-Bpi,Flynn,Wittig,Debbio,MILC}. The quantities used in fitting our parameters are marked by ``$\ast$". \label{t1}} \begin{tabular}{clll|clll} & & Obs. & Calc. & & & Obs. & Calc. \\ \hline $\ast$ & $g_{\pi\gamma\gamma}$ & 0.274 & 0.242 & & $f_+^{K\pi}(0)$ & 0.98 & 0.98 \\ $\ast$ & $f_\pi$ & 0.131 & 0.131 & $\ast$ & $f_+^{DK}(0)$ & 0.74 $\pm$ 0.03 & 0.74 \\ $\ast$ & $f_K$ & 0.160 & 0.160 & & $f_+^{BD}(0)$ & & 0.73 \\ $\ast$ & $f_D$ & 0.191$^{+19}_{-28}$ & 0.191 & & $f_+^{B\pi}(0)$ & 0.27 $\pm$ 0.11 & 0.51 \\ $\ast$ & $\frac{f_{D_s}}{f_D}$ & 1.08(8) & 1.08 & & ${\rm Br}(K\to\pi l\nu)$& $(4.82\pm 0.06)\cdot 10^{-2}$ & $4.4\cdot 10^{-2}$ \\ & $f_{D_s}$ & 0.206$^{+18}_{-28}$ & 0.206 & & ${\rm Br}(D\to K l\nu)$ & $(6.8\pm 0.8)\cdot 10^{-2}$ & $ 8.1\cdot 10^{-2}$ \\ $\ast$ & $f_B$ & 0.172$^{+27}_{-31}$ & 0.172 & & ${\rm Br}(B\to D l\nu)$ & $(2.00\pm 0.25)\cdot 10^{-2}$ & $2.3\cdot 10^{-2}$ \\ $\ast$ & $\frac{f_{B_s}}{f_B}$ & 1.14(8) & 1.14 & & ${\rm Br}(B\to\pi l\nu)$& $(1.8\pm 0.6)\cdot 10^{-4}$ & $2.1\cdot 10^{-4}$ \\ & $f_{B_s}$ & & 0.196 & & & & \\ \hline \end{tabular} \end{table} The fit yields the values of $\Lambda$-parameters and the constituent quark masses which are listed in Eqs.~(\ref{fitlam}) and (\ref{fitmas}). \begin{equation}\label{fitlam} \begin{array}{ccccccc} \Lambda_\pi & \Lambda_K & \Lambda_D & \Lambda_{D_s} & \Lambda_B & \Lambda_{B_s} & {\rm (in\ GeV)}\\ \hline $\ \ 1.16\ \ $ & $\ \ 1.82\ \ $ & $\ \ 1.87\ \ $ & $\ \ 1.95\ \ $ & $\ \ 2.16\ \ $ & $\ \ 2.27\ \ $ & \\ \end{array} \end{equation} \begin{equation}\label{fitmas} \begin{array}{ccccc} m_u & m_s & m_c & m_b & {\rm (in\ GeV)} \\ \hline $\ \ 0.235\ \ $ & $\ \ 0.333\ \ $ & $\ \ 1.67\ \ $ & $\ \ 5.06\ \ $ & \\ \end{array} \end{equation} The values of $\Lambda$ are such that $\Lambda_{m_i}<\Lambda_{m_j}$ if $m_i<m_j$. This corresponds to the ordering law for sizes of bound states. The values of $\Lambda_D=1.87$ GeV and $\Lambda_B=2.16$ GeV are larger than those obtained in \cite{DSEH2}: $\Lambda_D=1.41$ GeV and $\Lambda_B=1.65$ GeV. The mass of u-quark and the parameter $\Lambda_\pi$ are almost fixed from the decays $\pi\to\mu\nu$ and $\pi^0\to\gamma\gamma$ with an accuracy of a few percent. The obtained value of the u-quark mass $m_u=0.235$ GeV is less than the constituent-light-quark mass typically employed in quark models for baryon physics ($m_u>m_N/3=0.313$ GeV). For instance, the value of $m_u=0.420$ GeV was extracted from fitting nucleon observables within our approach \cite{RCQM}. The different choice of constituent quark masses is a common feature of quark models with free propagators due to the lack of confinement. However, we consider here the low-lying mesons that allows us to fix the constituent quark masses in a self-consistent manner. As mentioned above, the meson masses must be less than the sum of masses of their constituents. This gives the restrictions on the choice of the meson binding energies: $E_K=m_K-m_s<m_u$, $E_D=m_D-m_c<m_u$ and $E_B=m_B-m_b<m_u$, which means that the binding energy cannot be relatively large as compared with those obtained in \cite{DSEH2}: $E_D=0.58$ GeV and $E_B=0.74$ GeV. Let us now consider the $q^2$-behaviour of the form factors. We use the three-parameter function for the four $f_+$ form factors \begin{equation}\label{approx} f_+^{HH'}(q^2)=\frac {f(0)} {1-b_0(q^2/m_H^2)-b_1(q^2/m_H^2)^2} \end{equation} here $b_0~,b_1~$ and $f(0)$ are parameters to be fitted. We collect the fitted values in the following Table and report the $q^2$-dependence in Fig. 1. \begin{equation} \label{coef} \begin{array}{c|cccc} &\ \ K\to \pi\ \ & \ \ D\to K\ \ &\ \ B\to D\ \ &\ \ B\to\pi\ \ \\ \hline b_0 & 0.28 & 0.64 & 0.77 & 0.52 \\ b_1 & 0.057 & 0.20 & 0.19 & 0.38 \\ \end{array}\, \end{equation} For comparison, we plot, together with our results, the predictions of a vector dominance monopole model: \begin{equation}\label{mon} f_+^{q\to q'}(q^2)=\frac{f_+^{q\to q'}(0)} {1-q^2/m^2_{V_{qq'}}} \end{equation} with $m^2_{V_{qq'}}$ being a mass of lower-lying $\bar qq'$-vector meson. We choose $m_{D^*_s}=2.11$ GeV for $c\to s$, $m_{B^*}=5.325$ GeV for $b\to u$, $m_{B_c^*}\approx m_{B_c}=6.4$ GeV \cite{CDF-Bc} for $b\to c$ transitions. The values of $f_+^{qq'}(0)$ are taken from the Table 1. Also we calculate the branching ratios of semileptonic decays by using widely accepted values of the CKM matrix elements \cite{PDG}. Our result for the slope of the $K_{l3}$ form factor \begin{equation} \lambda_+ = m_{\pi}^2 \frac{f_+^{K\pi\prime}(0)}{f_+^{K\pi}(0)} = 0.023\ , \end{equation} is in good agreement with experiment: $\lambda_+^{\rm expt} = 0.0286 \pm 0.0022$ \cite{PDG} and VDM prediction: $\lambda_+^{\rm VDM} = m_{\pi}^2/m^2_{K^{\ast}} = 0.025.$ This value is also consistent with Refs. \cite{kl3} One can see that the agreement with experimental data and lattice results is very good, with the exception of the value of $f_+^{bu}(0)$ which is found to be larger than the monopole extrapolation of a lattice simulation, QCD Sum Rules (cf. \cite{CSB}) and some other quark models (see, for example, \cite{BSW,LNS}). However, this result is consistent with the value calculated from Refs.~\cite{DSEH2,infra} and allows us to reproduce the experimental data for $B\to \pi l\nu$ with quite good accuracy. \section*{Acknowledgments} We appreciate F. Buccella, V.E. Lyubovitskij, G. Nardulli and C.D. Roberts for many interesting discussions and critical remarks. We thank G. Esposito for reading the manuscript. M.A.I. gratefully acknowledges the hospitality and support of the Theory Group at Naples University where this work was conducted. This work was supported in part by the Russian Fund for Fundamental Research, under contract number 99-02-17731-a.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:intro} In \cite{Br:95} K. Brakke introduced the covering space method for solving a rather large class of one-codimensional Plateau type problems, including the classical case of an area-minimizing surface spanning a knot, a Steiner minimal graph connecting a given number of points in the plane, and an area-minimizing surface spanning a nonsmooth one-dimensional frame such as the one-skeleton of a polyhedron. The method does not impose any topological restriction on the solutions; it relies on the theory of currents and takes into account also unoriented objects. It consists essentially in the construction of a pair of covering spaces, and is based on the minimization of what the author called the soap film mass. Recenlty, a slightly different approach has been proposed in \cite{AmBePa:17}; it is based on the minimization of the total variation for functions defined on a single covering space and satisfying a suitable contraint on the fibers. Also this method does not impose any topological restriction on the solutions. Moreover, it takes advantage of the full machinery known on the space of BV functions defined on a locally Euclidean manifold: for instance, and remarkably, it allows approximating the considered class of Plateau type problems by $\Gamma$-convergence. In the forthcoming paper \cite{BePaPaSc:17} we shall deepen this $\Gamma$-convergence regularization for finding minimal networks in the plane. The interest in the covering space method is also illustrated in the recent paper \cite{BePaPa:17}, where is shown a triple cover of $\R^3 \setminus (S\cup C)$, $S$ a tetrahedral frame and $C$ two disk boundaries, compatible with a soap film spanning $S$ and having higher topological type, more precisely with two tunnels (see Figure \ref{fig:morgan} in the case of the {\it regular} tetrahedron). \begin{figure} \includegraphics[width=0.43\textwidth]{lawlormorganmod.png} \caption{\small{ A slightly retouched version of \cite[fig. 1.1.1]{LaMo:94}, see also \cite[fig. 11.3.2]{Mo:08}. This soap film has two tunnels, one clearly visible in the picture. This figure was done by Jean Taylor, following an idea due to Bob Hardt. }} \label{fig:morgan} \end{figure} The cover described in \cite{BePaPa:17} has the particular feature of being not normal; in addition, it is constructed using the above mentioned disks. Similar disks were firstly introduced in \cite{Br:95} in other examples, and called {\it invisible wires} by the author. In the case of the tetrahedron, they play a crucial role. {}From one side, they are necessary to complete the construction of the triple cover; from the other side, they act as an obstacle. In addition, they allow one to distinguish {\it tight} loops around particular edges of the frame $S$ from loops turning {\it far} from the edges: this distinction turns out to be crucial for the modelization of a higher genus soap film. The results of \cite{BePaPa:17} strongly suggest that, for a tetrahedron sufficiently elongated in one direction, the higher-genus surface has area strictly less then the conical configuration. In this paper, for convenience of the reader we recall (Section \ref{sec:double_covers_of_R3_deprived_by_a_curve}) the double covers method and BV functions for treating the classical Plateau problem. In Section \ref{sec:covers_of_degree_larger_than_two} we point out the main modifications of the construction in the case of covers of degree larger than two. Next, in Section \ref{sec:examples} we continue the analysis in the spirit of \cite{BePaPa:17}, discussing various interesting examples. In Example \ref{exa:a_partially_wetted_curve} we discuss with some care a classical example due to F.J. Almgren of a soap film only partially wetting an unknotted curve, see also \cite{Br:95}. In Example \ref{exa:soap_film_on_a_cubical_frame} we describe a cover of $\R^3 \setminus S$, where $S$ is the one-skeleton of a cube, which is compatible with the soap film depicted in Figure \ref{fig:nscube}. This is obviously not the most common soap film one usually finds in pictures, which has no holes and has triple curves starting in the corners \cite[Figure 6]{Ta:76}. It is worthwhile to notice that such a soap film has area larger than the area of the soap film in Figure \ref{fig:nscube}. In Example \ref{exa:triple_moebius_band} we show how to construct a triple cover compatible with the soap film of Figure \ref{fig:retract}, which is a surface that retracts on its boundary, and therefore for which we cannot apply the Reifenberg method. In Example \ref{exa:octahedron} we discuss the case when $S$ is the one-skeleton of an octahedron. We conclude this introduction by mentioning that calibrations, applied to the covering space method, have been considered in \cite{Br:95}, \cite{Br:95b} and, more recently, in \cite{CaPl:17} in connection with the BV approach in dimension two. \section{Double covers of \texorpdfstring{$\Omega\setminus S$}{Omega - S}} \label{sec:double_covers_of_R3_deprived_by_a_curve} In this section we describe the cut and paste method for constructing a double cover of the base space $M:= \Omega\setminus S$ where, for simplicity, $S$ is a smooth compact embedded two-codimensional manifold without boundary and $\Omega$ is a sufficiently large ball of $\R^n$ containing $S$, $n \geq 2$. Just to fix ideas, one can consider $n=3$ and $S$ a tame knot or link\footnote{No invisible wires will be taken into account in this section.}. Next, to model the area minimization problem with $S$ as boundary datum, we define a minimum problem on a class of BV functions defined on the cover and satisfying a suitable constraint. The projection over the base space of the jump set of a minimizer will be our definition of solution to the Plateau problem; this is a simplified version of the construction described in \cite{AmBePa:17}, to which we refer for all details. Before starting the discussion, it is worth to recall that, in more general cases (such as those in Section \ref{sec:examples}), the cut and paste procedure needs not be the most convenient method to work with. Indeed, the cover can be equivalently described in two other ways. In the first one it is sufficient to declare an orientation of the cut, and a family of permutations of the strata along the cut; this family must be consistent, a condition that is obtained from the local triviality of the cover. The second method is based on an abstract construction, by taking the quotient of the universal cover of $M$ with respect to a subgroup of the fundamental group of $M$; at the end of the section we recall this construction, while in Section \ref{sec:examples} we shall use both these two latter methods. In what follows we shall always assume that the cover is trivial in a neighbourhood of $\partial \Omega$. Hence, in that neighbourhood we can speak without ambiguities of sheet one and sheet two, up to automorphisms of the cover. \subsection{Cut and paste construction of the double cover} We start by defining a {\it cut} (also called a cutting surface when $n=3$), which is a $(n-1)$-dimensional compact embedded smooth oriented submanifold $\Sigma \subset \Omega$ with $\partial \Sigma = S$. Next we glue two copies (the sheets, or strata) of $M:= \Omega \setminus S$ along $\Sigma$ by exchanging the sheets. Equivalently, we associate the permutation $(1 ~ 2)$ to $\Sigma$.\footnote{Note that, being this permutation of order two, fixing an orientation of $\Sigma$ is not necessary and $\Sigma$ could even be nonorientable. For covers of degree larger than two and other type of permutations (see Sections \ref{sec:covers_of_degree_larger_than_two} and \ref{sec:examples}) orientability of $\Sigma$ is necessary.} To figure out the construction, it is convenient to ``double'' $\Sigma$, namely to slightly separate two copies of $\Sigma$ having boundary $S$ and meeting only at $S$; we call these two copies $\Sigma$ and $\Sigma'$, and we denote by $\mathbf \Sigma$ the pair $(\Sigma, \Sigma')$, that we call pair of cuts. The orientability of $\cut$ gives a unit normal vector field on $\cut\setminus\Hole$\,---\,hence, in particular, a direction to follow in order to ``enlarge'' the cut, separating its two ``faces''. If we call $O\subset \Om$ (resp. $I \subset \Om$) the open region exterior (resp. interior) to $\cut \cup \cut'$, we can explicitely describe the gluing procedure as follows: \begin{itemize} \item[] we let $$ \chart:=\Om \setminus \cut, \qquad \chart':= \Om \setminus \cut', $$ and consider\footnote{In order to be consistent with the permutation $(1~ 2)$ mentioned above, it is sufficient to rename $(D',3)$ and $(D',4)$ as $(D',1)$ and $(D',2)$.} $$ \unionedisgiunta:=(\chart,1)\cup (\chart, 2)\cup (\chart', 3)\cup (\chart', 4); $$ \item[] we endow $\unionedisgiunta$ with the following equivalence relation: given $x, x' \in \mfd$ and $j \in \{1,2\}$, $j' \in \{3,4\}$, $(x,j), (x',j')\in \mathcal X$, we say that $(x,j)$ is equivalent to $(x',j')$ if and only if $x=x'$, and one of the following conditions hold: \begin{equation}\label{eq:equivalenza} \begin{cases} & x\in O, \qquad \{j,j'\}\in \big\{\{1,3\}, \{2,4\}\big\}, \\ & x\in I, \qquad ~ \{j,j'\}\in \big\{\{1,4\}, \{2,3\}\big\}. \end{cases} \end{equation} \end{itemize} We call $Y_ \mathbf{\cut}$ the quotient space of $\unionedisgiunta$ by this equivalence relation (endowed with the quotient topology) and $\widetilde{\pi} : \unionedisgiunta \to Y_ \mathbf{\cut}$ the projection. The double cover of $\mfd$ is then \begin{equation}\label{eq:proiezionesullabase} \pi_{\mathbf{\cut},\mfd} \colon Y_ \mathbf{\cut} \to \mfd \end{equation} where $\pi_{\mathbf{\cut},M}(\widetilde{\pi}(x,j)):=x$ for any $(x,j)\in\unionedisgiunta$, which is well defined, since if $(x,j)\sim(x',j')$, then $\pi_{\mathbf{\cut},\mfd}(\widetilde{\pi}(x,j))=\pi_{\mathbf{\cut},\mfd}(\widetilde{\pi}(x',j'))$. If we set $\pi\colon (x,j)\in \unionedisgiunta\mapsto x \in \mfd$, we have the following commutative diagram: \begin{equation}\label{eq:schema} \xymatrix{ \unionedisgiunta \arr[r]^{\widetilde{\pi}} \arr[dr]_{\pi} & Y_ \mathbf{\cut} \arr[d]^{\pi_{\mathbf{\cut},M}} \\ & \mfd } \end{equation} The quotient $Y_ \mathbf{\cut}$ admits a natural structure of differentiable manifold, with four local parametrizations given by $\Psi_1, \Psi_2, \Psi_3, \Psi_4$, where \begin{equation}\label{eq:Psij} \begin{aligned} & \Psi_j \colon \chart \to \widetilde{\pi} \big((\chart,j)\big), \qquad \Psi_j:=\widetilde{\pi}\circ \pi_{\vert_{(\chart,j)}}^{~~-1}, \quad j =1,2, \\ & \Psi_{j'} \colon \chart' \to\widetilde{\pi} \big((\chart',j')\big), \qquad \Psi_{j'}:=\widetilde{\pi}\circ \pi_{\vert_{(\chart',j')}}^{~~-1}, \quad j' = 3,4. \end{aligned} \end{equation} It is important here that the transition maps are the identity: $$\Psi_{j'}^{-1}\circ \Psi_j = \mathrm{id}=\Psi_j^{-1} \circ \Psi_{j'}, \qquad j\in\{1,2\}, j'\in \{3,4\}, $$ the equalities being valid where all members of the equation are defined. Notice that $\Psi_1(D) \cup \Psi_2(D) = Y_ \mathbf{\cut} \setminus \pi_{\mathbf{\cut},M}^{\ -1}(\cut \setminus S)$, and $\Psi_3(D') \cup \Psi_4(D') = Y_ \mathbf{\cut} \setminus \pi_{\mathbf{\cut},M}^{\ -1}(\cut' \setminus S)$. The local parametrizations allow to read a function $u : Y_ \mathbf{\cut} \to \R$ in charts: for $j=1,2$ and $j'=3,4$ we let $v_j(u): D \to \R$, $v_{j'}(u): D'\to \R$ be \begin{equation}\label{eq:v} v_j(u):=u\circ \Psi_j, \qquad v_{j'}(u):=u\circ \Psi_{j'}. \end{equation} Recalling \eqref{eq:equivalenza}, we have \begin{equation}\label{eq:vvvv} \begin{split} v_1(u)=v_3(u) , \quad & v_2(u)=v_4(u) \qquad{\rm a.e.~in~} O,\\ v_1(u)=v_4(u), \quad & v_2(u)=v_3(u) \qquad {\rm a.e.~in~} I. \end{split} \end{equation} \subsection{Total variation on the double cover} The set $Y_ \mathbf{\cut}$ is endowed with the push-forward $\mu$ of the $n$-dimensional Lebesgue measure $\mathcal L^n$ in $\mfd$ via the local parametrizations. We set $L^1(Y_ \mathbf{\cut}) := L^1_\mu(Y_ \mathbf{\cut})$. We say that $u$ is in $BV(Y_ \mathbf{\cut})$ if its distributional gradient $D u \colon \phi\in C^1_c(Y_ \mathbf{\cut}) \mapsto -\int_{Y_ \mathbf{\cut}} u D \phi \,d\mu \in \R^n$ is a bounded vector\,--\,valued Radon measure on $Y_ \mathbf{\cut}$. We denote by $|Du|$ the \emph{total variation} measure of $Du$. Let $u \in BV(Y_ \mathbf{\cut})$ and $E\subseteq Y_ \mathbf{\cut}$ be a Borel set; $E$ can be written as the union of the following four disjoint Borel sets: \begin{equation} \label{eq:splitting} E \cap \widetilde{\pi} ((D,1)), \; E \cap \widetilde{\pi} ((D,2)), \; E \cap \widetilde{\pi} ((\cut\setminus S,3)), \; E \cap \widetilde{\pi} ((\cut\setminus S,4)), \end{equation} and we have \begin{equation}\label{eq:vartotnuova} \begin{aligned} \vert Du\vert(E) = & \sum_{j=1,2} \vert Dv_j(u)\vert \Big(\pi_{\mathbf{\cut},M}\big(E \cap \widetilde{\pi} ((D,j))\big)\Big)\\ & + \sum_{j' =3,4} \vert Dv_{j'}(u)\vert \Big(\pi_{\mathbf{\cut},M}\big(E \cap \widetilde{\pi} ((\cut\setminus S,j'))\big)\Big). \end{aligned} \end{equation} Notice that $\cut'$ does not appear in \eqref{eq:splitting}. Choosing $\chart'$ in place of $\chart$ amounts in considering $\cut'$ in place of $\cut$ and does not change the subsequent discussion. \begin{Example}\label{exa:dischi}\rm Suppose the simplest case $n=2$, and $S$ two distinct points $q_1, q_2$. Let $u \in BV(Y_ \mathbf{\cut})$ be such that $v_1(u)$ is equal to $a \in \R$ inside a disk $B$ of radius $r>0$ contained in $I$ (or in $O$) and $b \in \R$ outside, and $v_2(u)$ is equal to $c \in \R$ in $B$ and $d \in \R$ outside. Then, owing to \eqref{eq:vvvv}, \begin{equation}\label{eq:esempiostupido} \begin{split} \vert Du\vert(Y_ \mathbf{\cut}) = & \vert Dv_1(u)\vert(B \cap \chart) + \vert Dv_2(u)\vert(B \cap \chart) \\ & + \vert Dv_3(u)\vert (\cut \setminus \{q_1, q_2\}) + \vert Dv_4(u)\vert (\cut\setminus \{q_1, q_2\})\\ =& (\vert b-a\vert + \vert d - c\vert) ~2 \pi r + 2 \mathcal H^1(\cut) \vert d-b \vert. \end{split} \end{equation} On the other hand, if $B$ is centered at a point of $\cut$, and $B\cap \cut' = \emptyset$, then \begin{equation}\label{eq:esempio} \begin{aligned} \vert Du\vert(Y_ \mathbf{\cut}) = & \vert Dv_1(u)\vert(B \cap \chart) + \vert Dv_2(u)\vert(B \cap \chart) \\ & + \vert Dv_3(u)\vert(\cut\setminus \{q_1, q_2\}) + \vert Dv_4(u) \vert(\cut\setminus \{q_1,q_2\}) \\ = & \left(\vert b-a\vert + \vert d - c\vert\right) ~2 \pi r + 2 \vert c-a\vert \mathcal H^1(\cut \cap B) \\ & + 2 \vert d - b\vert \left( \mathcal H^1(\cut)-\mathcal H^1(\cut \cap B)\right) . \end{aligned} \end{equation} If in particular $a=1$, $b=0$, $c=0$, $d=1$, we have that \eqref{eq:esempiostupido} and \eqref{eq:esempio} become \begin{equation*}\label{eq:utile} \vert Du\vert(Y_ \mathbf{\cut}) = 2\left(2 \pi r + \mathcal H^1(\cut)\right). \end{equation*} \end{Example} \subsection{The constrained minimum problem on the double cover}\label{subsec:setting} We let $$ BV(Y_ \mathbf{\cut}; \{0, 1\}) := \Big\{ u \in BV(Y_ \mathbf{\cut}) \; : \; u(y) \in \{0,1\} {\rm ~for}~\mu~{\rm a.e.}~ y \in Y_ \mathbf{\cut}\Big\}. $$ The domain of $\mathcal F$ is defined\footnote{For simplicity we drop the dependence on $\mathbf{\cut}$ in the notation.} by \begin{equation*} \label{eq:vincolo_gen} \domain(\FFF):=\Big\{u\in BV(Y_ \mathbf{\cut}; \{0, 1\}) \; : \sum_{\pi_{\mathbf{\cut},M}(y)=x} u(y)=1 \text{~ for a.e.~$x$ in~} \mfd \Big\}, \end{equation*} and $$ \mathcal F(u) := \vert Du\vert (Y_ \mathbf{\cut}), \qquad u \in \domain(\FFF). $$ Therefore the values of $u$ on the two points of a fiber are $0$ and $1$: this is what we call the {\it constraint on the fibers}. Hence, for any $u \in \domain(\FFF)$ we have \begin{equation} \label{eq:v1v2} v_1(u)=1-v_2(u) \; \text{a.e.~in}~ \chart, \qquad v_3(u)=1-v_4(u) \; \text{a.e.~in}~\chart'. \end{equation} For this reason, in formulas \eqref{eq:saltou} and \eqref{eq:formulafinale} below the functions $v_2(u)$ and $v_4(u)$ are not present. Moreover, the following splitting formula holds: \begin{equation}\label{eq:saltou} \pi_{\mathbf{\cut},M}(J_u) = \Big( J_{v_1(u)} \setminus (\cut\setminus S) \Big) \cup \Big( J_{v_3(u)}\cap (\Sigma \setminus S) \Big). \end{equation} Indeed, as in \eqref{eq:splitting}, let us split $J_u$ as the union of the following four disjoint sets: \begin{equation}\label{eq:splitjump} J_u \cap \widetilde{\pi} ((\chart,1)) , \; J_u \cap \widetilde{\pi} ((\chart,2)), \; J_u \cap \widetilde{\pi} ((\cut\setminus S,3)) , \; J_u \cap \widetilde{\pi} ((\cut\setminus S,4)) . \end{equation} By the constraint on the fibers, to each point in the first set of \eqref{eq:splitjump} there corresponds a unique point in the second set, belonging to the same fiber, and viceversa. A similar correspondence holds between the third and the fourth set. Hence \begin{equation*}\label{eq:splitjump2} \pi_{\mathbf{\cut},M}(J_u) = \pi_{\mathbf{\cut},M}\Big( J_u \cap \widetilde{\pi} ((\chart,1)) \Big) \cup \pi_{\mathbf{\cut},M}\Big( J_u \cap \widetilde{\pi} ((\cut\setminus S,3)) \Big). \end{equation*} By the definitions of $J_u$, $J_{v_1(u)}$ and $J_{v_3(u)}$, using also the local parametrizations $\Psi_1$, $\Psi_3$, it follows that $\pi_{\mathbf{\cut},M}\big( J_u \cap \widetilde{\pi} ((\chart,1)) \big)= J_{v_1(u)} \setminus (\cut\setminus S)$, and $\pi_{\mathbf{\cut},M}\big( J_u \cap \widetilde{\pi} ((\cut\setminus S,3))\big)= J_{v_3(u)} \cap (\cut\setminus S)$, and \eqref{eq:saltou} follows. \begin{definition}[\textbf{Constrained lifting}]\label{rem:uspeciale} Let $v\in BV(\chart; \{0, 1\})$. Then the function \begin{equation} \label{eq:usigma} u:= \begin{cases} v & \text{in } \Psi_1(\chart),\\ 1-v & \text{in } \Psi_2(\chart), \end{cases} \end{equation} is in $\domain(\FFF)$, and $v_1(u)=v$. We call $u$ the constrained lifting of $v$. \end{definition} In particular, when $v$ is identically equal to $1$ (or $0$), we have $$ \pi_{\mathbf{\cut},M}(J_u)= \cut\setminus S. $$ The next result clarifies which is the notion of area we intend to minimize. \begin{Proposition} Let $u\in \domain(\FFF)$. Then \begin{equation} \label{eq:formulafinale} \begin{split} |D u| (Y_ \mathbf{\cut}) =& 2 \Big( \mathcal H^{n-1}( J_{v_1(u)} \setminus \cut) + \mathcal H^{n-1}(J_{v_3(u)} \cap \cut) \Big) \\ =& 2 \,\mathcal H^{n-1}(\pi_{\mathbf{\cut},M} (J_u)). \end{split} \end{equation} \end{Proposition} \begin{proof} Recall the splitting in \eqref{eq:vartotnuova}, with the choice $E:=Y_ \mathbf{\cut}$. By \eqref{eq:v1v2}, we have \begin{equation} \label{eq:contributo1} |D v_1(u) | (\chart)=|D v_2(u) | (\chart), \qquad |D v_3(u) | (\cut)=|D v_4(u) | (\cut). \end{equation} By the properties of BV functions we have \begin{equation} \label{eq:contributo2} |D v_1(u) | (\chart) = \mathcal H^{n-1}( J_{v_1(u)} \setminus \cut), \quad |D v_3(u)|(\cut)= \mathcal H^{n-1} (J_{v_3(u)} \cap \cut). \end{equation} Substituting \eqref{eq:contributo2} into \eqref{eq:vartotnuova}, and recalling \eqref{eq:contributo1}, we get the first equality in \eqref{eq:formulafinale}. The second equality is now a consequence of \eqref{eq:saltou}. \end{proof} \begin{remark}\label{rem:fattore4}\rm The factor $2$ in \eqref{eq:formulafinale} is obtained by multiplying the absolute value of the difference of the values of $u$ (which gives a factor $1$), with the number of the sheets (which gives a factor $2$). \end{remark} A particular case of a result proven in \cite{AmBePa:17} is the following. \begin{theorem}[\textbf{Existence of minimizers}]\label{teo:esistenza} We have \begin{equation} \label{eq:problema_gen} \inf\Big\{|D u|\big(Y_ \mathbf{\cut} \big) \; : \; u\in \domain(\FFF) \Big \} = \min\Big\{|D u|\big(Y_ \mathbf{\cut} \big) \; : \; u\in \domain(\FFF) \Big \} >0. \end{equation} \end{theorem} Positivity follows from \eqref{eq:constancy2} below, with the choice $A:=\Om$. We denote by $u_{\min}$ a minimizer of problem \eqref{eq:problema_gen}. \begin{lemma}\label{lem:constancy} Let $A\subseteq \Om$ be a nonempty open set such that $\pi_{\mathbf{\cut},M}^{\ -1}(A\setminus S)$ is connected. Then for any $u\in\domain(\FFF)$, \begin{equation} \label{eq:constancy} \mathcal H^{n-1}\big(A \cap \pi_{\mathbf{\cut},M}(J_u) \big)>0. \end{equation} Moreover, if $A$ is bounded with Lipschitz boundary, then \begin{equation} \label{eq:constancy2} \inf \big\{ \mathcal H^{n-1}\big(A \cap \pi_{\mathbf{\cut},M}(J_u) \big) \; : \; u\in\domain(\FFF) \big\} >0. \end{equation} \end{lemma} \begin{proof} By contradiction, suppose that \begin{equation} \label{eq:constancycontr} \mathcal H^{n-1}\big(A \cap \pi_{\mathbf{\cut},M}(J_u) \big)=0. \end{equation} Applying \eqref{eq:saltou} to \eqref{eq:constancycontr}, we get \begin{equation}\label{eq:contaccio2} 0 = \mathcal H^{n-1}(A \cap (J_{v_1(u)} \setminus \Sigma)) + \mathcal H^{n-1}(A \cap J_{v_3(u)} \cap \Sigma). \end{equation} Now, set $A^S:=A\setminus S$. Applying \eqref{eq:vartotnuova} with the choice $E:=\pi_{\mathbf{\cut},M}^{-1}(A^S)$, we get \begin{equation}\label{eq:contaccio} \begin{aligned} \vert Du\vert(\pi_{\mathbf{\cut},M}^{\ -1}(A^S)) = & 2 \vert D v_1(u)\vert \left( \pi_{\mathbf{\cut},M}(\pi_{\mathbf{\cut},M}^{\ -1}(A^S) \cap\widetilde{\pi}((D,1)) )\right) \\ & + 2 \vert D v_3(u)\vert \left( \pi_{\mathbf{\cut},M}(\pi_{\mathbf{\cut},M}^{\ -1}(A^S) \cap \widetilde{\pi}(\Sigma \setminus S,3)) )\right) \\ = & 2 \, \big( \vert D v_1(u)\vert \left( A^S \setminus \Sigma \right) + \vert D v_3(u)\vert \left( A^S \cap \Sigma \right) \big)\\ = & 2 \, \big( \mathcal H^{n-1}(A\cap (J_{v_1(u)} \setminus \cut)) + \mathcal H^{n-1}(A\cap J_{v_3(u)} \cap \cut) \big), \end{aligned} \end{equation} which, coupled with \eqref{eq:contaccio2}, implies $|D u|(\pi_{\mathbf{\cut},M}^{\ -1}(A^S))=0$. Then $u$ is constant on $\pi_{\mathbf{\cut},M}^{\ -1}(A^S)$, which contradicts the validity of the constraint on the fibers. This proves \eqref{eq:constancy}. Now, let us suppose, still by contradiction, that there exists a sequence $(u_k)_k\subset \domain(\FFF)$ such that $\lim_{k\to +\infty} \mathcal H^{n-1}\big(A \cap \pi_{\mathbf{\cut},M}(J_{u_k}) \big) =0$. Thanks to the assumption on $A$, $\pi_{\mathbf{\cut},M}^{\ -1}(A^S)$ is a double nontrivial cover of $A^S$. In particular, for each $k\in\mathbb N$, the restriction $\hat u_k:= {u_k}_{|_{\pi_{\mathbf{\cut},M}^{\ -1}(A^S)}}$ is in $BV(\pi_{\mathbf{\cut},M}^{\ -1}(A^S); \{0,1 \})$ and satisfies the constraint on the fibers, and reasoning as above, $|D {\hat u_k} |(\pi_{\mathbf{\cut},M}^{\ -1}(A^S))= 2 \mathcal H^{n-1}(A \cap \pi_{\mathbf{\cut},M}(J_{u_k}))$. By compactness, up to a not relabelled subsequence, there exists $\hat u\in BV_{\mathrm{constr}}(\pi_{\mathbf{\cut},M}^{\ -1}(A^S); \{0, 1 \})$ such that $\hat u_k\to u$ in $L^1(\pi_{\mathbf{\cut},M}^{\ -1}(A^S))$, and by lower semicontinuity, \[ |D \hat u|(\pi_{\mathbf{\cut},M}^{\ -1}(A^S)) \leq \liminf_{k\to+\infty} |D {\hat u_k} |(\pi_{\mathbf{\cut},M}^{\ -1}(A^S)) = 2 \lim_{k\to +\infty} \mathcal H^{n-1}\big(A \cap \pi_{\mathbf{\cut},M}(J_{u_k}) \big) =0. \] Hence $\hat u$ is constant on $\pi_{\mathbf{\cut},M}^{\ -1}(A^S)$, contradicting the constraint on the fibers. \end{proof} Lemma \ref{lem:constancy} shows, in particular, that the nontrivial topology of the cover coupled with the constraint on the fibers forces $u$ to jump in suitable open sets. As a further consequence of Lemma \ref{lem:constancy}, the boundary datum $S$ is attained by any constrained function on the cover, in the following sense. \begin{corollary}\label{cor:Sbordo} Let $u\in\domain(\FFF)$. Then \begin{equation}\label{eq:inclusione} \overline{\pi_{\mathbf{\cut},M}(J_u)}\setminus \pi_{\mathbf{\cut},M}(J_u) \supseteq S. \end{equation} \end{corollary} \begin{proof} The relation $S\cap \pi_{\mathbf{\cut},M}(J_u)=\emptyset$ is trivial, recall also \eqref{eq:saltou}. Now, suppose by contradiction that there exists a point $p\in S\setminus \overline{\pi_{\mathbf{\cut},M}(J_u)}$. Take an open ball $B$ centered at $p$, with $B\subset \Om\setminus \overline{\pi_{\mathbf{\cut},M}(J_u)}$, and apply Lemma \ref{lem:constancy} with the choice $A:=B$. Then, since $A \cap \pi_{\mathbf{\cut},M}(J_u)=\emptyset$, we end up with a contradiction with \eqref{eq:constancy}. \end{proof} If $2 \leq n \leq 7$ and $u$ is a minimizer, it is possible to show that equality holds in \eqref{eq:inclusione} \cite{AmBePa:17}. The definition of solution to the Plateau problem in the sense of double covers\footnote{An analog definition can be given for covers of degree larger than two. } is as follows. \begin{definition}[\textbf{Constrained double\,--\,cover solutions}]\label{def:soluzione} We call $$ \pi_{\mathbf{\cut},M}(J_{u_{\rm min}}) $$ a \emph{constrained double\,--\,cover solution} (in $\Om$) to Plateau's problem \emph{with boundary $\Hole$}. \end{definition} We say that a portion $P$ of $S$ is wetted if $\overline{\pi_{\mathbf{\cut},M}(J_{u_{\rm min}})} \supseteq P$, see also Section \ref{sec:examples}. \subsection{Independence of the pair of cuts}\label{sec:indipendenza} In this section we show that constrained double\,--\,cover solutions are independent of admissible cuts. A different proof of such an independence is given in Proposition \ref{prop:isometria}. Let us recall the definition of unoriented linking number, see for instance \cite[Section 3.17]{BoTu:82} or \cite[Section 5.2]{Hi:76}. \begin{definition}\label{def:link_property} Let $\rho \in C^1(\Sf^1;\R^n \setminus \Hole)$ be transverse to $\cut$. The unoriented linking number between $\rho$ and $\Hole$ is defined as \begin{equation}\label{eq:link} \mathrm{link}_2(\rho;\Hole):= \begin{cases} 0 & {\rm if} ~ \#(\rho^{-1}(\cut)) {\rm ~is~even}, \\ 1 & {\rm if} ~ \#(\rho^{-1}(\cut)) {\rm ~is~odd}. \end{cases} \end{equation} \end{definition} The right hand side of \eqref{eq:link} turns out to be independent of the cut $\cut$. When $\rho$ is just continuous, the unoriented linking number is defined using a $C^1$ loop homotopic to $\rho$ and not intersecting $\Hole$ \cite{Hi:76}. \medskip \begin{theorem}\label{teo:indipendenza} Let $\mathbf{\cut}=(\cut,\cut')$, $\mathbf{\Gamma}=(\Gamma,\Gamma')$ be two pairs of cuts. Let $u\in BV(Y_{\mathbf{\cut}}; \{0,1\})$ satisfies the constraint on the fibers. Then there exists $u'\in BV(Y_{\mathbf{\Gamma}}; \{0,1\})$ satisfying the constraint on the fibers such that, up to a $\mathcal H^{n-1}$\,--\,negligible set, \begin{equation}\label{eq:saltiuguali} \pi_{\mathbf{\cut},\mfd}(J_u)= \pi_{\mathbf{\Gamma},\mfd}(J_{u'}). \end{equation} \end{theorem} \begin{proof} Before giving the proof, we explain in a rough way the idea. First we fix a ``base point'' $x_0$ and count the parity of the number of intersections with the various manifolds $\Sigma, \Sigma', \Gamma, \Gamma'$. Next, we construct $u'$ so that $u'$ coincides with $u$ when calculated on $(x,j)$ for $j=1,2$, provided that the parity of the number of intersections with $\Sigma$ coincides with the parity of the number of intersections with $\Gamma$, while $u'$ coincides with $1-u$ when calculated on $(x,j)$ for $j=1,2$, provided that the parity of the number of intersections with $\Sigma$ differs with the parity of the number of intersections with $\Gamma$. Similarly, $u'$ coincides with $u$ when calculated on $(x,j')$ for $j'=3,4$, provided that the parity of the number of intersections with $\Sigma'$ coincides with the parity of the number of intersections with $\Gamma'$, while $u'$ coincides with $1-u$ when calculated on $(x,j')$ for $j'=3,4$, provided that the parity of the number of intersections with $\Sigma'$ differs with the parity of the number of intersections with $\Gamma'$. Let us now come to the proof. Without loss of generality, we can suppose that $\cut \neq \Gamma$. Fix $\polo \in \mfd\setminus (\cut\cup\Gamma)$. Let $x\in \mfd \setminus (\cut\cup \Gamma)$, and let $\gamma_x\in C^1([0,1];\mfd)$ be such that $\ga_x(0)=\polo$, $\ga_x(1)=x$, and $\ga_x$ is transverse both to $\cut$ and to $\Gamma$; such a $\ga_x$ will be called an admissible path from $x_0$ to $x$. We set $$ {\mathit h}(\ga_x; \cut, \Gamma):= \#(\ga_x^{-1}(\cut)) + \#(\ga_x^{-1}(\Gamma)). $$ If we consider another admissible path $\la_x$ from $x_0$ to $x$, we have that $h(\ga_x; \cut, \Gamma)$ and $h(\la_x; \cut,\Gamma)$ have the same parity. Indeed, let $\rho$ be the closed curve going from $\polo$ to $x$ following $\ga_x$, and then backward from $x$ to $\polo$ along $\la_x$. Recalling that ${\rm link}_2(\rho; \cut) = {\rm link}_2(\rho; \Gamma)$, it follows that ${\mathit h}(\ga_x; \cut,\Gamma) + {\mathit h}(\la_x; \cut,\Gamma) = \#(\rho^{-1}(\cut)) + \#(\rho^{-1}(\Gamma))$ is even. We are then allowed to set \begin{equation} \label{eq:i2} h(x; \cut,\Gamma):= \begin{cases} 0 & {\rm if} ~{\mathit h}(\ga_x; \cut,\Gamma) ~{\rm is~ even}, \\ 1 & {\rm if} ~{\mathit h}(\ga_x; \cut,\Gamma) ~{\rm is~ odd}, \end{cases} \end{equation} for any admissible $\ga_x$ from $x_0$ to $x$\footnote{Once $\polo$ is fixed, the function $h$ allows to define an ``exterior'' and an ``interior'' of $\cut\cup\Gamma$, even when $\cut$ and $\Gamma$ intersect on a set of positive $\mathcal H^{n-1}$\,--\,measure.}. Set $\mathcal Q :=\{x\in \mfd \setminus (\cut \cup \Gamma) \; : \; h(x; \cut, \Gamma)=0\}$, which is an open set, with $\dde \mathcal Q \subseteq \cut \cup \Gamma$; moreover $\mathcal Q$ has finite perimeter in $\Omega$ by \cite[Proposition 3.62]{AmFuPa:00}. Define \begin{equation*} \label{eq:v1primo} v_1':=\begin{cases} v_1(u) \quad & \text{in } \mathcal Q, \\ 1-v_1(u) \quad & \text{in } \Om \setminus \mathcal Q. \end{cases} \end{equation*} {}From \cite[Theorem 3.84]{AmFuPa:00} it follows that $v_1' \in BV(\Om; \{0,1\})$. It also follows\footnote{ Indeed, let $x\in J_{v_1'} \setminus (\cut\cup\Gamma)$ and let $\ga_x$ be an admissible path from $x_0$ to $x$. Let $B(x)$ be an open ball centered at $x$ and disjoint from $\cut\cup\Gamma$; in particular, every $z\in B(x)$ can be reached by a path obtained attaching to $\ga_x$ the segment between $x$ and $z$; notice that such a path $\ga_z$ is admissible from $x_0$ to $z$, and ${\mathit h}(\ga_z; \cut, \Gamma)={\mathit h}(\ga_x; \cut,\Gamma)$. Therefore, either $v_1' = v_1(u)$ in $B(x)$ or $v_1'=1-v_1(u)$ in $B(x)$, which implies $x \in J_{v_1(u)}$. Hence $J_{v_1'} \setminus (\cut\cup\Gamma) \subseteq J_{v_1(u)} \setminus (\cut\cup\Gamma)$. Similarly, also the converse inclusion holds, and \eqref{eq:primopezzosaltovprimo} follows. } that \begin{equation} \label{eq:primopezzosaltovprimo} J_{v_1'} \setminus (\cut\cup\Gamma) = J_{v_1(u)} \setminus (\cut\cup\Gamma). \end{equation} We define $u'\in BV_{\mathrm{constr}}(Y_{\mathbf{\Gamma}}; \{0,1\})$ as the constrained lifting of $v_1'$ when $D$ is replaced by $\Omega \setminus \Gamma$. Recalling also \eqref{eq:vvvv}, set $$ v_3' := \begin{cases} v_1' & {\rm in~ the~ exterior~ region~} {\rm ~ to~} \Gamma\cup\Gamma', \\ 1-v_1' & {\rm in~ the~ interior~ region~} {\rm ~to ~} \Gamma\cup\Gamma'. \end{cases} $$ Notice that $v'_3\in BV(\Om; \{0,1\})$. By construction, we have $$ v_1' = v_1(u'), \qquad v_3' = v_3(u'). $$ We claim that $u'$ satisfies \eqref{eq:saltiuguali}. {}From \eqref{eq:saltou} we have $$ \pi_{\mathbf{\Gamma},\mfd}(J_{u'})= \big ( J_{v_1'} \setminus (\Gamma \setminus S) \big) \cup \big( J_{v_3'} \cap (\Gamma \setminus S) \big), $$ and our proof is concluded provided we show that, up to a $\mathcal H^{n-1}$\,--\,negligible set, \begin{equation}\label{eq:saltovvprimo} \big ( J_{v_1'} \setminus \Gamma \big) \cup \big( J_{v_3'} \cap \Gamma \big) = \big( J_{v_1(u)} \setminus \cut) \cup \big( J_{v_3(u)} \cap \cut\big). \end{equation} Let us split the left hand side of \eqref{eq:saltovvprimo} as follows: \begin{equation} \label{eq:saltovprimosplit} \begin{split} J_{v_1'} \setminus \Gamma &= \Big( (J_{v_1'} \cap \cut) \setminus \Gamma\Big) \cup \Big(J_{v_1'} \setminus (\cut\cup\Gamma)\Big), \\ J_{v_3'} \cap \Gamma &= \Big(J_{v_3'} \cap \cut \cap \Gamma\Big) \cup \Big((J_{v_3'} \cap \Gamma)\setminus \cut\Big). \end{split} \end{equation} Let us show that, up to a $\mathcal H^{n-1}$\,--\,negligible set, \begin{equation} \label{eq:secondopezzosaltovprimo} (J_{v_1'} \cap \cut) \setminus \Gamma= (J_{v_3(u)} \cap \cut) \setminus \Gamma. \end{equation} Let $x\in (J_{v_1'} \cap \cut) \setminus \Gamma$. Up to a $\mathcal H^{n-1}$\,--\,negligible set\footnote{ Here we use again \cite[Theorem 3.84]{AmFuPa:00}.}, we can assume that the approximate tangent spaces to $J_{v_1'}$ and $\cut$ at $x$ coincide. Let $B(x)$ be an open ball centered at $x$, not intersecting $\Gamma$, and such that $B(x)\setminus \cut$ consists of two connected components. The same argument used in the proof of \eqref{eq:primopezzosaltovprimo} shows that on one component $v_1'=v_1(u)$, while on the other $v_1'=1-v_1(u)$. Since $x\in J_{v_1'}$, we have $$ x \notin J_{v_1(u)}. $$ On the other hand, by \eqref{eq:vvvv}, in one component we have $v_1(u)=v_3(u)$, while in the other component $v_3(u)=v_2(u)=1-v_1(u)$ (where in the last equality we used \eqref{eq:v1v2}). Thus, $x\in J_{v_3(u)}$. So, up to a $\mathcal H^{n-1}$\,--\,negligible set, $(J_{v_1'} \cap \cut) \setminus \Gamma \subseteq (J_{v_3(u)} \cap \cut) \setminus \Gamma$. Arguing similarly for the other inclusion, we get \eqref{eq:secondopezzosaltovprimo}. The same argument applies also to prove that, up to a $\mathcal H^{n-1}$\,--\,negligible set, \begin{equation} \label{eq:terzopezzosaltovprimo} J_{v_3'} \cap \cut \cap \Gamma= J_{v_3(u)} \cap \cut \cap \Gamma, \quad \end{equation} and \begin{equation}\label{eq:quartopezzosaltovprimo} (J_{v_3'} \cap \Gamma)\setminus \cut =(J_{v_1(u)} \cap \Gamma)\setminus \cut. \end{equation} From \eqref{eq:primopezzosaltovprimo}\,--\,\eqref{eq:quartopezzosaltovprimo}, we finally get \eqref{eq:saltovvprimo}. \end{proof} \begin{corollary}[\textbf{Independence}]\label{cor:indipendenza} The minimal value in \eqref{eq:problema_gen} is independent of the pair $\mathbf{\cut}$ of cuts. \end{corollary} \begin{proof} Let $\mathbf{\cut}$, $\mathbf{\Gamma}$ be two pairs of cuts. Let $u_{\min}\in \domain(\FFF)$ be a function realizing the minimal value, call it $\mathscr A(\mathbf{\cut})$. Let $u'\in BV(Y_{\mathbf{\Gamma}}; \{0, 1\})$ be the function satisfying the constraint on the fibers given by Theorem \ref{teo:indipendenza} applied with $u = u_{{\rm min}}$. Then, by \eqref{eq:formulafinale} and \eqref{eq:saltiuguali}, we have \[ \mathscr A(\mathbf{\Gamma}) \leq 2 \mathcal H^{n-1}(\pi_{\mathbf{\Gamma},\mfd}(J_{u'})) = 2 \mathcal H^{n-1}(\pi_{\mathbf{\cut},\mfd} (J_{{u_{\min}}}))= \mathscr A(\cutpair). \] Arguing similarly for the converse inequality, we get $\mathscr A(\mathbf{\Gamma})=\mathscr A(\cutpair)$. \end{proof} In view of Corollary \ref{cor:indipendenza}, we often skip the symbol $\mathbf{\cut}$ in the notation of the cover, and on the minimal value of the area. Moreover, we often set $$ p := \pi_{\mathbf{\cut}, \mfd}. $$ \smallskip The relations between a constrained double-cover solution and other notions of solution to the Plateau problem can be found in \cite{AmBePa:17}. \subsection{Abstract construction of the double cover}\label{sec:abstract_construction_of_the_double_cover} The construction of the abstract cover is standard \cite{Ha:01}: fix $\polo\in \mfd$, and set $C_{x_0}([0,1]; \mfd):=\{\ga \in C\big([0,1]; \mfd\big) \;:\; \ga(0)=\polo\}$. For $\ga\inC_{x_0}([0,1]; \mfd)$, let $[\ga]$ be the class of paths in $C_{x_0}([0,1]; \mfd)$ which are homotopic to $\ga$ with fixed endpoints. We recall that the universal cover of $\mfd$ is the pair $(\widetilde \mfda,\projH)$, where $\widetilde \mfda:=\big\{[\ga] \ : \ \ga\inC_{x_0}([0,1]; \mfd) \big\}$ and $\projH\colon [\ga]\in \widetilde \mfda \mapsto \projH([\ga]):= \ga(1)\in \mfd$. The topology of $\widetilde \mfda$ is defined as follows: consider the family $\mathcal U:=\{B\subseteq \mfd \; : \; B \text{ open ball}\}$, which is a basis of open sets of $\mfd$. For $B\in \mathcal U$, and for $[\ga]\in \widetilde \mfda$ such that $\ga(1)\in B$, define \begin{equation*} U_{[\ga],B}:=\big\{ [\ga\la] \; : \; \la\in C([0,1]; B), \; \la(0)=\ga(1) \big\}. \end{equation*} Then a basis for the topology of $\widetilde \mfda$ is given by $\widetilde {\mathcal U} :=\{U_{[\ga],B} \; : \; B\in \mathcal U, \; [\ga]\in \widetilde \mfda, \, \ga(1)\in B\}.$ Let $\pi_1(\mfd,\polo)$ be the fundamental group of $\mfd$ with base point $\polo$, and let \begin{equation*}\label{eq:H} H:=\{[\rho]\in \pi_1(\mfd,\polo) \; : \; \mathrm{link}_2(\rho; \Hole)=0\}, \end{equation*} which is a normal subgroup of $\pi_1(\mfd,\polo)$ of index two. For $\ga\inC_{x_0}([0,1]; \mfd)$, set $\bar\ga(t):=\ga(1-t)$ for all $t\in[0,1]$. Associated with $H$, we can consider the following equivalence relation $\sim_H$ on $\widetilde \mfda$: for $[\ga], [\la] \in \widetilde \mfda$, \begin{equation*} [\ga]\sim_H [\la] \iff \ga(1)=\la(1), \quad \mathrm{link}_2(\ga\bar\la; \Hole)=0. \label{eq:Hrelbis} \end{equation*} \noindent We denote by $[\ga]_H$ the equivalence class of $[\ga]\in\widetilde \mfda$ induced by $\sim_H$, and we set \begin{equation*}\label{eq:abcv} \mfd_H:= \widetilde \mfda / \sim_H. \end{equation*} Letting $\widetilde \mathfrak{p}_H\colon \widetilde \mfda \to \mfd_H$ be the canonical projection induced by $\sim_H$, we endow $\mfd_H$ with the corresponding quotient topology. We set $\mathfrak{p}_{H,\mfd} \colon [\ga]_H \in \mfd_H \mapsto \ga(1) \in \mfd$, so that we have the following commutative diagram \begin{equation}\label{eq:schema2} \xymatrix{ \widetilde \mfda \ar[r]^{\widetilde \mathfrak{p}_H} \ar[dr]_{\projH} & \mfd_H \ar[d]^{\mathfrak{p}_{H,\mfd}} \\ & \mfd } \end{equation} and the pair $(\mfd_H,\mathfrak{p}_{H,\mfd})$ is a cover of $\mfd$, see \cite[Proposition 1.36]{Ha:01}. Let $(Y,\pi_Y)$ be a cover of $\mfd$, and let $y_0\in\pi_Y^{-1}(\polo)$. By $(\pi_Y)_*\colon \pi_1(Y,y_0)\to \pi_1(\mfd,\polo)$ we denote the homomorphism defined as $(\pi_Y)_*([\varrho]):= [\pi_Y\circ \varrho]$. By \cite[Proposition 1.36]{Ha:01}, we have \begin{equation}\label{eq:HHH} (\mathfrak{p}_{H,\mfd})_*(\pi_1(\mfd_H,[x_0]_H))=H, \end{equation} where $\pi_1(\mfd_H,[x_0]_H)$ is the fundamental group of $\mfd_H$ with base point the equivalence class $[x_0]_H$ of the constant loop $x_0$. \begin{Proposition}\label{pro:corollary} Let $\mathbf{\cut}$ be a pair of cuts. Then $Y_ \mathbf{\cut}$ and $\mfd_H$ are homeomorphic. \end{Proposition} \begin{proof} By \cite[p.~28]{Ha:01}, we can assume that $\polo \notin \cut \cup \cut'$. Now, let $y_0 \in \pi_{\mathbf{\cut},M}^{-1}(x_0)$ and $[\varrho]\in \pi_1(Y_ \mathbf{\cut}, y_0)$. Then, $[\varrho]$ changes sheet in $Y_ \mathbf{\cut}$ an even (or zero) number of times; therefore, assuming without loss of generality $\varrho$ of class $C^1$ and transverse to $\cut$, recalling also \eqref{eq:link}, we have \[ 0 \equiv \#\big((\pi_{\mathbf{\cut}, M}\circ\varrho)^{-1}(\cut)\big) \equiv \mathrm{link}_2(\pi_{\mathbf{\cut},M}\circ\varrho; \Hole) \ \ \;(\mathrm{mod }~2), \] which implies $\pi_{\mathbf{\cut},M} \circ \varrho \in H$. Hence, $(\pi_{\mathbf{\cut},M})_*\big(\pi_1(Y_ \mathbf{\cut}, y_0)\big)\leq H$, and since $H$ and $(\pi_{\mathbf{\cut},M})_*\big(\pi_1(Y_ \mathbf{\cut}, y_0)\big)$ have the same index, they must coincide. {}From \eqref{eq:HHH}, we deduce $$(\mathfrak{p}_{H,\mfd})_*(\pi_1(\mfd_H,[x_0]_H))=(\pi_{\mathbf{\cut},M})_*\big(\pi_1(Y_ \mathbf{\cut}, y_0)\big).$$ By \cite[Proposition 1.37]{Ha:01}, the proof is complete. \end{proof} The homeomorphism between the two covers, which we denote \begin{equation}\label{eq:omeo} f_{\mathbf{\cut}} \colon \mfd_H \to Y_ \mathbf{\cut}, \end{equation} is given for instance in the proof of \cite[Proposition 1.33]{Ha:01}: for $[\ga]_H \in\mfd_H$, let $\beta\in C([0,1];\mfd_H)$ be a path from $[\polo]_H$ to $[\ga]_H$; we uniquely lift $\mathfrak{p}_{H,\mfd}\circ \beta$ to a path in $Y_ \mathbf{\cut}$ with base point $y_0$. Then, $f_{\mathbf{\cut}}([\ga]_H)$ is defined as the endpoint of the lifted path, which turns out to be independent of $\beta$. Let us define the distance $d_{\mfd_H}$ on $\mfd_H$ as follows: for $[\ga]_H,$ $[{\la}]_H \in \mfd_H$, \begin{equation}\label{eq:dabcv} d_{\mfd_H}([\ga]_H,[\la]_H):=\inf_\beta \sup \big\{ \sum_l | \mathfrak{p}_{H,\mfd}(\beta(t_l))- \mathfrak{p}_{H,\mfd}(\beta(t_{l-1})) | \; : \; (t_l)_l\in \mathrm{Part}(\beta) \big\}, \end{equation} where the infimum runs among all $\beta\in C([0,1]; \mfd_H)$ connecting $[\ga]_H$ and $[\la]_H$; for any such $\beta$, $\mathrm{Part}(\beta)$ denotes the collection of all finite partitions $(t_l)_l$ of $[0,1]$ such that, for every $l$, there exist $[\ga_l] \in \widetilde M$ and a ball $B_l\subseteq M$ with $U_{[\gamma_l],B_l}\in \widetilde{\mathcal U}$ such that $\beta([t_{l-1},t_l]) \subset \widetilde \mathfrak{p}_H(U_{[\gamma_l],B_l})$. Symmetry, positivity, and the triangular inequality of $d_{\mfd_H}$ are direct consequences of the definition. Let us show that $d_{\mfd_H}([\ga]_H,[{\la}]_H)=0$ implies $[\ga]_H=[{\la}]_H$. Clearly, we have $\ga(1)={\la}(1)$. Fix $\eps>0$, and let $\beta\in C([0,1], \mfd_H)$, $N\in \mathbb N$, $(t_l)_l\in \mathrm{Part}(\beta)$, $l\in \{1,\dots, N\}$, be such that $\sum_{l=1}^N |\mathfrak{p}_{H,\mfd}(\beta(t_l))-\mathfrak{p}_{H,\mfd}(\beta(t_{l-1}))| \leq \eps$. In particular, for $\eps>0$ sufficiently small, the closed curve $\rho$ defined as\footnote{ Here by $[\![x, x']\!]$ we mean the path corresponding to the segment from $x$ to $x'$, for every $x,\,x'\in \mfd$.} $$\rho:=[\![\ga(1), \mathfrak{p}_{H,\mfd}(\beta(t_1))]\!] \cdots [\![\mathfrak{p}_{H,\mfd}(\beta(t_{N-1})), \la(1)]\!]$$ is contractible in $\mfd$, which implies that \begin{equation}\label{eq:linkrho} \mathrm{link}_2(\rho;\Hole)= 0. \end{equation} By definition of ${\rm Part}(\beta)$, for every $l\in \{1,\dots, N\}$ there exist $\la_{l,1}$, $\la_{l,2} \in C([0,1]; B_l)$, with $\la_{l,1}(0)=\la_{l,2}(0)=\ga_l(1)$, and such that $\beta(t_{l-1})=[\ga_l\la_{l,1}]_H$, $\beta(t_l)=[\ga_l\la_{l,2}]_H$; notice that, since $[\ga_{l-1} \la_{l-1,2}]_H=\beta(t_{l-1})=[\ga_{l}\la_{l,1}]_H$, we have \begin{equation} \label{eq:betall} \mathrm{link}_2 (\ga_{l-1}\la_{l-1,2}\bar\la_{l,1}\bar \ga_l; \Hole)=0. \end{equation} Set $ \rho_l:= \ga_l \la_{l,1} [\![\la_{l,1}(1), \la_{l,2}(1)]\!] \bar\la _{l,2}\bar\ga_l, $ which is a closed curve in $\mfd$. In particular, \begin{equation} \label{eq:contraibile} \mathrm{link}_2(\rho_l; \Hole) = \mathrm{link}_2(\la_{l,1} [\![\la_{l,1}(1), \la_{l,2}(1)]\!] \bar\la_{l,2}; \Hole)= 0, \end{equation} where last equality follows recalling that $B_l$ is contractible in $\mfd$. Coupling \eqref{eq:linkrho}, \eqref{eq:betall} and \eqref{eq:contraibile}, we get \begin{equation* \begin{split} \mathrm{link}_2(\ga{\bar\la};\Hole) = & \mathrm{link}_2(\ga_0 \la_{0,1} \bar\la_{N,2} \bar\la; S)\\ =& \sum_{l=1}^N \Big( \mathrm{link}_2 (\rho_l; \Hole) + \mathrm{link}_2 (\ga_{l-1}\la_{l-1,2}\bar\la_{l,1}\bar\ga_l; \Hole) \Big) + \mathrm{link}_2 (\rho; \Hole) =0 . \end{split} \end{equation*} Hence $[\ga] \sim_H {[\la]}$, and the conclusion follows. Now, we are in the position to establish the isometry bewteen the two covers. We endow $Y_ \mathbf{\cut}$ with the distance $d_{\cvv}$ defined as follows: for any $y$, $y' \in Y_ \mathbf{\cut}$, we set \begin{equation} \label{eq:dcvv} d_{\cvv}\big( y, y'\big)=\inf _\eta\ \sup \big\{\sum_l |\pi_{{\bf \Sigma},M}(\eta(t_l) )- \pi_{{\bf \Sigma},M}(\eta(t_{l-1}))| \ : \ (t_l)_l\in \mathrm{Part}(\eta) \big \}, \end{equation} where the infimum runs among all $\eta \in C([0,1]; Y_ \mathbf{\cut})$ connecting $y$ and $y'$, and $\mathrm{Part}(\eta)$ is the family of all finite partitions $(t_l)_l$ of $[0,1]$ such that, for every $l$, $\eta([t_{l-1}, t_l])$ is contained in a single chart of $Y_ \mathbf{\cut}$. \begin{Proposition}[\textbf{Isometry}]\label{prop:isometria} The map $f_{\mathbf{\cut}}$ in \eqref{eq:omeo} is an isometry between $(\mfd_H,d_{\mfd_H})$ and $(Y_ \mathbf{\cut},d_{\cvv})$. \end{Proposition} \begin{proof} Let $[\ga]_H$, $[\la]_H\in \mfd_H$. For $\eps>0$, let $\beta\in C([0,1]; \mfd_H)$ be a path from $[\ga]_H$ to $[\la]_H$, realizing the infimum in \eqref{eq:dabcv} up to a contribution of order $\eps$. Now, set $\eta:=f_{\mathbf{\cut}}\circ\beta$; accordingly to \eqref{eq:dcvv}, let $(t_l)_l\in \mathrm{Part}(\eta)$ be such that $$d_{\cvv}(f_{\mathbf{\cut}}([\ga]_H),f_{\mathbf{\cut}}([\la]_H)) \leq \sum_l |\pi_{{\bf \Sigma},M}(\eta(t_l)) - \pi_{{\bf \Sigma},M}(\eta(t_{l-1}))| +\eps.$$ Clearly, it is not restrictive to assume that, for every $l$, $\pi_{\mathbf{\cut},M}(\eta([t_{l-1},t_l]))\subset B_l$, for some open ball $B_l\subset \mfd$. Therefore, accordingly to \eqref{eq:dabcv}, we have $(t_l)_l\in \mathrm{Part}(\beta)$; hence, for every $l$, \[ |\pi_{\mathbf{\cut},M}(\eta(t_l) )- \pi_{\mathbf{\cut},M}(\eta(t_{l-1}))|=|\mathfrak{p}_{H,\mfd}(\beta(t_l)) - \mathfrak{p}_{H,\mfd}(\beta(t_{l-1}))|, \] which implies \[ d_{\cvv}(f_{\mathbf{\cut}}([\ga]_H),f_{\mathbf{\cut}}([\la]_H)) \leq d_{\mfd_H}([\ga]_H,[\la]_H) +2\eps. \] By the arbitrariness of $\eps$, we get $d_{\cvv}(f_{\mathbf{\cut}}([\ga]_H),f_{\mathbf{\cut}}([\la]_H))\leq d_{\mfd_H}([\ga]_H,[\la]_H)$. Similarly, we get the converse inequality. \end{proof} Once we have to minimize a functional defined on some functional domain, the metric structure (and not only its topology) of the cover becomes relevant: the distance function on $Y$ is locally euclidean, and the two methods described above give isometric covers. We conclude this section remarking that a large part of what we have described can be generalized \cite{AmBePa:17}: \begin{itemize} \item[$\bullet$] to a cover of $\R^n \setminus S$ having more then two sheets. Allowing three or more sheets has the interesting by-product of modelling singularities in soap films such as triple junctions (in the plane), or triple curves (in space), quadruple points, etc. \item[$\bullet$] when $S$ is not smooth, for instance $S$ the one-skeleton of a polyhedron. \end{itemize} We refer to \cite{Br:95}, \cite{AmBePa:17} and \cite{BePaPa:17} for a more complete description for covers of any (finite) degree. \section{Covers of degree larger than two}\label{sec:covers_of_degree_larger_than_two} The use of covers $p := \pi_{\mathbf{\cut},M}: Y \to M$ of degree larger than two, coupled with vector-valued BV-functions defined on $Y$ and satisfying a suitable constraint, is of interest since for instance: \begin{itemize} \item[$\bullet$] when $n=2$, one can model, among others, the Steiner minimal graph problem connecting a finite number $k \geq 3$ of points in the plane \cite{AmBePa:17}; \item[$\bullet$] when $n=3$, one can consider configurations with singularities (triple curves, quadruple points etc.), in particular when $S$ is the one-dimensional skeleton of a polyhedron; \item[$\bullet$] choosing carefully the cover, it is possible to model soap films with higher topological genus, as in the example of the one-skeleton of a tetrahedron\footnote{The triple cover constructed in \cite{BePaPa:17} used to realize a soap film with two tunnels is not normal. Roughly, this means that one of the three sheets is treated in a special way; this is also related to the Dirichlet condition imposed on the cover in correspondence of the boundary of $\Omega$.} discussed in \cite{BePaPa:17}: the resulting soap film seems not to be modelable using the Reifenberg approach \cite{Re:60}. \end{itemize} Some remarks to be pointed out are the following: \begin{itemize} \item[$\bullet$] in the construction of the cover, and to model interesting situations, it frequently happens to make use\footnote{Invisible wires can be useful also for covers of degree two.} of what the author of \cite{Br:95} called ``invisible wires'': these may have various applications, such as making globally compatible the cover, or also acting as an obstacle (see also Section \ref{sec:examples}). They are called invisible wires because the soap film should be supposed to wet the initial wireframe $S$, but not to wet the invisible wires, so that their actual position becomes relevant. Proving that a soap film has no convenience to wet the invisible wires for special choices of their position, seems to be an open problem, not discussed in \cite{Br:95}. We refer to \cite{BePaPa:17} for more. \item[$\bullet$] Instead of describing explicitely the cut and past procedure (as in Section \ref{sec:double_covers_of_R3_deprived_by_a_curve}) and the parametrizing maps (which becomes more and more complicated as the degree of the cover increases) now it is often convenient to construct the cover first by orienting all portions\footnote{It is worth noticing that it may happen that now the cut surface is immersed, and not embedded.} of the cut, then declaring in a consistent global way the permutations for gluing the sheets along the cut, and finally to use the local triviality of the cover, in order to check the consistency of the gluing. Already in the case of triple covers, a relevant fact is the use of permutations with fixed points. \item[$\bullet$] Another useful way to describe the cover is the abstract construction (already considered in Section \ref{sec:abstract_construction_of_the_double_cover} for double covers): one has to suitably quotient the universal cover with a subgroup of the fundamental group of the complement of $S$\footnote{or, if necessary, of the union of $S$ and the invisible wires.}. A clear advantage of this approach is its independence of any cut, a fact that, with the cut and past procedure, requires a proof. \item[$\bullet$] BV-functions defined on $Y$ could be vector valued, as in \cite{AmBePa:17}. Suppose for simplicity to consider a triple cover; then one choice is to work with BV-functions $u : Y \to \{\alpha, \beta, \gamma\}$, where $\alpha,\beta,\gamma$ are the vertices of an equilateral triangle of $\R^2$, having its barycenter at the origin. If $x$ is any point of $M$ and $p^{-1}(x) = \{y_1,y_2,y_3\}$ is the fiber over $x$, then we require $\{u(y_1), u(y_2), u(y_3)\} = \{\alpha, \beta,\gamma\}$. Clearly, the constraint now reads as $\sum_{i=1}^3 u(y_i) =0$. \noindent Another choice (made also in \cite{BePaPa:17}) is, instead, the following. Again, suppose for simplicity to consider a triple cover. We can consider BV-functions $u : Y \to \{0,1\}$, so that if $x$ is any point of $M$ and $p^{-1}(x) = \{y_1,y_2,y_3\}$ is the fiber over $x$, then we require the constraint $\sum_{i=1}^3 u(y_i) =1$. Other choices of the constraint are conceivable, but we do not want to pursue this issue in the present paper. \end{itemize} Once we have specified the domain of the area functional, i.e., a class of constrained BV-functions $u$, the variational problem becomes, as in Section \ref{sec:double_covers_of_R3_deprived_by_a_curve}, to minimize the total variation of $u$\footnote{In the case of $u(y) \in \{\alpha,\beta,\gamma\}$, the total variation is using the Frobenius norm $\vert T\vert = \sqrt{\sum (t_{ij})^2}$ on matrices $T = (t_{ij})$.}. This turns out to be the $(n-1)$-dimensional Hausdorff measure of the projection $p(J_u)$ of the jump set $J_u$ of $u$, times a positive constant $c$, related to the codomain of $u$ and possibly to the number of sheets. For instance, for $u(y) \in \{\alpha,\beta,\gamma\}$ as above, then $c=3 \ell$, where $\ell = \vert \beta-\alpha\vert$. For $u(y) \in \{0,1\}$, then $c=2$. In the next section we construct triple covers, in some interesting cases not considered in \cite{BePaPa:17}, and only partially considered in \cite{Br:95}. \section{Examples}\label{sec:examples} In this section all covers are of degree three; moreover, we consider BV functions $u : Y \to \{0,1\}$ with the constraint that the sum of the values of $u$ on the three points of each fiber equals $1$. We start with the example of Figure \ref{fig:cravatta}, due to F.J. Almgren \cite[Fig. 1.9]{Al:01}. \begin{figure} \includegraphics[width=0.48\textwidth]{cravatta_base.pdf} \includegraphics[width=0.48\textwidth]{partialborder.png} \caption{\small{Left: an unknotted boundary (bold curve). The dotted loop represents an invisible wire that is not part of the problem but essential for the cover construction. Right: a striking example of minimal film that only partially touches the boundary, due to Almgren \cite[fig. 1.9]{Al:01}.}} \label{fig:cravatta} \end{figure} \begin{Example}[\textbf{A partially wetted curve}]\label{exa:a_partially_wetted_curve}\rm Let $\edges$ be the (unknotted) bold curve in Figure \ref{fig:cravatta} (left). We want to construct a cover of $\R^3 \setminus \edges$ compatible with the soap film in Figure \ref{fig:cravatta} (right), where the lower part is not wetted. The presence of the triple curve suggests to use a cover of degree at least three, and indeed three will suffice. Removal of the unknotted curve from $\R^3$ leaves a set with infinite cyclic fundamental group (isomorphic to $\Z$). The only possible cover with three sheets that can be constructed on such a base space would necessarily imply a cyclic permutation of the three points of the fiber when looping around the lower portion of the curve, forcing an undesired wetting. Similarly to the construction described in \cite{BePaPa:17} and in the same spirit as in many of the examples in \cite{Br:95}, we then add an ``invisible wire'' in the form of a loop circling the pair of nearby portions of $\edges$ in the upper part. This is represented by the dotted loop $\iwire$ in Figure \ref{fig:cravatta} (left). The base space $M$ is then defined as $\R^3 \setminus (\edges \cup \iwire)$. A cut and past construction of the cover $\prj : \cover \to \base$ can now be defined by cutting $\base$ along two surfaces bounded by $\edges$ and by $\iwire$ respectively. The first one resembles the film of Figure \ref{fig:cravatta} (right), but it has a selfintersection along the dashed (lower) segment and continues below the disk-like portion touching the whole of $\edges$; the second one is a small disk bounded by $\iwire$, intersecting the first cutting surface along the dashed segment. We now take three copies, numbered $1, 2, 3$, of the cutted version of $\base$ and glue them along the cutting surfaces according to given permutations of the three sheets, that we now describe. The permutation along the lower portion of $\edges$ is chosen as $(2~3)$, namely stratum 1 glues with itself, while strata 2 and 3 get exchanged. This choice is justified because we do not want to force wetting of that portion, indeed a function in $\domainF$ defined equal to $1$ in sheet $1$ does not jump along a tight loop around that part of $\edges$. This choice in turn requires that we fix the Dirichlet-type condition $u=1$ out of a sufficiently large ball on stratum $1$ of the cover. The permutations on the remaining parts of the cut can then be chosen consistently as follows: \begin{itemize} \item[$(2~3)$] (as already described) in the lower tongue-like portion of the surface bordered by $\edges$; \item[$(2~3)$] when crossing the disk-like surface bordered by $\iwire$; \item[$(1~2)$] when crossing the large disk-like portion of the surface bordered by $\edges$; \item[$(1~3)$] when crossing the ribbon-like portion of the surfaces between the two dashed crossing curves. \end{itemize} Note that corresponding to portions of the surface that are wetting the bold curve, stratum $1$ is exchanged with a different stratum. It is a direct check that with this definition the local triviality of the triple cover around the triple curves, namely that a small loop around the dashed curves must be contractible in $M$, is satisfied. This check consists in showing that the composition of the three permutations associated with the crossings must produce the identity: $(2~3) (1~2)^{-1} (1~3)^{-1} (1~2) = \id$. The construction is actually unique up to exchange of sheets $2$ and $3$. The fundamental group $\pi_1(\base)$ of $\base$ is readily seen to be free of rank $2$. It can be generated by the two Wirtinger generators schematically denoted by $a$ and $b$ in Figure \ref{fig:cravatta} left. We can then finitely present $\pione$ with two generators and no relation as $$ \pione = <a, b;> . $$ An abstract construction of the cover can be obtained by considering the homomorphism $\varphi : \pione \to \SSS_3$ (permutations of the set $\{1,2,3\}$) defined by the position $\varphi(a) = (1~2)$, $\varphi(b) = (2~3)$ and then defining the subgroup $H < \pione$ as $$ H = \{ w \in \pione : \varphi(w): 1 \mapsto 1 \} . $$ It contains all reduced words $w \in \pione$ whose image under $\varphi$ is either the identity $\id \in \SSS_3$ or the transposition $(2~3)$. It is a direct check that $H$ has index $3$ in $\pione$ and that it is not normal. As discussed in \cite{BePaPa:17} for the example of the tetrahedral wire, also in this example we cannot exclude a priori that a minimizing surface wets the invisible wire: we have already remarked that this is a difficulty present in any example constructed using invisible wires. Finally, we recall that soap films that partially wet any knotted curve have been proven to exist in \cite{Pa:92}. \end{Example} The soap film of the next example can be found for instance in \cite[pag. 85 and Fig. 4.14]{Is:92}. \begin{Example}[\textbf{Soap film with triple curves on a cubical frame}]\label{exa:soap_film_on_a_cubical_frame}\rm Let $S$ be the one-dimensional skeleton of the cube (Figure \ref{fig:nscube}). We want to construct a cover of $\base =\R^3 \setminus S$ which is compatible with the soap film in Figure \ref{fig:nscube}; note that here the soap film wets all the edges of the skeleton. \begin{figure} \includegraphics[width=0.95\textwidth]{nscube.png} \caption{\small{A non-simply connected minimal film spanning a cube. Image obtained using the \texttt{surf} code by E. Paolini.}} \label{fig:nscube} \end{figure} \begin{figure} \includegraphics[width=0.60\textwidth]{cube_cutpaste.pdf} \includegraphics[width=0.39\textwidth]{cubeabstract.pdf} \caption{\small{Left: orientation of the cut (the faces of the cube), and permutations of the sheets along the cut. Right: the Wirtinger presentation of the fundamental group of the complement of the one-skeleton of a cube. }} \label{fig:cube} \end{figure} Again, we want to model a soap film with triple curves, but not with quadruple points, and indeed, as we shall see, a triple cover of $\base$ will suffice. Also, there will be no need of any invisible wire. First of all, we orient the three pairs of opposite faces of the cube from the exterior to the interior, as in Figure \ref{fig:cube} (left). It turns out that we can make use of the cyclic permutations of $\{1,2,3\}$. We imagine a cut along the six faces of the cube, and we associate the same permutation to opposite faces: the identity permutation $\id$ is associated to the frontal and back faces, in order to model the presence of the tunnel. The three powers $\id, (1~2~3), (1~3~2)$ of the cyclic permutation $(1~2~3)$ are depicted in Figure \ref{fig:cube}. The presence of the identity permutation on a pair of opposite faces has the effect of actually not having a cut there. On the other hand, a tight loop around an edge turns out in the composition of a power of $(1~2~3)$ with the inverse of a different power of $(1~2~3)$, so that the result is either $(1~2~3)$ or $(1~3~2)$, hence a permutation without fixed points, which forces to wet that edge. Observe that a curve entering a face and exiting from the opposite one produces the identical permutation of the strata of the cover, hence it does not necessarily has to meet the projection of the jump set of a function $u$. The fundamental group of $\base$ turns out to be a free group of rank $5$, and it can be generated by the elements of $\pione$ schematically displayed in Figure \ref{fig:cube} (right) as $a$, $b$, $c$, $d$, $e$; the corresponding Wirtinger presentation is $$ \pione = <a,b,c,d,e;> $$ (five generators and no relations). Observe that the orientation of the edges in the figure is chosen such that all five generators loop positively around the corresponding edge and result in the permutation $(1~2~3)$ of the three sheets when compared with the cut/paste construction. This allows an abstract definition of the cover by considering the homomorphism $\varphi : \pione \to \SSS_3$ that maps all five generators onto the cyclic permutatioon $(1~2~3)$ and take the normal subgroup $H < \pione$, kernel of $\varphi$. A word $w \in \pione$ belongs to $H$ whenever the exponent sum with respect to all generators is a multiple of $3$. The abstract construction shows that this cover is normal. Note that this construction is invariant (up to isomorphisms) under the symmetry group of the cube, hence a minimizer will not be unique unless it is invariant under such symmetry group, which we do not expect to be true in view of the film displayed in Figure \ref{fig:nscube}. Minimizers with this topology were also obtained by real experiments \cite{Is:92}. \end{Example} The next example (Figure \ref{fig:retract}, found by J.F. Adams in \cite[Appendix]{Re:60}) concerns a soap film which retracts to its boundary. \begin{figure} \includegraphics[width=0.95\textwidth]{retract.png} \caption{\small{A minimal film that retracts to its boundary. Image provided by E. Paolini. An example of a film that \emph{deformation} retracts to its boundary can be found in \cite[fig. 3]{Mo:93}, the same example can also be found in \cite[fig. 14]{Br:95}}} \label{fig:retract} \end{figure} \begin{Example}\label{exa:triple_moebius_band}\rm Let $S$ be the curve of Figure \ref{fig:retract}: we would like to consider the soap film of the figure as a cut, but in order to construct a consistent triple cover, this is not sufficient. Indeed, we add an invisible wire in the form of a loop $C$ circling around the Mo\"ebius strip on the right; next we consider as a cut the union of the soap film in the figure and a disk bounded by $C$. Of course, this cut has a selfintersection along a diamater of the disk. Now, take as usual three copies $1,2,3$ of the cutting surface and glue them using the permutations as follows: \begin{itemize} \item[] $(2~3)$ when crossing the disk bounded by $C$; \item[] $(1~ 2~ 3)$ on the remaining part of the cut. \end{itemize} Observe that the part of the cut on the right hand side is not orientable: the invisible wire acts in such a way to revert the cyclic permutation $(1~ 2~ 3)$ when crossing the disk. It turns out that a presentation of the fundamental group of $M = \R^3 \setminus (S\cup C)$ is $$ \pi_1(M) = <a,b ; abab = baba>, $$ where $a$ corresponds to a small loop circling around $S$, and $b$ corresponds to a short loop circling around the invisible wire $C$. The abstract definition of the cover is obtained by considering the homomorphism $\varphi : \pione \to \SSS_3$ that maps $a$ to $(1~2~3)$ and $b$ to $(2~3)$\footnote{One verifies that $\varphi$ is well defined with respect to the relation of the presentation.}. A word belongs to $H < \pione$ whenever it consists of the words of $\pione$ that are mapped through $\varphi$ in a permutation of $\{1,2,3\}$ which fixes $1$: namely, either the identity $()$ or the transposition $(2~3)$. \end{Example} \begin{Example}\label{exa:octahedron}\rm Let $\edges$ be the one-skeleton of a regular octahedron. The fundamental group of $\base = \R^3 \setminus \edges$ is a free group of rank $5$. After suitable orientation, each of the $12$ edges of the octahedron can be associated to an element of $\pione$ corresponding to a loop from the base point (at infinity) that circles once in the positive sense around it. Imposing a strong wetting condition \cite{BePaPa:17} at all edges for a cover with three sheets amounts in forcing the permutation of sheets corresponding to a positive loop around that edge to be either $(1~2~3)$ or its inverse $(1~3~2)$. Upon possibly reversing the orientation of some edge we can assume all such permutations to be $(1~2~3)$. Local triviality of the cover at points near a vertex then corresponds in requiring that exactly two of the four edges concurring at that vertex to be ``incoming'', the other two being ``outgoing''. A choice of the orientation of the edges consistent with the requirement above corresponds to travel clockwise along the boundary edges of four of the eight faces selected in a checkerboard fashion. The resulting soap film in Figure \ref{fig:octahedron} (top-left) simply consists in those four faces or on the four remaining faces. Another consistent choice of orientation consists in travelling around the three diametral squares in a selected direction. Two relative minimizers corresponding to this choice are shown in Figure \ref{fig:octahedron} (top-right and bottom), the latter consists in a tube-shaped surface with six lunettes attached along six triple curves. It turns out that there are at least two other non isomorphic $3$-sheeted covers of the same base space, which however seem not to provide minimizers different from the ones described above. \begin{figure} \includegraphics[width=0.45\textwidth]{octahedron_a.png} \includegraphics[width=0.51\textwidth]{octahedron_b.png} \\ \includegraphics[width=0.48\textwidth]{octahedron_c.png} \caption{\small{Three examples of non-simply connected minimal films spanning the boundary of a regular octahedron. Top-left: trivial nonconnected surface consisting in four of the eight faces; Top-right: surface obtained by starting from five of the eight faces, the result consists in an isolated triangular face $F$ (after removing its boundary) plus a film with three triple curves wetting all the edges of the octahedron that are not edges of $F$; Bottom: surface obtained by starting from six of the eight faces. Note the presence of six triple curves. Images obtained using the \texttt{surf} code by E. Paolini.}} \label{fig:octahedron} \end{figure} \end{Example}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Supplementary Information} \end{document} \section{Introduction} \subsection{Introducing and Motivating the problem} \cite{hornik} showed neural networks are universal function approximators that can achieve remarkably low bias on estimating highly complex functions on high dimensional data, producing bountiful successes in a wide range of domains and applications. This makes it all the more surprising, in the view of statistical learning theory, why they can generalize well to unseen data. There have been considerable advances in demonstrating the generalization of deep neural networks. Characterizing when NNs will generalize in terms of its capacity has informed many common heuristics and practices such as dropout in the machine learning community, and has driven the rapid adoption of them in real-world applications. However, explaining why they generalize still remains an open problem. Of particular interest is using simpler models to characterize and decompose what a NN is learning through looking at their mutual information. Boosting is an ensemble method to make a stronger classifier out of weaker classifiers, with each classifier is explainable in their features and interpretable in their decisions. In the popular variant adaptive boosting, each weak classifier is trained in succession, and the next weak classifier trains on re-weighted data which focuses on examples the ensemble does poorly on. Adaptive boosting is explainable as a weighted majority vote algorithm (described by its authors \cite{adaboost}), where each vote comes from a weak learner that has an easily explained decision rule. Moreover, a familiar "self-averaging" view from the original authors on Dropout has recently gained a lot of consensus for explaining why Adaboost rarely overfit in practice \cite{krieger} \cite{wyner}. Within an ensemble of classifiers, self-averaging is the process which stabilizes the fit to regions with signal, while continuing to localize the effect of noisy points on the overall fit. Similarly, dropout is often used in practice to reduce the NN from overfitting. Dropout randomly selects a subset of the neurons to ignore while training, and it selects a different subset at every epoch. When making predictions, all of the neurons are used. We are inspired by dropout to view a NN as an ensemble of sub-classifiers, and relate each iteration of boosting to some sub-network. The time is ripe for us to bridge these two parallel developments on generalization amongst empirical risk minimizers. This connection can go a long way in uniting two traditionally separate fields of research – boosting and deep learning – and help ground the success of neural networks in more established territory. \subsection{Your contributions} Previous work has conjectured that neural networks learn a series of increasingly complex functions (\cite{nakkiran}), although the sequence and notion of complexity are not defined. We will define the series of increasingly complex functions as adaptive boosted classifiers with well-characterized growth in its VC dimension in terms of the number of boosting rounds. Each complexity rung from this series corresponds to a distinct phase of training. To justify this definition, we inductively show a similarity between the learning outcomes in a single phase between a single hidden layer neural network and boosted classifier. Namely, the NN's mutual information with $Y$, the label distribution, can be explained away by a boosted algorithm from the next phase, and the boosted algorithm's mutual information with $Y$ can be explained away by the NN in the next phase. In addition, we run extensive experiments demonstrating that NNs learn and retain increasingly complex hypotheses from adaptive boosting. In the first phase of training, the mutual performance between a single weak learner and the NN is perfect. Then, as the NN learns increasingly complex concepts, this simple concept is not forgotten. We show that this same pattern holds in further phases of training for boosted algorithms with more weak learners. We corroborate these claims with qualitative evidence by investigating the learning order of examples for each model and find similar preference of example difficulty within the same corresponding phase. We further investigate the connection between boosting and neural networks by studying a chosen ensemble of sub-networks chosen as in dropout as shown by \cite{srivastava} and \cite{baldi}. In boosting, each weak learner is encouraged to perform well on different subsets of the data which decreases the correlation between each of them. We prove that training only sub-networks within our chosen ensemble similarly decreases the correlation between any two hidden neurons in a simplified setting. We also experimentally validate the self averaging behavior of NNs trained with our modified version of dropout by viewing each sub-network created by dropout as an independent learner. When compared with neural networks that do not use dropout, the self averaging effect is more pronounced. \subsection{Related Research} There have been many attempts to formulate and motivate the search for simpler neural networks. For a short early history of the development of such techniques and how they encourage simplicity, one is encouraged to read \cite{schmidhuber}. Among these, of particular interest to us is dropout, a technique for addressing overfitting by averaging over an exponential number of "thinned" networks \cite{srivastava}. This is elaborated by \cite{baldi}'s recursive view of the averaging properties of dropout. However, such techniques that seek to design simpler neural networks do not yield simple explanations for why the resulting models generalize. As another attempt, there have been many attempts to explain generalization seen in broader classes of interpolating classifiers. This is the case with random forests, adaptive boosting \cite{wyner}, and nearest-neighbor schemes \cite{belkin}. Meanwhile, understanding of generalization in interpolating neural networks only began advancing recently, with theoretical analysis in more realistic settings with e.g. noisy labels and nonlinear learning dynamics by \cite{niladri} still in its nascent stages. We seek to bridge these two complementary fields of research by showing deep NNs learn a series of boosted classifiers, whose generalization is popularly attributed to self-averaging over an increasing number of interpolating sub-classifiers. Towards that end, our survey of what's known will overview the literature on neural network complexity and adaptive boosting separately. Then, we will address previous attempts to explain neural networks by simpler models, identify gaps in understanding, then argue why the novel connection we establish between adaptive boosting and neural networks can fill in some of those gaps. \section{Scientific Background} \textbf{How do different choices of low complexity models help us understand and explain the generalization of a neural network? } \textbf{Do neural networks, under certain hyperparameters, generalize through a self averaging effect as in boosting?} \subsection{Motivation} In recent years, neural networks have made giant leaps in a wide variety of domains. Neural networks are often referred to as "black box" algorithms due to how little we can explain its empirical success. A lack of understanding raises concerns including but not limited to explainability, interpretability, adversarial robustness, and bias. Our foundational research seeks to understand how and why neural networks generalize by using simpler, less complex models. Aristotle described one of the earliest forms of what we now call Occam's Razor in his Posterior Analytics by saying "We may assume the superiority … [all else being equal] of the demonstration which derives from fewer postulates or hypotheses." (Aristotle, Posterior Analytics, p. 150). From this theoretical framework, neural networks with lower complexity are preferred. In a practical setting, simple explanations for what a neural network learned and how it did so can bolster confidence for deployment as well as bring to light pitfalls. \subsection{What's Known} Developing a framework for understanding when and why neural networks work has been a longstanding problem. Early studies by \cite{kernel} found that NNs learn increasingly good representations in deeper layers, where good is defined as how much predictive signal we can extract (using PCA) from the associated Kernel matrix using a linear classifier. This understanding has proved valuable for tasks such as transfer learning. On the other hand, they also found that Neural networks trained with gradient descent struggle generalizing on few-shot "reasoning" tasks \cite{schmidhuber}. An example of such a task is predicting the number of on-bits in a 100-dimensional binary vector from just 3 examples. ML research on generalization today generally adopts gradient descent, instead studying the statistical assumptions/interpretations behind techniques of constraining the neural network. Common notions of "complexity" are in terms of the number and magnitude of parameters. One measure for complexity is just to measure the euclidean distance between a vector of all the weights of a model and a vector of all ones. This relates to the more classical measures like Kolmogorov complexity, where a neural network with most weights equal to zero have lower complexity due to fewer bits needed to store them. A paper by \cite{persistence} investigated an improvement on this by defining a topologically derived neural persistence measure based on the idea that some neurons contribute more to the final prediction. This work showed that finding the right measures of complexity can explain best practices in machine learning. In their experiments, dropout, a widely used technique for regularization, increased the neural persistence by a statistically significant amount. Another measure of complexity that is not specific to neural networks is the VC dimension, a measure of the capacity of a set of functions that can be learned. This general measure is well-studied for many function classes that we consider as candidates for "simple models", from boosting classifiers to neural networks. For boosting, we know $d_k < 2kd \log(ke)$ where $d$ is the VC dimension of the base hypothesis class, and $d_k$ is that of the boosted hypothesis class after $k$ rounds of boosting. Moreover, the authors of Adaboost \cite{adaboost} gave various bounds on the generalization error in terms of the training error and $d_k$. Bounds on the VC dimension of MLPs have also been given for various choices of activations by \cite{shalev}. For example, with sigmoid activation, the VC dimension of a MLP with neurons V and connections E is $\Theta (|E|^2)$ and $O(|E|^2|V|^2)$. Thus, the VC dimension has the advantage of being general to any function class, and will be part of one of our conjectures relating boosted classifiers and neural networks. Adaptive boosting was introduced by \cite{adaboost} as an improved boosting algorithm which adjusts adaptively to the errors of weak hypotheses found in previous iterations. Boosting has been observed to rarely overfit in practice, and many recent explanations have been proposed to explain why. These include margin-based explanations by \cite{margin} in terms of "mirror descent applied to the problem of maximizing the smallest margin in the training set under suitable separability conditions," and they include optimization explanations in terms of the minimization of a convex loss function or "a procedure to search through the space of convex combinations of weak learners or base classifiers." Notably, \cite{wyner} and \cite{mease} suggested that Adaboost exhibits the self-averaging mechanism of many interpolating classifiers and thus generalizes for the same reason random forests do. More recently, it was shown by \cite{nakkiran} that NNs trained with stochastic gradient descent (SGD) share high performance correlation with linear classifiers in the initial phase of training and retain it afterwards. Contemporary work by \cite{tengyu} showed that the degree to which this is true depends on the learning rate. With a small learning rate the model learns a complex, non-linear decision boundary because it will memorize complex patterns wherever possible. They demonstrated this qualitatively by adding a memorizable patch to CIFAR-10 images, and showing the small learning rate model maximizes patch accuracy from the start. Their claims rely on the fact neural networks have a non-convex loss landscape, and may not hold outside the setting of SGD and a sufficiently large learning rate. \subsection{What's Still Missing} As described in \cite{nakkiran}'s paper, there are gaping holes in their central conjecture. Namely, there exists a "sequence" of "increasingly complex" functions $g_i$ and timesteps $T_i$ for which up until $T_i$, the neural network F learns nothing beyond $g_i$ and after $T_i$, retains it. Neither the function sequence or notion of complexity is defined. The combination of boosting and VC dimension have not been investigated as candidates to define this conjecture. More broadly, the machine learning community can't always predict in understandable terms when a model will or will not generalize. Measuring the performance correlation of two models is a new line of inquiry in this space, and has yet to be explored with other architectures than linear models. To address the stochasticity of training, we provide a theoretically motivated training setup which corroborates our claims in two ways: 1) a high performance correlation between neural networks and boosted classifiers and 2) a similar distribution of errors over examples. We also formalize a general connection between the self-averaging property in dropout and boosted classifiers, and include in our analysis the degrees of sensitivity of this conjecture to various choices of hyperparameters. \section{Suggested Research} \newtheorem{theorem}{Theorem} \newtheorem{corollary}{Corollary}[theorem] \newtheorem{conjecture}{Conjecture} \subsection{Meta-goals} Explaining the generalization of all neural networks by a simple model class may be intractable (otherwise, there'd be no need for neural networks). Instead, we think different choices of "low complexity models" can better explain different constrained subclasses of neural networks. We want to identify one such model class and the corresponding subclass of neural networks. More formally, we want to explain what a NN is learning at each phase of training corresponding to training iterations $T_i, T_{i+1}$ for $i >= 0$. Denote $F_i$ as the neural network at $T_i$. We want to find functions $G_i : i >= 0$ of increasing "complexity" such that $I(F_i ; Y | G_{i+1}) \simeq 0$, and $I(G_i ; Y | F_{i+1}) \simeq 0$ for all $i$. To fill in the missing parts, we want to materialize $G_i : i >= 0$, and try suitable definitions of "complexity". A stretch goal is to also characterize the timesteps $T_i$, so we know when we can expect neural networks to generalize. A third meta-goal which we describe in detail later is to "explain" what $F$ is learning. We can do so if we can see if $F$ learns increasingly "difficult" examples in the same order as $G_i$. \subsection{Specific Approaches} \textbf{Boosting} Under VC dimension bounds given on Adaboost by \cite{margin} and \cite{adaboost}, we can formalize the complexity of the weak learners by varying the number of boosting iterations. Towards achieving the meta-goal about a series of increasingly complex learners, we can simultaneously train a neural network and Adaboost, checkpointing (saving) the neural network periodically. Here, the series of complex learners are boosted algorithms with more learners. Similarly, we checkpoint the final boosted classifier after each boosting round. Afterwards, we can plot the relevant quantities $I(F ; Y | G)$ and $I(G ; Y | F)$ for all pairwise combinations $(F, G)$ over the checkpoints. We expect to see a similar phase separation as in \cite{nakkiran}'s paper. \textbf{Learning Order} It would go a long way to "explaining" what $F$ is learning if we can see if it's learning increasingly "difficult" examples in the same order as $G_i$. To do so, we can define the difficulty of an example as the difficulty of the class it resides in, and its euclidean distance of its embedding from the mean of embeddings in that class. Then, we can measure progressive error on examples of various difficulty levels, and compare that between the neural network and boosted classifiers. In particular, we can take the embedding of examples, obtained using the pre-final layer. This can lead to a qualitative measure to explain similarity in how the model learns. The difficulties of the classes for common benchmark datasets have been explored in other works. As shown by \cite{hanxu}, different classes in CIFAR-10 are more or less difficult. As shown by \cite{tailin}, "discontinuous" phase transitions are observed by sweeping the Beta parameter of a binary classification loss function, which controls between compression (simplicity) and accuracy. These phase transitions correspond to new classes of examples the model learns. Plotting the accuracy against Beta shows many discontinuities, at which the model's accuracy suddenly "jumps" upon learning either to recognize a new class, or to discriminate between samples of two visually similar classes. Thus, we may use the order of classes as a proxy for difficulty. Afterwards, we can also see if the times at which these phase transitions occur for a training neural network map onto the times at which the neural network correlates with increasingly complex functions $G_i$ , as will be defined in Theorem 1. \subsection{Specific Questions} We plan to directly extend \cite{nakkiran}'s conjecture. We will replace g with a recursive definition of a class of functions, and fill in the notion of complexity as the VC dimension while keeping the same mutual performance correlation. \begin{theorem} Let $G_i = \sum_{j=1}^i h_j$, where all $h_j \in H$ (hypothesis class), and each $h_j$ is the (empirical risk minimizer) ERM on $S_i \sim P_i, D$. Let $F$ be $\{f: f(x) = \sum_{j=1}^k v_j f_j(x), \text{ and } f_j(x) = ReLU(\sum_{i=1}^d w_{i,j} x_i)\}$ denoting the set of one-hidden-layer neural networks with k hidden units and ReLU activations. Denote $F_i$ as the neural network at $T_i$. Then \begin{equation} I(F_i ; Y | G_(i+1)) \simeq 0 \end{equation} and \begin{equation} I(G_i ; Y | F_(i+1)) \simeq 0 \end{equation} for all $i$ from $1$ to $J$, and is statistically significant compared to increasingly better random classifiers $[R_i : 1 <= i <= J]$, that satisfy $I(R_i; Y) = I(F_i; Y)$. \end{theorem} Note that $S_i \sim P_i, D$ corresponds to the sample reweighting scheme in Adaboost, formalized drawing training samples from a different (but not independent) probability distribution each round. This is a reformulation of \cite{nakkiran}'s conjecture that is more direct: $F_i$ will learn $Y$ in the same way as $G_i$ then retain it afterwards as it learns the more complex $G_{i+1}$. That is equivalent to stating $F_i$ will never be more useful than $G_{i+1}$ in explaining $Y$ for all $i$. Vice-versa, $G_i$ will never be more useful than $F_{i+1}$ in explaining $Y$, stating $F$ has learned and fully retained $G_i$ . To establish this relation, we have to draw a connection between the iterations of boosting in adaptive boosting, and sub-networks from the neural network. We do establish this relation in theorem 2. \begin{theorem} Let $G_i = \sum_{j}^i h_j$, where all $h_j \in H $ (hypothesis class), and each $h_j$ minimizes risk on $S_i \sim P_i, D$. Let $f(x) = \sum_{j}^k v_j f_j(x)$, where $f_j(x) = \text{ReLU}(\sum_{i=1}^d w_{i,j} x_i)$ denote a one-hidden-layer neural network $f$ with $k$ hidden units and ReLU activations. Indexed $f, h$ are random variables. SGD minibatch gradient descent will progressively increase and decrease \[\text{corr}(f_{i1}, h_{j1}) \text{and corr}(f_{i1}, f_{i2})\] under some matching $\{i1, i2, …\} \text{ to } \{j1, j2, …\}$, respectively. Next, we define $J \subset \{1, ..., k\}$, and $f^J(x) = \sum_{j \in J} v_j f_j(x)$ as a "sub-network" of $f$. SGD gradient descent will progressively decrease \[corr(f^J_{i1}(x), f^J_{i2}(x))\]. \end{theorem} Note that this theorem, if true, will mean each $f_j$ comes up with an independent "hypothesis" which combines via a self-averaging mechanism to form the final hypothesis. Implicitly, this means each neuron $f_j$ will have to focus on different feature(s). Like in random forests and Adaboost, this self-averaging mechanism of relatively independent base hypotheses can explain why neural networks, under the common practice of dropout, can generalize well. As pointed out by \cite{tengyu}'s paper, the dataset noise/signal matters. A specific question we also want to ask is if the mutual information that is shared comes from a specific portion of the data. To do so, we can experimentally separate the data in, say, CIFAR-100, into easy and difficult tasks as described in the learning order section. Then, we can plot the mutual information on each of these subsets of data and find ones that correlate. To make these sub-networks useful for practitioners trying to explain a NN with boosted classifiers, we make the following conjecture. \begin{conjecture} Let $F(x)$ denote a one-hidden-layer neural network $f$ with $k$ hidden units and ReLU activations. There exists some mapping $\pi : \mathds{N} \to \mathds{N}$ such that we can find $G$ of VC dimension $\pi (\text{VCDim}(F))$ for which theorem 1 and 2 hold in a variety of settings. \end{conjecture} In boosting, one needs to choose both the base hypothesis class and the number of boosting rounds. With this conjecture, we envision general practices to choose the base hypothesis class. This will substantially accelerate the practitioner's' search for finding a boosted classifier which can pair with the black box neural network using the relationship between their VC dimensions. \subsection{Tools and Techniques you intend to employ and develop} For the experimental part of our research, we will apply computer vision architectures such as CNN's to CIFAR-100 data, as well as create our own synthetic data as in \cite{nakkiran}'s paper. The CNN will serve as our complex deep learning model which we wish to explain and study. We will use pre-trained CNN's like VGG-16 and feature extractors like ScatterNet to both extract features for our boosted classifiers and measure the "difficulty" of examples. Measuring the difficulty of an example will involve finding a combination of how much it deviates from others in its class and how close it is to other classes. We intend to adopt proof techniques similar to \cite{nakkiran}, \cite{tengyu}, and others in the field. There, they precisely characterize the data distribution, architecture, and training procedure. Common defaults for the setup involve a data distribution with separable noise and signal components. A MLP with a single hidden layer with hidden layer ReLU activation and sigmoid or tanh final activation is commonly used for the architecture. Training procedure often involves feature standardization and uses the SGD optimizer. For proofs, hinge loss is often used. We plan to use Adaboost due to its state-of-the-art results amongst boosting algorithms and ease of implementation. We will use the same measure of mutual information and plot this over the training time of our CNN expert model. \subsubsection{Experiment 1} We will train and checkpoint a typical CNN to greater than 90 percent performance on CIFAR-10 first 5 versus last 5 binary classification, as well as on synthetic high dimensional sinusoidal data where we can control the ratio of signal to noise. Next, we will train and checkpoint adaboost at each round of boosting, where we experiment with standard weak learners and extracted features from VGG-16, as well as shallow CNN's on the raw images as in \cite{TAHERKHANI2020351}. We can qualitatively examine the plots of mutual information as in figure 1 to see observed phase separation. \begin{figure} \centering \includegraphics[scale=0.8]{main_figs/plot_phases.png} \caption{The plots generated by \cite{nakkiran} to qualitatively examine phase separation} \label{fig:phases} \end{figure} Our quantitative analysis will involve checking that both quantities \begin{equation} I(G_i; Y | F_{i+1}) \text{ and }. I(F_i;Y|G_{i+1}) \end{equation} are close to zero. We will optimize the number of training epochs at which each $F_i$ is defined for these two quantities to be small. Next, will measure the statistical significance of the phase separation by averaging over the quantities for many random classifiers of the same accuracy as $G_i$. To construct a random classifier with accuracy $a$ we output the correct label with probability $a$ and the incorrect label otherwise. \subsubsection{Experiment 2} Let errors($h_j$) and errors($f_j$) denote the emprical distribution of errors (e.g. $|f_j(x) - y| \quad \forall (x, y) \in X \times Y$) over the dataset using the sub-classifier $f_j$ directly to predict $X$. Aside from showing the correlation between f and h to decrease per our theoretical analysis, we're also interested in looking at examples they make the same errors on. Specifically, we expect the distribution of errors to be similar between the NN sub-classifier $f_j$ and the weak classifier $h_j$ it matches with, and the distribution of errors to diverge between two different sub-classifiers $f_{j1}$ and $f_{j2}$, whose votes become more independent. Specifically, we can show that after phase $i$, there is a sudden decrease in $D_{KL}$(errors($f_j$), errors($h_j$)) and a sudden increase in $D_{KL}$(errors($f_{j1}$), errors($f_{j2}$)). Lastly, we make sure the final test error of a NN constrained this way still compares favorably to a normally trained NN. \subsection{Initial Ideas} \subsubsection{Theorem 1} We think we can inductively establish the result, given two assumptions. First, $H^T$ is expressive, i.e. the data distribution and $H$ are suitably chosen so $\argmax_{f \in H^T} l(f(X), Y) \approx \argmax_{f \in F} l(f(X), Y)$ where $l$ denotes the likelihood objective. Second, the batch used to train $F_i$ in phase $i$ is drawn from the distribution $S_i$, which was used to fit weak classifier $i$. This will consist of the base case, where we show the neural network learns a low-complexity weak learner, drawn from some hypothesis class $H$. \cite{nakkiran} among others already show this for $H$ being the full linear classifier. Our base case will be no more difficult than that, since we sample the batch from the same observation weight distribution chosen by Adaboost. A proof sketch will be to show that $I(F_i ; Y | G_{i+1}) \simeq 0 \cap F_{i+1} \leftarrow \text{train}(F_i, S_i) \Rightarrow I(F_{i+1} ; Y | G_{i+2}) \simeq 0$ and similarly $I(G_i ; Y | F_{i+1}) \simeq 0 \cap F_{i+1} \leftarrow \text{train}(F_i, S_i) \Rightarrow I(G_{i+1} ; Y | F_{i+2}) \simeq 0$. Intuitively, $F_i$ and $h_i$ will minimize risk on the same examples. In Theorem 2, we will further extend this correspondence by finding a matching between $F_i$'s "sub-classifiers" and the weak classifiers $h_j$'s. \subsubsection{Theorem 2} In the neuron view of the sub-classifier, we think we can create a correspondence between each neuron $f_j$ to a weak classifier $h_j$ by adopting the arguments in \cite{niladri}, which shows a single step of gradient descent can create "neuron alignment": a substantial number of neurons will correlate to some cluster mean in their chosen xor-like data distribution, hence activating on examples chosen from that cluster. They also show "almost-orthogonality": an aligned neuron won't activate on other cluster means (i.e. the complement). An idealized interpretation of their result in our case is that $x\in S_j \Leftrightarrow f_j(x) \approx 1$ and $x\not\in S_j \Leftrightarrow f_j(x) \approx 0$. In the subnetwork view of the sub-classifier, we think we can adopt a similar approach, but analysis may call for some additional assumptions. Instead of choosing an ensemble of subnetworks at random (similar to what dropout will do), we impose we get to pick them such that the number of shared neurons between any pair of them is small. By limiting to a small collection of subnetworks, and training each on different sets of observation weights informed by Adaboost, we can create a 1:1 correspondence between a subnetwork and a weak classifier. Extending the assumption from Theorem 1, each sub-classifier $f_j$ or $f^{J_j}$ will be trained batches sampled from $S_j$, so it learns using the same observations as a corresponding weak classifier, specializing to the same region. Then, we will combine the aligned neurons (or ensemble of sub-networks) into a single sub-network. \cite{niladri} shows this creates a large margin sub-network (the substantial number of aligned neurons comprising its hidden layer) which results in a large margin classifier overall. This large margin sub-network's decision implicitly averages over these sub-classifiers' votes, hence it has the same self-averaging explanation as a boosted classifier! \subsubsection{Conjecture 1} To really make the connection hold in a wide variety of settings, we want to find $G$ given $F$ for which both hypotheses hold. Our theoretical analysis has assumed so far Adaboost yields a near-optimal classifier $G$ for which we then constrain the training of $F$ in order for it to be explained by $G$ both from correlation and causation. However, to truly deploy this into the real world, we would need to construct both in parallel, and this conjecture formulates our intuition that there's a mapping between their complexities so that given one, we can narrow down the search for the other. \subsubsection{Summary} Whereas there is little work explaining both the complexity and performance gains of neural networks with every layer added, there is plenty of literature on VC dimension bounds and generalization error guarantees for boosting from PAC learning like \cite{adaboost}, \cite{margin} and much more. We want to see if the claims of \cite{nakkiran} still hold when a linear classifier is swapped with boosted classifiers and with VC dimension as the notion of complexity. We propose to first establish an explanation using correlation (using a mutual information based correlation measure through phases of training), then an explanation using causation by a shared self-averaging mechanism over sub-classifiers. We give experimental setups that can put these falsifiable hypotheses to the test. If true, that can go a long way in uniting two traditionally separate fields of research – boosting and deep learning – and helping to ground the success of neural networks in more explored territory. As a future direction, we envision when the situation calls for it, we can readily swap a neural network black box decision with an alternative that takes a small step down in accuracy but a giant leap forward in being explainable in the features and interpretable in their decisions. For additional visualizations and toy cases that complement this proposal, one is welcome to reference our \href{https://docs.google.com/presentation/d/1eMMTieuvW1jJMDWejJZlGbEGDhYqXS9VqF9Co4aR7bs/edit?usp=sharing}{recent presentation}. \section{Methods} \subsection*{Supplemental Text} Some supplemental text that doesn't fit in the main text or provides extra detail can go here. We will use this space to outline some general tips. \subsubsection*{Article tips, tricks, and nitpicks} \generalTips \subsection*{Supplemental Figures} \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}} \generateFig{tony}{tony}{.3} {This is a picture of Tony Capra.} {Nobody puts Tony in a supplement\dots} \tony \generateFig[90]{cutegoat}{cutegoat}{.6} {This cute sideways goat wishes you good luck with your manuscript.} {This is how you rotate a figure but not the caption. To rotate the caption with the figure, use the function(s) \texttt{generateSidewaysFig} or \texttt{generateSidewaysFigSubpanels}.} \cutegoat \clearpage \subsection*{Supplemental Tables} \setcounter{table}{0} \renewcommand{\thetable}{S\arabic{table}} \generateTab{sampleTable}{supplement/suppl_tabs/sampleTable}{.55} {Example table.} {I use \texttt{Excel2LaTeX.xla} to create easy tables. There are also other online websites. Look for one that is booktabs enabled. Here's good advice about making simple readable tables: \url{https://users.umiacs.umd.edu/~jbg/images/tables.gif}} \sampleTable \clearpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \hbox{} This paper presents a new Monte Carlo method that employs biased trial moves to achieve an efficient sampling of the torsional degrees of freedom for linear and cyclic peptides. Peptides are small molecules, built from amino acids, that are of fundamental importance in biological systems \cite{Alberts}. They play key roles in signal transduction between cells, regulation of cell growth and differentiation, and protein localization on cell surfaces \cite{Cohen}. Peptides are thought to regulate neurotransmission, from modulating pain and thirst to affecting memory and emotion \cite{Kandel,Pert}. They are used as a chemical defense mechanism by some organisms. The {\em conus} snails, for example, produce a family of highly-constrained peptides that include very powerful neurotoxins \cite{Olivera}. Finally, peptides are used within the biotechnology industry to identify antagonists blocking various abnormal enzymatic actions or ligand-receptor interactions \cite{Clackson}. Cyclic or otherwise constrained peptides are often preferred for this application, since such molecules suffer less of a loss of configurational entropy upon binding \cite{Alberg}. A classic example is the use of the RGD peptide to block the GPIIb/IIIa-fibronectin interaction, reducing blood platelet aggregation \cite{Ruoslahti,ONeil}. The properties of peptides are amenable to examination by computer experiment. An early study was of the alanine dipeptide, in which the potential energy surface was deduced from {\em ab initio} quantum mechanical calculations \cite{Cheam,Tobias}. Larger peptides have been examined by classical simulations. Both molecular dynamics \cite{Roux} and Monte Carlo \cite{Nikiforovich} approaches have proven useful. The effects of the aqueous environment have been incorporated by simple dielectric theory \cite{Schiffer,Smith,Gould,Daggett} or by explicit inclusion of water molecules \cite{Yan}. It has become clear, however, that the standard molecular dynamics and Monte Carlo methods are not capable of sampling all conformational degrees of freedom accessible at body temperature to the larger peptides. This problem is particularly evident for the important case of constrained peptides. Various solutions, such as high-temperature molecular dynamics \cite{Bruccoleri,Tsujishita} or simplified force fields \cite{Tsujishita,Brunne}, have been suggested, but these approaches suffer from uncontrolled approximations. A simulation method able to sample the relevant conformational states of peptides, particularly constrained ones, or exposed loops of larger proteins would be of great value. It would aid study of these molecules in biological systems as well as facilitate structural understanding of the peptides and antibodies of interest to the biotechnology industry. Recently, powerful Monte Carlo methods have been developed that have a greatly enhanced sampling efficiency \cite{Dodd,Frenkel,SmitV,dePablo,Smit,Maginn,Leontidis,Escobedo}. These methods have been applied to chain molecules at low and high density \cite{SmitV,dePabloII} and even at phase coexistence \cite{SmitIV,dePabloIII,SmitIII,SmitII}. These methods all use importance sampling, or biased moves, to efficiently explore the free energy landscape. We here apply these concepts to peptide molecules. Both linear and constrained or cyclic peptides are treated by this method. In Sec.\ 2 we describe the Monte Carlo method in detail. Appendices describe the rigid molecular fragments from which peptides are constructed and provide technical details of the method. In Sec.\ 3 we describe the application of this method to the prototypical polyglycine peptides. We discuss the results in Sec.\ 4. The superiority of this method over conventional molecular dynamics and Monte Carlo is demonstrated. Conclusions are presented in Sec.\ 5. \section{Monte Carlo Method} We make the simplifying assumption that the intramolecular potential energy contains only torsional and non-bonded terms. That is, bond lengths and angles are fixed, and rotation is allowed only about sigma bonds. At room- or body-temperature, these are fairly good assumptions. They could easily be relaxed, although sampling the increased degrees of freedom would entail a computational expense. Appendix A describes the rigid fragments that occur in peptides under these assumptions. A suitable form for the interatomic potential would be the AMBER \cite{Weiner}, ECCEP \cite{Nemethy}, or CHARMm \cite{Brooks} force field. We pick the AMBER potentials. Water is treated in an implicit way, assuming the dielectric constant for Coulomb interactions is given by $\epsilon/\epsilon_0 = 4 r$, with $r$ given in {\AA}ngstroms. These assumptions allow the method to be presented without a discussion of detailed force field issues. The method is generically applicable to better force fields and an explicit treatment of water. A configurational bias Monte Carlo (CBMC) technique is used to explore the conformations of the molecules. We describe the algorithm for both linear and cyclic peptides. By cyclic, we mean peptides constrained because of disulfide bonds between cystine residues. There are two types of atoms in a peptide, those in the side chains and those in the backbone. Consequently, there are two types of Monte Carlo moves: type I moves change the positions of side chain atoms only, and type II moves change the positions of backbone atoms, rigidly rotating the attached side chains. The type I move is an extension of the chain-molecule CBMC \cite{SmitV,dePablo} to the structurally more complicated case of peptides. The type I move is applicable to side chains with a free end ({\em i.e.}\ all naturally occurring amino acid side chains except for proline). The backbone to which the side chain is attached can be either linear or cyclic. In the cyclic case, the type I move is also used to change the configuration of the free ends of the main chain. There are two kinds of type II moves for the backbone: type IIa moves for linear peptides and type IIb moves for cyclic peptides. The type IIa move is essentially the same as a type I move. The side-chain residues that are attached to the backbone are rigidly rotated so as to remain properly bonded to the C$_\alpha$ atoms in their new positions. When the peptide is cyclic, we use a type IIb move to change the configuration of part of the backbone loop, rigidly rotating any side chains or free ends of the peptide that are attached to that part of the backbone. The backbone of a cyclic peptide includes the atoms along the main chain as well as the C$_{\beta}$ and S atoms of the cystines participating in the disulfide bond. This move requires a concerted rotation of the backbone torsional angles with a rigid rotation of the attached side groups. This concerted rotation of the torsional angles is an extension of the concerted rotation scheme for alkanes \cite{Dodd,Leontidis}. A type I move is initiated by identifying the side chain to be regrown. Not all of the side chain need be regrown, and the first group to regrow is chosen. This feature is helpful for the amino acids with longer side chains, such as lysine. These choices are made randomly. The $M$ rigid units to be regrown are first removed and then added one at a time, starting from the one closest to the backbone. For each addition, the following actions are carried out (see Fig. 1): 1) $k$ values of the torsional angle $\phi_{ij},~ 1 \le j \le k$ connecting rigid unit $i$ to unit $i-1$ are generated according to the internal potential, \beq{5} p_i^{int}(\phi_{ij}) \propto \exp[-\beta u_i^{int}(\phi_{ij})] \ . \end{equation} The function $u_i^{int}(\phi_{ij})$ is the part of the internal energy that couples unit $i$ to the rest of the molecule (but excluding units $i+1$ to $M$). The inverse temperature is given by $\beta = 1/k_B T$. 2) One of these is picked with probability \beq{6} p_i^{ext}(\phi_{ij}) = \exp[-\beta u_i^{ext}(\phi_{ij})] / w^{ext}(i) \ , \end{equation} where \beq{7} w^{ext}(i) = \sum_{j=1}^k \exp[-\beta u_i^{ext}(\phi_{ij})] \ . \end{equation} The function $u_i^{ext}(\phi_{ij})$ is the part of the external energy that couples unit $i$ to the rest of the molecule (but excluding units $i+1$ to $M$). 3) Steps 1-2 are repeated until all M units have been added. 4) The Rosenbluth weight \beq{8} W^{(n)} = \prod_{i=1}^M w^{ext}(i) \end{equation} is calculated. This attempted move is accepted with a probability \beq{9} acc(o \rightarrow n) = \min[1, W^{(n)}/W^{(o)}] \ . \end{equation} The quantity $W^{(o)}$ is the Rosenbluth weight for the reverse move and is calculated as in steps 2-4, but with $k-1$ random orientations and one orientation that is equal to the original geometry for each rigid unit. A type IIa move is very similar to a type I move. In this case, the direction of regrowth is chosen randomly. Then the first backbone unit to be regrown is chosen. The $M$ rigid units to be regrown are removed and added back sequentially, as in the type I move. The rigid units in this case are either A-units, B-units with the side chain rigidly attached, C-units, or D-units (see appendix A). An alternative procedure would be to regrow the side chain units as well, but this proved not to be efficient, due to frequent steric repulsions. The move is accepted with the probability given by Eq.\ (\ref{9}). A type IIb move is initiated by identifying the 4 rigid units on the backbone to be rotated. This is done randomly. The four rigid units are labeled in an amine to carboxy terminal fashion. The attached side groups are rigidly rotated with the backbone units. The rotation is carried out as follows (see Fig. 2): 1) The driver angle $\phi_0$ is changed by an amount $\delta \phi_0$, where $-\Delta \phi < \delta \phi_0 < \Delta \phi$. This is done $k'$ times with probabilities according to the internal potential, \beq{10} p^{int}(\phi_{0j}) \propto \exp[-\beta u_0^{int}(\phi_{0j})] \ . \end{equation} The function $u_0^{int}(\phi_{0j})$ is the internal energy associated with this torsional angle. Only those values of $\phi_0$ that lead to valid solutions for the modified torsional angles are considered. In the general case there will be a distinct $\phi_1$ for each solution arising from the new value of $\phi_0$. Define $k^{(n)}$ to be the number of $\phi_0$-$\phi_1$ pairs. If $k^{(n)}=0$, the move is rejected. 2) A $\phi_0$-$\phi_1$ pair is picked with probability \beq{11} p_0^{ext}(\phi_{0j}, \phi_{1j}) = \exp[-\beta u_0^{ext}(\phi_{0j}, \phi_{1j})] / W^{(n)} \ , \end{equation} where \beq{12} W^{(n)} = \sum_{j=1}^{k^{(n)}} \exp[-\beta u_0^{ext}(\phi_{0j},\phi_{1j})] \ . \end{equation} The function $u_0^{ext}(\phi_{0j},\phi_{1j})$ is the part of the external energy that couples this part of the backbone to the rest of the molecule. The value $J^{(n)}$ of the Jacobian is calculated for the new, chosen configuration (as detailed in Appendix B). 3) The reverse move is considered. That is, a rotation about the new, chosen $\phi_0$-$\phi_1$ pair is considered. $k'-1$ random values $\delta \phi_0$ are chosen. The original value of $\phi_0$ is assigned to the $k'$th value. This move results in $k^{(o)}$ solutions for $\phi_1$. $k^{(o)}$ is always greater than zero, since the original configuration exists. (Special care is taken to ensure that the original configuration is found by the root finding procedure.) The Rosenbluth weight is assigned to $W^{(o)}$. The value $J^{(o)}$ of the Jacobian is also calculated for the original configuration. This attempted move is accepted with a probability \beq{113} acc(o \rightarrow n) = \min[1, J^{(n)} W^{(n)}/ J^{(o)} W^{(o)}] \ . \end{equation} Splitting the energy into internal and external parts is rather arbitrary. There are some constraints imposed, however, by the requirement that the normalization constants for Eqs.\ (\ref{5}) and (\ref{10}) be independent of chain conformation \cite{Smit}. We assume for simplicity that $u_i^{int} = 0$. One other natural choice, however, would set the internal part equal to the torsional terms in $H_{intra}$ and set the external part equal to the rest of $H$. For any Monte Carlo scheme to properly sample the Boltzmann probability distribution, detailed balance must be satisfied. Refs.\ \cite{Dodd} and \cite{Smit} prove that detailed balance is satisfied for the above scheme. \section{Application to Polyglycine} In this section we present the result of applying this configurational bias Monte Carlo method to two simple peptides, polyglycine G$_6$ and constrained polyglycine CG$_6$C. Figure 3 shows the energy of linear polyglycine as a function of Monte Carlo steps. This run took roughly 3 hours on a Silicon Graphics Indigo$^2$. In Fig.\ 4 we show the end-to-end probability distribution for this system. Gaining this degree of convergence took a one-day run. Figure 5 illustrates the energy of the cyclic polyglycine as a function of Monte Carlo steps. This run took roughly 6 hours. Figure 6 provides a histogram of the number of solutions found for each attempted concerted rotation. In rare cases the root finding procedure failed to find all the roots. In the construction of this plot, we rounded $n^{(n)}$ up when it was odd. Figure 7 shows the histogram for the C$_\beta$SSC$_\beta$ dihedral angle, with the statistics taken from a run six times as long as that illustrated in Fig.\ 5. To give a feel for the barrier to rotation about this angle, we show in Fig.\ 8 the potential of mean force. This potential was determined by umbrella sampling \cite{Chandler}. This curve took two orders of magnitude longer to determine than did the probability distribution in Fig.\ 7. The potential of mean force is contrasted with the energy associated purely with the ${\rm C_{\beta}SSC_{\beta}}$ torsional terms. Finally, Fig.\ 9 shows the result of classifying the configurations produced by the method into distinct stable conformations. Fuzzy clustering \cite{Gordon} was used to determine the dominant conformations, with the result that there are only two or three distinct conformations within this limited simulation run. The simulation run depicted in Figs.\ 5 and 9 took approximately 8 hours on a Silicon Graphics Indigo$^2$. \section{Discussion} We see that with a very modest computational effort, we can achieve equilibrated results for linear peptides. With somewhat more effort, we can achieve equilibration for cyclic peptides. As expected, we find that the linear peptide G$_6$ is relatively unstructured in solution. There is a common crumpled state, but there is also a significant population of the extended state. The constraint of the disulfide bond in CG$_6$C, in contrast, forces that molecule to adopt a limited number of molecular conformations. For the fairly short runs illustrated in Figs.\ 5,6,7 and 9, we find only three dominant conformations. The first conformation is associated with the C$_\beta$SSC$_\beta$ torsional angle of 290$^\circ$, whereas the other two are associated with angles of 88$^\circ$\ and 98$^\circ$. The first of these conformations is very tight, with 0.7 \AA\ fluctuations about the mean for all atoms in the molecule. The other two are somewhat looser, with roughly 1.2 \AA\ fluctuations. We see from Fig.\ 9 that even in this short run the method revisits previous conformations. In the limit of a long simulation, the time spent in each conformation would, of course, be proportional to the exponential of the free energy of the conformation. If CG$_6$C were achiral, the potential of mean force in Fig.\ 8 would be symmetric about 0$^\circ$\ and 180$^\circ$. Since the C$_\alpha$ carbons in the cystine residues are, in fact, chiral, the potential of mean force is not required to be symmetric. The asymmetry seen in Fig.\ 8 results from the mean, chiral force of the rest of the molecule on the C$_\beta$SSC$_\beta$ torsion. In fact, the AMBER forcefield takes this chirality into account by reducing the symmetry of the C$_\beta$ carbon in cysteine. We have used this geometry \cite{InsightII}. The barrier at 0$^\circ$\ is due to a high steric repulsion between the hydrogens on the C$_\beta$ carbons adjacent to the disulfide bond. This barrier is substantially higher than the barrier at 180$^\circ$. From Fig.\ 8, we see that there is a very significant free energy barrier to rotation about the C$_\beta$SSC$_\beta$ torsional angle. This figure was not constructed from a standard simulation run, but by the specialized procedure of umbrella sampling. It is clear from Fig.\ 7, however, that the present method is able to overcome this barrier and to properly sample the relevant conformations even in a relatively short simulation. Any method such as molecular dynamics or standard Monte Carlo that makes only small, local changes to the configuration would never cross this barrier in a simulation of reasonable length. High temperature dynamics can allow systems to cross high barriers, but can not perform the requisite Boltzmann sampling to predict the physiologically relevant conformations. Only a biased method that makes fairly large geometrical changes is capable of dealing with such barriers in an automatic way, without resort to special techniques such as umbrella sampling. Furthermore, the ability to perform umbrella sampling has as a prerequisite the detailed knowledge of the important conformations and the paths between them. In our specific case, we find our method to be two orders of magnitude more efficient than umbrella sampling. \section{Conclusion} We have presented a Monte Carlo method capable of sampling the relevant room- or body-temperature configurations of linear and cyclic peptides. This method allows the study of peptides important in biological and technological settings. Our sampling of the disulfide dihedral angle in a prototypical cyclic peptide indicates that the method can explore widely separated regions of conformation space according to the proper Boltzmann distribution, even if the barriers between the regions are quite large. Previous simulation methods either fail to sample the proper thermal distribution or are vastly more computationally intensive and require detailed knowledge of the thermally accessible regions. The method can be extended to allow incorporation of explicit water molecules. The method can be extended to force fields with flexible bonds and angles. These extensions are subjects for future work. \section*{Acknowledgements} We thank Berend Smit and Charlene X.\ L.\ Liang for helpful discussions about the Monte Carlo method and Len Bogarad, Michael McKenna, Jonathan Rothberg, and Gregory Went for helpful conversations about the biological applications. This work was supported by the NCI/NIH under grant \#CA62752-01 and by the NIST ATP program under grant number \#70NANB5H1066. Many of the calculations described herein were performed on an Indigo-R8000 on loan from SGI and on a HP-735/125 on loan from Hewlett Packard.
{ "redpajama_set_name": "RedPajamaArXiv" }
\subsubsection*{\bibname}} \renewcommand{\bibname}{References} \usepackage{hyperref} \usepackage{graphicx} \usepackage{mathrsfs} \usepackage{amsmath} \usepackage{amssymb} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{mathtools} \usepackage{booktabs} \usepackage{subfig} \usepackage{bm} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator{\expect}{\mathbb{E}} \DeclareMathOperator*{\KL}{KL} \DeclareMathOperator*{\ent}{\mathcal{H}} \DeclareMathOperator{\ELBO}{\mathcal{L}} \DeclareMathOperator*{\N}{\mathcal{N}} \begin{document} \twocolumn[ \aistatstitle{Regularising Deep Networks using Deep Generative Models} \aistatsauthor{ Matthew Willetts \And Alexander Camuto \And Stephen Roberts \And Chris Holmes} \aistatsaddress{ University of Oxford \\ Alan Turing Institute \And University of Oxford \\ Alan Turing Institute \And University of Oxford \\ Alan Turing Institute\And University of Oxford \\ Alan Turing Institute} ] \begin{abstract} We develop a new method for regularising neural networks. We learn a probability distribution over the activations of all layers of the model and then insert imputed values into the network during training. We obtain a posterior for an arbitrary subset of activations conditioned on the remainder. This is a generalisation of data augmentation to the hidden layers of a network, and a form of data-aware dropout. We demonstrate that our training method leads to higher test accuracy and lower test-set cross-entropy for neural networks trained on CIFAR-10 and SVHN compared to standard regularisation baselines: our approach leads to networks with better calibrated uncertainty over the class posteriors all the while delivering greater test-set accuracy. \end{abstract} \section{Introduction} Methods such a dropout \citep{Srivastava2014}, batch norm \citep{Ioffe2015}, $L_2$ regularisation, data augmentation \citep{deeplearningbook, Wang2017} and ensembling \citep{Lakshminarayanan2017} have been shown to improve generalisation and robustness of deep discriminative models. We show that by learning a density estimator over the activations of a network during training, and inserting draws from that density estimator into the discriminative model during training, that we get discriminative models with better test set accuracy and better calibration; outperforming all the methods listed above on standard datasets. Our approach be interpreted as a generalisation of data augmentation to the hidden layers of a network, or from an alternative viewpoint as a from of dropout where we impute activations rather than setting them to $0$. We specify this density over activations by developing on the ideas of a recent model, VAEAC \citep{Vetrov2019}, a deep generative model (DGM), that enabled the computation of the conditional distribution for arbitrary subsets of pixels of an input image conditioned on the remainder. After having been trained with imputed values for activations, at test time the discriminative model can either be run as a simple feed-forward model, or, following MC Dropout \citep{Gal2016} we can sample from the model for activations to obtain an estimate of the classifiers uncertainty. A statistical metric for the quality of the uncertainty of a model is its calibration \citep{Dawid1982}: is a model as likely to be correct in a particular prediction as it is confident in that prediction? A well-calibrated predictive distribution is key for robust decision making, including in the case of asymmetric losses. Our proposed form of regularisation leads to better model calibration than standard baselines in pure feed-forward operation and increased test set accuracy. The key contributions of this paper are: \begin{itemize} \item The introduction of \textit{Pilot} - a model that simultaneously trains a discriminative model and a deep generative model over the former's activations. \item Showing that, when applied to both multi-layer perceptron and convolutional neural networks, \textit{Pilot} results in increased accuracy when classifying SVHN and CIFAR-10, beating our baselines. \item Demonstrating that the discriminative models trained using samples from our generative model are better calibrated than various baselines. \item Finally, showing that when using samples from the model over activations to give model uncertainty, our method outperforms MC-Dropout. \end{itemize} \section{Related Work} Bayesian neural networks (BNNs) \citep{MacKay1992, Neal1995}, where a prior is placed over the parameters of the model and the training data is used to evaluate the posterior over those parameters, give many benefits. Among them is uncertainty over predictions. However, as exact inference is not commonly computationally feasible for BNNs, various methods of approximation have been proposed. These include variational inference \citep{Graves2011, Blundell2015}, expectation propagation \citep{Lobato2015}, and MCMC methods \citep{Welling2011}. Our approach is analogous to a BNN where we concern ourselves with modelling the discriminative model's activations, not its weights and biases. Generally, \cite{Poole2014} observed that adding noise, drawn from fixed distributions, to the hidden layers of deep neural networks leads to improved performance. Our model also has ties to meta-learning, particularly \textit{Hallucination} \citep{Hariharan, Wang2018}, which is an approach to data augmentation in the final hidden layer of a neural network model. One generates synthetic activations with novel combinations of high-level aspects of the data to represent new or rare classes of data. Markov chain methods have been developed for data imputation, where a series of draws converges to the underlying data distribution \citep{Bordes2017,Sohl2105,Barber2012}. \cite{Nazabal2018} extends variational autoencoders (VAEs) to impute missing data, including for discrete, count and categorical data. There has been interest in using generative adversarial networks \citep{Goodfellow2014} to provide data augmentation \citep{Kiyoiti2019, Antoniou2018, Bowles2018, Fridadar2018} and imputation \citep{Yeh2017, Yoon2018}. Re-calibrating the probabilities of a discriminative model can enable the construction of a well-calibrated model. Platt scaling \citep{Platt1999} and binning methods \citep{Zadrozny2001, Zadrozny2002} are well studied \citep{Niculescu2005}. Temperature scaling \citep{Jaynes1957} has been shown to produce well-calibrated DNNs \citep{Guo_calib_2017}. Ensembling of models also leads to better uncertainty in discriminative models \citep{Lakshminarayanan2017,Dietterich2000, Minka2000}. Dropout \citep{Srivastava2014} can be interpreted as a form of model ensembling. If dropout is used at test time, as in Monte Carlo (MC) Dropout \citep{Gal2016}, we are in effect sampling sets of models: sub-networks from a larger neural network. The samples obtained from the predictive distribution provide an estimate of model uncertainty. Tuning the dropout rate to give good calibration is challenging, and grid search is expensive, which motivates Concrete dropout \citep{Gal_conc_2017} where gradient descent is used to find an optimal value. The method is strongest for reinforcement learning models, showing lesser performance gains for classification. \cite{Kingma2015b} show that training a neural network with Gaussian dropout \citep{Wang2013} to maximise a variational lower bound enables the learning of an optimal dropout rate. However, when the dropout rate is tuned in such a fashion, it is harder to interpret the resulting model as an ensemble \citep{Lakshminarayanan2017}. \section{Background} \subsection{Review of VAEAC: VAE with Arbitrary Conditioning} Briefly we will overview the recent model VAE with Arbitrary Conditioning (VAEAC) \citep{Vetrov2019} - a generalisation of a Conditional VAE \citep{Sohl2105} - as it forms the basis for our approach. The problem attacked in \citep{Vetrov2019} is dealing with missing data in images via imputation. They amortise over different arbitrary subsets of pixels, such that training and running the model is relatively cheap. In \citet{Vetrov2019} there are images $x$ and a binary mask $b$ of unobserved features. That is, the unobserved data is $x_b$ and the observed data is $x_{1-b}$. The aim is to build a model, with parameters $\theta$, to impute the value of $x_b$ conditioned on $x_{1-b}$ that closely approximates the true distribution: $p_\theta(x_b | x_{1-b}, b) \approx p(x_b | x_{1-b}, b)$. Given a dataset $x \sim D$ and a mask prior $p(b)$ we aim to maximise the log likelihood for this problem wrt $\theta$: \begin{equation} \theta^* = \argmax_\theta \expect_{x\sim D} \expect_{b\sim p(b)} \log p_\theta(x_b | x_{1-b}, b) \end{equation} Introducing a continuous latent variable $z$ gives us the VAEAC generative model: \begin{equation} p_\theta(x_b | x_{1-b}, b) = \int \mathrm{d}z \, p_\theta(x_b | z, x_{1-b}, b) p_\theta(z|x_{1-b}, b) \end{equation} Where $p_\theta(z|x_{1-b}, b) = \N(z|\mu_\theta(x_{1-b},b),\Sigma_\theta(x_{1-b},b))$, and $p_\theta(x_b | z, x_{1-b}, b)$ is an appropriate distribution for the data $x$. The parameters of both are parameterised by neural networks. By introducing a variational posterior $q_\phi(z|x, b) = \N(z|\mu_\phi(x,b),\Sigma_\phi(x,b))$ and obtain the VAEAC evidence lower bound (ELBO) for a single data point and a given mask: \begin{align} \ELBO^{\mathrm{VAEAC}}(x, b; \theta, \phi) =& \expect_{z\sim q}\log p_\theta(x_b | z, x_{1-b}, b) \\ &- \KL(q_\phi(z|x, b) || p_\theta(z|x_{1-b}, b)) \nonumber \end{align} Note that the variational posterior $q$ is conditioned on all $x$, so that in training under this objective we must have access to complete data $x$. When training the model they transfer information from $q_\phi(z|x, b)$ which has access to all $x$ to the $p_\theta(z|x_{1-b}, b)$ that does not, by penalising the $\KL$ divergence between them. At test time, when applying this model to real incomplete data, they sample from the generative model to infill missing pixels. \subsection{Training Classifiers with Data Augmentation and Dropout} We wish to train a discriminative model $p_\Psi(y|x)$, where $y$ is a the output variable, $x$ an input (image), and $\Psi$ are the parameters of the network. We focus here on classification tasks, where in training we aim to minimise the cross-entropy loss of the network, or equivalently maximise the log likelihood of the true label under the model, $\ELBO(D; \Psi)$ wrt $\Psi$ for our training data $(x^*,y^*) \sim D$: \begin{align} \ELBO(D;\Psi) & = \expect_{(x,y) \sim D} \log p_\Psi(y|x) \\ \Psi^* &= \argmax_\Psi \ELBO(D; \Psi) \end{align} Commonly one might a train the model with methods like dropout or data augmentation to regularise the network. Here we consider data augmentation as a probabilistic procedure, and write out (MC) dropout in similar notation. Then we describe our approach to learning a density estimator over activations of a DNN, which we then use as a generalisation of both data augmentation and dropout in training regularised deep nets. \subsubsection{Data Augmentation} If we have a discriminative classifier $p_\Psi(y|x)$, we could train it on augmented data $\tilde{x}$. If the procedure for generating the augmentation is stochastic we could represent it as $p_\theta(\tilde{x}|x)$. This could correspond, say, to performing transformations (like rotating or mirroring) on some proportion $\theta$ of each batch during training. Thus we can write the joint distribution for the classifier and the `augmenter' $p_\theta(\tilde{x}|x)$, conditioned on $x$, as: \begin{equation} p_{\Psi, \theta}(y, \tilde{x} | x) = p_\Psi(y|\tilde{x})p_\theta(\tilde{x}|x) \label{eq:data_aug_prob} \end{equation} And so marginalising out the augmenter: \begin{equation} p_{\Psi, \theta}(y|x) = \expect_{\tilde{x} \sim p_\theta(\tilde{x}|x)} p_\Psi(y|\tilde{x}) \label{eq:data_aug_marginal} \end{equation} \subsubsection{Dropout} Taking a probabilistic perspective to $\Psi$, the weights and biases of the network, we would write the same classifier as $p(y|x, \Psi)$. A manipulation of the weights by a stochastic method, such as dropout, can be written as $p_\theta(\tilde{\Psi}|\Psi)$. For dropout, $\theta$ would be the dropout rate. So the equivalent to Eq (\ref{eq:data_aug_prob}) is: \begin{equation} p_\theta(y, \tilde{\Psi}|x, \Psi) = p(y|x, \tilde{\Psi}) p_\theta(\tilde{\Psi}|\Psi) \label{eq:dropout_prob} \end{equation} And to Eq (\ref{eq:data_aug_marginal}): \begin{equation} p_{\Psi,\theta}(y|x) = \expect_{\tilde{\Psi} \sim p_\theta(\tilde{\Psi}|\Psi)} p(y|x, \tilde{\Psi}) \label{eq:dropout_marginal} \end{equation} \subsubsection{Loss functions} For both data augmentation and dropout, the model is still trained on the expected log likelihood of the output variable, but is now being fed samples from the data augmentation pipeline or dropout mask. We can obtain this loss by applying Jensen's inequality to the logarithms of Eqs (\ref{eq:data_aug_marginal}, \ref{eq:dropout_marginal}) and taking expectations over $D$: \begin{align} \ELBO^{\mathrm{aug}}(D;\Psi,\theta) =& \expect_{x,y\sim D}\expect_{\tilde{x} \sim p_\theta(\tilde{x}|x)} \log p_\Psi(y|\tilde{x})\\ \ELBO^{\mathrm{drop}}(D;\Psi,\theta) =& \expect_{x,y\sim D}\expect_{\tilde{\Psi} \sim p_\theta(\tilde{\Psi}|\Psi)} \log p(y|x, \tilde{\Psi}) \end{align} Training is then done by maximising $\ELBO^{\mathrm{aug}}$ or $\ELBO^{\mathrm{drop}}$ wrt $\Psi$. In principle $\theta$ may contain other parameters of the data augmentation or dropout procedure, which could be learnt jointly, but are commonly fixed. \subsection{Stochastic manipulation of activations} We wish to obtain a conditional distribution for any subset of activations, conditioned on the remaining activations. Consider our discriminative model as being composed of $L$ layers. We view our input data and the activations of the network on an equal footing; the output of one layer acts as input to the next layer, and can be viewed as equivalent to data for that later layer. In this view the input data to the discriminative model, $x$, is simply the $0^{\mathrm{th}}$ layer of activations. We record the read out of the activations of every unit of every layer in the model $p_\Psi(y|x)$ for a given data-point, namely: \begin{equation} a = f_\Psi(x) \end{equation} where $a^0 = x$ and $\mathrm{softmax}(a^L)=p_\Psi(y|x)$. In analogy with the section above we denote a stochastic procedure for generating different realisations of the activations $\tilde{a}$ given the recorded activations $a$ as $p_\theta(\tilde{a}|a)$. The joint is then: \begin{equation} p_{\Psi,\theta}(y, \tilde{a}|a) = p_\Psi(y|\tilde{a}) p_\theta(\tilde{a}|a) \label{eq:pilot_prob} \end{equation} Marginalising out $\tilde{a}$ and taking the logarithm we obtain: \begin{equation} \log p_{\Psi, \theta}(y|a) = \log \expect_{\tilde{a} \sim p_\theta(\tilde{a}|a)}[p_\Psi(y|\tilde{a})] \label{eq:pilot_marginal_one_datapoint} \end{equation} Applying Jensen's Inequality: \begin{equation} \log p_{\Psi, \theta}(y|a) \geq \expect_{\tilde{a} \sim p_\theta(\tilde{a}|a)}[\log p_\Psi(y|\tilde{a})] \end{equation} And taking an expectation over the dataset $D$: \begin{equation} \ELBO^\mathrm{act}(D;\Psi,\theta) = \expect_{\substack{(x,y) \sim D \\ a=f_\Psi(x)}} \expect_{\tilde{a} \sim p_\theta(\tilde{a}|a)}[\log p_\Psi(y|\tilde{a})] \label{eq:pilot_marginal} \end{equation} $\ELBO^\mathrm{act}$ is our classification objective. We will optimise it wrt $\Psi$, not taking its gradient wrt $\theta$. In the next section we introduce a particular form for $p_\theta(\tilde{a}|a)$, which we will then learn simultaneously. \section{Pilot: DGM over Activations} As well as training the model parameters $\Psi$ when our model is being run with imputed activations $\tilde{a}$, we also wish for our model to generate realistic activations $\tilde{a}$. To learn a density estimator over activations, we define a parametric generative model for $p_\theta(\tilde{a}|a)$ that we will then train using amortised stochastic variational inference \citep{Kingma2013, Rezende2014}. In analogy to the image in-painting of \cite{Vetrov2019}, we impute a subset of a network's activations given the values of the remainder. Likewise, we introduce a mask $b$ with prior $p(b)$ so $\tilde{a}_b$ are the values we will impute given the unmasked variables $a_{1-b}$. That means we choose: \begin{equation} p_\theta(\tilde{a}|a) = \expect_{b\sim p(b)} p_\theta(\tilde{a}_b|a_{1-b},b) \label{eq:a_form} \end{equation} where $\tilde{a}$ is constructed deterministically by taking the values $\tilde{a}_b$ in the masked positions with the recorded values $a_{1-b}$. We denote this as a form of masked element-wise addition: \begin{equation} \tilde{a} = \tilde{a}_b \oplus a_{1-b} \end{equation} We wish to train the model $p_\theta(\tilde{a}_b|a_{1-b},b)$ so that $\tilde{a}_b$ is close to the real activations $a_b$. As in \cite{Vetrov2019} we introduces a latent variable $z$, defining the generative model as: \begin{align} p_\theta(\tilde{a}_b|a_{1-b},b) &= \int \mathrm{d}z \, p_\theta(\tilde{a}_b|a_{1-b},b,z)p_\theta(z|a_{1-b},b) \label{eq:log_expect1} \\ &= \expect_{z \sim p_\theta(z|a_{1-b},b)} p_\theta(\tilde{a}_b|a_{1-b},b,z) \end{align} Where $p_\theta(z|a_{1-b},b) = \N(z|\mu_\theta(a_{1-b},b),\Sigma_\theta(a_{1-b},b))$ in analogy with VAEAC. Our aim is to maximise the log likelihood: \begin{equation} \expect_{\substack{(x,y) \sim D \\ a=f_\Psi(x)}}\expect_{b\sim p(b)}\log p_\theta(a_b|a_{1-b},b) \label{eq:loglike} \end{equation} To train this model, we introduce a variational posterior for $z$, which is conditioned on all $a$: $q_\phi(z|a, b) = \N(z|\mu_\phi(a,b),\Sigma_\phi(a,b))$. This is unlike its generative counterpart $p_\theta(z|a_{1-b},b)$ which only receives the masked activity information $a_{1-b}$. This gives us an ELBO for $\log p_\theta(\tilde{a}_b|a_{1-b},b)$ which we denote $\Lambda$: \begin{align} \Lambda(a, b;\theta,\phi) =& \expect_{z\sim q}[\log p_\theta(a_b|a_{1-b},b,z)] \notag \\ &- \KL(q_\phi(z|a,b)||p_\theta(z|a_{1-b},b)) \label{eq:lambda} \\ \ELBO^{\mathrm{DGM}}(D;\theta,\phi) =& \expect_{\substack{(x,y)\sim D \\ b \sim p(b)}}[\Lambda(a=f_\Psi(x),b;\theta,\phi)] \label{eq:pilot_elbo} \end{align} As stated previously, this objective leads to information being passed from $q_\phi(z|a,b)$, the part of the model that can access all $a$, to $p_\theta(z|a_{1-b},b))$ the part of the model that only sees the masked activation values $a_{1-b}$. We choose to model the raw activations of the model, before applying an activation function. This sidesteps the potential difficulties if we modelled them after the application of an activation function - for instance applying ReLUs' gives us values that are $>0$. We model our raw activations with Gaussian likelihood $\log p_\theta(a_b|a_{1-b},b,z)$ with fixed diagonal covariance. We place a Normal-Gamma hyperprior on $p_\theta(z|a_{1-b},b)$ to prevent large means and variances. See Appendix \ref{app:hyperprior} for a full definition. \subsection{Overall Objective} To train both the DGM (that produces samples for $a_b$) and classifier we maximise their objectives simultaneously: \begin{align} \ELBO^\mathrm{pilot}(D;\Psi,\theta, \phi) =& \ELBO^\mathrm{act}(D;\Psi) + \ELBO^{\mathrm{DGM}}(D;\theta,\phi) \label{eq:overall} \end{align} We train the model by optimising $\ELBO^\mathrm{act}$ wrt $\Psi$ while simultaneously optimising $\ELBO^{\mathrm{DGM}}$ wrt $\theta, \phi$, both by stochastic gradient descent using Adam \citep{Kingma2015} over $D$. The objective $\ELBO^{\mathrm{DGM}}$ could be written as a function of $\Psi$ as well, as $a=f_\Psi(x)$, but we choose not to take gradients wrt $\Psi$ through $\ELBO$, similarly for $\ELBO^\mathrm{act}$ and $\theta,\phi$. This separation is key to the proper functioning of our model. If we optimised $\ELBO^\mathrm{act}$ wrt $\theta,\phi$ then the larger, more powerful DGM would perform the task of the classifier - the DGM could learn to simply insert activations that gave a maximally clear signal that a simplistic classifier could then use. The classifier could then fail to operate in the absence of samples $\tilde{a}$, and the DGM would be the real classifier. If we optimised $\ELBO^\mathrm{DGM}$ wrt $\Psi$, we would in effect be training the discriminator to be more amenable to being modelled by the DGM. An interesting idea perhaps, but a different kind of regularisation to that which we wish to study. We take MC samples to approximate the integrals in $\ELBO^{\mathrm{DGM}}$, employing the reparameterisation trick to take differentiable samples from our distributions \citep{Kingma2013, Rezende2014}. \section{Calibration Metrics} A neural network classifier gives a prediction $\hat{y}(x)$ with confidence $\hat{p}(x)$ (the probability attributed to that prediction) for a datapoint $x$. Perfect calibration consists of being as likely to be correct as you are confident: \begin{equation} p(\hat{y}=y|\hat{p}=r)=r, \quad \forall r\in[0,1] \end{equation} To see how closely a model approaches perfect calibration, we plot reliability diagrams \citep{degroot1983, Niculescu2005}, which show the accuracy of a model as a function of its confidence over $M$ bins $B_m$. \begin{align} \mathrm{acc}(B_m) &= \frac{1}{|B_m|}\sum_{i\in B_m} \mathcal{1}(\hat{y}_i = y_i)\\ \mathrm{conf}(B_m) &= \frac{1}{|B_m|}\sum_{i\in B_m} \hat{p}_i \end{align} We also calculate the Expected Calibration Error (ECE) \cite{Naeini2015}, the mean difference between the confidence and accuracy over bins: \begin{equation} \mathrm{ECE} = \sum_{m=1}^M \frac{|B_m|}{N}|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)| \label{eq:ece} \end{equation} However, ECE is not a perfect metric. With a balanced test set one can trivially obtain ECE $\approx 0$ by sampling predictions from a uniform distribution over classes. Nevertheless, ECE is a valuable metric in conjunction with a model's reliability diagram. \section{Experiments}\label{sec:experiments} \begin{table*} \vspace{1em} \caption{Test set accuracy, mean per-datapoint negative log-likelihood ($\mathrm{NLL}= -\ELBO_{\mathrm{xent}}$) and ECE [see Eq (\ref{eq:ece})] for convolutional neural networks and 2-hidden-layer MLPs (with 1024 hidden units) trained on CIFAR-10 and SVHN with different regularisation methods} \label{table:results} \centering \setlength\tabcolsep{1.9pt} \begin{tabular}{rcccccc} \toprule \multicolumn{7}{c}{\textbf{CNN models}}\\ \toprule Model & Acc$(D^\mathrm{CIFAR10}_\mathrm{test})$ & Acc$(D^\mathrm{SVHN}_\mathrm{test})$& $\mathrm{NLL}(D^\mathrm{CIFAR10}_\mathrm{test})$ & $\mathrm{NLL}(D^\mathrm{SVHN}_\mathrm{test})$ & ECE$(D^\mathrm{CIFAR10}_\mathrm{test})$ & ECE$(D^\mathrm{SVHN}_\mathrm{test})$ \\ \midrule Vanilla & $0.630 \pm 0.003$ & $ 0.846 \pm 0.001$ & $3.54 \pm 0.05$ & $ 1.69 \pm 0.00$ & $0.308 \pm 0.003$ & $0.122 \pm 0.002$\\ \midrule Pilot \textit{a-aug} & $\bm{0.701 \pm 0.005}$ & $\bm{0.881 \pm 0.005}$ & $\bm{0.87 \pm 0.02}$ & $\bm{0.44 \pm 0.00}$ & $\bm{0.012 \pm 0.000}$ & $0.033 \pm 0.002$\\ Pilot \textit{a-drop} & $0.454 \pm 0.035$ & $0.200 \pm 0.001$ & $ 1.59 \pm 0.01$ & $2.29 \pm 0.01$ & $0.096 \pm 0.010$ & $0.092 \pm 0.002$ \\ Pilot \textit{x-aug} & $0.648 \pm 0.001$ & $0.861 \pm 0.001$ & $1.49 \pm 0.01$ & $ 0.64 \pm 0.002$ & $0.210 \pm 0.005$ & $0.066 \pm 0.001$\\ Pilot \textit{x-drop} & $0.625 \pm 0.04$ & $0.844 \pm 0.002$ & $1.15 \pm 0.01$ & $0.55 \pm 0.01$ & $0.116 \pm 0.002$ & $0.019 \pm 0.001$\\ \midrule Add \textit{a-aug} & $0.641 \pm 0.001$ & $0.858 \pm 0.012$ & $4.80 \pm 0.02$ & $ 1.05 \pm 0.06$ & $0.199 \pm 0.000$ & $0.065 \pm 0.012$ \\ Add \textit{a-drop} & $0.630 \pm 0.002$ & $0.850 \pm 0.011$ & $2.05 \pm 0.02$ & $ 0.89 \pm 0.12$ & $0.249 \pm 0.000$ & $0.081 \pm 0.001$ \\ Add \textit{x-drop} & $0.609 \pm 0.011$ & $0.844 \pm 0.000$ & $1.44 \pm 0.00$ & $ 0.56 \pm 0.01$ & $0.184 \pm 0.002$ & $0.033 \pm 0.001$ \\ Sub \textit{a-drop} & $0.403 \pm 0.001$ & $0.748 \pm 0.070$ & $1.43 \pm 0.00$ & $ 1.52 \pm 0.02$ & $0.490 \pm 0.001 $ & $0.155 \pm 0.001$ \\ Sub \textit{x-drop} & $0.521 \pm 0.086$ & $ 0.742 \pm 0.007$ & $1.97\pm 0.05$ & $ 0.88 \pm 0.04$ & $0.199 \pm 0.001$ & $0.032 \pm 0.001$\\ \midrule Dropout & $0.629 \pm 0.002$ & $ 0.850 \pm 0.001$ & $3.57 \pm 0.01$&$ 1.68 \pm 0.01$ & $0.308 \pm 0.001$ & $0.121 \pm 0.001$\\ $L_2$, $\lambda=0.1$ & $0.629 \pm 0.002$ & $ 0.847 \pm 0.000$ & $3.59 \pm 0.05$ & $ 1.69 \pm 0.00$ & $0.308 \pm 0.004$ & $0.123 \pm 0.001$\\ Batch norm & $0.631 \pm 0.001$ & $ 0.846 \pm 0.001$ & $4.60 \pm 0.02$ & $ 2.12 \pm 0.02$ & $0.230 \pm 0.010 $ & $0.054 \pm 0.000$\\ Data Aug & $0.646 \pm 0.001$ & $ 0.750 \pm 0.002$ & $1.027 \pm 0.00$ & $ 0.77 \pm 0.01$ & $0.016 \pm 0.002$ & $\bm{0.009 \pm 0.001}$\\ \midrule \midrule Pilot$_{\mathrm{MC}}$ \textit{a-aug} & $0.700 \pm 0.002$ & $0.877 \pm 0.002$ & $0.94 \pm 0.01$ & $0.53 \pm 0.00$ & $0.089 \pm 0.001$ & $0.120 \pm 0.001$ \\ Pilot$_{\mathrm{MC}}$ \textit{a-drop} & $0.453 \pm 0.002$ & $0.196 \pm 0.000$ & $1.57 \pm 0.02$ & $2.25 \pm 0.00$ & $0.065 \pm 0.003$ & $0.087 \pm 0.001$ \\ \midrule Add$_{\mathrm{MC}}$ \textit{a-aug} & $0.576 \pm 0.035$ & $0.860 \pm 0.001$ & $1.67 \pm 0.01$ & $0.56 \pm 0.00$ & $0.063 \pm 0.001$ & $0.017 \pm 0.002$ \\ Add$_{\mathrm{MC}}$ \textit{a-drop} & $0.636 \pm 0.020$ & $0.854 \pm 0.001$ & $1.73 \pm 0.01$ & $0.73 \pm 0.00$ & $0.199 \pm 0.001$ & $0.0528 \pm 0.001$ \\ \midrule MC Dropout & $0.579 \pm 0.001$ & $ 0.795 \pm 0.002$ & $1.69 \pm 0.01$&$ 0.93 \pm 0.00$ & $0.065 \pm 0.000$ & $0.067 \pm 0.005$ \\ Ensemble & $0.683 \pm 0.001$ & $ 0.870 \pm 0.001$ & $0.96 \pm 0.01$ & $0.51 \pm 0.01$ & $0.025 \pm 0.003$ & $0.060 \pm 0.002$ \\ \bottomrule \toprule \multicolumn{7}{c}{\textbf{MLP models}}\\ \toprule Model & Acc$(D^\mathrm{CIFAR10}_\mathrm{test})$ & Acc$(D^\mathrm{SVHN}_\mathrm{test})$& $\mathrm{NLL}(D^\mathrm{CIFAR10}_\mathrm{test})$ & $\mathrm{NLL}(D^\mathrm{SVHN}_\mathrm{test})$ & ECE$(D^\mathrm{CIFAR10}_\mathrm{test})$ & ECE$(D^\mathrm{SVHN}_\mathrm{test})$ \\ \midrule Vanilla & $0.581 \pm 0.003$ & $ 0.848 \pm 0.001$ & $4.78 \pm 0.03$ & $ 2.17 \pm 0.06$ &$0.470 \pm 0.004$&$0.127 \pm 0.003$ \\ \midrule Pilot \textit{a-aug} & $\bm{0.601 \pm 0.001}$ & $\bm{0.858 \pm 0.002}$ & $\bm{1.22 \pm 0.01}$ & $\bm{0.53 \pm 0.02}$ & $0.056 \pm 0.004$ &$\bm{0.014} \pm 0.001$\\ Pilot \textit{a-drop} & $0.517 \pm 0.001$ & $0.794 \pm 0.002$ & $1.36 \pm 0.01$ & $ 0.79 \pm 0.02$ & $0.110 \pm 0.003$ &$0.029 \pm 0.002$\\ Pilot \textit{x-aug} & $0.565 \pm 0.002$ & $0.851 \pm 0.001$ & $2.42 \pm 0.01$ & $ 1.16 \pm 0.00$ & $0.288 \pm 0.002$ &$0.057 \pm 0.002$\\ Pilot \textit{x-drop} & $0.570 \pm 0.002$ & $0.837 \pm 0.003$ & $2.14 \pm 0.07$ & $0.72 \pm 0.001$ & $0.284 \pm 0.017$ &$0.057 \pm 0.001$\\ \midrule Add \textit{a-aug} & $0.578 \pm 0.001$ & $0.843 \pm 0.001$ & $2.76 \pm 0.01$ & $ 0.78 \pm 0.06$ & $0.301 \pm 0.000$ &$0.077 \pm 0.001$ \\ Add \textit{a-drop} & $0.578 \pm 0.004$ & $0.849 \pm 0.031$ & $4.26 \pm 0.02$ & $ 1.48 \pm 0.12$ & $0.345 \pm 0.002$&$0.114 \pm 0.004$\\ Add \textit{x-drop} & $0.547 \pm 0.043$ & $0.841 \pm 0.000$ & $2.99 \pm 0.01$ & $ 0.75 \pm 0.01$ & $0.307 \pm 0.001$ &$0.067 \pm 0.002$\\ Sub \textit{a-drop} & $0.462 \pm 0.041$ & $0.737 \pm 0.079$ & $4.23 \pm 1.23$ & $ 1.92 \pm 0.02$ & $0.403 \pm 0.131$ &$0.143 \pm 0.001$\\ Sub \textit{x-drop} & $0.499 \pm 0.002$ & $ 0.765 \pm 0.001$ & $2.22\pm 0.02$ & $ 0.80 \pm 0.03$ &$0.279 \pm 0.001$&$0.029 \pm 0.003$\\ \midrule Dropout & $0.570 \pm 0.002$ & $ 0.837 \pm 0.049$ & $4.88 \pm 0.01$&$ 1.27 \pm 0.24$ &$0.480 \pm 0.001$&$0.116 \pm 0.021$\\ $L_2$, $\lambda=0.1$ & $0.574 \pm 0.002$ & $ 0.847 \pm 0.000$ & $4.74 \pm 0.02$ & $ 2.12 \pm 0.00$ &$0.479 \pm 0.001$&$0.127 \pm 0.001$\\ Batch norm & $0.579 \pm 0.001$ & $ 0.848 \pm 0.001$ & $4.55 \pm 0.02$ & $ 2.04 \pm 0.02$ &$0.570 \pm 0.001$&$0.162 \pm 0.002$\\ Data Aug & $0.566 \pm 0.001$ & $ 0.731 \pm 0.001$ & $1.36 \pm 0.01$ & $ 0.91 \pm 0.01$ & $0.231 \pm 0.001$&$0.055 \pm 0.001$\\ \midrule \midrule Pilot$_{\mathrm{MC}}$ \textit{a-aug} & $0.598 \pm 0.002$ & $0.855 \pm 0.001$ & $1.24 \pm 0.01$ & $0.56 \pm 0.00$ &$0.036 \pm 0.001$&$0.042 \pm 0.002$\\ Pilot$_{\mathrm{MC}}$ \textit{a-drop} & $0.519 \pm 0.001$ & $0.761 \pm 0.002$ & $1.37 \pm 0.01$ & $0.84 \pm 0.01$ &$0.066 \pm 0.002$&$0.107 \pm 0.003$\\ \midrule Add \textit{a-aug} & $0.576 \pm 0.003$ & $0.839 \pm 0.001$ & $2.65 \pm 0.02$ & $0.73 \pm 0.00$ &$0.296 \pm 0.001$&$0.056 \pm 0.001$\\ Add \textit{a-drop} & $0.583 \pm 0.023$ & $0.847 \pm 0.001$ & $3.70 \pm 0.01$ & $1.12 \pm 0.00$ &$0.335 \pm 0.003$&$0.087 \pm 0.000$\\ \midrule MC Dropout & $0.509 \pm 0.002$ & $ 0.784 \pm 0.001$ & $2.19 \pm 0.01$&$ 1.07 \pm 0.01$ &$0.085 \pm 0.002$&$0.080 \pm 0.004$\\ Ensemble & $0.518 \pm 0.002$ & $ 0.850 \pm 0.001$ & $1.58 \pm 0.01$&$ 1.68 \pm 0.01$ &$\bm{0.027} \pm 0.002$&$0.155 \pm 0.001$\\ \bottomrule \end{tabular} \end{table*} \begin{figure*} \centering \subfloat[CIFAR10 - CNN]{{ \includegraphics[width=0.7\textwidth]{plots/calibration_line_plots_cifar_conv.pdf} }}% \\ \subfloat[SVHN - CNN]{{ \includegraphics[width=0.7\textwidth]{plots/calibration_line_plots_svhn_conv.pdf} }}% \\ \subfloat[CIFAR10 - MLP]{{ \includegraphics[width=0.7\textwidth]{plots/calibration_line_plots_cifar_mlp.pdf}}}% \\ \subfloat[SVHN - MLP]{{ \includegraphics[width=0.7\textwidth]{plots/calibration_line_plots_svhn_mlp.pdf} }} \caption{Reliability diagrams for SVHN and CIFAR10 test sets for different regularisation methods.} \label{fig:calib_plot} \end{figure*} We wish to test if our method produces a trained deep net with better calibration, as measured by reliability diagrams, ECE and test-set log likelihood, while maintaining or increasing test-set accuracy. We benchmark against standard methods to regularise deep nets: dropout \citep{Srivastava2014} with rate $r=0.5$, batch norm \citep{Ioffe2015} with default parameters, $L_2$ regularisation with weighting $\lambda=0.1$ \citep{deeplearningbook}, and a data augmentation strategy where we introduce colour shifts, rotations and flips to data with probability of $0.1$ for each datapoint. We propose two modes of operation for \textit{Pilot}. The first, at test-time, simply evaluates $p(y|x)$ without draws from $\tilde{a}$. The second, which we call Pilot$_{\mathrm{MC}}$, samples numerous realisations of $p(y|x)$ for our model by repeatedly drawing $\tilde{a}$ thus giving uncertainty estimates for predictions. As such, we also benchmark against methods shown to given model uncertainty estimates for deep nets: ensembles \citep{Lakshminarayanan2017} and MC dropout \citep{Gal2016}. Here we build an ensemble by uniformly weighting the predictions of all benchmarks. Where uncertainty estimates are obtained by sampling, such as in \textit{Pilot}, MC Dropout, and our noise baselines, we draw 10 samples from the models at test time and average their outputs to generate a prediction. In addition to standard baselines, we compare our method to a `noisy substitution' (Sub) method where during training, we substitute values of $a$ with draws from $N(0,\sigma^{2})$; and to a `noisy addition' (Add) method whereby we sum noisy draws from $N(0,\sigma^{2})$ with $a$. These methods are applied on neurons masked by a mask $b$ as in \textit{Pilot}. In both cases $\sigma^{2}$ is the same fixed variance as in the DGM's decoder (see below). Both these methods are linked to \textit{Pilot}, but are also reminiscent of the work in \citep{Poole2014}. Additive noise corresponds to the asymptote of the DGM where it perfectly infers a neuron's activation with a fixed variance $\sigma^{2}$. Substitutive noise corresponds to earlier stages of training where samples are drawn from an uninformative prior. We apply our method and benchmarks to 2-hidden-layer multi-layer perceptrons (MLPs) and small convolutional networks, on CIFAR-10 \citep{Krizhevsky2009} and SVHN \citep{SVHN}. We train the models containing MLPs for 250 epochs and models containing CNNs for 100 epochs. The Appendix includes a subset of the experiments for a smaller MLP. We run Pilot in two broad modes: \textit{aug} where we impute a single layer at a time, and; \textit{drop} where, akin to dropout, we randomly sample nodes from across the network. This leads us to choose four settings for our mask prior $p(b)$, which is applied to Pilot and to the noisy addition and substitution benchmarks. \begin{itemize} \itemsep1em \item[1)] \textit{$x$ dropout (x-drop)}: $p(b)$ an iid Bernoulli distributions with trial success $r$ over just the input layer $a^0=x$, never masking $a^{\ell>0}$. \item[2)] \textit{$x$ augment (a-aug)}: we impute all of $a^0=x$ given the other activations, but only for a proportion $r$ during training. \item[3)] \textit{activation dropout (a-drop)}: $p(b)$ is a set of iid Bernoulli distributions with trial success $r$ over all units of $p_\Psi(y|x)$. \item[4)] \textit{activation augment (a-aug)}: we impute all of one layer $a^\ell$ chosen uniformly at random, but only for a proportion $r$ during training. \end{itemize} All networks in the DGM parts of our model are MLPs and we fix the variance of our decoder distribution, $ p_\theta(a_b|a_{1-b},b,z)$, a Gaussian with parameterised mean, to 0.1. \subsection{Results} From Table \ref{table:results} we can see that Pilot activation augmentation (\textit{a-aug}) leads to better test set accuracy and test set negative log likelihood (NLL) relative to all other models including a vanilla classifier, which has no regularisation applied during training. Pilot \textit{a-aug} and Pilot$_{\mathrm{MC}}$ \textit{a-aug} consistently provide low ECE but do not always generate the best calibrated models. Better calibrated models are generated for SVHN when CNNs are trained with data augmentation (Pilot \textit{a-aug}: ECE=$3.3\%$ vs Data Augmentation: ECE=$0.9\%$), and for CIFAR-10 when MLPs are ensembled (Pilot$_{\mathrm{MC}}$ \textit{a-aug}: ECE=$3.6\%$ vs Ensemble: ECE=$2.7\%$). Nevertheless, in both cases these methods provide lower test set accuracy and NLL compared to Pilot \textit{a-aug}. Figure \ref{fig:calib_plot} shows reliability diagrams for our Pilot models and our various regularisation baselines. Appendix \ref{app:mc_reliability} shows reliability diagrams for our Pilot models where we are running the classifier with samples $\tilde{a}$, as well as our baselines for model uncertainty estimation, MC dropout and ensembles. Pilot activation augmentation, the top left of each sub figure, consistently gives well calibrated models with high reliability. Appendix \ref{app:ent} contains histograms of the entropy of the predictive distributions over the test set for our Pilot models against baselines for CNNs and MLPs. The Pilot models generally produce predictions with greater uncertainty. \section{Discussion} Overall Pilot \textit{a-aug} results in well-calibrated classifiers that exhibit superior performance to any of the baselines. Calibration is important in many real work classification problems, where asymmetric loss shifts the prediction from $\argmax_y p(y|x)$. Furthermore, it is compact: it does not require multiple forward pass samples, as in MC dropout, or training and storing a variety of models, as in ensembling, to produce accurate and calibrated predictions. That generalising data augmentation to activations gives a modelling benefit is not unreasonable. In a deep net, data and activations have similar interpretations. For instance, the activations of the penultimate layer are features on which one trains logistic regression, so augmenting in this space has the same flavour as doing data augmentation for logistic regression. Importantly, Pilot \textit{a-aug} outperforms our noise addition and substitution baselines meaning that our model performance cannot be solely attributed to noise injections in the fashion of \citet{Poole2014}. Note that we also include results in Appendix \ref{app:noisestop} for the additive noise baselines where we do not propagate gradients through inserted activation values, thus mimicking the exact conditions under which Pilot operates. These methods outperform their counterparts with propagated gradients, particularly for SVHN, which in itself is an interesting observation. Nevertheless, Pilot \textit{a-aug} also outperforms additive noise baselines with this design choice. One could view our model as performing a variety of \textit{transfer learning}: the samples from a larger generative model are used to train a smaller discriminative model (which is also the source of the training data for the larger model). In addition, our approach constitutes a form of \textit{experience replay} \citep{Mnih2015nature}: the generative model learns a posterior over the discriminative model's activations by amortising inference over previous training steps, not solely relying on the $a=f_\Psi(x)$ from the current training iteration. Pilot$_{\mathrm{MC}}$ models exhibit a small degradation in accuracy and NLL relative to their Pilot counterparts. Note that MC Dropout experiences a larger drop in accuracy relative to Dropout, and that Pilot$_{\mathrm{MC}}$ \textit{a-aug} still outperforms other uncertainty estimate models in terms of accuracy and NLL. Nevertheless, Pilot$_{\mathrm{MC}}$ models lead to a noticeable increase in ECE (save for CIFAR-10 MLPs), especially for CNNs. This could be due to the fixed variance of our decoder (set to 0.1) which may degrade model performance at test time. Our MC models, at the expense of calibration, can offer uncertainty estimates with little degradation to test-set accuracy and NLL. Refining the calibration of our MC models is an area for further research. We are pleased to present here \textit{Pilot}, an effective new regularisation strategy for deep nets, and we hope this stimulates further research into regularisers that are trained alongside their discriminator. \clearpage \bibliographystyle{humannat}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The study of mathematical foundations of learning and teaching has been very fruitful, revealing fundamental connections to various other areas of mathematics, such as geometry, topology, and combinatorics. Many key ideas and notions emerged from this study: Vapnik and Chervonenkis's VC-dimension \cite{zbMATH03391742}, Valiant's seminal definition of PAC learning \cite{zbMATH03943062}, Littlestone and Warmuth's sample compression schemes \cite{littleWarm}, Goldman and Kearns's teaching dimension~\cite{GoldmanK95}, recursive teaching dimension (RT-dimension, for short)\cite{zbMATH06253884,DoliwaSZ10,SameiSYZ14} and more. While it is known that some of these measures are tightly linked, the exact relationship between them is still not well understood. In particular, it is a long standing question whether the VC-dimension can be used to give a universal bound on the size of sample compression schemes, or on the RT-dimension In this work, we make progress on these two questions. First, we prove that the RT-dimension of a boolean concept class $C$ having VC-dimension $d$ is upper bounded by\footnote{In this text $O(f)$ means at most $\alpha f + \beta$ for $\alpha,\beta >0$ constants.} $O(d 2^d \log \log |C|)$. Secondly, we give a sample compression scheme of size $O(d 2^d \log \log |C|)$ that uses additional information. {Both results were subsequently improved to bounds that are independent of the size of the concept class $C$~\cite{DBLP:journals/eccc/MoranY15,DBLP:journals/eccc/ChenCT16}} {Our proofs are based on a similar technique of recursively applying Haussler's Packing Lemma on the dual class. This similarity provides another example of the informal connection between sample compression schemes and RT-dimension. This connection also appears in other works that study their relationship with the VC-dimension~\cite{DoliwaSZ10,DBLP:journals/eccc/MoranY15,DBLP:journals/eccc/ChenCT16}.} \subsection{VC-dimension}\label{sec:vc} \paragraph{{VC-dimension and size.}} A concept class over the universe $X$ is a set $C\subseteq\{0,1\}^X$. When $X$ is finite, we denote $|X|$ by $n(C)$. The VC-dimension of $C$, denoted $\text{VC}(C)$, is the maximum size of a shattered subset of $X$, where a set $Y \subseteq X$ is shattered if for every $Z \subseteq Y$ there is $c \in C$ so that $c(x)=1$ for all $x \in Z$ and $c(x)=0$ for all $x \in Y-Z$. The most basic result concerning VC-dimension is the Sauer-Shelah-Perles Lemma, that upper bounds $|C|$ in terms of $n(C)$ and $\text{VC}(C)$. It has been independently proved several times, e.g.\ in \cite{zbMATH03392460}. \begin{theorem}[Sauer-Shelah-Perles]\label{thm:Sauer} Let $C$ be a boolean concept class with VC-dimension $d$. Then, $$|C| \leq \sum_{k=0}^{d}{n(C) \choose k}.$$ In particular, if $d\geq 2$ then $|C|\leq n(C)^d$ \end{theorem} \paragraph{{VC-dimension and PAC learning.}} The VC-dimension is one of the most basic complexity measures for concept classes. It is perhaps mostly known in the context of the PAC learning model. PAC learning was introduced in Valiant's seminal work \cite{zbMATH03943062} as a theoretical model for learning from random examples drawn from an unknown distribution {(see the book \cite{KearnsVazirani94} for more details)}. A fundamental and well-known result of Blumer, Ehrenfeucht, Haussler, and Warmuth~\cite{zbMATH04143473}, which is based on an earlier work of Vapnik and Chervonenkis~\cite{zbMATH03391742}, states that {PAC learning sample complexity is equivalent to} VC-dimension. The proof of this theorem uses Theorem~\ref{thm:Sauer} and an argument commonly known as double sampling (see Section~\ref{sec:DoubleSampling} in the appendix for a short and self contained description of this well known argument). \begin{theorem}[\cite{zbMATH03391742},\cite{zbMATH04143473}] \label{thm:PAC} Let $X$ be a set and $C \subseteq \{0,1\}^X$ be a concept class of VC-dimension $d$. Let $\mu$ be a distribution over $X$. Let $\epsilon,\delta >0$ and $m$ an integer satisfying $2 (2m+1)^{d} (1-\epsilon/4)^{m} < \delta$. Let $c\in C$ and $Y = (x_1,\dots,x_m)$ be a multiset of $m$ independent samples from $\mu$. Then, the probability that there is $c' \in C$ so that $c|_Y = c'|_Y$ but $\mu(\{x : c(x) \neq c'(x)\}) > \epsilon$ is at most $\delta$. \end{theorem} \paragraph{{VC-dimension and the metric structure.}} Another fundamental result in this area is Haussler's \cite{zbMATH00734534} description of the metric structure of concept classes with low VC-dimension (see also the work of Dudley \cite{zbMATH03628097}). Roughly, it says that a concept class $C$ of VC-dimension $d$, when thought of as an $L_1$ metric space, behaves like a $d$ dimensional space in the sense that the size of an $\epsilon$-separated set in $C$ is at most $(1/\epsilon)^d$. More formally, every probability distribution $\mu$ on $X$ induces the (pseudo) metric $$\mathsf{dist}_\mu(c,c') = \mu( \{x : c(x) \neq c'(x)\})$$ on $C$. A set $S \subseteq C$ is called $\epsilon$-separated with respect to $\mu$ if for every two concepts $c \neq c'$ in $S$ we have $\mathsf{dist}_\mu(c,c') > \epsilon$. A set $A = A_\mu(C,\epsilon) \subseteq C$ is called an {\em $\epsilon$-approximating set}\footnote{In metric spaces such a set is called an $\epsilon$-net, however in learning theory and combinatorial geometry the term $\epsilon$-net has a different meaning, so we use $\epsilon$-approximating instead.} for $C$ with respect to $\mu$ if it is a maximal $\epsilon$-separated set with respect to $\mu$. The maximality of $A$ implies that for every $c\in C$ there is some rounding $r = r(c,\mu,C,\epsilon)$ in $A$ so that $r$ is a good approximation to $c$, that is, $\mathsf{dist}_\mu(c,r) \leq \epsilon$. We call $r$ a rounding of $c$ in $A$. An approximating set can be thought of as a metric approximation of the possibly complicated concept class $C$, and for many practical purposes it is a good enough substitute for $C$. Haussler proved that there are always small approximating sets. \begin{theorem}[Haussler] \label{thm:Haussler} Let $C \subseteq \{0,1\}^X$ be a concept class with VC-dimension $d$. Let $\mu$ be a distribution on $X$. Let $\epsilon\in (0,1]$. If $S$ is $\epsilon$-separated with respect to $\mu$ then $$|S| \leq e (d+1) \left(\frac{2e}{\epsilon}\right)^d \leq \left(\frac{4e^2}{\epsilon}\right)^d.$$ \end{theorem} \begin{proof}[A proof of a weaker statement] For {$m = 2 \log(|S|)/\epsilon$}, let $x_1,\ldots,x_m$ be independent samples from $\mu$. For every $c \neq c'$ in $S$, $$\Pr_{\mu^m} \left( \forall i \in [m] \ \ c(x_i) = c'(x_i) \right) < (1-\epsilon)^m \leq e^{-m \epsilon} \leq 1/|S|^2.$$ The union bound implies that there is a choice of $Y \subseteq X$ of size $|Y| \leq m$ so that $|S|_Y| = |S|$. Theorem~\ref{thm:Sauer} implies $|S| \leq (|Y|+1)^d$. Thus, $|S| < \left( 30 d \log(2 d/\epsilon) / \epsilon \right)^d$. \end{proof} \subsection{Teaching} Imagine a teacher that helps a student to learn a concept $c$ by picking insightful examples. The concept $c$ is known only to the teacher, but $c$ belongs to a class of concepts $C$ known to both the teacher and the student. The teacher carefully chooses a set of examples that is tailored for $c$, and then provides these examples to the student. Now, the student should be able to recover $c$ from these examples. A central issue that is addressed in the design of mathematical teaching models is ``collusions.'' Roughly speaking, a collusion occurs when the teacher and the student agree in advance on some unnatural encoding of information about $c$ using the bit description of the chosen examples, instead of using attributes that separate $c$ from other concepts. Many mathematical models for teaching were suggested: Shinohara and Miyano~\cite{ShinoharaM90}, Jackson and Tomkins~\cite{JacksonT92}, Goldman, Rivest and Schapire~\cite{GoldmanRS93}, Goldman and Kearns~\cite{GoldmanK95}, Goldman and Mathias~\cite{GoldmanM96} Angluin and Krikis~\cite{AngluinK03}, Balbach~\cite{Balbach2007}, and Kobayashi and Shinohara~\cite {KobayashiS09}. We now discuss some of these models in more detail. \paragraph{{Teaching sets.}} The first mathematical models for teaching~\cite{GoldmanK95,ShinoharaM90,AnthonyBCS92} handle collusions in a fairly restrictive way, by requiring that the teacher provides a set of examples $Y$ that uniquely identifies $c$. Formally, this is captured by the notion of a teaching set, which was independently introduced by Goldman and Kearns~\cite{GoldmanK95}, Shinohara and Miyano~\cite{ShinoharaM90} and Anthony et al.~\cite{AnthonyBCS92}. A set $Y\subseteq X$ is a teaching set for $c$ in $C$ if for all $c' \neq c$ in $C$, we have $c'|_Y \neq c|_Y$. The teaching complexity in these models is captured by the hardest concept to teach, i.e., $\max_{c \in C}\min\{ |Y| \,:\, Y \text{ is a teaching set for } c \text{ in } C \}$. Teaching sets also appear in other areas of learning theory: Hanneke~\cite{Hanneke07} used it in his study of the label complexity in active learning, and the authors of~\cite{WigdersonY12} used variants of it to design efficient algorithms for learning distributions using imperfect data. Defining the teaching complexity using the hardest concept is often too restrictive. Consider for example the concept class consisting of all singletons and the empty set over a domain $X$ of size $n$. Its teaching complexity in these models is $n$, since the only teaching set for the empty set is $X$. This is a fairly simple concept class that has the maximum possible complexity. \paragraph{{Recursive teaching dimension.}} Goldman and Mathias~\cite{GoldmanM96} and Angluin and Krikis~\cite{AngluinK03} therefore suggested less restrictive teaching models, and more efficient teaching schemes were indeed discovered in these models. One approach, studied by Zilles et al.~\cite{zbMATH06253884}, Doliwa et al.~\cite{DoliwaSZ10}, and Samei et al.~\cite{SameiSYZ14}, uses a natural hierarchy on the concept class $C$ which is defined as follows. The first layer in the hierarchy consists of all concepts whose teaching set has minimal size. Then, these concepts are removed and the second layer consists of all concepts whose teaching set with respect to the remaining concepts has minimal size. Then, these concepts are removed and so on, until all concepts are removed. The maximum size of a set that is chosen in this process is called the {\it recursive teaching (RT)} dimension. {One way of thinking about this model is that the teaching process satisfies an Occam's razor-type rule of preferring simpler concepts.} For example, the concept class consisting of singletons and the empty set, which was considered earlier, has recursive teaching dimension $1$: The first layer in the hierarchy consists of all singletons, which have teaching sets of size $1$. Once all singletons are removed, we are left with a concept class of size $1$, the concept class $\{\emptyset\}$, and in it the empty set has a teaching set of size $0$. A similar notion to RT-dimension was independently suggested in~\cite{WigdersonY12} under the terminology of partial IDs. There the focus was on getting a simultaneous upper bound on the size of the sets, as well as the number of layers in the recursion, and it was shown that for any concept class $C$ both can be made at most $\log|C|$. Motivation for this study comes from the population recovery learning problem defined in~\cite{DRWY12}. \paragraph{{Previous results.}} Doliwa et al.~\cite{DoliwaSZ10} and Zilles et al.~\cite{zbMATH06253884} asked whether small VC-dimension implies small recursive teaching dimension. An equivalent question was asked 10 years earlier by Kuhlmann~\cite{Kuhlmann99}. Since the VC-dimension does not increase when concepts are removed from the class, this question is equivalent to asking whether every class with small VC-dimension has some concept in it with a small teaching set. Given the semantics of the recursive teaching dimension and the VC-dimension, an interpretation of this question is whether exact teaching is not much harder than approximate learning (i.e., PAC learning). For infinite classes the answer to this question is negative. There is an infinite concept class with VC-dimension $1$ so that every concept in it does not have a finite teaching set. An example for such a class is $C\subseteq\{0,1\}^\mathbb{Q}$ defined as $C=\{c_q:q\in\mathbb{Q}\}$ where $c_q$ is the indicator function of all rational numbers that are smaller than $q$. The VC-dimension of $C$ is $1$, but every teaching set for some $c_q \in C$ must contain a sequence of rationals that converges to $q$. For finite classes this question is open. However, in some special cases it is known that the answer is affirmative. In~\cite{Kuhlmann99} it is shown that if $C$ has VC-dimension $1$, then its recursive teaching dimension is also $1$. It is known that if $C$ is a maximum\footnote{That is, $C$ satisfies Sauer-Shelah-Perles Lemma with equality.} class then its recursive teaching dimension is equal to its VC-dimension~\cite{DoliwaSZ10,DBLP:journals/jmlr/RubinsteinR12}. Other families of concept classes for which the recursive teaching dimension is at most the VC-dimension are discussed in~\cite{DoliwaSZ10}. In the other direction, \cite{Kuhlmann99} provided examples of concept classes with VC-dimension $d$ and recursive teaching dimension at least $\frac{3}{2}d$. The only bound on the recursive teaching dimension for general classes was observed by both~\cite{DoliwaSZ10,WigdersonY12}. It states that the recursive teaching dimension of $C$ is at most $\log|C|$. This bound follows from a simple halving argument which shows that for all $C$ there exists some $c\in C$ with a teaching set of size $\log|C|$. \paragraph{{Our contribution.}} Our first main result is the following general bound, which exponentially improves over the $\log|C|$ bound when the VC-dimension is small (the proof is given in Section~\ref{sec:RTD}). \begin{theorem}[RT-dimension] \label{thm:RTD} Let $C$ be a concept class of VC-dimension $d$. Then there exists $c\in C$ with a teaching set of size at most $$d2^{d+3}(\log(4e^2) + \log\log|C|) .$$ \end{theorem} It follows that the recursive teaching dimension of concept classes of VC-dimension $d$ is at most $d2^{d+3}(\log(4e^2) + \log\log|C|)$ as well. {Subsequent to this paper, Chen, Cheng, and Tang~\cite{DBLP:journals/eccc/ChenCT16} proved that the RT-dimension is at most $\exp(d)$. Their proof is based on ideas from this work, in particular they follow and improve the argument from the proof of Lemma~\ref{lem:3,6}.} \subsection{{Sample} compression schemes} \label{sec:intCompSch} A fundamental and well known statement in learning theory says that if the VC-dimension of a concept class $C$ is small, then any consistent\footnote{An algorithm that outputs an hypothesis in $C$ that is consistent with the input examples.} algorithm successfully PAC learns concepts from $C$ after seeing just a few labelled examples~\cite{zbMATH03391742,BlumerEHW87}. In practice, however, a major challenge one has to face when designing a learning algorithm is the construction of an hypothesis that is consistent with the examples seen. Many learning algorithms share the property that the output hypothesis is constructed using a small subset of the examples. For example, in support vector machines, only the set of support vectors is needed to construct the separating hyperplane \cite{Cristianini00a}. {Sample compression schemes provide a formal meaning for this algorithmic property.} Before giving the formal definition of compression schemes, let us consider a simple illustrative example. Assume we are interested in learning the concept class of intervals on the real line. We get a collection of 100 samples of the form $(x,c_I(x))$ where $x \in \mathbb R$ and $c_I(x) \in \{0,1\}$ indicates\footnote{That is $c_I(x)=1$ iff $x \in I$.} if $x$ is in the interval $I \subset \mathbb R$. Can we remember just a few of the samples in a way that allows to recover all the 100 samples? In this case, the answer is affirmative and in fact it is easy to do so. Just remember two locations, those of the left most $1$ and of the right most $1$ (if there are no $1$s, just remember one of the $0$s). From this data, we can reconstruct the value of $c_I$ on all the other 100 samples. \paragraph{The formal definition.} Littlestone and Warmuth~\cite{littleWarm} formally defined sample compression schemes as follows. Let $C \subseteq \{0,1\}^X$ with $|X|=n$. Let $$L_C(k_1,k_2) = \{(Y,y) : Y \subseteq X , \ k_1 \leq |Y| \leq k_2, \ y \in C|_Y\},$$ the set of labelled samples from $C$, of sizes between $k_1$ and $k_2$. A $k$-sample compression scheme for $C$ with information $Q$, consists of two maps $\kappa,\rho$ for which the following hold: \begin{description} \item[(${\kappa}$)] The {\em compression map} $$\kappa: L_C(1,n) \to L_C(0,k) \times Q$$ takes $(Y,y)$ to $((Z,z),q)$ with $Z \subseteq Y$ and $y|_Z = z$. \item[($\rho$)] The {\em reconstruction map} $$\rho : L_C(0,k) \times Q \to \{0,1\}^X$$ is so that for all $(Y,y)$ in $L_C(1,n)$, $$\rho(\kappa(Y,y))|_Y = y.$$ {The {\em size} of the scheme is $k+\log|Q|$.} \end{description} Intuitively, the compression map takes a long list of samples $(Y,y)$ and encodes it as a short sub-list of samples $(Z,z)$ together with some small amount of side information $q\in Q$, which helps in the reconstruction phase. The reconstruction takes a short list of samples $(Z,z)$ and decodes it using the side information $q$, {without any knowledge of $(Y,y)$}, to an hypothesis in a way that essentially inverts the compression. {Specifically, the following property must always hold:} if the compression of $(Y,c|_Y)$ is the same as that of $(Y',c'|_{Y'})$ then $c|_{Y\cap Y'} = c'|_{Y\cap Y'}$. A different perspective of the side information is as a list decoding in which the small set of labelled examples $(Z,z)$ is mapped to the set of hypothesis $\{ \rho((Z,z),q) : q \in Q\}$, one of which is correct. We note that it is not necessarily the case that the reconstructed hypothesis belongs to the original class $C$. All it has to satisfy is that for any $(Y,y)\in L_C(1,n)$ such that $h=\rho(\kappa(Y,y))$ we have that $h|_Y = y$. Thus, $h$ has to be consistent only on the sampled coordinates that were compressed and not elsewhere. {Let us consider a simple example of a sample compression scheme, to help digest the definition. Let $C$ be a concept class and let $r$ be the rank over, say, $\mathbb R$ of the matrix whose rows correspond to the concepts in $C$. We claim that there is an $r$-sample compression scheme for $C$ with no side information. Indeed, for any $Y \subseteq X$, let $Z_Y$ be a set of at most $r$ columns that span the columns of the matrix $C|_Y$. Given a sample $(Y,y)$ compress it to $\kappa(Y,y) = (Z_Y,z)$ for $z = y|_{Z_Y}$. The reconstruction maps $\rho$ takes $(Z,z)$ to any concept $h \in C$ so that $h|_Z = z$. This sample compression scheme works since if $(Z,z) =\kappa(Y,y)$ then every two different rows in $C|_Y$ must disagree on $Z$. } \paragraph{Connections to learning.} {Sample compression schemes are known to yield practical learning algorithms (see e.g.\ \cite{DBLP:journals/jmlr/MarchandS02}), and allow learning for multi labelled concept classes~\cite{DBLP:conf/colt/SameiSYZ14}. They} can also be interpreted as a formal manifestation of Occam's razor. Occam's razor is a philosophical principle attributed to William of Ockham from the late middle ages. It says that in the quest for an explanation or an hypothesis, one should prefer the simplest one which is consistent with the data. There are many works on the role of Occam's razor in learning theory, a partial list includes \cite{littleWarm,BlumerEHW87,DBLP:conf/colt/Floyd89,DBLP:journals/iandc/QuinlanR89,DBLP:journals/jcss/HelmboldW95,DBLP:journals/ml/FloydW95,DBLP:journals/datamine/Domingos99}. In the context of sample compression schemes, simplicity is captured by the size of the compression scheme. Interestingly, this manifestation of Occam's razor is provably useful \cite{littleWarm}: Sample compression schemes imply {PAC} learnability. \begin{theorem}[Littlestone-Warmuth] \label{thm:LWPAC} Let $C \subseteq \{0,1\}^Xß$, and $c \in C$. Let $\mu$ be a distribution on $X$, and $x_1,\ldots,x_m$ be $m$ independent samples from $\mu$. Let $Y = (x_1,\ldots,x_m)$ and $y = c|_Y$. Let $\kappa,\rho$ be a $k$-sample compression scheme for $C$ with additional information $Q$. Let $h = \rho(\kappa(Y,y))$. Then, $$\Pr_{\mu^m}( \mathsf{dist}_\mu(h,c) > \epsilon) <|Q| \sum_{j=0}^{k} {m \choose j} (1-\epsilon)^{m-j}.$$ \end{theorem} \begin{proof}[Proof sketch.] There are $\sum_{j=0}^{k} {m \choose j}$ subsets $T$ of $[m]$ of size at most $k$. There are $|Q|$ choices for $q \in Q$. Each choice of $T,q$ yields a function $h_{T,q} = \rho((T,y_T),q)$ that is measurable with respect to $x_T = (x_t : t \in T)$. The function $h$ is one of the functions in $\{h_{T,q} : |T|\leq k,q\in Q\}$. For each $h_{T,q}$, the coordinates in $[m] - T$ are independent, and so if $\mathsf{dist}_\mu(h_{T,q},c) > \epsilon$ then the probability that all these $m-|T|$ samples agree with $c$ is less than $(1-\epsilon)^{m-|T|}$. The union bound completes the proof. \end{proof} { The sample complexity of PAC learning is essentially the VC-dimension. Thus, from Theorem~\ref{thm:LWPAC} we expect the VC-dimension to bound from below the size of sample compression schemes. Indeed, \cite{DBLP:journals/ml/FloydW95} proved that there are concept classes of VC-dimension $d$ for which any sample compression scheme has size at least $d$.} { This is part of the motivation for the following basic question that was asked by Littlestone and Warmuth \cite{littleWarm} nearly 30 years ago: Does a concept class of VC-dimension $d$ have a sample compression scheme of size depending only on $d$ (and not on the universe size)?} In fact, unlike the VC-dimension, the definition of sample compression schemes as well as the fact that they imply PAC learnability naturally generalizes to multi-class classification problems~\cite{DBLP:conf/colt/SameiSYZ14}. {Thus, Littlestone and Warmuth's question above can be seen as the boolean instance of a much broader question: Is it true that the size of an optimal sample compression scheme for a given concept class (not necessarily binary-labeled) is the sample complexity of PAC learning of this class? } \paragraph{{Previous constructions.}} { Floyd~\cite{DBLP:conf/colt/Floyd89} and Floyd and Warmuth~\cite{DBLP:journals/ml/FloydW95} constructed sample compression schemes of size $\log |C|$. The construction in \cite{DBLP:journals/ml/FloydW95} uses a transformation that converts certain online learning algorithms to compression schemes.} Helmbold and Warmuth~\cite{DBLP:journals/jcss/HelmboldW95} and Freund~\cite{DBLP:journals/iandc/Freund95} showed how to compress a sample of size $m$ to a sample of size $O(\log(m))$ using some side information for classes of constant VC-dimension (the implicit constant in the $O(\cdot)$ depends on the VC-dimension). {In a long line of works, several interesting compression schemes for special cases were constructed. A partial list includes Helmbold et al.\ \cite{DBLP:journals/siamcomp/HelmboldSW92}, Floyd and Warmuth \cite{DBLP:journals/ml/FloydW95}, Ben-David and Litman \cite{DBLP:journals/dam/Ben-DavidL98}, Chernikov and Simon \cite{chernikovS}, Kuzmin and Warmuth \cite{DBLP:journals/jmlr/KuzminW07}, Rubinstein et al.\ \cite{DBLP:journals/jcss/RubinsteinBR09}, Rubinstein and Rubinstein \cite{DBLP:journals/jmlr/RubinsteinR12}, Livni and Simon \cite{DBLP:conf/colt/LivniS13} and more. These works provided connections between compression schemes and geometry, topology and model theory. } \paragraph{{Our contribution.}} Here we make the first quantitive progress on this question, since the work of Floyd~\cite{DBLP:conf/colt/Floyd89}. The following theorem shows that low VC-dimension implies the existence of relatively efficient compression schemes. The constructive proof is provided in Section~\ref{sec:compSch}. \begin{theorem}[Sample compression scheme] \label{thm:CompSch} If $C$ has VC-dimension $d$ then it has a $k$-sample compression scheme with additional information $Q$ where $k =O(d 2^d \log\log|C|)$ and $\log|Q| \leq O(k \log(k) )$. \end{theorem} {Subsequent to this paper, the first and the last authors improved this bound~\cite{DBLP:journals/eccc/MoranY15}, showing that any concept class of VC-dimension $d$ has a sample compression scheme of size at most $\exp(d)$. The techniques used in~\cite{DBLP:journals/eccc/MoranY15} differ from the techniques we use in this paper. {In particular, our scheme relies on Haussler's Packing Lemma (Theorem~\ref{thm:Haussler}) and recursion, while the scheme in~\cite{DBLP:journals/eccc/MoranY15} relies on von Neumann's minimax theorem~\cite{Neumann1928} and the $\epsilon$-approximation theorem~\cite{zbMATH03391742,DBLP:journals/dcg/HausslerW87}, which follow from the double-sampling argument of~\cite{zbMATH03391742}.} Thus, despite the fact that our scheme is weaker than the one in~\cite{DBLP:journals/eccc/MoranY15}, it provides a different angle on sample compression, which may be useful in further improving the exponential dependence on the VC-dimension to an optimal linear dependence, as conjectured by Floyd and Warmuth~\cite{DBLP:journals/ml/FloydW95,DBLP:conf/colt/Warmuth03}.} \subsection{Discussion and open problems} This work provides relatively efficient constructions of teaching sets and sample compression schemes. {However, the exact relationship between VC-dimension, sample compression scheme size, and the RT-dimension remains unknown.} Is there always a concept with a teaching set of size depending only on the VC-dimension? (The interesting case is finite concept classes, as mentioned above.) Are there always sample compression schemes of size linear (or even polynomial) in the VC-dimension? The simplest case that is still open is VC-dimension $2$. One can refine this case even further. VC-dimension $2$ means that on any three coordinates $x,y,z\in X$, the projection $C|_{\{x,y,z\}}$ has at most $7$ patterns. A more restricted family of classes is $(3,6)$ concept classes, for which on any three coordinates there are at most $6$ patterns. We can show that the recursive teaching dimension of $(3,6)$ classes is at most $3$. \begin{lemma}\label{lem:3,6} Let $C$ be a finite $(3,6)$ concept class. Then there exists some $c\in C$ with a teaching set of size at most $3$. \end{lemma} \begin{proof} Assume that $C\subseteq \{0,1\}^{X}$ with $X = [n]$. If $C$ has VC-dimension $1$ then there exists $c\in C$ with a teaching set of size $1$ (see~\cite{Kuhlmann99,AMY14}). Therefore, assume that the VC-dimension of $C$ is $2$. Every shattered pair $\{x,x'\}\subseteq X$ partitions $C$ to $4$ nonempty sets: $$C^{x,x'}_{b,b'} = \{c\in C: c(x)=b,c(x')=b'\},$$ for $b,b'\in\{0,1\}$. Pick a shattered pair $\{x_*,x'_*\}$ and $b_*,b'_*$ for which the size of $C^{x_*,x'_*}_{b_*,b'_*}$ is minimal. Without loss of generality assume that $\{x_*,x'_*\}=\{1,2\}$ and that $b_*=b'_*=0$. To simplify notation, we denote $C^{1,2}_{b,b'}$ simply by $C_{b,b'}$. We prove below that $C_{0,0}$ has VC-dimension $1$. This completes the proof since then there is some $c\in C_{0,0}$ and some $x \in [n] \setminus \{1,2\}$ such that $\{x\}$ is a teaching set for $c$ in $C_{0,0}$. Therefore, $\{1,2,x\}$ is a teaching set for $c$ in $C$. First, a crucial observation is that since $C$ is a $(3,6)$ class, no pair $\{x,x'\} \subseteq [n]\setminus\{1,2\}$ is shattered by both $C_{0,0}$ and $C\setminus C_{0,0}$. Indeed, if $C\setminus C_{0,0}$ shatters $\{x,x'\}$ then either $C_{1,0}\cup C_{1,1}$ or $C_{0,1}\cup C_{1,1}$ has at least $3$ patterns on $\{x,x'\}$. If in addition $C_{0,0}$ shatters $\{x,x'\}$ then $C$ has at least $7$ patterns on $\{1,x,x'\}$ or $\{2,x,x'\}$, contradicting the assumption that $C$ is a $(3,6)$ class. Now, assume towards contradiction that $C_{0,0}$ shatters $\{x,x'\}$. Thus, $\{x,x'\}$ is not shattered by $C\setminus C_{0,0}$ which means that there is some pattern $p \in \{0,1\}^{\{x,x'\}}$ so that $p \not \in (C\setminus C_{0,0})|_{\{x,x'\}}$. This implies that $C^{x,x'}_{p(x),p(x')}$ is a proper subset of $C_{0,0}$, contradicting the minimality of $C_{0,0}$. \end{proof} \section{The dual class} We shall repeatedly use the dual concept class to $C$ and its properties. The dual concept class $C^*\subseteq\{0,1\}^C$ of $C$ is defined by $C^*=\{c_x:x\in X\}$, where $c_x:C\rightarrow\{0,1\}$ is the map so that $c_x(c)=1$ iff $c(x)=1$. If we think of $C$ as a binary matrix whose rows are the concepts in $C$, then $C^*$ corresponds to the distinct rows of the transposed matrix (so it may be that $|C^*| < |n(C)|$). We use the following well known property (see \cite{Assouad}). \begin{claim}[Assouad] \label{clm:assou} If the VC-dimension of $C$ is $d$ then the VC-dimension of $C^*$ is at most $2^{d+1}$. \end{claim} \begin{proof}[Proof sketch] If the VC-dimension of $C^*$ is $2^{d+1}$ then in the matrix representing $C$ there are $2^{d+1}$ rows that are shattered, and in these rows there are $d+1$ columns that are shattered. \end{proof} We also define the dual approximating set (recall the definition of $A_\mu(C,\epsilon)$ from Section~\ref{sec:vc}). Denote by $A^*(C,\epsilon)$ the set $A_U(C^*,\epsilon)$, where $U$ is the uniform distribution on $C^*$. \section{Teaching sets} \label{sec:RTD} In this section we prove Theorem~\ref{thm:RTD}. The high level idea is to use Theorem~\ref{thm:Haussler} and Claim~\ref{clm:assou} to identify two distinct $x,x'$ in $X$ so that the set of $c \in C$ so that $c(x) \neq c(x')$ is much smaller than $|C|$, add $x,x'$ to the teaching set, and continue inductively. \begin{proof}[Proof of Theorem~\ref{thm:RTD}] For classes with VC-dimension $1$ there is $c\in C$ with a teaching set of size $1$, see e.g.\ \cite{DoliwaSZ10}. We may therefore assume that $d\geq 2$. We show that if $|C| > (4e^2)^{d\cdot 2^{d+2}}$, then there exist $x \neq x'$ in $X$ such that \begin{equation}\label{eq:RTDstep} 0< |\{c\in C : c(x)=0 \text{ and } c(x')=1\} | \leq |C|^{1-\frac{1}{d2^{d+2}}}. \end{equation} From this the theorem follows, since if we iteratively add such $x,x'$ to the teaching set {and restrict ourselves to $\{c\in C : c(x)=0 \text{ and } c(x')=1\}$, then after at most $d2^{d+2}\log\log|C|$ iterations, the size of the remaining class} is reduced to less than $(4e^2)^{d\cdot2^{d+2}}$. At this point we can identify a unique concept by adding at most $\log((4e^2)^{d\cdot 2^{d+2}})$ additional indices to the teaching set, using the halving argument of \cite{DoliwaSZ10,WigdersonY12}. This gives a teaching set of size at most $2d2^{d+2}\log\log|C| + d2^{d+2}\log(4e^2)$ for some $c\in C$, as required. In order to prove~\eqref{eq:RTDstep}, it is enough to show that there exist $c_x \neq c_{y}$ in $C^*$ such that the normalized hamming distance between $c_x,c_{y}$ is at most $\epsilon := |C|^{-\frac{1}{d2^{d+2}}}$. Assume towards contradiction that the distance between every two concepts in $C^*$ is more than $\epsilon$, and assume without loss of generality that $n(C)=|C^*|$ (that is, all the columns in $C$ are distinct). By Claim~\ref{clm:assou}, the VC-dimension of $C^*$ is at most $2^{d+1}$. Theorem~\ref{thm:Haussler} thus implies that \begin{align} \label{eqn:RisSmallTeach} n(C)= |C^*| \leq \left(\frac{4e^2}{\epsilon}\right)^{2^{d+1}} < \left(\frac{1}{\epsilon}\right)^{2^{d+2}}, \end{align} where the last inequality follows from the definition of $\epsilon$ and the assumption on the size of $C$. Therefore, we arrive at the following contradiction: \begin{align*} |C| & \leq (n(C))^d \tag{by Theorem~\ref{thm:Sauer}, since $VC(C)\geq 2$ } \\ & < \left(\frac{1}{\epsilon}\right)^{d\cdot2^{d+2}}\tag{by Equation~\ref{eqn:RisSmallTeach} above}\\ & = |C| . \tag{by definition of $\epsilon$} \end{align*} \end{proof} \section{Sample compression schemes} \label{sec:compSch} In this section we prove Theorem~\ref{thm:CompSch}. The theorem statement and the definition of sample compression schemes appear in Section~\ref{sec:intCompSch}. While the details are somewhat involved, due to the complexity of the definitions, the high level idea may be (somewhat simplistically) summarized as follows. For an appropriate choice of $\epsilon$, we pick an $\epsilon$-approximating set $A^*$ of the dual class $C^*$. It is helpful to think of $A^*$ as a subset of the domain $X$. Now, either $A^*$ faithfully represents the sample $(Y,y)$ or it does not (we do not formally define ``faithfully represents'' here). We identify the following win-win situation: In both cases, we can reduce the compression task to that in a much smaller set of concepts of size at most $\epsilon |C| \approx |C|^{1-2^{-d}}$, similarly to as for teaching sets in Section~\ref{sec:RTD}. This yields the same double-logarithmic behavior. In the case that $A^*$ faithfully represents $(Y,y)$, Case \ref{kappa:2} below, we recursively compress in the small class $C|_{A^*}$. In the unfaithful case, Case \ref{kappa:1} below, we recursively compress in a (small) set of concepts for which disagreement occurs on some point of $Y$, just as in Section~\ref{sec:RTD}. In both cases, we have to extend the recursive solution, and the cost is adding one sample point to the compressed sample (and some small amount of additional information by which we encode whether Case~\ref{kappa:1} or~\ref{kappa:2} occurred). The compression we describe is inductively defined, and has the following additional structure. Let $((Z,z),q)$ be in the image of $\kappa$. The information $q$ is of the form $q=(f,T)$, where $T \geq 0$ is an integer so that $|Z| \leq T+O(d \cdot 2^d)$, and $f : \{0,1,\ldots,T\} \to Z$ is a partial one-to-one function\footnote{That is, it is defined over a subset of $\{0,1,\ldots,T\}$ and it is injective on its domain.}. The rest of this section is organized as follows. In Section~\ref{sec:kappa} we define the compression map $\kappa$. In Section~\ref{sec:rho} we give the reconstruction map $\rho$. The proof of correctness is given in Section~\ref{sec:correct} and the upper bound on the size of the compression is calculated in Section~\ref{sec:size}. \subsection{Compression map: defining $\kappa$}\label{sec:kappa} Let $C$ be a concept class. The compression map is defined by induction on $n=n(C)$. For simplicity of notation, let $d = VC(C)+2$. In what follows we shall routinely use $A^*(C,\epsilon)$. There are several $\epsilon$-approximating sets and so we would like to fix one of them, say, the one obtained by greedily adding columns to $A^*(C,\epsilon)$ starting from the first\footnote{We shall assume w.l.o.g. that there is some well known order on $X$.} column (recall that we can think of $C$ as a matrix whose rows correspond to concepts in $C$ and whose columns are concepts in the dual class $C^*$). To keep notation simple, we shall use $A^*(C,\epsilon)$ to denote both the approximating set in $C^*$ and the subset of $X$ composed of columns that give rise to $A^*(C,\epsilon)$. This is a slight abuse of notation but the relevant meaning will always be clear from the context. \paragraph{Induction base.} The base of the induction applies to all concept classes $C$ so that $|C| \leq (4e^2)^{d\cdot2^d + 1}$. In this case, we use the compression scheme of Floyd and Warmuth~\cite{DBLP:conf/colt/Floyd89,DBLP:journals/ml/FloydW95} which has size $\log(|C|) = O(d \cdot 2^d)$. This compression scheme has no additional information. Therefore, to maintain the structure of our compression scheme we append to it redundant additional information by setting $T=0$ and $f$ to be empty. \paragraph{Induction step.} Let $C$ be so that $|C| > (4e^2)^{d\cdot 2^d + 1}$. Let $0 < \epsilon <1$ be so that \begin{align} \label{en:whatIsEps} \epsilon |C| = \left( \frac{1}{\epsilon} \right)^{d\cdot2^{d}} . \end{align} This choice balances the recursive size. By Claim~\ref{clm:assou}, the VC-dimension of $C^*$ is at most $2^{d-1}$ (recall that $d=VC(C)+2$). Theorem~\ref{thm:Haussler} thus implies that \begin{align} \label{eqn:RisSmall} |A^*(C,\epsilon)| \leq \left(\frac{4e^2}{\epsilon}\right)^{2^{d-1}} < \left(\frac{1}{\epsilon}\right)^{2^{d}} < n(C). \end{align} (Where the second inequality follows from the definition of $\epsilon$ and the assumption on the size of $C$ and the last inequality follows from the definition of $\epsilon$ and Theorem~\ref{thm:Sauer}). Let $(Y,y) \in L_C(1,n)$. Every $x \in X$ has a rounding\footnote{The choice of $r(x)$ also depends on $C,\epsilon$, but to simplify the notation we do not explicitly mention it.} $r(x)$ in $A^*(C,\epsilon)$. We distinguish between two cases: \begin{enumerate}[\bf{Case} 1:] \item \label{kappa:1} There exist $x\in Y$ and $c\in C$ such that $c|_Y=y$ and $c(r(x)) \neq c(x)$. This is the unfaithful case in which we {recurse} as in Section~\ref{sec:RTD}. Let \begin{align*} & C' = \{ c'|_{X-\{x,r(x)\}}:c'\in C, c'(x)=c(x),c'(r(x))=c(r(x)) \} ,\\ & Y' = Y-\{x,r(x)\},\\ & y' = y|_{Y'}. \end{align*} Apply recursively $\kappa$ on $C'$ and the sample $(Y',y')\in L_{C'}(1,n(C'))$. Let $((Z',z'),(f',T'))$ be the result of this compression. Output $((Z,z),(f,T))$ defined as\footnote{Remember that $f$ is a partial function.} \begin{align*} & Z=Z'\cup\{x\} , &\\ & z|_{Z'}=z' ,\ z(x)=y(x), &\\ & T=T'+1 , &\\ &f|_{\{0,\ldots,T-1\}}=f'|_{\{0,\ldots,T-1\}}, &\\ & f(T)=x \tag{$f$ is defined on $T$, marking that Case~\ref{kappa:1} occurred} \end{align*} \item \label{kappa:2} For all $x\in Y$ and $c\in C$ such that $c|_Y=y$, we have $c(x)= c(r(x))$. This is the faithful case, in which we compress by restricting $C$ to $A^*$. Consider $r(Y)=\{r(y'): y'\in Y\} \subseteq A^*(C,\epsilon)$. For each $x'\in r(Y)$, pick\footnote{The function $s$ can be thought of as the inverse of $r$. Since $r$ is not necessarily invertible we use a different notation than $r^{-1}$.} $s(x')\in Y$ to be an element such that $r(s(x'))=x'$. Let \begin{align*} & C' = C|_{A^*(C,\epsilon)},\\ & Y' = r(Y),\\ & y'(x') = y(s(x')) \ \forall x'\in Y' . \end{align*} By \eqref{eqn:RisSmall}, we know $|A^*(C,\epsilon)| < n(C)$. Therefore, we can recursively apply $\kappa$ on $C'$ and $(Y',y')\in L_{C'}(1,n(C'))$ and get $((Z',z'),(f',T'))$. Output $((Z,z),(f,T))$ defined as \begin{align*} & Z=\{s(x'):x'\in Z'\}, \\ & z(x) = z'(r(x)) \ \forall x\in Z , \tag{$r(x)\in Z'$}\\ & T=T'+1 , \\ & f=f'. \tag{$f$ is not defined on $T$, marking that Case~\ref{kappa:2} occurred} \end{align*} \end{enumerate} The following lemma summarizes two key properties of the compression scheme. The correctness of this lemma follows directly from the definitions of Cases~\ref{kappa:1} and~\ref{kappa:2} above. \begin{lemma} \label{property:compScheme} Let $(Y,y) \in L_C(1,n(C))$ and $((Z,z),(T,f))$ be the compression of $(Y,y)$ described above, where $T\geq 1$. The following properties hold: \begin{enumerate} \item \label{prop:defT} $f$ is defined on $T$ and $f(T)=x$ iff $x\in Y$ and there exists $c\in C$ such that $c|_Y=y$ and $c(r(x)) \neq c(x)$. \item \label{prop:notdefT} $f$ is not defined on $T$ iff for all $x\in Y$ and $c\in C$ such that $c|_Y=y$, it holds that $c(x)= c(r(x))$. \end{enumerate} \end{lemma} \subsection{Reconstruction map: defining $\rho$}\label{sec:rho} The reconstruction map is similarly defined by induction on $n(C)$. Let $C$ be a concept class and let $((Z,z),(f,T))$ be in the image\footnote{For $((Z,z),(f,T))$ not in the image of $\kappa$ we set $\rho((Z,z),(f,T))$ to be some arbitrary concept.} of $\kappa$ with respect to $C$. Let $\epsilon = \epsilon(C)$ be as in \eqref{en:whatIsEps}. \paragraph{Induction base.} The induction base here applies to the same classes like the induction base of the compression map. This is the only case where $T=0$, and we apply the reconstruction map of Floyd and Warmuth~\cite{DBLP:conf/colt/Floyd89,DBLP:journals/ml/FloydW95} \paragraph{Induction step.} Distinguish between two cases: \begin{enumerate}[\bf{Case} 1:] \item \label{rho:1} $f$ is defined on $T$. Let $x = f(T)$. Denote \begin{align*} & X' = X - \{x,r(x)\},\\ & C' = \{ c'|_{X'}:c'\in C, c'(x)=z(x),c'(r(x))=1-z(x)\} ,\\ & Z' = Z-\{x,r(x)\},\\ & z' = z|_{Z'},\\ & T' = T-1,\\ & f' = f|_{\{0,\ldots,T'\}}. \end{align*} Apply recursively $\rho$ on $C', ((Z',z'),(f',T'))$. Let $h'\in\{0,1\}^{X'}$ be the result. Output $h$ where \begin{align*} & h|_{X'}=h',\\ & h(x)=z(x),\\ & h(r(x))=1-z(x). \end{align*} \item \label{rho:2} $f$ is not defined on $T$. Consider $r(Z)=\{r(x): x\in Z\} \subseteq A^*(C,\epsilon)$. For each $x'\in r(Z)$, pick $s(x')\in Z$ to be an element such that $r(s(x'))=x'$. Let \begin{align*} & X' = A^*(C,\epsilon) , \\ & C' = C|_{X'},\\ & Z' = r(Z) ,\\ & z'(x') = z(s(x')) \ \forall x'\in Z',\\ & T' = T-1,\\ & f' = f|_{\{0,\ldots,T'\}}. \end{align*} Apply recursively $\rho$ on $C', ((Z',z'),(f',T'))$ and let $h'\in\{0,1\}^{X'}$ be the result. Output $h$ satisfying \begin{align*} h(x) = h'(r(x)) \ \forall x\in X. \end{align*} \end{enumerate} \subsection{Correctness}\label{sec:correct} The following lemma yields the correctness of the compression scheme. \begin{lemma} Let $C$ be a concept class, $(Y,y)\in L_C(1,n)$, $\kappa(Y,y) = ((Z,z),(f,T)) $ and $h=\rho(\kappa(Y,y))$. Then, \begin{enumerate} \item \label{enum:1} $Z\subseteq Y$ and $z|_Z=y|_Z$, and \item \label{enum:2} $h|_Y=y|_Y$. \end{enumerate} \end{lemma} \begin{proof} We proceed by induction on $n(C)$. In the base case, $|C| \leq (4e^2)^{d\cdot 2^d + 1}$ and the lemma follows from the correctness of Floyd and Warmuth's compression scheme (this is the only case in which $T=0$). In the induction step, assume $|C| > (4e^2)^{d\cdot 2^d + 1}$. We distinguish between two cases: \begin{enumerate}[\bf{Case} 1:] \item $f$ is defined on $T$. Let $x = f(T)$. This case corresponds to Case~\ref{kappa:1} in the definitions of $\kappa$ and Case~\ref{rho:1} in the definition of $\rho$. By Item~\ref{prop:defT} of Lemma~\ref{property:compScheme}, $x \in Y$ and there exists $c\in C$ and $x\in Y$ such that $c|_Y=y$ and $c(r(x)) \neq c(x)$. Let $C', (Y',y')$ be the class defined in Case~\ref{kappa:1} in the definition of $\kappa$. Since $n(C') < n(C)$, we know that $\kappa,\rho$ on $C'$ satisfy the induction hypothesis. Let \begin{align*} & ((Z',z'),(f',T')) = \kappa(C',(Y',y')), \\ & h' = \rho(C', ((Z',z'),(f',T'))) , \end{align*} be the resulting compression and reconstruction. Since we are in Case~\ref{kappa:1} in the definitions of $\kappa$ and Case~\ref{rho:1} in the definition of $\rho$, $((Z,z),(f,T))$ and $h$ have the following form: \begin{align*} & Z=Z'\cup\{x\} , \\ & z|_{Z'}=z' ,\ z(x)=y(x) , \\ & T=T'+1 , \\ &f|_{\{0,\ldots,T-1\}}=f'|_{\{0,\ldots,T-1\}}, &\\ & f(T)=x, \end{align*} and \begin{align*} & h|_{X-\{x,r(x)\}}=h',\\ & h(x)=z(x) = y(x) = c(x) ,\\ & h(r(x))=1-z(x) = 1-y(x) = 1-c(x) = c(r(x)) . \end{align*} Consider item \ref{enum:1} in the conclusion of the lemma. By the definition of $Y'$ and $x$, \begin{align*} Y'\cup\{x\} &\subseteq Y, \tag{by the definition of $Y'$}\\ Z' & \subseteq Y'. \tag{by the induction hypothesis} \end{align*} Therefore, $Z=Z'\cup\{x\}\subseteq Y$. Consider item \ref{enum:2} in the conclusion of the lemma. By construction and induction, $$h|_{Y\cap \{x,r(x)\}} = c|_{Y\cap \{x,r(x)\}} = y|_{Y \cap \{x,r(x)\}} \ \ \text{and} \ \ h|_{Y'} = h'|_{Y'} = y'.$$ Thus, $h|_Y = y$. \item $f$ is not defined on $T$. This corresponds to Case~\ref{kappa:2} in the definitions of $\kappa$ and Case~\ref{rho:2} in the definition of $\rho$. Let $C', (Y',y')$ be the result of Case~\ref{kappa:2} in the definition of $\kappa$. Since $n(C') < n(C)$, we know that $\kappa,\rho$ on $C'$ satisfy the induction hypothesis. Let \begin{align*} & ((Z',z'),(f',T')) = \kappa(C',(Y',y')), \\ & h' = \rho(C', ((Z',z'),(f',T'))), \\ & s:Y'\rightarrow Y, \end{align*} as defined in Case~\ref{kappa:2} in the definitions of $\kappa$ and Case~\ref{rho:2} in the definition of $\rho$. By construction, $((Z,z),(f,T))$ and $h$ have the following form: \begin{align*} & Z=\{s(x'):x'\in Z'\}, \\ & z(x) = z'(r(x)) \ \forall x\in Z, \\ & T=T'+1, \\ & f=f', \end{align*} and \begin{align*} & h(x) = h'(r(x)) \ \forall x\in X. \end{align*} \end{enumerate} Consider item \ref{enum:1} in the conclusion of the lemma. Let $x\in Z$. By the induction hypothesis, $Z'\subseteq Y'$. Thus, $x=s(x')$ for some $x'\in Z'\subseteq Y'$. Since the range of $s$ is $Y$, it follows that $x\in Y$. This shows that $Z\subseteq Y$. { Consider item \ref{enum:2} in the conclusion of the lemma. For $x\in Y$, \begin{align*} h(x) &= h'(r(x))\tag{by the definition of $h$} \\ &= y'(r(x))\tag{by the induction hypothesis}\\ & = y(s(r(x))) \tag{by the definition of $y'$ in Case~\ref{kappa:2} of $\kappa$} \\ &= y(x), \end{align*} where the last equality holds due to item~\ref{prop:notdefT} of Lemma~\ref{property:compScheme}: Indeed, let $c \in C$ be so that $c|_Y = y$. Since $f$ is not defined on $T$, for all $x \in Y$ we have $c(x)= c(r(x))$. In addition, for all $x \in Y$ it holds that $r(s(r(x))) = r(x)$ and $s(r(x)) \in Y$. Hence, if $y(s(r(x))) \neq y(x)$ then one of them is different than $c(r(x))$, contradicting the assumption that we are in Case~\ref{kappa:2} of $\kappa$.} \end{proof} \subsection{The compression size}\label{sec:size} Consider a concept class $C$ which is not part of the induction base (i.e.\ $|C| >(4e^2)^{d\cdot 2^d +1}$). Let $\epsilon = \epsilon(C)$ be as in \eqref{en:whatIsEps}. We show the effect of each case in the definition of $\kappa$ on either $|C|$ or $n(C)$: \begin{enumerate} \item \label{size:1} Case~\ref{kappa:1} in the definition of $\kappa$: Here the size of $C'$ becomes smaller $$|C'|\leq\epsilon|C|.$$ Indeed, this holds as in the dual set system $C^*$, the normalized hamming distance between $c_x$ and $c_{r(x)}$ is at most $\epsilon$ and therefore the number of $c\in C$ such that $c(x)\neq c(r(x))$ is at most $\epsilon |C|$. \item \label{size:2} Case~\ref{kappa:2} in the definition of $\kappa$: here $n(C')$ becomes smaller as $$n(C')=|A^*(C,\epsilon)|\leq \left(\frac{1}{\epsilon}\right)^{2^{d}}.$$ \end{enumerate} We now show that in either cases, $|C'| \leq |C|^{1-\frac{1}{d\cdot2^d+1}}$, which implies that after $$O((d\cdot 2^d+1)\log\log|C|)$$ iterations, we reach the induction base.\\ In Case~\ref{size:1}: \begin{align*} |C'| & \leq \epsilon |C| = |C|^{1-\frac{1}{d\cdot2^d+1}}.\tag{by the definition of $\epsilon$} \end{align*} In Case~\ref{size:2}: \begin{align*} |C'| & \leq (n(C'))^d \tag{by Theorem~\ref{thm:Sauer}, since $VC(C') \leq d-2$} \\ & \leq \left(\frac{1}{\epsilon}\right)^{d\cdot 2^d}\tag{by Theorem~\ref{thm:Haussler}, since $n(C')=|A^*(C,\epsilon)|$}\\ & = |C|^{1-\frac{1}{d\cdot2^d+1}} . \tag{by definition of $\epsilon$} \end{align*} \begin{remark} Note the similarity between the analysis of the cases above, and the analysis of the size of a teaching set in Section~\ref{sec:RTD}. Case~\ref{size:1} corresponds to the rate of the progress performed in each iteration of the construction of a teaching set. Case~\ref{size:2} corresponds to the calculation showing that in each iteration significant progress can be made. \end{remark} Thus, the compression map $\kappa$ performs at most $$O((d\cdot 2^d+1)\log\log|C|)$$ iterations. In every step of the recursion the sizes of $Z$ and $T$ increase by at most $1$. In the base of the recursion, $T$ is $0$ and the size of $Z$ is at most $O(d \cdot 2^d)$. Hence, the total size of the compression satisfies \begin{align*} & |Z| \leq k = O(2^d d\log\log |C|),\\ & \log(|Q|) \leq O(k \log(k)). \end{align*} This completes the proof of Theorem~\ref{thm:CompSch}. \section*{Acknowledgements} We thank Noga Alon and Gillat Kol for helpful discussions in various stages of this work. \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Let $M$ be a closed oriented smooth $n$-dimensional manifold. A diffeomorphism $f\colon M\to M$ is called {\em Anosov} if there exists a $df$-invariant splitting $TM=E^s\oplus E^u$ of the tangent bundle of $M$, together with constants $\mu\in (0,1)$ and $C > 0$, such that for all positive integers $m$ \begin{equation*} \begin{split} \|df^m(v)\| \leq C\mu^m\|v\|,\ \text{if} \ v \in E^s,\\ \|df^{m}(v)\| \leq C^{-1}\mu^{-m}\|v\|,\ \text{if} \ v \in E^u. \end{split} \end{equation*} The invariant distributions $E^s$ and $E^u$ are called the {\em stable} and {\em unstable} distributions. An Anosov diffeomorphism $f$ is said to be of {\em codimension $k$} if $E^s$ or $E^u$ has dimension $k \leq [n/2]$, and it is called {\em transitive} if there exists a point whose orbit is dense in $M$. One of the most influential conjectures in dynamics, dating back to Anosov and Smale~\cite{Sm}, is that any Anosov diffeomorphism $f$ of a closed manifold $M$ is finitely covered by a diffeomorphism which is topologically conjugate to a hyperbolic automorphism of a nilpotent manifold. In this paper we prove the following: \begin{thm}\label{t:main} If $M$ is a closed 4-manifold that carries a Thurston geometry other than $\mathbb{R}^4$, $\mathbb H^2\times\mathbb{R}^2$ or the reducible $\mathbb H^2\times\mathbb H^2$ geometry, then $M$ does not support transitive Anosov diffeomorphisms. \end{thm} Some cases have already been studied in arbitrary dimensions, most notably the hyperbolic geometries. In many of the other cases, our proof will rely on certain properties of the fundamental groups of manifolds modeled on specific geometries. We will show existence of a degree one cohomology class $u\in H^1(M;\mathbb{Z})$ that is fixed under an iterate of any diffeomorphism $f\colon M\to M$. Then we will be able to exclude the possibility for $f$ being Anosov by exploiting Hirsch's study~\cite{Hirsch} on those cohomology classes; cf. Theorems \ref{t:Hirschcov} and \ref{t:Hirsch}. Hirsch's work has already been applied in certain cases, such as on mapping tori of hyperbolic automorphisms of the torus of any dimension or products of such mapping tori with a torus of any dimension~\cite{Hirsch}. In dimension four, these manifolds correspond (up to finite covers) to the geometries $Sol^4_0$, $Sol^4_{m\neq n}$ or $Sol^3\times\mathbb{R}$. Among the most interesting remaining examples include, on the one hand, manifolds with virtually infinite first Betti numbers, such as manifolds modeled on the geometry $\widetilde{SL_2}\times\mathbb{R}$, and, on the other hand, certain polycyclic manifolds; in fact, the case of $Nil^3\times\mathbb{R}$ indicates an error in the proof of~\cite[Theorem 9(a)]{Hirsch}; see Remark \ref{r:Hirsch}. We should point out that the transitivity assumption in Theorem \ref{t:main} is mild and will only be used when $M$ is virtually an $S^2$-bundle over an aspherical surface $\Sigma_h$, i.e. of genus $h\geq1$. Franks~\cite{Fr} and Newhouse~\cite{Newhouse} proved that a codimension one Anosov diffeomorphism exists only on manifolds which are homeomorphic to tori. It will therefore suffice to examine the existence of codimension two Anosov diffeomorphisms. For a transitive Anosov diffeomorphism $f\colon M\to M$ of codimension $k$, Ruelle-Sullivan \cite{RS} exhibit a cohomology class $\alpha\in H^k(M;\mathbb{R})$ such that $f^*(\alpha)=\lambda\cdot\alpha$, for some positive $\lambda\neq 1$ (which depends on the topological entropy of $f$). In the light of the latter, we will rule out codimension two transitive Anosov diffeomorphisms on products of type $S^2\times\Sigma_h$, where $h\geq1$. The non-existence of transitive Anosov diffeomorphisms on sphere bundles over surfaces is also part of a more general study of Gogolev-Rodriguez Hertz using cup products~\cite{GH}. Recall that a manifold modeled on $\mathbb{R}^4$ is finitely covered by the 4-torus and a manifold modeled on the $\mathbb H^2\times\mathbb{R}^2$ geometry or the reducible $\mathbb H^2\times\mathbb H^2$ geometry is finitely covered by the product of the 2-torus with a hyperbolic surface or the product of two hyperbolic surfaces respectively. Thus, Theorem \ref{t:main} excludes transitive Anosov diffeomorphisms on any geometric 4-manifold which is not finitely covered by a product of surfaces $\Sigma_g\times\Sigma_h$, where $g,h\geq 1$. Clearly $T^4=T^2\times T^2$ (i.e. when $g=h=1$) admits Anosov diffeomorphisms. However, the case of $\Sigma_g\times\Sigma_h$, where at least one of $g$ or $h$ is $\geq2$, seems to be more subtle: \begin{prob}\normalfont{(Gogolev-Lafont \cite[Section 7.2]{GL}).}\label{p:GL} Does the product of two closed aspherical surfaces at least one of which is hyperbolic admit an Anosov diffeomorphism? \end{prob} \subsection*{Outline} In Section \ref{s:thurston} we enumerate the Thurston geometries in dimensions up to four and gather some preliminaries. In Sections \ref{s:hyperbolic}, \ref{s:solvandcomp} and \ref{s:products} we prove Theorem \ref{t:main}. \subsection*{Acknowledgments} Parts of this project were carried out during research stays at CUNY Graduate Center and at IH\'ES in 2019. I am grateful to Dennis Sullivan and to Misha Gromov respectively for their hospitality. Also, I would like to thank Morris Hirsch for useful correspondence, as well as an anonymous referee for constructive comments. The support by the Swiss NSF, under grant FNS200021$\_$169685, is also gratefully acknowledged. \section{Thurston geometries and finite covers}\label{s:thurston} We begin our discussion by recalling the classification of the Thurston geometries in dimension four, as well as some simple general facts about Anosov diffeomorphisms and of their finite covers. \medskip Let $\mathbb{X}^n$ be a complete simply connected $n$-dimensional Riemannian manifold. A closed manifold $M$ {\em carries the $\mathbb{X}^n$ geometry} or it is an {\em $\mathbb{X}^n$-manifold} in the sense of Thurston, if it is diffeomorphic to a quotient of $\mathbb{X}^n$ by a lattice $\Gamma$ (the fundamental group of $M$) in the group of isometries $\mathrm{Isom}(\mathbb{X}^n)$ (acting effectively and transitively). We say that two geometries $\mathbb{X}^n$ and $\mathbb{Y}^n$ are the same if there exists a diffeomorphism $\psi \colon \mathbb{X}^n \to \mathbb{Y}^n$ and an isomorphism $\mathrm{Isom}(\mathbb{X}^n) \to \mathrm{Isom}(\mathbb{Y}^n)$ which maps each element $g \in \mathrm{Isom}(\mathbb{X}^n)$ to $\psi \circ g \circ \psi^{-1} \in \mathrm{Isom}(\mathbb{Y}^n)$. In dimension one, the circle is the only closed manifold and it is a quotient of the real line $\mathbb{R}$ by $\mathbb{Z}$. In dimension two, a closed surface carries one of the geometries $S^2$, $\mathbb{R}^2$ or $\mathbb{H}^2$ and (virtually) it is respectively $S^2$, $T^2$ or a hyperbolic surface $\Sigma_g$ (of genus $g\geq2$). In dimension three, Thurston~\cite{Th1} proved that there exist eight homotopically unique geometries, namely $\mathbb{H}^3$, $Sol^3$, $\widetilde{SL_2}$, $\mathbb{H}^2 \times \mathbb{R}$, $Nil^3$, $\mathbb{R}^3$, $S^2 \times \mathbb{R}$ and $S^3$. In Table \ref{table:3geom}, we list the finite covers for manifolds in each of those geometries (see~\cite{Th1,Scott:3-mfds,Agol}), as we will use several of those properties in our proofs. \begin{table \centering {\small \begin{tabular}{r|l} Geometry $\mathbb{X}^3$ & $M$ is finitely covered by...\\ \hline $\mathbb{H}^3$ & a mapping torus of a hyperbolic surface with pseudo-Anosov monodromy\\ $Sol^3$ & a mapping torus of the 2-torus $T^2$ with hyperbolic monodromy\\ $\widetilde{SL_2}$ & a non-trivial circle bundle over a hyperbolic surface\\ $Nil^3$ & a non-trivial circle bundle over $T^2$\\ $\mathbb{H}^2 \times \mathbb{R}$ & a product of the circle with a hyperbolic surface\\ $\mathbb{R}^3$ & the $3$-torus $T^3$\\ $S^2 \times \mathbb{R}$ & the product $S^2 \times S^1$\\ $S^3$ & the $3$-sphere $S^3$ \end{tabular}} \newline \caption{{\small Finite covers of Thurston geometric closed 3-manifolds.}}\label{table:3geom} \end{table} The 4-dimensional geometries were classified by Filipkiewicz in his thesis~\cite{Filipkiewicz}. According to that classification, there are eighteen geometries with compact representatives, and an additional geometry which is not realizable by a compact $4$-manifold. The list with the eighteen geometries is given in Table \ref{table:4geom}, and it is arranged so that it serves as an organising principle for the forthcoming sections. (Note that nineteen geometries appear, because $Sol^3\times\mathbb{R}$ is the geometry $Sol^4_{m,n}$ when $m=n$.) The individual characteristics of each geometry needed for our proofs will be given when dealing with each geometry. As pointed out in the introduction, among the most mysterious geometries with respect to Anosov diffeomorphisms is $\mathbb{H}^2\times\mathbb{H}^2$. Manifolds modeled on this geometry are divided into the ``reducible" and ``irreducible" ones, and different phenomena occur depending on where they belong. \begin{table}[!ht] \centering {\small \begin{tabular}{r|l} Type of the geometry & Geometry $\mathbb{X}^4$\\ \hline Hyperbolic & $\mathbb{H}^4$, $\mathbb{H}^2(\mathbb{C})$\\ Solvable non-product & $Nil^4$, $Sol^4_{m \neq n}$, $Sol^4_0$, $Sol^4_1$\\ Compact non-product & $S^4$, $\mathbb{CP}^2$\\ Product & $\mathbb{R}^4$, $Nil^3\times\mathbb{R}$, $S^2\times S^2$, $S^2\times\mathbb{H}^2$, $S^2\times \mathbb{R}^2$, $S^3 \times \mathbb{R}$, $\mathbb{H}^3\times\mathbb{R}$, \\ & $\mathbb{H}^2\times\mathbb{R}^2$, $\mathbb{H}^2\times\mathbb{H}^2$, $Sol^3\times\mathbb{R}$, $\widetilde{SL_2}\times\mathbb{R}$ \\ \end{tabular}} \newline \caption{{\small The 4-dimensional Thurston geometries with compact representatives.}}\label{table:4geom} \end{table} The virtual properties of geometric 4-manifolds will be used extensively in our study. We thus end this preliminary section with the following general lemmas (see~\cite{GL} and~\cite{GH} respectively): \begin{lem} \label{l:pre1} Let $M$ be a closed manifold and $p\colon\overline M\to M$ be a finite covering. If $f\colon M\to M$ is a diffeomorphism, then there is an $m\geq 0$ such that $f^m$ lifts to a diffeomorphism $\overline{f^m}\colon\overline M\to \overline M$, i.e. the following diagram commutes. $$ \xymatrix{ \overline M\ar[d]_{p} \ar[r]^{\overline{f^m}}& \ar[d]^{p} \overline M\\ M\ar[r]^{f^m}& M \\ } $$ \end{lem} \begin{lem} \label{l:pre2} If $f\colon M\to M$ is a transitive Anosov diffeomorphism and there is a lift $\overline f\colon\overline M\to\overline M$ of $f$ for some cover $\overline M$ of $M$, then $\overline f$ is transitive. \end{lem} \section{Hyperbolic geometries}\label{s:hyperbolic} We now begin the proof of Theorem \ref{t:main}. We first deal with the hyperbolic geometries. \medskip The real and complex hyperbolic geometries, $\mathbb{H}^4$ and $\mathbb{H}^2(\mathbb{C})$ respectively, are generally among the less understood of the eighteen geometries in dimension four. However, the machinery developed for hyperbolic manifolds in general suffices to rule out Anosov diffeomorphisms on 4-manifolds carrying one of those geometries. The following theorem is now well-known to experts, but nevertheless we give a proof for the sake of completeness and in order to include some useful facts about Anosov diffeomorphisms which will be used below as well, such as properties of their Lefschetz numbers. \begin{thm}[\cite{Yano,GL}]\label{t:finiteout} If $M$ is a negatively curved manifold, then $M$ does not support Anosov diffeomorphisms. \end{thm} \begin{proof} The first proof due to Yano~\cite{Yano} rules out the existence of transitive Anosov diffeomoprhisms. Let $M$ be negatively curved and suppose $f\colon M\to M$ is a transitive Anosov diffeomorphism. Since codimension one Anosov diffeomorphisms exist only on tori~\cite{Fr,Newhouse}, we can clearly assume that the dimension of $M$ is at least four and the codimension $k$ of $f$ is at least two. By Ruelle-Sullivan~\cite{RS}, the transitivity assumption implies the existence of a homology class $a\in H_l(M;\mathbb{R})$ such that $f_*(a)=\lambda\cdot a$ for some $\lambda>1$, where $l=k>1$ or $l=\dim(M)-k>1$. This means that the simplicial $\ell^1$-semi-norm of $a$ is zero which is impossible because $M$ is negatively curved~\cite{Gromov,IY}. An argument that rules out the existence of any Anosov diffeomorphism on a negatively curved manifold $M$ of dimension $\geq 3$ was given by Gogolev-Lafont~\cite{GL}, using the fact that the outer automorphism group $\mathrm{Out}(\pi_1(M))$ is finite (the latter can be derived by combining results of Paulin~\cite{Pau}, Bestvina-Feighn~\cite{BF} and Bowditch~\cite{Bow}; see~\cite[Corollary 4.5]{GL}). The finiteness of $\mathrm{Out}(\pi_1(M))$ and the asphericity of $M$ (being negatively curved) implies that an iterate $f^l$ of (a finite covering of) $f$ induces the identity on cohomology. (One already concludes that $M$ does not support transitive Anosov diffeomorphisms by Ruelle-Sullivan~\cite{RS} or Shiraiwa~\cite{Shi}.) Thus the Lefschetz numbers $\Lambda$ (i.e. the sum of indices of the fixed points) of all powers of $f^l$ are uniformly bounded, which is in contrast with the growth of periodic points of $f^l$, because of the equation \begin{equation}\label{eq.FixAnosov} |\Lambda(f^{m})|=|\mathrm{Fix}(f^{m})| = re^{mh_{top}(f)} + o(e^{mh_{top}(f)}), \ m\geq 1, \end{equation} where $h_{top}(f)$ is the topological entropy of $f$ and $r$ is the number of transitive basic sets with entropy equal to $h_{top}(f)$; see~\cite[Lemma 4.1]{GL} for details. \end{proof} We immediately obtain: \begin{cor} Closed 4-manifolds modeled on the geometry $\mathbb{H}^4$ or $\mathbb{H}^2(\mathbb{C})$ do not support Anosov diffeomorphisms. \end{cor} \begin{rem} As observed in~\cite{GL}, the finiteness of the outer automorphism group of the fundamental group of every negatively curved manifold of dimension $\geq 3$ caries over the outer automorphism group of the fundamental group of a finite product $M_1\times\cdots\times M_s$ of negatively curved manifolds $M_i$ of dimensions $\geq3$. Thus $M_1\times\cdots\times M_s$ does not support Anosov diffeomorphisms. However, this obstruction does not apply anymore if one of the $M_i$ is 2-dimensional, i.e. a hyperbolic surface. In~\cite[Theorem 1.4 and Example 4.3]{NeoAnosov1} we ruled out Anosov diffeomorphisms on products of a hyperbolic surface with certain higher dimensional negatively curved manifolds. It seems that an alternative method is required in general in order to rule out Anosov diffeomorphisms on product of two surfaces at least one of which is hyperbolic (those manifolds correspond to the geometry $\mathbb{H}^2\times\mathbb{R}^2$ or the reducible $\mathbb{H}^2\times\mathbb{H}^2$ geometry); cf. Problem \ref{p:GL} and \cite[Section 7.2]{GL} for further discussion. \end{rem} \section{Non-product, solvable and compact geometries}\label{s:solvandcomp} In this section, we deal with the geometries $Nil^4$, $Sol^4_{m \neq n}$, $Sol^4_0$, $Sol^4_1$, $S^4$ and $\mathbb{CP}^2$. \subsection{Solvable non-product geometries} \subsubsection{The geometry $Nil^4$.}\label{ss:Nil} Let $M$ be a closed 4-manifold modeled on the geometry $Nil^4$. Then (a finite index subgroup of) the fundamental group of $M$ has a presentation \[ \pi_1(M) = \langle x,y,z,t \ \vert \ txt^{-1}=x, \ tyt^{-1}=x^kyz^l, \ tzt^{-1} = z, [x,y]=z, \ xz=zx, \ yz=zy \rangle, \] $k\geq 1$, $l\in\mathbb{Z}$, with center $C(\pi_1(M)) = \langle z \rangle$. The quotient of $\pi_1(M)$ by its center is given by \[ \pi_1(M)/\langle z\rangle = \langle x,y,t \ \vert \ [t,y]=x^k, \ xt=tx, \ xy=yx \rangle; \] see~\cite[Prop. 6.10]{NeoIIPP} and~\cite[Section 8.7]{Hil} for details. We moreover observe that $\pi_1(M)$ is an extension $\mathbb{Z}^3\rtimes_\theta\mathbb{Z}=\langle z,x,t\rangle\rtimes_\theta\langle y \rangle$, where the automorphism $\theta\colon\mathbb{Z}^3\to\mathbb{Z}^3$ is given by \[ \left(\begin{array}{ccc} 1 & -1 & -l \\ 0 & 1 & -k \\ 0 & 0 & 1 \\ \end{array} \right). \] Let $f\colon M\to M$ be a diffeomorphism. Then $f_\sharp\colon\pi_1(M)\to\pi_1(M)$ induces an automorphism of $\pi_1(M)/\langle z\rangle$, because $f_\sharp(\langle z\rangle)=\langle z\rangle$. Since $C( \pi_1(M)/\langle z\rangle)=\langle x\rangle$, we deduce that $f_\sharp(x)=z^nx^m$, for some $n,m\in\mathbb{Z}$, $m\neq0$. Now, the relation $txt^{-1}=x$ is mapped to $f_\sharp(t)x^mf_\sharp(t)^{-1}=x^m$, thus, by $[x,y]=z$, the image $f_\sharp(t)$ does not contain any powers of $y$. Combining all together, we conclude, using the commutative diagram $$ \xymatrix{ \pi_1(M)\ar[d]_{h} \ar[r]^{{f}_\sharp}& \ar[d]^{h} \pi_1(M)\\ H_1(M;\mathbb{Z}) \ar[r]^{{f}_*}& H_1(M;\mathbb{Z}), \\ } $$ where $h\colon\pi_1(M)\to H_1(M;\mathbb{Z})=\pi_1(M)/[\pi_1(M),\pi_1(M)]$ denotes the Hurewicz homomorphism, that the induced isomorphism in homology $f_*$ maps $\bar t\in H_1(M;\mathbb{Z})/\Tor H_1(M;\mathbb{Z})$ to a multiple of itself. The induced automorphism on $H_1(M;\mathbb{Z})/\Tor H_1(M;\mathbb{Z})=\langle \bar t\rangle\times \langle \bar y\rangle=\mathbb{Z}\times\mathbb{Z}$ implies in fact that $f_*(\bar t)=\bar t$ and thus $f$ cannot be Anosov by Lemma \ref{l:pre1} and the following result of Hirsch: \begin{thm}{\normalfont(\cite[Theorem 1]{Hirsch}).}\label{t:Hirschcov} Let $f\colon M\to M$ be an Anosov diffeomorphism and a non-trivial cohomology class $u\in H^1(M;\mathbb{Z})$ such that $(f^*)^m(u)=u$, for some positive integer $m$. Then the infinite cyclic covering of $M$ corresponding to $u$ has infinite dimensional rational homology. \end{thm} \begin{rem} The infinite cyclic covering of $M$ corresponding to $u$ is the covering whose fundamental group is given by the kernel of the composition \[ \pi_1(M)\stackrel{h}\longrightarrow H_1(M)\xrightarrow{<u,\cdot>}\mathbb{Z}, \] where $h$ is the Hurewitz homomorphism as above and $<u,\cdot>$ the Kronecker product. Note that Hirsch's result amounts again to the fact that finite dimensional rational homology of the above infinite cyclic covering would imply vanishing of the Lefschetz number of (an iterate of) $f$, which is impossible for an Anosov diffeomorphism. \end{rem} \begin{rem} As we conclude from our proof, the induced automorphism \[ f_*\colon H_1(M;\mathbb{R})\to H_1(M;\mathbb{R}) \] has a root of unity as eigenvalue. Then~\cite[ Corollary 2]{Hirsch} implies that $f$ is not Anosov (as an application of Theorem \ref{t:Hirschcov}). For a manifold $M$ with polycyclic fundamental group and whose universal covering has finite dimensional rational homology,~\cite[Theorem 4]{Hirsch} tells us that a diffeomorphism $f\colon M\to M$ is not Anosov if there is a root of unity among the eigenvalues of $f_*\colon H_1(M;\mathbb{R})\to H_1(M;\mathbb{R})$. Also, note that ~\cite{Mal} determines which nilpotent manifolds admit Anosov diffeomorphism up to dimension six, hence also covers the case of the $Nil^4$ geometry. In our proof we did not (explicitly) use the fact that $\pi_1(M)$ is polycyclic, but we rather exhibited a cohomology class satisfying Theorem \ref{t:Hirschcov}. \end{rem} \subsubsection{The geometries $Sol_{m \neq n}^4$, $Sol^4_0$ and $Sol^4_1$} For the geometries $Sol^4_{m \neq n}$, $Sol^4_0$ and $Sol^4_1$ a weaker statement (Theorem \ref{t:Hirsch} below) than that of Theorem \ref{t:Hirschcov}, based on the first Betti number, suffices to rule out Anosov diffeomorphisms. We begin by recalling the model spaces of those geometries: \medskip Suppose $m$ and $n$ are positive integers, $a > b > c$ reals such that $a+b+c=0$ and $e^a,e^b,e^c$ are roots of the polynomial $P_{m,n}(\lambda)=\lambda^3-m\lambda^2+n\lambda-1$. For $m \neq n$, the Lie group $Sol_{m \neq n}^4$ is defined as a semi-direct product $\mathbb{R}^3 \rtimes \mathbb{R}$, where $\mathbb{R}$ acts on $\mathbb{R}^3$ by \[ t \mapsto \left(\begin{array}{ccc} e^{at} & 0 & 0 \\ 0 & e^{bt} & 0 \\ 0 & 0 & e^{ct} \\ \end{array} \right). \] Note that the case $m=n$ gives $b = 0$ and corresponds to the product geometry $Sol^3 \times \mathbb{R}$. \medskip If two roots of the polynomial $P_{m,n}$ are required to be equal, then we obtain the model space of the $Sol_0^4$ geometry, again defined as a semi-direct product $\mathbb{R}^3 \rtimes \mathbb{R}$, where now the action of $\mathbb{R}$ on $\mathbb{R}^3$ is given by \[ t \mapsto \left(\begin{array}{ccc} e^{t} & 0 & 0 \\ 0 & e^{t} & 0 \\ 0 & 0 & e^{-2t} \\ \end{array} \right). \] Closed manifolds modeled on the geometries $Sol_{m \neq n}^4$ and $Sol_0^4$ have the following property: \begin{thm}[\normalfont{\cite[Corollary 8.5.1]{Hil}}]\label{t:mappingtorisolvable1} Every closed manifold carrying one of the geometries $Sol_0^4$ or $Sol_{m \neq n}^4$ is a mapping torus of a hyperbolic automorphism of the 3-torus. \end{thm} Finally, the Lie group $Sol_1^4$ is defined as a semi-direct product $Nil^3 \rtimes \mathbb{R}$, where $\mathbb{R}$ acts on the 3-dimensional Heisenberg group \[ Nil^3 = \Biggl\{ \left( \begin{array}{ccc} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \\ \end{array} \right) \biggl\vert \ x,y,z \in \mathbb{R} \Biggl\} \] by \[ t \mapsto \left(\begin{array}{ccc} 1 & e^{-t}x & z \\ 0 & 1 & e^{t}y \\ 0 & 0 & 1 \\ \end{array} \right). \] Closed manifolds modeled on the geometry $Sol_1^4$ can be described as follows: \begin{thm}[\normalfont{\cite[Theorem 8.9]{Hil}}]\label{t:mappingtorisolvable2} A closed oriented manifold carrying the geometry $Sol_1^4$ is a mapping torus of a self-homeomorphism of a $Nil^3$-manifold. \end{thm} Using this, one can moreover derive that every closed $Sol_1^4$-manifold is a virtually non-trivial circle bundle over a $Sol^3$-manifold~\cite[Prop. 6.15]{NeoIIPP}. \medskip The descriptions of the fundamental groups of manifolds carrying one of the above solvable geometries suffice to exclude Anosov diffeomorphisms on them by the following result of Hirsch, which is a consequence of the more general Theorem \ref{t:Hirschcov}: \begin{thm}{\normalfont(\cite[Theorem 8]{Hirsch}).}\label{t:Hirsch} Suppose $M$ is a compact manifold such that \begin{itemize} \item[(a)] $\pi_1(M)$ is virtually polycyclic; \item[(b)] the universal covering of $M$ has finite dimensional rational homology; \item[(c)] $H^1(M;\mathbb{Z})\cong\mathbb{Z}$. \end{itemize} Then $M$ does not support Anosov diffeomorphisms. \end{thm} \begin{cor} Closed 4-manifolds modeled on one of the geometries $Sol_0^4$, $Sol_{m \neq n}^4$ or $Sol_1^4$ do not support Anosov diffeomorphisms. \end{cor} \begin{proof} After passing to a finite covering we may assume that $M$ is oriented. If $M$ carries one of the geometries $Sol_0^4$ or $Sol_{m \neq n}^4$, then by Theorem \ref{t:mappingtorisolvable1} \[ \pi_1(M)\cong \pi_1(T^3) \rtimes_{\theta_M} \langle t \rangle, \] where $\pi_1(T^3) = \mathbb{Z}^3 = \langle x_1,x_2,x_3 \vert \ [x_i,x_j] = 1 \rangle$ and the automorphism $\theta_M \colon \mathbb{Z}^3\to \mathbb{Z}^3$ is hyperbolic. Thus, $H^1(M;\mathbb{Z})\cong\mathbb{Z}$, and since $M$ is aspherical and $\pi_1(M)$ polycyclic, Theorem \ref{t:Hirsch} and Lemma \ref{l:pre1} tell us that $M$ cannot support Anosov diffeomorphisms. If $M$ carries the geometry $Sol_1^4$, then by Theorem \ref{t:mappingtorisolvable2} (see also~\cite[Prop. 6.15]{NeoIIPP}) a presentation of its fundamental group is given by \begin{eqnarray*} \pi_1(M) = &\langle x,y,z,t \ \vert & txt^{-1}=x^ay^cz^k, \ tyt^{-1}=x^by^dz^l, \ tzt^{-1} =z,\\ &\ & [x,y]=z, \ xz=zx, \ yz=zy \rangle, \end{eqnarray*} where $k,l\in\mathbb{Z}$ and the matrix \[ \left(\begin{array}{cc} a & b \\ c & d \\ \end{array} \right)\in \mathrm{SL}_2(\mathbb{Z}) \] has no roots of unity. The abelianization of $\pi_1(M)$ implies $H^1(M;\mathbb{Z})\cong\mathbb{Z}$. Since moreover $M$ is aspherical and $\pi_1(M)$ is polycyclic, we deduce by Theorem \ref{t:Hirsch} and Lemma \ref{l:pre1} that $M$ does not support Anosov diffeomorphisms. \end{proof} \begin{rem} Note that Theorem \ref{t:Hirsch} is not applicable to a $Nil^4$ manifold $M$ (cf. Section \ref{ss:Nil}), because $H^1(M;\mathbb{Z})\cong\mathbb{Z}^2$. \end{rem} \subsection{Compact non-product geometries} Among the simplest cases are the compact geometries $S^4$ and $\mathbb{CP}^2$. \subsubsection{The geometry $S^4$} The only closed oriented 4-manifold modeled on $S^4$ is $S^4$ itself~\cite[Section 12.1]{Hil}. Clearly, any orientation preserving diffeomorphism $f$ of $S^4$ induces the identity on $H^*(S^4)$, and as we have seen this makes it impossible for $f$ to be Anosov (cf. equation \ref{eq.FixAnosov}). \subsubsection{The geometry $\mathbb{CP}^2$} As for the geometry $S^4$, the only closed oriented 4-manifold modeled on $\mathbb{CP}^2$ is $\mathbb{CP}^2$ itself~\cite[Section 12.1]{Hil}. Suppose \[ f\colon \mathbb{CP}^2\to \mathbb{CP}^2 \] is a diffeomorphism. The cohomology groups of $\mathbb{CP}^2$ are $\mathbb{Z}$ in degrees 0, 2 and 4 and trivial otherwise. So, after possibly passing to an iterate of $f$, we observe, by the naturality of the cup product, that $f$ must induce the identity on cohomology. Thus $f$ cannot be Anosov. \section{Product geometries}\label{s:products} In order to complete the proof of Theorem \ref{t:main}, we need to examine the product geometries that are not excluded by the statement of Theorem \ref{t:main}, i.e. the geometries $\mathbb{H}^3\times\mathbb{R}$, $Sol^3\times\mathbb{R}$, $\widetilde{SL_2}\times\mathbb{R}$, $Nil^3\times\mathbb{R}$, the irreducible $\mathbb{H}^2\times\mathbb{H}^2$ geometry, $S^2\times\mathbb{H}^2 $, $S^2\times \mathbb{R}^2$, $S^3 \times \mathbb{R}$ and $S^2\times S^2$. \medskip \subsection{Products with a compact factor} \subsubsection{The geometry $S^2\times S^2$} The question of whether $S^2\times S^2$ supports Anosov diffeomorphisms was asked by Ghys in the 1990's and, although it has a quite straightforward solution using the intersection form, was only recently answered by Gogolev and Rodriguez Hertz~\cite{GH}. Suppose $f\colon S^2\times S^2\to S^2\times S^2$ is a diffeomorphism (or, more generally, a map of degree $\pm1$). The K\"unneth formula gives \[ H^2(S^2\times S^2)=(H^2(S^2)\otimes H^0(S^2))\oplus(H^0(S^2)\otimes H^2(S^2)). \] Let $\omega_{S^2}\times 1\in H^2(S^2)\otimes H^0(S^2)$ and $1\times\omega_{S^2}\in H^0(S^2)\otimes H^2(S^2)$ be the corresponding cohomological fundamental classes. After possibly replacing $f$ by $f^2$, we can assume that $\deg(f)=1$. The effect of $f$ on the above classes is given by \[ f^*(\omega_{S^2}\times1)=a\cdot(\omega_{S^2}\times 1)+b\cdot(1\times\omega_{S^2}), \ a,b\in\mathbb{Z}, \] and \[ f^*(1\times\omega_{S^2})=c\cdot(\omega_{S^2}\times 1)+d\cdot(1\times\omega_{S^2}), \ c,d\in\mathbb{Z}. \] Thus, by the naturality of the cup product we obtain \begin{equation}\label{eq.S2} ad+bc=1. \end{equation} Also, since the cup product of $\omega_{S^2}\times 1$ with itself vanishes, we obtain \[ 0=f^*((\omega_{S^2}\times1)\cup(\omega_{S^2}\times1))=f^*(\omega_{S^2}\times1)\cup f^*(\omega_{S^2}\times1)=2ab\cdot (\omega_{S^2\times S^2}), \] and so \begin{equation}\label{eq.S2b} ab=0. \end{equation} Similarly, since $(1\times\omega_{S^2})\cup(1\times\omega_{S^2})=0$, we obtain \begin{equation}\label{eq.S2c} cd=0. \end{equation} If $a=0$, then (\ref{eq.S2}), (\ref{eq.S2b}) and (\ref{eq.S2c}) imply $b=c=\pm 1$ and $d=0$. If $b=0$, then again by the same equations we obtain $a=d=\pm1$ and $c=0$. Thus, after possibly replacing $f$ by $f^2$, we deduce that $f$ induces the identity in cohomology. Therefore, the Lefschetz numbers of all powers of $f$ are uniformly bounded, and so $f$ cannot be Anosov diffeomorphism (cf. equation \ref{eq.FixAnosov}). \begin{rem} Alternatively to the above argument, note that, since $f$ is a diffeomorphism, the matrix for the induced action on $H^2$ lies in $\mathrm{GL}_2(\mathbb{Z})$, hence $ad-bc = \pm1$. Combining this with equation (\ref{eq.S2}), we can find the two possible integer solutions as above. \end{rem} \subsubsection{The geometry $S^2\times \mathbb{R}^2$} In that case, $M$ is (finitely covered by) $S^2\times T^2$~\cite[Theorem 10.10]{Hil}. Since every map $S^2\to T^2$ has degree zero, if $f\colon S^2\times T^2\to S^2\times T^2$ is a diffeomorphism, then the effect of $f$ on the cohomological fundamental classes $\omega_{S^2}\times 1\in H^2(S^2)\otimes H^0(T^2)$ and $1\times\omega_{T^2}\in H^0(S^2)\otimes H^2(T^2)$ is given by \[ f^*(\omega_{S^2}\times1)=a\cdot(\omega_{S^2}\times 1)+b\cdot(1\times\omega_{T^2}), \ a,b\in\mathbb{Z}, \] and \[ f^*(1\times\omega_{T^2})=d\cdot(1\times\omega_{T^2}), \ d\in\mathbb{Z}; \] see~\cite{Neodegrees} for details. As before, we assume that $\deg(f)=1$, and so the naturality of the cup product yields \begin{equation}\label{eq.S2mixied} ad=1. \end{equation} In particular, $a=d=\pm1$. Also, $b=0$ by the vanishing of the cup product of $\omega_{S^2}\times1$ with itself. Recall that, by Franks~\cite{Fr} and Newhouse~\cite{Newhouse}, if a manifold admits a codimension one Anosov diffeomorphism, then it must be homeomorphic to a torus. Thus, if $f$ is Anosov, then we may assume that it has codimension two. In that case, Ruelle-Sullivan's work~\cite{RS} gives us a class $\alpha\in H^2(S^2\times T^2;\mathbb{R})$ such that $f^*(\alpha)=\lambda\cdot\alpha$ for some positive real $\lambda\neq1$. We have \[ \alpha=\xi_1\cdot(\omega_{S^2}\times1)+\xi_2\cdot(1\times\omega_{T^2}), \ \xi_1,\xi_2\in\mathbb{R}, \] and so $f^*(\alpha)=\lambda\cdot\alpha$ yields \begin{equation}\label{eq.S2mixed-3} \lambda\xi_1=a\xi_1=\pm\xi_1 \end{equation} and \begin{equation}\label{eq.S2mixed-4} \lambda\xi_2=d\xi_2=\pm\xi_2. \end{equation} If $\xi_1\neq0$, then (\ref{eq.S2mixed-3}) becomes $\lambda=\pm1$, which is impossible. If $\xi_1=0$, then $\xi_2\neq0$ and (\ref{eq.S2mixed-4}) yields again the absurd conclusion $\lambda=\pm1$. This shows that $S^2\times T^2$ does not support transitive Anosov diffeomorphisms. \subsubsection{The geometry $S^2\times\mathbb{H}^2$} If $M$ is modeled on the geometry $S^2\times\mathbb{H}^2$, then $M$ is virtually an $S^2$-bundle over a closed hyperbolic surface $\Sigma_h$~\cite[Theorem 10.7]{Hil}. The case of $S^2\times\Sigma_h$ can be treated using the same argument as for $S^2\times T^2$. More generally, Gogolev-Rodriguez Hertz showed that a fiber bundle $S^{2n}\to E\to B$, where $B$ is $2n$-dimensional, does not support transitive Anosov diffeomorphisms~\cite[Theorem 1.1]{GH}, which covers as well the geometry $S^2\times \mathbb{R}^2$. Their argument uses again equation \ref{eq.FixAnosov} and cup products via the Gysin sequence \[ 0\longrightarrow H^{2n}(B;\mathbb{Z})\longrightarrow H^{2n}(E;\mathbb{Z})\longrightarrow H^0(B;\mathbb{Z})\longrightarrow0. \] Note that in our case, $2n=2$ is the only case of interest for the codimension; we refer to~\cite{GH} for the complete argument. \subsubsection{The geometry $S^3\times\mathbb{R}$} A closed 4-manifold modeled on the geometry $S^3\times\mathbb{R}$ is virtually a product $S^3\times S^1$~\cite[Ch. 11]{Hil}, which clearly does not support Anosov diffeomorphisms because $H_2(S^3\times S^1)=0$ and $H_1(S^3\times S^1)=\mathbb{Z}$. \subsection{The irreducible $\mathbb{H}^2\times\mathbb{H}^2$ geometry} As for the hyperbolic geometries, if $M$ is an irreducible manifold modeled on the geometry $\mathbb{H}^2\times\mathbb{H}^2$, then $\pi_1(M)$ has finite outer automorphism group by the strong rigidity of Mostow, Prasad and Margulis. Thus the proof of Theorem \ref{t:finiteout} implies that $M$ does not support Anosov diffeomorphisms. \subsection{Aspherical products with a circle factor} Finally, we deal with the product geometries $\mathbb{H}^3\times\mathbb{R}$, $Sol^3\times\mathbb{R}$, $\widetilde{SL_2}\times\mathbb{R}$ and $Nil^3\times\mathbb{R}$. \subsubsection{The geometries $\widetilde{SL_2}\times\mathbb{R}$ and $Nil^3\times\mathbb{R}$}\label{ss:Hirsch2} Let $M$ be a closed 4-manifold modeled on the geometry $\widetilde{SL_2}\times\mathbb{R}$ or the geometry $Nil^3\times\mathbb{R}$. Then $M$ is finitely covered by a product $N\times S^1$, where $N$ is an $\widetilde{SL_2}$-manifold or a $Nil^3$-manifold respectively~\cite{Hil}. We can moreover assume that $N$ is a non-trivial circle bundle over a surface $\Sigma_g$ of genus $g$, where $g\geq2$ if $N$ is an $\widetilde{SL_2}$-manifold and $g=1$ if $N$ is a $Nil^3$-manifold; cf. Table \ref{table:3geom}. In particular, the center of $\pi_1(N\times S^1)$ has rank two. Since (a finite power of) the generator of the fiber of $N$ vanishes in $H_1(N)$, we deduce that, for any diffeomorphism $f\colon N\times S^1\to N\times S^1$ the generator of $H_1(S^1)$ maps to a power of itself (modulo torsion). That is, in cohomology \[ f^*(1\times\omega_{S^1})=a\cdot(1\times\omega_{S^1}), \ a\in\mathbb{Z}. \] Moreover, because $N$ does not admit maps of non-zero degree from direct products~\cite{KN} and the degree three cohomology of $N\times S^1$ is \[ H^3(N\times S^1)\cong H^3(N)\oplus(H^2(N)\otimes H^1(S^1)), \] we obtain \[ f^*(\omega_N\times1)=b\cdot(\omega_N\times1), \ b\in\mathbb{Z}; \] see~\cite[Proof of Theorem 1.4]{Neodegrees} for further details. Since $\deg(f)=\pm1$, we deduce that $a,b\in\{\pm1\}$. Thus, after possibly replacing $f$ by $f^2$, we may assume that \[ f^*(1\times\omega_{S^1})=1\times\omega_{S^1}. \] Now Theorem \ref{t:Hirschcov} and Lemma \ref{l:pre1} imply that $f$ cannot be Anosov. Alternatively, since the generator of $H_1(S^1)$ maps to (a power of) itself, we can conclude that $f$ is not Anosov by~\cite[Corollary 2]{Hirsch}, again as an application of Theorem \ref{t:Hirschcov}. \begin{rem}\label{r:Hirsch} An example of a $Nil^3$ manifold is given by the mapping torus $M_A$ of $T^2$ with monodromy \[ A=\left(\begin{array}{cc} 1 & 1\\ 0 & 1\\ \end{array} \right). \] As we have seen above, $M_A\times S^1$ does not support Anosov diffeomorphisms. Now, clearly $A^m\neq I_2=\left(\begin{array}{cc} 1 & 0\\ 0 & 1\\ \end{array} \right)$ for all $m\neq 0$ and, moreover, \[ \pi_1(M_A)=\langle x,y,z \ | \ [x,y]=z,\ xz=zx, \ yz=zy\rangle, \] which has non-trivial center $C(\pi_1(M_A))=\langle z \rangle$. Therefore, in the proof of~\cite[Theorem 9(a)]{Hirsch} -- which asserts that for any monodromy $A\colon T^n\to T^n$ such that $A^m\neq I_n$ for all $m\neq 0$, the product $M_A\times S^1$ does not support Anosov diffeomorphisms -- the claim that the generator of $H_1(S^1)$ maps to a power of itself is derived by the invalid conclusion that $C(\pi_1(M_A))$ is trivial. (We remark that this error does not affect the aforementioned Theorems \ref{t:Hirschcov} and \ref{t:Hirsch} from the same paper.) \end{rem} \subsubsection{The geometries $\mathbb{H}^3\times\mathbb{R}$ and $Sol^3\times\mathbb{R}$} A closed 4-manifold $M$ modeled on the geometry $\mathbb{H}^3\times\mathbb{R}$ or the geometry $Sol^3\times\mathbb{R}$ is virtually a product $N\times S^1$, where $N$ is a hyperbolic 3-manifold or a $Sol^3$-manifold respectively~\cite{Hil}. In particular, the fundamental group $\pi_1(N\times S^1)$ has infinite cyclic center generated by the circle factor~\cite{Scott:3-mfds}; let us denote this by $\pi_1(S^1)=\langle z\rangle$. Suppose $f\colon N\times S^1\to N\times S^1$ is a diffeomorphism. Then $f_\sharp(\langle z\rangle)=\langle z\rangle$, and therefore $f_*(\omega_{S^1})=\omega_{S^1}$ (up to taking $f^2$ if necessary) as in the above subsection (because $N$ does not admit maps of non-zero degree from direct products~\cite{KN}) or alternatively because the center and the commutator of $\pi_1(N\times S^1)$ intersect trivially. We deduce that $f$ cannot be Anosov by Theorem \ref{t:Hirschcov} and Lemma \ref{l:pre1}. Alternatively for the case of hyperbolic $N$, the main result of~\cite{GL} implies that $N\times S^1$ does not support Anosov diffeomorphisms, because $\mathrm{Out}(\pi_1(N))$ is finite and $\pi_1(N)$ is Hopfian and has trivial intersection of maximal nilpotent subgroups. In fact, as shown in~\cite{NeoAnosov2}, the only properties needed to exclude Anosov diffeomorphisms on $N\times S^1$ is that $\mathrm{Out}(\pi_1(N))$ is finite and $\pi_1(N)$ has trivial center. \medskip The proof of Theorem \ref{t:main} is now complete. \bibliographystyle{amsplain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The fundamental reason why random matrices have been used to model many large systems is based on the belief that their local eigenvalue statistics are universal. This is generally referred to as the universality of random matrices. It is well-known that the local behavior of eigenvalues near the spectral edge and in the bulk are governed by the Tracy-Widom law and by the Dyson sine kernel, respectively. Since the seminal work of Dyson \cite{Dy1} for the Gaussian Unitary Ensemble (GUE), the universality both for the edge and the bulk were proven for very general classes of unitary invariant ensembles in the past two decades (see, e.g. \cite{M, PS, D,BI,DKMVZ1,DKMVZ2, LL} and references therein). For non-unitary ensembles, the most natural examples are the Wigner matrix ensembles \cite{W}, i.e., random matrices with independent identically distributed entries. The edge universality for these ensembles was proved by Soshnikov \cite{Sosh} using the moment method; the bulk universality remained unknown due to a lack of method to analyze local spectral properties of large matrices inside the spectrum. For ensembles of the form \begin{equation} \widehat H+ a V, \label{HaV} \end{equation} where $\widehat H$ is a Wigner matrix, $V$ is an independent standard GUE matrix and $a$ is a positive constant of order one (independent of $N$), the bulk universality was proved by Johansson \cite{J}. (Strictly speaking, the range of the parameter $a$ in \cite{J} depends on the energy $E$. This restriction was later removed by Ben Arous and P\'ech\'e \cite{BP}, who also extended this approach to Wishart ensembles). The approach of \cite{J} is partly based on the asymptotic analysis of an explicit formula by Br\'ezin-Hikami \cite{BH} for the correlation functions of the eigenvalues of $\widehat H+ a V$. This matrix can also be generated by a stochastic flow \[ s \to \widehat H+ \sqrt{s} V, \quad s>0, \] and the evolution of the eigenvalues is given by the Dyson Brownian motion \cite{Dy}. The result of \cite{J, BP} thus states that the bulk universality holds for times of order one. The eigenvalue distribution of GUE is in fact the invariant measure of Dyson Brownian motion. (Rigorously speaking, the Brownian motion has to be replaced by an Ornstein-Uhlenbeck process, but we will neglect this subtlety.) It is thus tempting to derive the universality of $\widehat H+ \sqrt{s} V$ via the convergence to equilibrium. We have recently carried out this approach \cite{ERSY} and the key observation is that the sine kernel, as a property of local statistics, depends almost exclusively on the convergence to local equilibrium. With this method we have reduced the necessary time to $N^{-1+\xi}$, for any $\xi>1/4$ in \cite{ERSY}. Note that the relaxation time to local equilibrium is $N^{-1}$; the additional exponent $\xi$ is due to technical reasons. {F}rom the stochastic calculus, one can see that the typical distance between the corresponding eigenvalues of $\widehat H+ \sqrt s V$ and $\widehat H$ is of order $(s/N)^{1/2}$. Thus the bulk universality of $\widehat H$ would hold if we could prove the Dyson sine kernel for time $s \ll 1/N$. On the other hand, for time smaller than $1/N$, the eigenvalues do not move in the scale $1/N$ and the dynamical consideration seems to be pointless. In this paper, we provide an approach to address the comparison of eigenvalues between $\widehat H+ \sqrt s V$ and $\widehat H$. To describe the idea, we now introduce the notations. {F}ix $N\in{\mathbb N}$ and we consider a Hermitian matrix ensemble of $N\times N$ matrices $H=(h_{\ell k})$ with the normalization \begin{equation} h_{\ell k} = N^{-1/2} z_{\ell k}, \qquad z_{\ell k}= x_{\ell k} +i y_{\ell k}, \label{scaling} \end{equation} where $x_{\ell k}, y_{\ell k}$ for $\ell<k$ are independent, identically distributed random variables with distribution $\nu$ that has zero expectation and variance $\frac{1}{2}$. The diagonal elements are real, i.e. $y_{\ell\ell}=0$ and $x_{\ell \ell}$ are also i.i.d. with distribution $\widetilde \nu$ that has zero expectation and variance one. The diagonal elements are independent from the off-diagonal ones. Suppose the real and imaginary parts of the offdiagonal matrix elements evolve according to the Ornstein-Uhlenbeck (OU) process \begin{equation}\label{dy} \partial_{t} u_t = L u_t, \quad L = \frac{1}{4}\frac{\partial^2}{\partial x^2} - \frac{ x}{2} \frac{\partial}{\partial x} \end{equation} with the reversible measure $\mu({\rm d} x) = e^{-x^2} {\rm d} x$ and initial distribution $u_0=u$ (strictly speaking, a differently normalized OU process is used for the diagonal elements but we omit this detail here). Under this process, the matrix evolves as $$ t\to e^{-t/2}\widehat H + (1-e^{-t})^{1/2}V $$ and the expectation and variance of the matrix entries remain constant. Notice for time $t$ small, $t \approx a^2$ when compared with \eqref{HaV}, after a trivial rescaling. The initial distribution of all the matrix elements is $F \,{\rm d}\mu^{\otimes n}= ( u\; {\rm d}\mu)^{\otimes n} $ with $n=N^2$. Let ${\mathcal L}$ be the generator on the product space and $e^{t{\mathcal L}} := (e^{tL})^{\otimes n}$ be the dynamics of the OU process for all the matrix elements. The joint probability distribution of the matrix elements at time $t$ is then given by \[ F_t {\rm d} \mu^{\otimes n}:= e^{t{\mathcal L}} u^{\otimes n} \; {\rm d}\mu^{\otimes n} = (e^{t L} u )^{\otimes n} \; {\rm d} \mu^{\otimes n}. \] Suppose that for some $t$ small, say, $t = N^{-1+\lambda}$ with $\lambda>0$, we know the local eigenvalue correlation function w.r.t. $F_t$. Let \[ Var(F, F_t) = \int |F- F_t| {\rm d} \mu^{\otimes n} \] be the total variation norm between $F_t$ to $F$. In order to approximate the correlation functions of $F$ by $F_t$ in a weak sense (tested against bounded observables), we need $Var(F, F_t) \to 0$. Heuristically, $Var(F, F_t) \sim t N^2$ and this requires that $t \ll N^{-2}$ which is far from the time scale $t\ge N^{-1+\xi}$ for which the sine kernel has been proven in \cite{ERSY}. For observables on short scales, an effective speed of convergence for the total variation is needed. For example, to test a local observable with two variables in scale $1/N$, as in the case of the Dyson sine kernel, one has to prove $Var(F, F_t) = o(N^{-2})$. Although the heuristic bound $Var(F, F_t) \sim t N^2$ can be improved to $Var(F, F_t) \sim tN$, further improvement seems to be impossible. Thus we are unable to obtain even the weaker bound $Var(F, F_t) = o(1)$ for $t > 1/N$. The main observation in the current paper is that, while we cannot compare $F$ with $F_t$, it suffices to prove the existence of some function $G$ for which the correlation functions with respect to $e^{t{\mathcal L}}G$ can be computed for $t\ge N^{-1+\lambda}$ and $Var(F, e^{t{\mathcal L}}G) = o(N^{-2})$. Since the necessary input to compute the correlation functions is the validity of the semicircle law on short scales, which we have proved for a wide class of distributions $\nu$ in \cite{ESY1, ESY2, ESY3}, the choice of $G$ is essentially dominated by the condition $Var(F, e^{t{\mathcal L}}G) = o(N^{-2})$. Note that $G$ itself may depend on $t$. Since $F = e^{t {\mathcal L}} (e^{-t {\mathcal L}} F)$, we could, in principle, choose $G = e^{-t {\mathcal L}} F = [e^{-tL}]^{\otimes n}F$. But the diffusive dynamics cannot be reversed besides a very special class of initial data $G$. However, we only have to approximately reverse the dynamics and the choice $G_t = \big[ 1-tL+\frac{1}{2}t^2L^2\big]^{\otimes n} F$ turns out to be sufficient. In this case, $e^{t{\mathcal L}}G_t - F = O(N^2t^3)$ and we will show that \begin{equation} \big|Var(e^{t{\mathcal L}}G_t, F)\big|^2 \le \int \frac{|e^{t{\mathcal L}}G_t-F|^2}{e^{t{\mathcal L}}G_t}{\rm d} \mu^{\otimes n} = O(t^6N^2). \label{varg} \end{equation} Furthermore, under some mild regularity condition on $F$, $G_t$ is in the class for which we can establish the local semicircle law \cite{ESY3}. We will call this argument the {\it method of time reversal.} \medskip We now summarize the assumptions on the initial distribution. Let the probability measure of the real and imaginary parts of the off-diagonal matrix elements be of the form $$ \nu({\rm d} x) = e^{-U(x)}{\rm d} x=u(x) \mu({\rm d} x) = e^{- V(x)} e^{- x^2}{\rm d} x $$ with the real function $V(x)= U(x) -x^2$ and similarly for the diagonal elements $\widetilde\nu({\rm d} x) = e^{-\widetilde U(x)}{\rm d} x$, $\widetilde V(x)= \widetilde U(x) -\frac{1}{2}x^2$. Suppose that $V \in C^6 ({\mathbb R})$ and the derivatives satisfy \begin{equation} \sum_{j=1}^6 |V^{(j)}(x)| \le C (1+x^2)^k \label{cond1} \end{equation} for some $k\in {\mathbb N}$ and \begin{equation} \nu(x) \le C' e^{- \delta |x|^2} \label{cond2} \end{equation} with some constants $\delta>0$, $C$ and $C'$. In Section \ref{sec:relax} we explain how to relax this latter condition to exponential decay, \begin{equation} \nu(x) \le C' e^{-C|x|} \label{cond2relax} \end{equation} with some constants $C, C'$ (in fact, some high power law decay is sufficient). We assume that the first moment of $\nu$ is zero and the variance is $\frac{1}{2}$ \begin{equation} \int x \, {\rm d} \nu(x)=0 ,\qquad \int x^2 {\rm d} \nu(x)=\frac{1}{2}. \label{cond3} \end{equation} We assume the conditions \eqref{cond1}, \eqref{cond2} and \eqref{cond3} for $\widetilde V$ as well with the variance changed to 1. Let $p_N(x_1, x_2, \ldots , x_N)$ denote the probability density of eigenvalues and for any $k=1,2,\ldots, N$, let \begin{equation} p^{(k)}_N(x_1, x_2,\ldots x_k):= \int_{{\mathbb R}^{N-k}} p_N(x_1, x_2, \ldots , x_N){\rm d} x_{k+1}\ldots {\rm d} x_N \label{corrfn} \end{equation} be the $k$-point correlation function. With our choice of the variance of $\nu$, the density $p^{(1)}_N(x)$ is supported in $ [- 2, 2]+o(1)$ and in the $N\to\infty$ limit it converges to the Wigner semicircle law given by the density \begin{equation} \varrho_{sc}(x)= \frac{1}{2\pi} \sqrt{(4-x^2)_+}\; . \label{def:sc} \end{equation} \begin{theorem}\label{mainthm} Let the probability measure of the matrix elements satisfy conditions \eqref{cond1}, \eqref{cond2} and \eqref{cond3}. Then for any $u$ with $|u|< 2$ and for any compactly supported and bounded observable $O\in L^\infty_c({\mathbb R}^2)$ we have \begin{equation} \begin{split} \label{maineq} \lim_{N\to \infty} \int_{{\mathbb R}^2} O(\alpha,\beta) \frac{1}{[\varrho_{sc}(u)]^2} p^{(2)}_N\Big(u + \frac{\alpha}{N\varrho_{sc}(u)}, & u + \frac{\beta}{N\varrho_{sc}(u)}\Big) {\rm d} \alpha {\rm d} \beta \\ & = \int_{{\mathbb R}^2} O(\alpha,\beta) \Big[1- \Big(\frac{\sin \pi(\alpha-\beta)}{\pi(\alpha-\beta)}\Big)^2\Big] {\rm d} \alpha {\rm d} \beta. \end{split} \end{equation} \end{theorem} \begin{remark}{\rm With similar methods we can also prove that the higher order rescaled correlation functions, $$ \frac{1}{[\rho_{sc}(u)]^k} \;p^{(k)}_N\Big(u+ \frac{ a_1 } { \rho_{sc}(u) N}, u+\frac{ a_2} { \rho_{sc}(u) N }, \ldots, u+ \frac{ a_k } { \rho_{sc}(u) N} \Big), $$ converge in the weak sense to $\mbox{det}\big( f(a_i-a_j)\big)_{1\le i, j\le k}$ where $f(\tau)= \frac{\sin \pi \tau}{\pi \tau}$, however this statement requires more regularity conditions on $V$. The proof of the sine kernel for $e^{tL}G_t$ immediately implies the convergence of the higher order correlation functions with respect to the evolved measure. To conclude for the higher order correlation functions with respect to $F$, however, one needs to improve the accuracy in \eqref{varg}. This can be achieved by approximating the backward evolution $e^{-t{\mathcal L}}$ to a higher order. For example, using $G_t=\big[ 1- tL + \frac{1}{2!} (-tL)^2 -\ldots \frac{1}{(m-1)!} (-tL)^{m-1}\big]^{\otimes n}F$, will improve the bound \eqref{varg} to $t^{2m}N^2$, modulo $N^{ e}$ corrections, if $V$ is $2m$-times differentiable with bounds similar to \eqref{cond1}. } \end{remark} \begin{remark}{\rm With the same method, the condition that $V \in C^6 ({\mathbb R})$ in Theorem \ref{mainthm} can be relaxed to $V \in C^{4+\varepsilon} ({\mathbb R})$ for any ${ e}>0$. Heuristically, this can be seen by observing that, with $F= v^{\otimes n}$ and $G_t = v_t^{\otimes n} = [1-tL +\frac{1}{2} t^2 L^2]^{\otimes n} F$, one can estimate the difference $|e^{tL} v_t - v| \leq O(t^{2+\varepsilon} L^{2+\varepsilon} v)$, which, compared with the estimate $O(t^3 L^3 v)$ used in \eqref{varg} gives less decay (still enough to deliver the result of Theorem \ref{mainthm}), but requires less regularity of $v$ (only $4+2\varepsilon$ derivatives). A rigorous proof of this fact can be obtained by using part of the evolution $e^{t{\mathcal L}}$ to regularize, on the scale $t$, the initial density.} \end{remark} We now state our result concerning the eigenvalue gap distribution. For any $s>0$ and $|u|<2$ we define the density of eigenvalue pairs with distance less than $s/N\varrho_{sc}(u)$ in the vicinity of $u$ by \begin{equation} \Lambda (u; s, x) = \frac {1}{2 N t_N \varrho_{sc}(u)} \# \Big\{ 1\le j \le N-1\,: \; x_{j+1} - x_j \le \frac{s}{N\varrho_{sc}(u)}, \; |x_j-u| \le t_N \Big\} \label{def:Lambda} \end{equation} where $t_N = N^{-1+ \delta}$ for some $0< \delta< 1$. \begin{theorem}\label{mainthm2} Suppose the probability measure of the matrix elements satisfies conditions \eqref{cond1}, \eqref{cond2} and \eqref{cond3}. Let ${\mathcal K}_\alpha$ be the operator acting on $L^2((0, \alpha))$ with kernel $\frac {\sin \pi(x-y)} {\pi(x-y)}$. Then for any $u$ with $|u|< 2$ and for any $s> 0$ we have \begin{equation}\label{maineq2} \lim_{N\to \infty} \mathcal{ E} \, \Lambda (u; s, x) = \int_0^s p(\alpha)\; {\rm d} \alpha , \qquad p(\alpha) = \frac {{\rm d}^2} {{\rm d} \alpha^2} \det (1 - {\mathcal K}_\alpha), \end{equation} where $\det$ denotes the Fredholm determinant of the compact operator $1-{\mathcal K}_\alpha$. \end{theorem} As a corollary of Theorem \ref{mainthm2}, one can easily show that the probability to find no eigenvalue in the interval $[u,u+\alpha/(\varrho_{sc} (u_0) N)]$, after averaging in an interval of size $N^{-1+\delta}$ around $u_0 \in (-2,2)$, is given by $\det (1 - {\mathcal K}_\alpha)$, same as in the case of GUE (see, e.g., \cite{D}). Note that assuming more regularity on the exponent of the density $u(x) = e^{-U(x)}$, we can get a better bound on the convergence rate (by approximating the backwards evolution $e^{-t{\mathcal L}}$ to a higher order) and avoid therefore the averaging over $u$. \medskip The proof of Theorem \ref{mainthm} and \ref{mainthm2} consists of two main parts. In Section \ref{sec:rev} we prove the approximation \eqref{varg} under precise conditions on the initial distribution $u=e^{-V}$. In Section \ref{sec:timeevolved} we prove the sine kernel for the distribution $e^{t{\mathcal L}}G_t$ with $t=N^{-1+\lambda}$ for any $\lambda>0$, which is the optimal time scale for such a result. Our approach is to recast the formula for the correlation function in \cite{J}, which becomes unstable for $ t \ll 1$, into a more symmetric form (Proposition \ref{prop:rep}) so that it is stable for all time up to $t=N^{-1+\lambda}$. The saddle point analysis can then be achieved with the local semicircle law from \cite{ESY3}. Finally, we complete the proofs of the main theorems in Section \ref{sec:mainthm}. \medskip The method of time reversal described previously is very general and should be applicable to a wide range of models. More significantly, it explains the {\it origin} of the universality, i.e., the universality comes from the ``time reversal''. To summarize, the universality consists of the following observations: (1) The local statistics are determined by the local equilibrium measures. (2) The relaxation to local equilibria takes place in a short time. (3) The original distribution can be well-approximated by the distribution of the Dyson Brownian motion for a short time with initial data given by an approximate inverse flow. To implement this scheme, a key input is to estimate the fluctuations of the empirical density of eigenvalues in short scales. \medskip Shortly after this manuscript appeared on the arXiv, we learned that our main result was also obtained by Tao and Vu in \cite{TV} under essentially no regularity conditions on the initial distribution $\nu$ provided the third moment of $\nu$ vanishes. Some partial results for the Gaussian orthogonal ensembles are also obtained and we refer the reader to the preprint for more details. \medskip {\it Conventions.} We will use the letters $C$ and $c$ to denote general constants whose precise values are irrelevant and they may change from line to line. These constants may depend on the constants in \eqref{cond1}--\eqref{cond3}. \section{Method of Time Reversal}\label{sec:rev} Recall the Ornstein-Uhlenbeck process from \eqref{dy} with the reversible measure $\mu({\rm d} x) = \mu(x){\rm d} x=e^{-x^2} {\rm d} x$. Let $u$ be a positive density with respect to $\mu$, i.e. $\int u {\rm d} \mu=1$ and we write $u(x)=\exp (-V(x))$. \begin{proposition}\label{meascomp} Let $V$ satisfy the conditions \eqref{cond1}, \eqref{cond2} with some $k$ and \eqref{cond3}. Let $\lambda>0$ be sufficiently small and $t= N^{-1+\lambda}$. Define a cutoff initial density as \[ v (x):= e^{- V_c(x) } ,\qquad V_c (x):= V(x) \theta((x-c_N)N^{-\lambda/4k})+d_N, \] where $\theta$ is a smooth cutoff function satisfying $\theta(x) = 1$ for $|x|\le 1$ and $\theta(x) = 0$ for $|x| \ge 2$ and $c_N$ and $d_N$ are chosen such that $v(x){\rm d}\mu(x)$ is a probability density with zero expectation. Denote ${\mathcal L}=L^{\otimes n}$, $ F= u^{\otimes n}$ and $ F_c= v^{\otimes n}$ with $n=N^2$. (i) We have \begin{equation}\label{FFtilde} \int \left | { F_c } - F \right | {\rm d} \mu^{\otimes n} \le C\; e^{- cN^{c}}. \end{equation} with some $c>0$ depending on $k$ and $\lambda$. (ii) $g_t:= (1-tL+\frac{1}{2}t^2L^2)v$ is a probability measure with respect to ${\rm d}\mu$ and for $G_t:=[g_t]^{\otimes n}$ we have \begin{equation} \int \frac{\big|e^{t{\mathcal L}}G_t- F_c\big|^2}{e^{t{\mathcal L}}G_t} {\rm d} \mu^{\otimes n} \le CN^2t^{6-\lambda} \le CN^{-4+8\lambda}, \label{FF} \end{equation} where $C$ depends on $\lambda$ and on the constants in \eqref{cond1}, \eqref{cond2}. \end{proposition} In the formulation of this proposition we have not taken into account that in our application the diagonal elements of the matrix evolve under a differently normalized OU process with generator $\widetilde L = \frac{1}{2}\partial_x^2 - \frac{x}{2}\partial_x$ with invariant measure $e^{-x^2/2}{\rm d} x$. This modification is only notational and does not affect the validity of the estimates \eqref{FFtilde} and \eqref{FF}. \medskip {\it Proof.} {F}rom condition \eqref{cond2} the estimate \eqref{FFtilde} follows directly by noting that the constants $c_N$ and $d_N$ are subexponentially small in $N$. For the proof of \eqref{FF}, we first control the evolution of each matrix element under the OU process \eqref{dy}. We assume that for the initial density $v$ \begin{equation} Lv (x) \le A_1 v (x) , \qquad L^2 v(x)\ge - A_2 v(x), \qquad |L^3v(x)|\le A_3 v(x) \label{AB} \end{equation} hold with some constants positive $A_1, A_2$ and $A_3$. Set $g_t = (1-tL+\frac{1}{2}t^2L^2)v$ for some $t>0$ and note that $g_t$ is a probability density with respect to $\mu$ if \begin{equation} tA_1 + \frac{t^2}{2}A_2\le 1. \label{tAA} \end{equation} Define $$ v_t = e^{tL}g_t= e^{tL} \Big(1-tL+\frac{1}{2}t^2L^2\Big)v, $$ then $$ \partial_t v_t = \frac{1}{2} t^2 L^3 e^{tL}v. $$ Note that by the monotonicity preserving property of the Ornstein-Uhlenbeck kernel and by \eqref{AB}, we have \begin{equation} e^{sL} L^3 v \le A_3 e^{sL} v \le A_3 e^{sA_1} v, \qquad s\ge 0. \label{derest} \end{equation} Here we used the fact that $e^{sL}v\le e^{sA_1}v$ under the first condition in \eqref{AB}, which follows from integrating the inequality $$ \frac{{\rm d}}{{\rm d} s} e^{sL}v = e^{sL}Lv \le A_1 e^{sL}v. $$ In particular \begin{equation} v_t = v + \frac{1}{2}\int_0^t s^2 L^3 e^{sL}v \; {\rm d} s \ge v\Big(1- \frac{1}{6}t^3 A_3 e^{tA_1}\Big) \ge \frac{1}{2}v, \label{uut} \end{equation} assuming \eqref{tAA} and \begin{equation} t^3 A_3 \le 1. \label{tAA1} \end{equation} Then \begin{equation} \begin{split}\label{uu1} \int \frac{(v-v_t)^2}{v_t} \; {\rm d}\mu & = \int v_t^{-1} \Big[ \int_0^t {\rm d} s \; \frac{1}{2}s^2 L^3 e^{sL} v\Big]^2 {\rm d}\mu \\ & \le \frac{t^5}{20} \int_0^t \int v_t^{-1} \big[ e^{sL}L^3 v\big]^2{\rm d}\mu{\rm d} s \\ & \le \frac{t^5}{10}\int_0^t\int v^{-1} \big[ L^3 e^{sL} v\big]^2{\rm d}\mu{\rm d} s \\ & \le \frac{1}{10}\, A_3^2 t^6 e^{2tA_1}\le e^{CA_3^2 t^6}-1, \end{split} \end{equation} where we used \eqref{uut}, \eqref{derest} and finally \eqref{tAA}. \medskip Now we consider the evolution of the product density $F_c= v^{\otimes n}$, note that $\int F_c \; {\rm d}\mu^{\otimes n}=1$. Applying the same procedure to each variable, we have \begin{equation}\label{CA} \int \frac { (e^{t{\mathcal L}}G_t-F_c)^2 } {e^{t{\mathcal L}}G_t } \; {\rm d} \mu^{\otimes n} \le e^{CA_3^2 t^6n}-1\le CA_3^2 t^6n \, \end{equation} as long as $A_3^2 t^6n$ is bounded. In our application $n=N^2$, thus \eqref{CA} will imply \eqref{FF} provided that \begin{equation} A_3 \le Ct^{-\lambda/2} \label{A3} \end{equation} which will also guarantee \eqref{tAA1}. It is straightforward to check that the density $v(x)$ satisfies \eqref{AB} with constants $A_j$ subject to \eqref{tAA} and \eqref{A3}. This completes the proof. \hfill\fbox{}\par\vspace{0.3mm} \section{Sine kernel for the time evolved measure} \label{sec:timeevolved} We use the contour integral representation for the correlation functions of the eigenvalues of a matrix of the form $H=\widehat H + aV$, where $V$ is a GUE matrix \cite{BH, J}. We will apply this result for the matrix \begin{equation} e^{t{\mathcal L}}G_t = e^{-t/2}\big[ G_t + (e^t-1)^{1/2} V\big] \label{resc} \end{equation} where, apart from a trivial prefactor $e^{-t/2}$, $G_t$ plays the role of $\widehat H$ and $a= (e^t-1)^{1/2}\approx t^{1/2}$. In order to be able to use the formula given in Proposition 1.1 of \cite{J} to analyze $H=\widehat H + aV$, we rescale the variance of ${\rm d}\nu$ from $\frac{1}{2}$ to $\frac{1}{8} +\frac{1}{2}a^2$ which changes the semicircle law for $H=\widehat H +aV$ to \begin{equation} \varrho(u) : = \frac{2}{\pi(1+4a^2)}\sqrt{(1+4a^2-u^2)_+}. \label{def:varrho} \end{equation} In particular, the support changes from $[-2,2]$ to $[-\sqrt{1+4a^2},\sqrt{1+4a^2}]$. Since eventually $a$ will go to zero, the condition $|u|< 2$ in Theorem \eqref{mainthm} to be away from the spectral edge changes to the condition $|u|<1$ which we assume in the sequel. The semicircle law for $\widehat H$ will also change from the one given in \eqref{def:sc} to \begin{equation} \varrho_{sc}(v) := \frac{2}{\pi}\sqrt{(1-v^2)_+}. \label{def:sc1} \end{equation} In the rest of this Section we will use \eqref{def:sc1}. The main result of this section is \begin{proposition}\label{sinjoh} Let $\widetilde p^{(m)}_{N}$ be the $m$-point eigenvalue correlation function for the ensemble $\widehat H + aV$ defined above and let $O: {\mathbb R}^m\to {\mathbb R}$ be a compactly supported bounded observable function. Then for any $|u|<1$ and $a:= N^{-1/2+\lambda/2}$ we have \begin{equation} \begin{split} \lim_{N\to \infty} \int_{{\mathbb R}^m} O(\alpha_1, \ldots, \alpha_m) & \frac{1}{[\varrho(u)]^m} \widetilde p^{(m)}_{N}\Big(u + \frac{\alpha_1}{N\varrho(u)},\ldots, u + \frac{\alpha_m}{N\varrho(u)}\Big) {\rm d} \alpha_1\ldots {\rm d} \alpha_m \\ & = \int_{{\mathbb R}^m} O(\alpha_1, \ldots, \alpha_m) \det\Big(\frac{\sin \pi(\alpha_i-\alpha_j)}{\pi(\alpha_i-\alpha_j)}\Big)_{i,j=1}^m {\rm d} \alpha_1\ldots {\rm d} \alpha_m. \end{split} \end{equation} \end{proposition} {\it Proof.} Using Proposition 1.1 of \cite{J}, the (symmetrized) distribution of the eigenvalues $x=(x_1, \ldots, x_N)$ of $H=\widehat H + aV$ for any fixed $\widehat H$ is given by \begin{equation} q_S(x,y) : = \frac{1}{(2\pi S)^{N/2}} \frac{\Delta_N(x)}{\Delta_N(y)} \mbox{det}\big( e^{-(x_j-y_k)^2/2S}\big)_{j,k=1}^N, \label{def:qs} \end{equation} where $y=(y_1, \ldots y_N)$ is the eigenvalues of the Wigner matrix $\widehat H$ with the choice of $S= a^2/N$. Note that \begin{equation}\label{allsum} \begin{split}\int_{{\mathbb R}^m}& O(\alpha_1, \ldots, \alpha_m) \frac{1}{[\varrho(u)]^m} \widetilde p^{(m)}_{N}\Big(u + \frac{\alpha_1}{N\varrho(u)},\ldots, u + \frac{\alpha_m}{N\varrho(u)}\Big) {\rm d} \alpha_1\ldots {\rm d} \alpha_m \\ & = \widehat{\mathbb E}}\def\PP{{\mathbb P} \int_{{\mathbb R}^N} \sum_{i_1, i_2, \ldots, i_m=1}^N O\Big( N\varrho(u)(x_{i_1}-u), \ldots, N\varrho(u)(x_{i_m}-u)\Big) q_S(x,y){\rm d} x_1 \ldots {\rm d} x_N, \end{split} \end{equation} where $\widehat {\mathbb E}}\def\PP{{\mathbb P}$ denotes the expectation is w.r.t. the $\widehat H$ ensemble. Since $O$ is bounded and the sum contains $N^m$ terms, we thus need to compute the limit of the correlation functions of $q_S(x,y)$ in the $x=(x_1, \ldots, x_N)$ variables for a large set ${\mathcal Y}_N\subset {\mathbb R}^N$ of fixed $y=(y_1, \ldots , y_N)$ so that $$ \widehat \PP (y(\widehat H)\not\in{\mathcal Y}_N) = o(N^{-m}). $$ where $y(\widehat H) =(y_1(\widehat H), \ldots, y_N(\widehat H))$ are the eigenvalues of the Wigner matrix $\widehat H$. We will choose ${\mathcal Y}_N$ to be the event that the points $y=(y_1, \ldots , y_N)$ follow the semicircle law \eqref{def:sc1}. The limit of the correlation functions of $q_S(x,y)$ will be computed starting from the next section in Proposition \ref{prop:local}. More precisely, let \begin{equation} \eta := \eta_0t\sqrt{1-u^2} \label{eta} \end{equation} with some sufficiently small $\eta_0<1$ and we set \begin{equation} {\mathcal Y}_N: = \Big\{ y\in {\mathbb R}^N\; : \; \; \sup_{Im z\ge\eta} \Big| \frac{1}{N}\sum_j \frac{1}{z-y_j} - \int \frac{\varrho_{sc}(r){\rm d} r}{z-r}\Big|\le N^{-\lambda/4} \;\; \mbox{and} \quad\sup_j |y_j|\le K\Big\} \label{defY} \end{equation} for some large constant $K$. By Theorem 3.1 of \cite{ESY3} we then have \begin{equation} \widehat\PP (y(\widehat H)\not\in{\mathcal Y}_N) \le Ce^{-cN^{\lambda/4}} \label{good} \end{equation} (after taking the supremum over all energies, which can be controlled taking energies on a grid of spacing $\eta$). Note that the variance of the matrix elements in \cite{ESY3} was different (see remark at the beginning of Section \ref{sec:con}) but this does not change the estimates. The condition {\bf C1)} of \cite{ESY3} on the Gaussian decay for the initial density $g_t\mu =(1- tL+\frac{1}{2}t^2L^2)v\mu$ is clearly satisfied by \eqref{AB} and \eqref{cond2}. Combining the estimate \eqref{good} with Proposition \ref{prop:local} and with the argument after \eqref{allsum}, we have proved Proposition \ref{sinjoh}.\hfill\fbox{}\par\vspace{0.3mm} \medskip \subsection{Contour integral representation of the correlation function} \label{sec:con} We compute the correlation functions of $q_S(x;y)$ in $x$, for any fixed $y\in{\mathcal Y}_N$: \begin{equation} \widetilde p_{N,y, S}^{(m)}(x_1, \ldots , x_m) = \int_{{\mathbb R}^{N-m}} q_S(x_1, \ldots ,x_N; y) {\rm d} x_{m+1} \ldots {\rm d} x_N. \label{tildecorr} \end{equation} Note that this definition of the correlation functions differs from the definition of $R_m^N$ given in \cite{J}; the relation being $$ R_m^N(x_1, \ldots, x_m; y) = \frac{N!}{(N-m)!}\widetilde p_{N,y, S}^{(m)}(x_1, \ldots , x_m) . $$ The following representation is based on the formula in \cite{J}, but it is more stable and suitable for analysis for very short time. \begin{proposition}\label{prop:rep} The correlation functions can be represented as \begin{equation} R_m^N (x_1, \dots ,x_m ; y) = \mbox{det} \big( {\mathcal K}_N^S(x_i, x_j; y)\big)_{i,j=1}^m, \label{def:R} \end{equation} where \begin{equation} \begin{split}\label{ck} {\mathcal K}_N^S (u,v;y)= & \frac{1}{(2\pi i)^2 (v-u)S} \int_\gamma {\rm d} z\int_\Gamma {\rm d} w (e^{-(v-u)(w-r)/S} -1) \prod_{j=1}^N \frac{w-y_j}{z-y_j} \\ & \times \frac{1}{w-r}\Big( w-r+z-u - S\sum_j \frac{y_j-r}{(w-y_j)(z-y_j)}\Big) e^{(w^2-2uw -z^2+2uz)/2S}, \end{split} \end{equation} where $r\in {\mathbb R}$ is arbitrary and $\gamma= \gamma_+\cup\gamma_-$ is the union of two lines $\gamma_+:s\to -s + i{\omega}$ and $\gamma_-:s\to s-i{\omega}$ ($s\in {\mathbb R}$) for any fixed ${\omega}>0$ and $\Gamma$ is $s\to is$, $s\in {\mathbb R}$. \end{proposition} We note that $\Gamma$ can be shifted to any vertical line since the integrand is an entire function in $w$ and has a Gaussian decay as $|Im \; w| \to \infty$. The constants $r \in {\mathbb R}$ and ${\omega}>0$ (appearing in the definition of the contour $\gamma$ in $K_N$) can be arbitrary and will be specified later. \medskip {\it Proof of Proposition \ref{prop:rep}.} {F}rom Eq. (2.18) in \cite{J}, we have \begin{equation} R_m^N (x_1, \dots ,x_m ; y) = \mbox{det} \big( K_N^S(x_i, x_j; y)\big)_{i,j=1}^m, \label{R2} \end{equation} with \[ K_N^S (u,v;y) = K_N^S(u,v):= \frac{e^{(v^2-u^2)/2S}}{(2\pi i)^2 S} \int_{\widetilde\gamma} {\rm d} z \int_{\Gamma_L} dw \, e^{(w^2-2wv -z^2+2zu)/2S} \frac{1}{w-z} \prod_{j=1}^N \frac{w-y_j}{z-y_j}\, , \] where $\widetilde\gamma$ is a contour around all the $y_j$, $j=1, \dots ,N$, and $\Gamma_L$ is the vertical line ${\mathbb R } \ni s \to L +is$, for a fixed $L$ so large that $\widetilde\gamma$ and $\Gamma_L$ do not intersect. Eq. (\ref{R2}) remains invariant if we replace $K_N$ by \[ {\mathcal K}^S_N (u,v) = e^{r(v-u)/S} e^{(u^2-v^2)/2S} K^S_N (u,v) = \frac{1}{(2\pi i)^2 S} \int_{\widetilde\gamma} {\rm d} z \int_{\Gamma_L} {\rm d} w \, \frac{e^{r(v-u)/S}}{w-z} \, e^{(H_v(w) - H_u (z))/S} \] for arbitrary $r\in {\mathbb R}$. Here we defined $$ H_v (w) := \frac{w^2}{2} -vw +S \sum_{j=1}^N \log (w-y_j)\, . $$ The change of variables $w= (1-\beta) r + \beta w'$, $z=(1-\beta) r +\beta z'$ leads to \[ {\mathcal K}^S_N (u,v) := \frac{\beta}{(2\pi i)^2 S} \int_{\widetilde\gamma} {\rm d} z' \int_{\Gamma_L} {\rm d} w' \, \frac{e^{r(v-u)/S}}{w-z} \, e^{(H_v((1-\beta)r+\beta w') - H_u ((1-\beta) r + \beta z'))/S} \] for every $\beta$. Taking the derivative in $\beta$ at $\beta=1$, and removing the primes from the new integration variables, we find the identity \[ 0 = {\mathcal K}^S_N (u,v) + \frac{1}{(2\pi i)^2 S} \int_{\widetilde\gamma} {\rm d} z \int_{\Gamma_L} {\rm d} w \, e^{r(v-u)/S} \, e^{(H_v(w) - H_u (z))/S} \, \frac{1}{S} \left[ \frac{(w-r) H'_v (w) - (z-r) H'_u (z)}{w-z} \right] . \] Using that $H'_v (w) = w-v + S \sum_{j=1}^N 1/(w-y_j)$, we find \[ \frac{(w-r) H'_v (w) - (z-r) H'_u (z)}{w-z} = \frac{(w-r)(u-v)}{w-z} + (w-r) \frac{H'_u (w) - H'_u (z)}{w-z} +H'_u(z) \] and thus \[ \begin{split} 0 = \, &{\mathcal K}^S_N (u,v) + \frac{(u-v)}{(2\pi i)^2 S} \int_{\widetilde\gamma} {\rm d} z \int_{\Gamma_L} {\rm d} w \, e^{r(v-u)/S} \, \frac{w-r}{S(w-z)} e^{(H_v(w) - H_u (z))/S} \, \\ &+ \frac{1}{(2\pi i)^2 S} \int_{\widetilde\gamma} {\rm d} z \int_{\Gamma_L} {\rm d} w \, e^{r(v-u)/S} \, e^{(H_v(w) - H_u (z))/S} \, \frac{1}{S} \left[ (w-r) \frac{H'_u (w) - H'_u (z)}{w-z} + H'_u (z) \right]\,. \end{split} \] The second term on the r.h.s. is just $(v-u) \frac{\partial}{\partial v} {\mathcal K}_N (u,v)$. Therefore \[ \begin{split} \frac{\partial}{\partial v} & \left[ (v-u) {\mathcal K}^S_N (u,v)\right] \\ = \; & \frac{-1}{(2\pi i)^2 S} \int_{\widetilde\gamma} {\rm d} z \int_{\Gamma_L} {\rm d} w \, e^{r(v-u)/S} \, e^{(H_v(w) - H_u (z))/S} \, \frac{1}{S} \left[ (w-r) \frac{H'_u (w) - H'_u (z)}{w-z} + H'_u (z) \right] \\ = \; &\frac{1}{(2\pi i)^2 S} \int_{\widetilde\gamma} {\rm d} z \int_{\Gamma_L} {\rm d} w \, e^{r(v-u)/S} \, e^{(H_v(w) - H_u ( z))/S} \, \frac{1}{S} \left[ w-r +z-u -S \sum_{j=1}^N \frac{y_j -r}{(z-y_j)(w-y_j)} \right] . \end{split} \] Integrating back over $v$, starting from $u$, we find that \[ \begin{split} (v-u) {\mathcal K}^S_N (u,v) = \frac{1}{(2\pi i)^2 S} \int_{\widetilde\gamma} {\rm d} z \int_{\Gamma_L} {\rm d} w \,& \left(e^{-(w-r)(v-u)/S} - 1 \right) \, e^{(w^2 - 2uw - z^2 + 2uz)/2S} \prod_{j=1}^N \frac{w-y_j}{z-y_j} \\ &\times \frac{1}{(w-r)} \left[ w-r +z-u -S \sum_{j=1}^N \frac{y_j -r}{(z-y_j)(w-y_j)} \right] \,. \end{split} \] At this point the contours of integration can be modified; since the singularity $1/(w-z)$ has been removed, they are now allowed to cross. This completes the proof of the proposition. \hfill\fbox{}\par\vspace{0.3mm} \begin{proposition}\label{prop:local} Let $\kappa>0$. For any sequence $y=y^{(N)}\in {\mathcal Y}_N$ with the choice $S= N^{-2+\lambda}$ we have \begin{equation} \lim_{N\to \infty} \frac{1}{N\varrho(u)} {\mathcal K}_N^S\Big( u + \frac{\alpha}{N\varrho(u)} , u+ \frac{\beta}{N\varrho(u)} ; y\Big) = \frac{\sin \pi(\alpha-\beta)}{\pi(\alpha-\beta)} \label{convpoint} \end{equation} uniformly for $|u|\le 1-\kappa$ and for $\alpha,\beta$ in a compact set. Moreover, the correlation functions satisfy \begin{equation} \lim_{N\to\infty} \frac{1}{[\varrho(u)]^{m}} \widetilde p_{N,y, S}^{(m)} \Big( u+\frac{\alpha_1}{N\varrho(u)}, \ldots , u+\frac{\alpha_m}{N\varrho(u)}\Big) = \det \Big( \frac{\sin \pi(\alpha_i-\alpha_j)}{\pi(\alpha_i-\alpha_j)} \Big)_{i,j=1}^m, \label{pmconv} \end{equation} uniformly for $|u|\le 1-\kappa$ and for $\alpha_1, \ldots, \alpha_m$ in a compact set. \end{proposition} {\it Proof.} The statement in \eqref{pmconv} follows directly from \eqref{convpoint} and \eqref{def:R}, so it is sufficient to prove \eqref{convpoint}. We will prove \eqref{convpoint} in the form $$ \frac{1}{N\varrho(u)} {\mathcal K}_N^S\Big( u^{(N)} , u^{(N)}+ \frac{\tau}{N\varrho(u)} ; y\Big) \to \frac{\sin \pi\tau}{\pi\tau} $$ for any sequence $u^{(N)}$ with $|u^{(N)} - u_*| \leq C/N$ and for every fixed $u_*$ with $|u_*| <1-\kappa$. In order to get (\ref{convpoint}), we take $u^{(N)} = u_* +\alpha/N\varrho (u_*)$ with $u_* = u$. Set \begin{equation} \varrho = \varrho(u_*), \quad t=a^2 = N^{-1+\lambda}. \label{not} \end{equation} {F}rom (\ref{ck}), we find \begin{equation} \frac{1}{N\varrho}{\mathcal K}_N\Big(u^{(N)},u^{(N)}+\frac{\tau}{N\varrho}; y\Big) = N \int_\gamma \frac{{\rm d} z}{2\pi i}\int_\Gamma \frac{{\rm d} w}{2\pi i} h_N(w) g_N(z,w) e^{N(f_N(w)-f_N(z))} \label{repr} \end{equation} with \begin{equation} f_N(z) = \frac{1}{2t}(z^2-2u^{(N)} z) +\frac{1}{N}\sum_j\log(z-y_j) \label{def:fN} \end{equation} \begin{equation} g_N(z,w) = \frac{1}{t(w-r)}[w-r+z-u^{(N)}] - \frac{1}{N(w-r)}\sum_j \frac{y_j-r}{(w-y_j)(z-y_j)} \label{def:gN} \end{equation} \begin{equation} \begin{split} h_N( w) & = \frac{1}{\tau} \Big( e^{-\tau (w-r)/t\varrho} - 1 \Big) \label{def:hN} \end{split} \end{equation} with \eqref{not}. We will need the identity \begin{equation} g_N(z,w) = \frac{1}{w-r} f_N'(z) + \frac{f'_N(z)-f_N'(w)}{z-w}. \label{id} \end{equation} \subsection{Saddle points} For brevity, we will drop the superscript and denote $u^{(N)}$ by $u$ in the sequel and we fix $|u|<1$. We first determine the critical points of $f_N$, i.e. we solve \begin{equation} f_N'(z) = t^{-1}(z-u) +\frac{1}{N}\sum_j\frac{1}{z-y_j} =0. \label{fNroot} \end{equation} This is equivalent to finding the zeros of a polynomial of degree $N+1$. There are $N-1$ real roots and two complex roots, called $q_N^\pm$, that are complex conjugates of each other $$ f_N'(q_N^\pm)=0. $$ We will work with $q_N:= q_N^+$, the analysis of the other saddle is analogous. Clearly $|Re \; q_N |\le K$ for some large $K$. We can define $$ f(z) =\frac{1}{2t}(z^2-2uz) + \int_{\mathbb R}\varrho_{sc}(y) \log (z-y) {\rm d} y $$ and instead of \eqref{fNroot}, we can solve \begin{equation} f'(z) = t^{-1}(z-u) + 2(z-\sqrt{z^2-1}) =0. \label{fprime} \end{equation} The solutions of this latter equation (for small $t$) are given by \begin{equation} q^\pm =\frac{(2t+1)u\pm 2ti\sqrt{1+4t-u^2}}{1+4t} = u(1-2t)\pm 2ti\sqrt{1-u^2} +O(t^2), \label{qsol} \end{equation} and thus in particular $$ Im (q^\pm) = \pm O(t). $$ We have $$ f''(q) = \frac{1}{t}+2 -\frac{2q}{\sqrt{q^2-1}} $$ and $$ f''(q^\pm) = \frac{1}{t}+2 \pm \frac{2ui}{\sqrt{1-u^2}} + O(t) $$ where we also used the equation \eqref{fprime} for $q^\pm$. We set $q=q^+$. We need to know that $f_N''\ne 0$ at the $q_N$ saddle. $$ f_N''(q_N) = \frac{1}{t} - \frac{1}{N}\sum_j \frac{1}{(q_N-y_j)^2}. $$ It follows from \eqref{defY} that for $y\in {\mathcal Y}$ we have \begin{equation} \sup_{Im z \ge \eta} |f_N^{(\ell)}(z) - f^{(\ell)}(z)|\le \frac{C}{t^{\ell-1}N^{\lambda/4}} \label{ellgood} \end{equation} by contour integration. \bigskip We compare $q$ and $q_N$. We have from \eqref{fNroot} \begin{equation} q_N= F_N(q_N):= u- \frac{t}{N}\sum_j\frac{1}{q_N-y_j} , \qquad Im \; q_N>0 \label{qNeq} \end{equation} and \begin{equation} q = F(q):=u -t \int \frac{\varrho_{sc}(y){\rm d} y}{q-y} = u - 2t(q-\sqrt{q^2-1}) \label{qeq} \end{equation} First we show that for the only solution to \eqref{qNeq} with positive imaginary part we have $Im \; q_N\ge \eta$. This is a fixed point argument. Define the compact set $$ \Xi :=\Big\{ z\; : \; |Re \; z - u|\le Ct, \; \eta\le Im \; z \le Ct \Big\} $$ for some large constant $C$. Since $y\in{\mathcal Y}$, we know that $$ \sup_{Im\; z\ge \eta} |F_N(z) - F(z)| \le \frac{Ct}{N^{\lambda/4}}. $$ For $z\in \Xi$ clearly $$ Re \; F(z)= u + O(t) $$ and $$ Im \; F(z) = 2t\sqrt{1-u^2} + O(t^2) $$ thus $$ Re \; F_N(z) = u+O(t), \qquad Im \; F_N(z) = 2t\sqrt{1-u^2} + o(t) $$ so $F_N(\Xi)\subset \Xi$. Now we compute, for $z\in\Xi$, $$ F_N'(z) = \frac{t}{N} \sum_j\frac{1}{(z-y_j)^2} = F'(z) + O(N^{-\lambda/4}) $$ (here we used \eqref{ellgood} with $\ell=2$ and observed that $F_N' = tf_N''$), and $$ F'(z) = -2t\Big[ 1 - \frac{z}{\sqrt{z^2-1}}\Big] $$ with $F'(z)= O(t)$ if $z\in \Xi$. Thus $|F_N'(z)|\le 1/2$ for $z\in \Xi$, so $F_N$ is a contraction on $\Xi$ and thus \eqref{qNeq} has a unique solution, which is $q_N$. Comparing the two solutions, we have $$ |q_N-q| = |F_N(q_N)- F(q)| \le \sup_{z\in\Xi} |F_N'(z)| |q_N-q| + |F_N(q) - F(q)|. $$ Since $y\in {\mathcal Y}$, we get $$ |F_N(q) - F(q)| \le t \Bigg| \frac{1}{N}\sum \frac{1}{q_N-y_j} - \int \frac{\varrho_{sc}(y){\rm d} y}{z-y}\Bigg| = \frac{Ct}{N^{\lambda/4}} $$ thus \begin{equation} |q_N-q| \le \frac{Ct}{ N^{\lambda/4}}. \label{qqN} \end{equation} \subsection{Evaluating the integrals} Using Laplace asymptotics, we compute the integrals in \eqref{repr}. We choose the horizontal curves $\gamma_\pm$ to pass through the two saddles $q^\pm= a\pm ib$ of $f$ (see \eqref{qsol}), i.e. we set ${\omega} = b$ (see the definition of $\gamma^\pm$ after \eqref{ck}). The vertical line $\Gamma$ is shifted to pass through the saddles, i.e. $Re\; \Gamma = a$. Moreover, if necessary, we deform $\Gamma$ in a $O(N^{-1})$-neighborhood of $a$ so that $\min_j \mbox{dist}(\Gamma, y_j) \ge N^{-2}$ and $\mbox{dist}(\Gamma, a_N) \ge N^{-2}$; this is always possible. We split the integrals as follows $$ \frac{1}{N\varrho}{\mathcal K}_N(u,u+\frac{\tau}{N\varrho}; y) = A^{++}+A^{+-}+A^{-+}+A^{--} $$ according to whether $Im \; z$ and $Im\; w$ are positive or negative, e.g. \begin{equation} A^{\pm \pm}:= N \int_{\gamma^\pm} \frac{{\rm d} z}{2\pi i}\int_{\Gamma^\pm} \frac{{\rm d} w}{2\pi i} h_N(w) g_N(z,w) e^{N(f_N(w)-f_N(z))} \label{A+++} \end{equation} where $\Gamma_+=\Gamma \cap \{ w\; : \; Im \, w\ge0\}$ and $\Gamma_-=\Gamma \cap \{ w\; : \; Im \, w\le0\}$. We will work on $A^{++}$, the other three integrals are treated similarly. The main contribution to the integral $A^{++}$ will come from an ${ e}$-neighborhood in $z$ and $w$ of the saddle point $q_N=q_N^+$. The radius ${ e}$ will be chosen such that after a local change of variable $f$ and $f_N$ become quadratic near the saddle. We now explain the local change of variable. Since $f(z):{\mathbb C}\to{\mathbb C}$ is an analytic function with $f'(q)=0$ and $f''(q)\ne 0$ for $q=q^+$, there exists an invertible analytic map $\phi: z\to \phi(z)$ in $$ D_{ e}: = \{ z\; : \; |z-q|\le{ e}\} $$ with $\phi(q)=0$, $\phi'(q)=\sqrt{tf''(q)}$ such that \begin{equation} f(z) = f(q)+ \frac{1}{2t} [\phi(z)]^2 \qquad z\in D_{ e} \label{morse} \end{equation} with \begin{equation} \phi(z) = \sqrt{tf''(q)}(z-q)(1+ O(z-q)), \qquad z\in D_{ e}. \label{phi} \end{equation} Here ${ e}$ must satisfy \begin{equation} { e} \le \frac{|f''(q)|}{2\sup_{D_{ e}} |f'''(z)|} \label{eps} \end{equation} we also assume that ${ e} \le \eta$. We will choose ${ e}=ct$ with a small $c$, depending on $u$. We have \begin{equation} f''(q) = t^{-1} + O(1), \qquad \sup_{D_{ e}} |f'''(z)| \le C \label{fder} \end{equation} from the explicit formula \eqref{fprime}, so \eqref{eps} is satisfied. Note that $\phi'(q)= \sqrt{tf''(q)} = 1+ O(t)$. We have a similar change of variables for $f_N$, i.e. $\phi_N$ with the properties that \begin{equation} \phi_N(q_N)=0, \qquad \phi'_N(q_N) = \sqrt{tf''_N(q_N)} = 1 + O(t) \label{phider} \end{equation} and \begin{equation} f_N(z) = f_N(q_N)+ \frac{1}{2t}[\phi_N(z)]^2 \qquad z\in D_{{ e},N} = \{ z\; : \; |z-q_N|\le { e}\} \label{morseN} \end{equation} with \begin{equation} \phi_N(z) = \sqrt{tf''_N(q_N)}(z-q_N)(1+ O(z-q_N)), \qquad z\in D_{{ e},N}. \label{phiN} \end{equation} This holds if $$ { e} \le \frac{c |f''_N(q_N)|}{\sup_{D_{{ e},N}} |f'''_N(z)|}. $$ For $y\in {\mathcal Y}$, we have $f''_N(q_N) = t^{-1}\big[1+O(N^{-\lambda/4})\big]$ and $|f'''_N(z)|\le Ct^{-2}N^{-\lambda/4}$ by \eqref{ellgood} and \eqref{fder}, thus we can choose ${ e} = ct$ for some small constant $c\le \sqrt{1-u^2}$. Moreover we have $|\phi_N(z)|\le C|z-q_N|$ for $|z-q|\le ct$, so by Cauchy formula $|\phi'_N(z)| \le C$ and $|\phi''_N(z)|\le Ct^{-1}$ for $|z-q|\le ct$ (maybe after reducing $c$). The same formulas hold for $\phi$ as well. We also have $$ | \phi'(q) - \phi'_N(q_N)| \le \Big| \sqrt{t f''(q)} - \sqrt{t f_N''(q)}\Big| + \Big| \sqrt{t f_N''(q)} - \sqrt{t f_N''(q_N)}\Big| \le CN^{-\lambda/4}, $$ where in the first term we used \eqref{ellgood} and in the second we used $|f'''_N(z)|\le Ct^{-2}$. {F}rom \eqref{phi} and \eqref{phiN} we have \begin{equation} |\phi(z)- \phi_N(z) |\le | \phi'(q) - \phi'(q_N)||z-q| + |\phi'(q_N)||q-q_N| + C|z-q|^2 \le Ct N^{-\lambda/4} \label{phip1} \end{equation} and then by contour integration \begin{equation} |\phi'(z) - \phi_N'(z)|\le Ct N^{-\lambda/4} \label{phip} \end{equation} for any $z$ with $|z-q|\le ct$. Therefore the maps $\phi$ and $\phi_N$ are $C^1$-close within $D_{ e}$ and both of them are $C^1$-close to the shift map $z\to z-q$. \begin{figure} \begin{center} \epsfig{file=saddle.eps,scale=.75} \end{center} \caption{Integration contours around the saddle $q_N=q_N^+$}\label{fig:saddle} \end{figure} \bigskip We first consider the $z$ integration. Recall that $q_N=q^+_N=a_N+ib_N$ from \eqref{qsol}. We fix a small positive constant $c_1\ll 1$ and we define the domains $$ \Omega: = \Big\{ z=x+iy\; : \; |x-a_N|\ge { e}, |y-b_N|\le c_1{ e}/2\Big\} $$ $$ \Omega^*: = \Big\{ z=x+iy\; : \; |x-a_N|\ge { e}/2, |y-b_N|\le c_1{ e}/2\Big\} $$ and $$ W: = \Big\{ z= x+iy \; : \; |x-a_N|\le 2{ e}\; , \; |y-b_N|\le c_1 |x-a_N| \Big\} $$ where ${ e}=ct$. Recall that $\gamma^+$ was the horizontal line going through $q=a+ib$, the saddle of $f$. We will deform $\gamma^+$ to $\gamma_N^+$ so that it passes through $q_N$ and it matches with $\gamma^+$ at the points $a_N\pm 2{ e} + ib$. Within the regime $|Re \; z -a_N|\le { e}$, we define $\gamma_N^+$ by the requirement that $Im \; \phi_N =0$ along $\gamma_N^+$. Since $\phi_N(z)$ is close to the map $z\to z-q_N$ by \eqref{phiN}, clearly $\gamma_N^+$ is almost horizontal curve in small neighborhood of $q_N$, so it remains in $W$ until it reaches the vertical lines $|Re\; z-a_N|= { e}$. In the regime ${ e}\le|Re \;z -a_N |\le 2{ e}$, we require that $\gamma_N^+$ matches with $\gamma^+$ at the points $a_N\pm 2{ e} + ib$ and it remains in the wedge $W$. In the outside regime, $|Re \; z- a_N|\ge 2{ e}$ we set $\gamma_N^+=\gamma^+$, in particular $\gamma_N^+ \subset W\cup \Omega$ (see Fig. \ref{fig:saddle}). \begin{lemma}\label{lm:z} We have \begin{equation} Re \big[f_N(x+iy)-f_N(q_N)\big] \ge \frac{1}{12 t}(x-a)^2 \qquad \mbox{for}\;\; x+iy\in \Omega \label{reflower} \end{equation} and \begin{equation} Re \big[f_N(z)-f_N(q_N)\big]\ge 0 \qquad \mbox{for} \quad z\in W \; \mbox{and}\quad |Re\; z -a|\le { e} \label{reflowerin} \end{equation} \end{lemma} {\it Proof.} The second statement \eqref{reflowerin} follows from the normal form \eqref{morseN} and the fact that for $z \in W$ we have $|Im \; (z-q_N)| \le c_1 |Re (z-q_N)|$, i.e. $Re (z-q_N)^2 \ge 0$, and $\phi_N$ is close to the map $z\to z-q_N$ in $W$, so $Re [\phi_N(z)]^2 \ge0$ for $z\in W$. For the first statement, we assume $x\ge a$, the case $x\le a$ is analogous. We get by explicit calculation $$ Re \; f'(x+iy) \ge \frac{1}{2t}(x-a), \quad \mbox{for}\;\; x+iy \in\Omega^*, \; x\ge a. $$ Using \eqref{ellgood} for $\ell =1$, we have \begin{equation} Re\; \partial_x f_N(x+iy) \ge Re \; f'(x+iy) - CN^{-\lambda/4} \ge \frac{1}{3t}(x-a) \quad \mbox{for}\;\; x+iy \in\Omega^*\; x\ge a \label{fx} \end{equation} (the error is absorbed since $|x-a|\ge ct/2$ for $x+iy \in\Omega^*$). Since $Re [f_N(z) - f_N(q_N)]\ge 0$ on the vertical lines $|x-a|={ e}/2$, $|y-b|\le c_1{ e}/2$, we can integrate the inequality \eqref{fx} to obtain \eqref{reflower}. \hfill\fbox{}\par\vspace{0.3mm} \bigskip In order to estimate the $w$ integration along $\Gamma^+$ parametrized as $a+is$, $s\ge 0$, we analyze the behaviour of $Re \, f$ along $\Gamma^+$. For $|x-a|\le Ct$ and $y\in {\mathbb R}$ we first compute $$ Re\; \partial_y f_N(x+iy) = - Im\; f'_N(x+iy) = - Im \; f'(x+iy) + O( N^{-\lambda/4}) $$ which holds for $|y|\ge \eta$. By explicit computation, and using $f'(a+ib)=0$, $$ - Im \; f'(x+iy) = - (y-b)\Big( \frac{1}{t} + 2 \Big) +O(t) +O(y^2) $$ if $|y|\le \frac{1}{2}\sqrt{1-u^2}$, $|x-a|\le Ct$ for some large $C$. Thus we have $$ Re\; \partial_y f_N(x+iy) \le -\frac{y-b}{3t}, \qquad \eta\le |y|\le \frac{1}{2} \sqrt{1-u^2}\;\;\mbox{and} \; \; y-b\ge { e}/2, $$ where ${ e}=ct$ with a small $c$ as before and a similar lower bound holds for $y-b\le -{ e}/2$. Defining $$ \widetilde\Omega:= \Big\{ w=x+iy\; : \; { e}\le |y-b_N|,\; \eta\le y\le \frac{1}{2} \sqrt{1-u^2},\;\; \; |x-a_N|\le c_1{ e}/2\Big\} $$ $$ \widetilde W: = \Big\{ w=x+iy\; : \; |y-b_N|\le 2{ e}, \; |x-a_N|\le c_1|y-b_N|\Big\} $$ analogously to $W$ before, we easily obtain \begin{equation} Re \,\big[ f_N(x+iy)- f_N(q_N)\big] \le -\frac{1}{18t}(y-b)^2 \quad \mbox{for} \;\; x+iy\in \widetilde \Omega \label{wlow} \end{equation} and \begin{equation} Re \,\big[ f_N(w)- f_N(q_N) \big] \le 0 \;\; w\in \widetilde W, \; \mbox{and} \; |Im\, w-b|\le{ e}, \label{wlowin} \end{equation} similarly to the proof of Lemma \ref{lm:z}. The regimes $0\le y\le\eta$ and $y\ge\frac{1}{2} \sqrt{1-u^2}$ are treated directly. We use \begin{equation} \begin{split} Re \partial_y f_N (x+ i y) = &- Im \left [ t^{-1}(z-u) +\frac{1}{N}\sum_j\frac{1}{z-y_j} \right] \\ = & y \left [ -t^{-1} + \frac{1}{N}\sum_{j } \frac{1}{(x-y_j)^2 + y^2} \right ] \ge - y /t. \label{3.1} \end{split} \end{equation} Hence for $0\le y \le \eta$ we have \[ Re\; \big[ f_N (x+iy)-f_N(q_N)\big] \le Re\; \big[ f_N (x+i\eta)-f_N(q_N)\big] +\frac{\eta^2}{2t} \le -\frac{1}{36t}(y-b)^2 \] from \eqref{wlow}, if $\eta_0$ is sufficiently small, see \eqref{eta}. If $y \ge\frac{1}{2} \sqrt{1-u^2}$, then \[ \frac{1}{N}\sum_{j } \frac{1}{(x-y_j)^2 + y^2} \le\frac{4}{\sqrt{1-u^2}}, \] hence \begin{equation} Re \; \partial_y f_N (x+ i y) \le - y /2t \label{3.3} \end{equation} and thus $Re \; f_N (x+ i y)\le -y^2/4t$ in this regime. Summarizing these results, we have \begin{equation} Re\; \big[ f_N (x+iy)-f_N(q_N)\big] \le -\frac{1}{36t}(y-b)^2 \label{wlow1} \end{equation} holds for any $y\in {\mathbb R}$ and $|x-a|\le c_1{ e}/2$. \bigskip We can define a new contour $\Gamma_N^+$ similar to the $\gamma_N^+$. It follows the path where $\phi_N$ has zero imaginary part when $|Im\; w -b|\le { e}/2$ and then it returns to $\Gamma^+$ when $|Im\; w -b|\ge { e}$. We recall that $\min_j \mbox{dist}(\Gamma_N^+, y_j) \ge N^{-2}$ and $\mbox{dist}(\Gamma, a_N) \ge N^{-2}$ by the choice of $\Gamma$. With the paths $\gamma_N^+$ and $\Gamma_N^+$ defined, we can now do the integration \begin{equation} A^{++}: = N \int_{\gamma_N^+} \frac{{\rm d} z}{2\pi i}\int_{\Gamma_N^+} \frac{{\rm d} w}{2\pi i} h_N(w) g_N(z,w) e^{N(f_N(w)-f_N(z))}. \label{A} \end{equation} Near the saddle we need the bounds \begin{equation} |g_N(z,w)|\le C/t, \quad |\partial_z g_N(z,w)| \le C/t^2 , \quad |h_N(w)|\le C \label{gh} \end{equation} if $|z-(a+ib)|\le { e}$, $|w-(a+ib)|\le{ e}$. In order to make sure that these bounds are satisfied, we fix the constant $r= \text{Re } q_N (u_*)$ in (\ref{repr}). Here $q_N (u^*)$ is the unique solution with positive imaginary part of the saddle point equation (\ref{fNroot}), with $u$ (which is actually a short hand notation for $u^{(N)}$) replaced by the fixed $u_*$. Note that, since $|u^{(N)}-u_*|\leq C/N$, we find that the real part of the exponent of $h_N(w)$ (see (\ref{def:hN})) is bounded, $|r - \text{Re} w|/t\rho \leq C$, as $w$ runs through $\Gamma$. This choice also guarantees that, away from the saddle, \begin{equation} |h_N(w)|\le C e^{Ct^{-1}|Re\, w- a|}, \quad |g_N(z,w)| \; \le CN^3 \label{gh1} \end{equation} that hold for $|Im\; z|\ge \eta$, $Im \; w\ge 0$. These bound follow from \eqref{def:gN}, \eqref{def:hN} and \eqref{id} and when $w$ is near the real axis, we also used that $\Gamma_N$ is away from the $y_j$'s. \bigskip The integration in $A^{++}$ (see \eqref{A}) will be divided into regimes near the saddle $q_N$ (``inside'') or away from the saddle (``outside''): \begin{equation} A^{++} = A_{ii} + A_{io}+ A_{oi} + A_{oo}. \label{A++} \end{equation} Recall that $|q_N-q| =o(t)$ and $q= q^+= a+ ib$ (see \eqref{qsol}). For example $$ A_{io}:= N \int_{\gamma^+_N} \chi( Re \; z- a) \frac{{\rm d} z}{2\pi i} \int_{\Gamma^+_N} (1- \chi( Im \; w - b)) \frac{{\rm d} w}{2\pi i} h_N(w) g_N(z,w) e^{N(f_N(w)-f_N(z))} , $$ where $\chi $ is the characteristic function of the interval $[-{ e},{ e}]$. The other $A$'s are defined analogously. Using \eqref{reflower} \eqref{reflowerin} and \eqref{wlow1}, we have $$ |A_{io}| \le N\int_{\gamma_N} \,{\rm d} z\, \chi( Re \; z- a) \int_{\Gamma_N} (1- \chi( Im \; w - b)) |g_N(z,w)||h_N(w)|\; e^{N Re\; [f_N(w)- f_N(q_N)]} {\rm d} w. $$ The integral of the exponential term is bounded by $$ \int_{|y-b|\ge { e}=ct} {\rm d} y \; e^{-cN(y-b)^2/t} \le e^{-cNt}. $$ Taking into account \eqref{gh} and \eqref{gh1}, we see that $|A_{io}| \le e^{-cNt}$ since $t= N^{-1+\lambda}$. Similarly we can bound all other terms with an outside part. When $|Re\, z -a|\ge ct\gg N^{-1}$, then the exponential growth of $h_N$ in \eqref{gh1} will be controlled by the Gaussian decay of $$ e^{ -N \, Re[ f_N(z)-f_N(q_N)]}\le e^{-cNt^{-1}|Re \, z-a|^2} $$ from \eqref{reflower}. Finally, we have to compute the contribution of the saddle, i.e. the term $A_{ii}$. We let $\widetilde\gamma$ be the part of $\gamma_N^+$ with $|Re\; \gamma_N-a|\le { e}$ and similarly defined $\widetilde \Gamma$. Recall that $Im \; \phi_N =0$ on $\widetilde\gamma$. {F}rom standard Laplace asymptotics calculation, we have $$ \int_{\widetilde\gamma} e^{-N[f_N(z)-f_N(q_N)]} h_N(w)g_N(z,w) {\rm d} z = \int_{\widetilde\gamma} e^{-N[\phi_N(z)]^2/2t} h_N(w)g_N(z,w) {\rm d} z $$ \begin{equation} = \sqrt{\frac{2\pi}{N f_N''(q_N)}}\Bigg[ h_N(w)g_N(q_N,w) + \Omega(w)\Bigg] \label{statph} \end{equation} using \eqref{phider} with $$ |\Omega(w)|\le C \sqrt{\frac{t}{N}} \max_{z\in D_{ e}} |\partial_z g_N(z,w)||h_N(w)| $$ Using \eqref{gh}, we have $$ |\Omega|\le Ct^{-2}\sqrt{\frac{t}{N}} = \frac{C}{t} \frac{1}{\sqrt{Nt}} $$ while the main term in the bracket on the r.h.s. of \eqref{statph} is of order $t^{-1}$. Analogously performing the ${\rm d} w$ integration, we obtain that $$ A_{ii} = \frac{-1}{2\pi f_N''(q_N)} g_N(q_N, q_N) h_N ( q_N) \Big[ 1+ O\Big(\frac{1}{\sqrt{Nt}}\Big)\Big] = \frac{-h_N (q_N)}{2\pi} \Big[ 1+ O\Big(\frac{1}{\sqrt{Nt}}\Big)\Big], $$ where we also used $ g_N(q_N, q_N) = f_N''(q_N)$ following from \eqref{id}. So far we considered the saddle $q_N=q_N^+$ with positive imaginary part for both the $z$ and $w$ integrals. The same calculation can be performed at the saddle $z=w=q_N^-$. The mixed case, when $z$ is integrated near one of the saddles and $w$ is near the other one, gives zero contribution, since $g_N(q_N^-, q_N^+)= g_N(q_N^+, q_N^-) =0$ by \eqref{id}. Adding up the contributions of the two relevant saddles, $z=w=q_N^+$ and $z=w=q_N^{-}$, taking into account the opposite orientations of the two pieces of $\gamma_N$, one obtains $$ \frac{1}{2 \pi} \Big[ - h_N (q_N^+) + h_N ( q_N^-)\Big] = \frac{1}{2\pi\tau} \Big( - e^{-\tau (q_N^+-r)/t\varrho}+ e^{-\tau (q_N^- -r)/t\varrho} \Big) = \frac{\sin \pi\tau}{\pi\tau}(1+o(1)), $$ where we used the choice $r = \text{Re } q^{\pm}_N (u^*)$ (see after (\ref{gh})), which guarantees that $|r-\text{Re} q_N^\pm| \to 0$ as $N\to\infty$, and the equations \eqref{not}, \eqref{qsol}, and \eqref{qqN}. This completes the proof of Proposition \ref{prop:local}. \hfill\fbox{}\par\vspace{0.3mm} \section{Proof of the main theorems}\label{sec:mainthm} {\it Proof of Theorem \ref{mainthm}.} We follow the notations of Proposition \ref{meascomp}. In Proposition \ref{sinjoh} we have shown that the sine kernel holds for the measure $e^{t{\mathcal L}}G_t$ if $t=N^{-1+\lambda}$. More precisely, let $p_{N,t}(x)$, denote the density function of the eigenvalues $x=(x_1, \ldots, x_N)$ w.r.t. $e^{t{\mathcal L}}G_t$ and let $p_{N,t}^{(2)}$ be the two point correlation function, defined analogously to \eqref{corrfn}. Similarly, we define $p_{N,c}(x)$ and $p_{N,c}^{(2)}$ for the eigenvalue density and two point correlation function w.r.t. truncated measure $F_c=v^{\otimes n}$. In Proposition \ref{sinjoh} we showed that \begin{equation} \lim_{N\to\infty}\int_{{\mathbb R}^2} \frac{1}{\varrho^2} p_{N,t}^{(2)} \Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho}\Big) O(\alpha,\beta) {\rm d} \alpha{\rm d} \beta = \int_{{\mathbb R}^2} O(\alpha,\beta) \Big[1-\Big(\frac{\sin \pi(\alpha-\beta)}{\pi(\alpha-\beta)}\Big)^2\Big] {\rm d} \alpha {\rm d} \beta. \label{sint} \end{equation} for any $|u|<2$ and with the notation $\varrho=\varrho_{sc}(u)$. (We remark that $p_{N,t}^{(2)}$ was denoted by $\widetilde p_N^{(2)}$ in Proposition \ref{sinjoh} and the condition $|u|<2$ is translated into $|u|<1$ after rescaling.) To prove \eqref{maineq}, we thus only need to control the difference as follows \bigskip $$ \Bigg| \int \Big[ p^{(2)}_N\Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho} \Big) -p^{(2)}_{N,t}\Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho}\Big)\Big]O(\alpha, \beta) {\rm d} \alpha{\rm d} \beta \Bigg| \le (I)+ (II), $$ where $$ (I): = \Bigg| \int \Big[ p^{(2)}_N\Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho} \Big) -p^{(2)}_{N,c}\Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho}\Big)\Big]O(\alpha, \beta) {\rm d} \alpha{\rm d} \beta \Bigg|, $$ $$ (II): = \int \Big| p^{(2)}_{N,c}\Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho} \Big) -p^{(2)}_{N,t}\Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho}\Big)\Big|\, |O(\alpha, \beta)| {\rm d} \alpha{\rm d} \beta. $$ Using \eqref{FFtilde}, we have $$ (I)\le N^2 \|O\|_\infty \int |F-F_c|{\rm d}\mu^{\otimes n} \le Ce^{-cN^c} \to 0 $$ with some $c>0$ as $N\to\infty$. To estimate $(II)$, we have \begin{equation} \begin{split} (II) & \le \int \Bigg| \frac{p^{(2)}_{N,c} }{p_{N,t}^{(2)}} \Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho}\Big)-1\Bigg| p_{N,t}^{(2)} \Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho}\Big) |O(\alpha, \beta)| {\rm d} \alpha{\rm d} \beta \\ & \le \Big( \int \Big[ \frac{p_{N,c}^{(2)}}{p_{N,t}^{(2)}} \Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho}\Big)-1\Big]^2 p_{N,t}^{(2)} \Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho}\Big) |O(\alpha, \beta)| {\rm d} \alpha{\rm d} \beta \Big]^{1/2} \\ & \quad\times \Big[ \int p_{N,t}^{(2)} \Big( u+ \frac{\alpha}{N\varrho}, u+\frac{\beta}{N\varrho}\Big) |O(\alpha,\beta)| {\rm d} \alpha{\rm d} \beta \Big]^{1/2}. \label{long} \end{split} \end{equation} Using \eqref{sint} for the observable $|O|$ instead of $O$, the second factor on the r.h.s. of \eqref{long} is bounded. Since $O$ is bounded, the first factor is smaller than \begin{equation} \begin{split}\label{NN} C \Bigg[N^2\varrho^2 \int \Big[ \frac{p^{(2)}_{N,c}(z,y)}{p_{N,t}^{(2)}(z,y)} -1\Big]^2 p_{N,t}^{(2)}(z,y) {\rm d} z{\rm d} y \Bigg]^{1/2} & \le C \Bigg[N^2\varrho^2 \int \Big( \frac{p_{N,c}(x)}{p_{N,t}(x)} -1\Big)^2 p_{N,t}(x){\rm d} x\Bigg]^{1/2} \\ & \le C \Bigg[N^2\varrho^2 \int \frac{\big|e^{t{\mathcal L}}G_t- F_c\big|^2}{e^{t{\mathcal L}}G_t} {\rm d} \mu^{\otimes n} \Bigg]^{1/2} \\ & \le C N^{-1+4\lambda}. \end{split} \end{equation} Here in the first step we used that the quantity $D(f,g)= \int |f/g-1|^2g$ for two probability measures $f$ and $g$ decreases when taking marginals. In the second step, we used that $D(f,g)$ decreases when passing the probability laws from matrix elements to the induced probability laws for the eigenvalues. Finally, we used the estimate \eqref{FF}. This completes the proof of Theorem \ref{mainthm}. \hfill\fbox{}\par\vspace{0.3mm} \bigskip {\it Proof of Theorem \ref{mainthm2}.} We first prove Theorem \ref{mainthm2} for the ensemble $\widehat H + aV$ with $a=N^{-1/2+\lambda/2}$ (see the beginning of Section \ref{sec:timeevolved} for the necessary rescaling). Let ${\mathbb E}}\def\PP{{\mathbb P}$ denote the expectation with respect to this ensemble and let ${\mathbb E}}\def\PP{{\mathbb P}_y$ denote the expectation with respect to the density $x\to q_S(x,y)$ for any fixed $y$ and $S= a^2/N=N^{-2+\lambda}$. Then we have \begin{equation} \mathcal{ E} \, \Lambda (u; s, \cdot ) = \int {\mathbb E}}\def\PP{{\mathbb P}_y \; \Lambda (u; s, \cdot ) {\bf 1}(y\in{\mathcal Y}) {\rm d} \widehat \PP(y) + \int {\mathbb E}}\def\PP{{\mathbb P}_y \; \Lambda (u; s, \cdot ) {\bf 1}(y\in{\mathcal Y}^c) {\rm d} \widehat \PP(y) \label{lambdasplit} \end{equation} by recalling \eqref{def:qs}. The second term can be estimated by using $|\Lambda|\le N$ and \eqref{good} as \begin{equation} \int {\mathbb E}}\def\PP{{\mathbb P}_y \; \Lambda (u; s, \cdot ) {\bf 1}(y\in{\mathcal Y}^c) {\rm d} \widehat \PP(y) \le C Ne^{-cN^{\lambda/4}}. \label{tail} \end{equation} For the first term in \eqref{lambdasplit}, we use the exclusion-inclusion principle to compute \begin{equation} \begin{split}\label{8.1} \mathcal{ E}_y \, \Lambda (u; s, \cdot ) = \frac 1 { 2 N t_N \varrho} \sum_{m=2}^N (-1)^m & \int_{-t_N }^{t_N } {\rm d} v_1 \ldots \int_{-t_N }^{t_N} {\rm d} v_m {\bf 1} \Big\{ \max |v_i-v_j| \le \frac s { N \varrho}\Big\} \\ & \times \, {N \choose m} \, \widetilde p^{(m)}_{N,y,S}(u+v_1 , u+v_2, \ldots, u+ v_m) \end{split} \end{equation} with $\varrho=\varrho(u)$ (see \eqref{def:varrho}) and recall that $\widetilde p^{(m)}_{N,y,S}$ denote the correlation functions of $q_S(x,y)$ (see \eqref{tildecorr}). After a change of variables, \begin{equation} \begin{split} \mathcal{ E}_y \; \Lambda (u; s, \cdot )= \; & \; \frac 1 { 2 N t_N \varrho} \sum_{m=2}^\infty (-1)^m \int_{-N\varrho t_N }^{N\varrho t_N } {\rm d} z_1 \ldots \int_{-N\varrho t_N }^{N\varrho t_N } {\rm d} z_m \\ & \times\, {N \choose m} \, \frac{1}{(N\varrho)^m} \widetilde p^{(m)}_{N,y,S} \Big(u+ \frac {z_1}{N \rho} , \ldots, u+ \frac {z_m}{N \rho}\Big) {\bf 1} \Big\{\max |z_i-z_j| \le s\Big\} \\ = \; & \; \frac 1 { 2 N t_N \varrho} \sum_{m=2}^\infty (-1)^m m \int_{-N\varrho t_N }^{N\varrho t_N } {\rm d} z_1 \int_0^s {\rm d} a_2 \ldots \int_0^s {\rm d} a_m \\ & \times\, {N \choose m} \, \frac{1}{(N\varrho)^m}\; \widetilde p^{(m)}_{N,y,S} \Big(u+ \frac {z_1}{N \rho} , u+ \frac {z_1+a_2}{N \rho} , \ldots, u+ \frac {z_1+a_m}{N \rho}\Big), \end{split} \end{equation} where the factor $m$ comes from considering the integration sector $z_1\le z_j$, $j\ge 2$. Taking $N\to \infty$ and using Proposition \ref{prop:local}, we get \begin{equation} \lim_{N\to\infty} \mathcal{ E}_y \; \Lambda (u; s, \cdot ) = \sum_{m=2}^\infty \frac {(-1)^m }{(m-1)!} \int_0^s {\rm d} a_2 \ldots \int_0^s {\rm d} a_m \, \, \det \left ( \frac {\sin \pi(a_i-a_j)} {\pi(a_i-a_j)} \right )_{i,j=1}^m, \label{fred} \end{equation} where in the last determinant term we set $a_1=0$. The interchange of the limit and the summation can be justified by noting that the exclusion-inclusion principle guarantees that \eqref{8.1} is an alternating series where the difference between the sum and its $M$-term truncation can be controlled by the $(M+1)$-th term for any $M$. We note that the left hand side of \eqref{fred} is $\int_0^s p(\alpha){\rm d} \alpha$, where $p(\alpha)$ is the second derivative of the Fredholm determinant $\det (1-{\mathcal K}_\alpha)$ (see \eqref{maineq2}). Combining \eqref{fred} with the estimate \eqref{tail}, we have \begin{equation} \label{elambda} \lim_{N\to\infty} \mathcal{ E} \; \Lambda (u; s, \cdot ) =\int_0^s p(\alpha){\rm d} \alpha. \end{equation} After rescaling \eqref{resc}, we also conclude that the limit of the expectation of $\Lambda$ with respect to the time evolved ensemble $e^{t{\mathcal L}}G_t$ (see Proposition \ref{meascomp}) is given by right hand side of \eqref{elambda}. Finally, the difference of the expectation of $\Lambda$ with respect to the measure $e^{t{\mathcal L}}G_t$ and w.r.t. the initial ensemble $F$ vanishes since $|\Lambda|\le N$ and $\mbox{Var}(e^{t{\mathcal L}}G_t, F) \le CN^{-2+4\lambda}$ (see \eqref{FFtilde} and \eqref{FF}). This completes the proof of Theorem \ref{mainthm2}. \hfill\fbox{}\par\vspace{0.3mm} \section{Some extensions and comments}\label{sec:relax} In this section we explain how to relax some of the conditions on the initial distribution $\nu$. We first explain how to extend our proof to include distributions $\nu$ with compact support. Take for example a density w.r.t. the Gaussian measure ${\rm d} \mu(x) = e^{-x^2}$ that is given by a nice bump function $u(x)$ supported in $[-1, 1]$ decaying like $(1\pm x)^m$ near the boundary $x=\pm 1$. Clearly, for any $m$ fixed this distribution violates the assumptions of Theorem \ref{mainthm}. We now show that for $m$ large enough, it is still possible to prove the universality. Define a new distribution with density \begin{equation} q (x) = \frac{\tau^m + u(x)}{1+ \tau^m} \label{q} \end{equation} with a small parameter $\tau>0$ to be determined later. Near the edge $1$ we have $L q(1-y) \lesssim C y^{m-2}$ for $0\le y \ll 1$ with some $m$-dependent constant $C$. We thus need the condition \[ C y^{m-2} \le t^{-1} [\tau^m + y^m ] ,\qquad 0\le y \ll 1, \] to guarantee that $(1- t L) q$ is a probability density. This inequality holds if \[ \tau^2 \ge C t. \] The other conditions concerning $L^2$ and $L^3$ (see \eqref{AB}) can be handled similarly. Choosing $\tau= Ct^{1/2}$, the total variation norm is bounded by \[ \int |q^{\otimes n} - u^{\otimes n} | {\rm d} \mu^{\otimes n} \le Cn \tau^{m} = C n t^{m/2}. \] Since $n= N^2$ and $t= N^{-1+ { e}}$, we have \[ \int |q^{\otimes n} - u^{\otimes n} | {\rm d} \mu^{\otimes n} \le C_m N^{2- m/2+m { e} /2}. \] Let, say, $m\ge 9$, then the error term will be smaller than $N^{-2-\delta}$ with some $\delta>0$ and this will imply Theorem \ref{mainthm} for the initial distribution $u$. The modification of $u$ in \eqref{q} can certainly be more sophisticated to reduce the exponent $m$. \bigskip Second, we show that the Gaussian decay condition \eqref{cond2} can be replaced by the exponential decay \eqref{cond2relax}. For any $\ell > 0$ define \[ \nu_\ell (x) = \nu (x+ a_\ell ) {\bf 1}(|x| \le \ell) /Z_\ell, \] where $a_\ell$ and $Z_\ell$ are chosen so that \[ \int x \;{\rm d} \nu_\ell = 0, \qquad \int {\rm d} \nu_\ell (x) = 1. \] Due to the assumption \eqref{cond2relax}, we have \[ |a_\ell| + |Z_\ell-1| \le e^{ - c \ell}. \] Let $ \widetilde \nu_\ell (x) = \ell \nu_\ell (x \ell)$. Clearly, the random variable $x$ distributed according to $ \tilde \nu_\ell$ is bounded by $1$, in particular it has a finite Gaussian moment. Denote the variance of $ \tilde \nu_\ell$ by $\sigma_\ell^2$ and we have $\sigma_\ell =1/\ell + O(e^{ - c \ell})$. We will neglect all the exponential small terms $O(e^{ - c \ell})$ and assume $\sigma_\ell =1/\ell$. Similar cutoff and rescaling applies to the distribution of the diagonal elements. Consider the random matrix generated by the measure $\nu_\ell$ and $ \tilde \nu_\ell$ and denote the probability law of the eigenvalues by $f_\ell$ and $ \tilde f_\ell$. Since all quantities introduced below can be defined w.r.t. to both $\nu_\ell$ and $ \tilde \nu_\ell$, we will only give explicit definitions for $\nu_\ell$. Recall the Stieltjes transform of the eigenvalue distribution w.r.t. $ \nu_\ell$ is defined as \begin{equation} m_\ell= m_\ell(z) = \int_{\mathbb R} \frac{{\rm d} F_\ell (E)}{E-z}\,, \label{Sti} \end{equation} where $F_\ell $ is the empirical distribution function of the eigenvalues. Then the empirical density of eigenvalues and $ \tilde m_\ell$ converges to the rescaled semicircle law \[ \widetilde\rho^\ell_{sc} (x) = \ell \rho_{sc} (x \ell) = \frac \ell {2 \pi } \sqrt {4 - x^2\ell^2},\quad \widetilde m_{sc}^\ell (z) = \ell m_{sc} (z \ell). \] We now follow the proof given in \cite{ESY3} to prove the local semicircle law Theorem 4.1 \cite{ESY3} for $\widetilde \nu_\ell$. The key estimate is contained in Proposition 4.3 which depends on Proposition 4.5. The random variables $b_j$ in Proposition 4.5 are now distributed according to $ \tilde \nu_\ell$ and the only assumption of this proposition, the Gaussian bound (1.3) (i.e., the condition {\bf C1}) is now trivially satisfied since $\widetilde \nu_\ell$ has compact support. Hence we can now prove Proposition 4.3 using the same strategy. Thus the equation for the probability estimate appearing after (4.6) in the paper still holds but the upper bound on the constant $A^2$ defined in (4.6) now becomes $2 M\ell^2/ (N \eta)$ due to the scaling. Thus the key estimate at the end of the proof of Proposition 4.3 of \cite{ESY3} is now changed to \begin{equation} {\mathbb E}}\def\PP{{\mathbb P}_{ \widetilde\nu_\ell} \Big[ {\bf 1}_{\Omega^c}\cdot \P_{\bf{b}} [ |X|\ge \delta] \Big] \leq 4\exp\big( -c\min \{ \delta\sqrt{N\eta}/\ell , \, \delta^2 N\eta/\ell^2\}\big)\;. \label{lde} \end{equation} Therefore, Theorem 4.1 of \cite{ESY3} holds with the estimates taking the form \begin{equation} \P_{ \widetilde \nu_\ell} \Big\{ \sup_{E \in [-(2 - \kappa)/\ell, (2 - \kappa)/\ell]} | \widetilde m^\ell_N(E+i\eta)- \widetilde m^\ell_{sc}(E+i\eta)| \ge \delta \Big\} \leq C e^{-c \delta \sqrt{N\eta}/\ell} \label{mcont-old} \end{equation} for any $\delta \leq c_1 \kappa/\ell$. Passing from $\widetilde\nu_\ell$ to $ \nu_\ell$ via scaling, we have \begin{equation}\label{7.1} \P_{ \nu_\ell} \Big\{ \sup_{E \in [-2 + \kappa, 2- \kappa]} | m^\ell_N(E+i\eta )- m^\ell_{sc}(E+i\eta )| \ge \delta \Big\} \leq C e^{-c \delta \sqrt{N\eta/\ell}} \end{equation} for $\delta\le c_1\kappa$. Comparing this estimate with the original bound (4.1) in \cite{ESY3}, note that the only change is that the $\eta$ in the exponent has deterioriated to $\eta/\ell$. This is due to fact that we applied the Proposition 4.5 of \cite{ESY3} without taking the advantage that the variance is now reduced to $1/\ell^2$, which should enhance the large deviation estimate \eqref{lde}. For our case, however, the estimate \eqref{7.1} is already sufficient since we are interested in the case $N \eta = N^{{ e}}$ and $\ell = (\log N)^2$. Finally we need to pass estimates to the original measure $\nu_\ell^{\otimes n}$. We can check that \[ \mbox{Var}\big(\nu_\ell^{\otimes n} , \nu^{\otimes n}\big) \le C n e^{- c \ell}. \] Since in our application $n=N^2$ and $\ell = (\log N)^2$, the right hand side is smaller than any negative power of $N$, all necessary expectation values of observables w.r.t. $\nu_\ell^{\otimes n}$ can thus be passed to $ \nu^{\otimes n}$. This shows that the local semicircle law holds on scales $\eta \ge N^{-1+{ e}}$ for any ${ e}>0$ assuming only exponential bound \eqref{cond2relax} instead of the Gaussian bound \eqref{cond2} required in {\bf C1)} of \cite{ESY3}. This input is sufficient to conclude the proof in Section \ref{sec:timeevolved} if $t=a^2$ is changed to $N^{-1+\lambda}$ in \eqref{not}. \thebibliography{hhh} \bibitem{BP} Ben Arous, G., P\'ech\'e, S.: Universality of local eigenvalue statistics for some sample covariance matrices. {\it Comm. Pure Appl. Math.} {\bf LVIII.} (2005), 1--42. \bibitem{BI} Bleher, P., Its, A.: Semiclassical asymptotics of orthogonal polynomials, Riemann–Hilbert problem, and universality in the matrix model. {\it Ann. of Math.} {\bf 150} (1999): 185--266. \bibitem{BH} Br\'ezin, E., Hikami, S.: Correlations of nearby levels induced by a random potential. {\it Nucl. Phys. B} {\bf 479} (1996), 697--706, and Spectral form factor in a random matrix theory. {\it Phys. Rev. E} {\bf 55} (1997), 4067--4083. \bibitem{D} Deift, P.: Orthogonal polynomials and random matrices: a Riemann-Hilbert approach. {\it Courant Lecture Notes in Mathematics} {\bf 3}, American Mathematical Society, Providence, RI, 1999 \bibitem{DKMVZ1} Deift, P., Kriecherbauer, T., McLaughlin, K.T-R, Venakides, S., Zhou, X.: Uniform asymptotics for polynomials orthogonal with respect to varying exponential weights and applications to universality questions in random matrix theory. {\it Comm. Pure Appl. Math.} {\bf 52} (1999):1335--1425. \bibitem{DKMVZ2} Deift, P., Kriecherbauer, T., McLaughlin, K.T-R, Venakides, S., Zhou, X.: Strong asymptotics of orthogonal polynomials with respect to exponential weights. {\it Comm. Pure Appl. Math.} {\bf 52} (1999): 1491--1552. \bibitem{Dy1} Dyson, F.J.: Statistical theory of energy levels of complex systems, I, II, and III. {\it J. Math. Phys.} {\bf 3}, 140-156, 157-165, 166-175 (1962). \bibitem{Dy} Dyson, F.J.: A Brownian-motion model for the eigenvalues of a random matrix. {\it J. Math. Phys.} {\bf 3}, 1191-1198 (1962). \bibitem{ESY1} Erd{\H o}s, L., Schlein, B., Yau, H.-T.: Semicircle law on short scales and delocalization of eigenvectors for Wigner random matrices. Accepted in Ann. Probab. Preprint. {arXiv.org:0711.1730} \bibitem{ESY2} Erd{\H o}s, L., Schlein, B., Yau, H.-T.: Local semicircle law and complete delocalization for Wigner random matrices. {\it Commun. Math. Phys.} {\bf 287}, 641--655 (2009) \bibitem{ESY3} Erd{\H o}s, L., Schlein, B., Yau, H.-T.: Wegner estimate and level repulsion for Wigner random matrices. Submitted to Int. Math. Res. Notices (2008). Preprint {arxiv.org/abs/0811.2591} \bibitem{ERSY} Erd{\H o}s, L., Ramirez, J., Schlein, B., Yau, H.-T.: {\it Universality of sine-kernel for Wigner matrices with a small Gaussian perturbation.} Preprint {arxiv.org/abs/0905.2089} \bibitem{J} Johansson, K.: Universality of the local spacing distribution in certain ensembles of Hermitian Wigner matrices. {\it Commun. Math. Phys.} {\bf 215} (2001), no.3. 683--705. \bibitem{LL} Levin, E., Lubinsky, S. D.: Universality limits in the bulk for varying measures. {\it Adv. Math.} {\bf 219} (2008), 743-779. \bibitem{M} Mehta, M.L.: Random Matrices. Academic Press, New York, 1991. \bibitem{PS} Pastur, L., Shcherbina, M.: Bulk universality and related properties of Hermitian matrix models. J. Stat. Phys. {\bf 130} (2008), no.2., 205-250. \bibitem{Sosh} Soshnikov, A.: Universality at the edge of the spectrum in Wigner random matrices. {\it Commun. Math. Phys.} {\bf 207} (1999), no.3. 697-733. \bibitem{TV} Tao, T., Vu, V.: Random matrices: universality of local eigenvalue statistics. Preprint arxiv:0906.0510. \bibitem{W} Wigner, E.: Characteristic vectors of bordered matrices with infinite dimensions. {\it Ann. of Math.} {\bf 62} (1955), 548-564. \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Throughout this paper, we often identify the finite field $\mathbb{F}_{2^{n}}$ with $\mathbb{F}^{n}_{2}$ which is the $n$-dimensional vector space over $\mathbb{F}_{2}$. Any function $F: \mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2^{m}}$ is called an {\it $(n,m)$-function} or vectorial Boolean functions if the values $ n $ and $ m $ are omitted. Vectorial Boolean functions are of critical importance in the field of symmetric cryptography, and the security of encryption algorithms heavily depends on the cryptographic properties of the vectorial Boolean functions. Researchers have proposed various properties to measure the resistance of a vectorial Boolean function to different kinds of cryptanalysis, including differential uniformity, nonlinearity, boomerang uniformity, algebraic degree, and so on. The lower the differential uniformity of a vectorial Boolean function, the better its security against differential cryptanalysis. In this paper, we mainly focus on the $ (n,n) $-functions. The differential uniformity of any such functions is at least 2, and the functions achieving this bound are called almost perfect nonlinear~(APN). It is difficult to find new infinite families of APN functions up to CCZ-equivalence. Up to now, only 6 infinite families of APN monomials and 14 infinite families of APN polynomials are known, since the early 90's. On the other hand, in contrast to these facts, there are a lot of APN functions even over ``small'' field: for example, thousands of CCZ-inequivalent APN functions have been found over $ \mathbb{F}_{2^8}$ \cite{Yu-Wang-Li-2014}. Constructing new instances of infinite families is an area of deep heading research. We present Tables I and II including all currently known infinite families of APN functions. To Table II, we add the new function found with Theorem \ref{thm 3.1} in Section 3 below. We refer the readers to a recent nice work of Budaghyan et al. for more details on the classification of the known families of APN functions \cite{Budaghyan-Calderini-Villa-2020}. \begin{table}[h] \centering \caption{Known infinite families of APN power functions over $ \mathbb{F}_{2^n} $} \label{table1} \centering \begin{tabular}{|m{40pt}|m{98pt}|m{80pt}|m{80pt}|m{36pt}|} \hline Family & Exponent & Conditions & Algebraic degree & Source \\ \hline Gold & $2^i+1$ & ${\rm gcd}(i,n)=1 $& 2 &\cite{Gold-1968} \\ \hline Kasami & $ 2^{2i}-2^i+1 $ & ${\rm gcd}(i,n)=1 $& $i+1$& \cite{Kasami-1971}\\ \hline Welch & $ 2^t+3$ & $ n=2t+1 $ & $3$& \cite{Dobbertin-1999} \\ \hline Niho & \tabincell{l}{$ 2^t+2^{t/2}-1$, $t$ even\\ $2^t+2^{(3t+1)/2}-1$, $t$ odd } & $n=2t+1$ & \tabincell{l}{$t/2+1$\\$t+1$} &\cite{Dobbertin-1999-Niho} \\ \hline Inverse & $ 2^{2t}-1$ & $n=2t+1$& $n-1$& \cite{Beth-Ding-1993, Nyberg-1994}\\ \hline Dobbertin & $ 2^{4i}+2^{3i}+2^{2i}+2^i-1$ & $ n=5i$ & $i+3$& \cite{Dobbertin-2001} \\ \hline \end{tabular} \end{table} Throughout this paper, let $\omega\in \mathbb{F}_{4} \backslash\{0,1\}.$ Very recently, Budaghyan, Helleseth, and Kaleyski introduced an infinite family of quadrinomials over $ \mathbb{F}_{2^{n}} $ of the following form: \begin{equation*} g_{s}(x)=x^3+a(x^{2^s+1})^{2^k}+bx^{3\cdot 2^m}+c(x^{2^{s+m}+2^m})^{2^k}, \end{equation*} where $ n=2m $. They showed that this family can provide new infinite families of APN functions \cite{Budaghyan-Helleseth-Kaleyski-2020}. More precisely, they showed that $ g_{s}(x) $ is a new APN function if $ k=0 $, $ (s,a,b,c)=(m-2,\omega, \omega^2,1)$, or $((m-2)^{-1}~{\rm mod}~n,\omega, \omega^2,1) $, if $ m $ is odd with $ {\rm gcd}(3,m)=1 $. They also pointed out that when $ k\geq 1 $, $ g_{s}(x) $ can also be APN, however, CCZ-equivalent to some known ones. Let $ n=2m $ and $ q=2^m $. In this paper, our motivation is to find new infinite families of APN functions over $ \mathbb{F}_{2^n} $. We revisit the above-mentioned two infinite families of APN quadrionomials obtained in \cite{Budaghyan-Helleseth-Kaleyski-2020}. Observing that for any odd positive integer $ s $, $ \omega^{2^s}=\omega^2$, the APN functions for $ s=m-2 $, or $ (m-2)^{-1} {\rm mod}~n$ can be rewritten as $ g_{s}(x)=a {\rm Tr}^{n}_{m}(bx^3)+a^q{\rm Tr}^{n}_{m}(cx^{2^s+1})$, $ a=\omega$, $b=c=\omega^2 $. Here $ {\rm Tr}^n_{m}(x):=x+x^{2^m} $ for $ n=2m $. Inspired by the quadrinomials and our observation, let $ a\in \mathbb{F}_{2^n} $, we study a class of functions with the following form: \begin{equation}\label{f(x)} f(x)=a {\rm Tr}^{n}_{m}(F(x))+a^q{\rm Tr}^{n}_{m}(G(x)), ~a+a^q\neq 0, \end{equation} where $ F $ and $G$ are quadratic functions with $ F(0)=G(0)=0 $. Based on the framework (\ref{f(x)}), we carefully choose quadratic functions $ F $ and $ G $ for finding APN functions. We mainly consider two kinds of functions in (\ref{f(x)}) by setting $ F $ and $ G $ as follows. $ i) $ $ F(x)=bx^3 $, $ G(x)=cx^{2^s+1} $; $ ii) $ $ F(x)=bx^{2^i+1}+cx^{2^{i+m}+1} $, $ G(x)=gx^{2^s+1}+ex^{2^{s+m}+1} $, where $ b, c, g, e\in\mathbb{F}_{2^n} $, and $ i, s $ are positive integers. Let $ n=2m $ with $ m $ odd. Let $ a\in \mathbb{F}_{2^n} $, and \begin{equation*} f_{s}(x)=a {\rm Tr}^{n}_{m}(bx^3)+a^q{\rm Tr}^{n}_{m}(cx^{2^s+1}), ~a+a^q\neq 0. \end{equation*} We can find two more exponents $ s=3$, or $m+2 $, and the corresponding conditions on the coefficients such that $ f_{s}(x) $ is an APN function over $ \mathbb{F}_{2^n} $. Code isomorphism tests~(see Sec. 2 below)~indicate that for the exponent $s=3$, the APN function found with Theorem \ref{thm 3.1}: \begin{equation*} f_{3}(x)=a {\rm Tr}^{n}_{m}(bx^3)+a^q{\rm Tr}^{n}_{m}(b^3x^{9}), \end{equation*} where $ b $ is a non-cube, is new up to CCZ-equivalence over $ \mathbb{F}_{2^{10}} $. We can also discover more coefficients for these two exponents $ s=m-2 $, and $ (m-2)^{-1} {\rm mod}~n$ discovered by Budaghyan et al. such that $ f_{s}(x) $ is APN without the assumption that $ {\rm gcd}(3,m)=1 $. In this way, some new instances of APN functions over $ \mathbb{F}_{2^{10}} $ and $ \mathbb{F}_{2^{14}} $ of the form $ f_{s}(x) $ can also be found. Let $ n=2m $, $ q=2^m $, $ a\in \mathbb{F}_{2^n} $, and \begin{eqnarray*} h_{i,s, b,c,g,e}(x)=a{\rm Tr}^n_{m}(bx^{2^i+1}+cx^{2^{i+m}+1})+ a^q{\rm Tr}^n_{m}(gx^{2^{s}+1}+ex^{2^{s+m}+1}),~a+a^q\neq 0. \end{eqnarray*} We can find two infinite families of APN functions as follows, by letting $ i=1 $, $ s=2 $, $ c=0 $. \begin{eqnarray*} h_{1,2, b,0,g,e}(x)=a{\rm Tr}^n_{m}(bx^{3})+ a^q{\rm Tr}^n_{m}(gx^{5}+ex^{4q+1}) , \end{eqnarray*} where $ a\in \mathbb{F}_{2^n} $ such that $ a+a^q\neq 0 $, $ m $ is odd, and $ b,~g,~e $ satisfy: $i)$ $ b $ not cube, $ g=1 $, $ e=\frac{1}{b^{2q-2}} $; or $ii)$ $ b $ not cube in $ \mathbb{F}^{\ast}_{2^n} $, and $ g=e=b $. By means of the code isomorphism test, we find that these two classes of APN functions are CCZ-inequivalent to each other, however, CCZ-equivalent to some functions in family F12 of Taniguchi over $ \mathbb{F}_{2^{10}}$. The critical technique needed in the proof is to forge links between the cube-ness of some certain elements and the number of solutions to the equation of the following form: \begin{eqnarray*} Ax^3+Bx^2+B^qx+A^q=0. \end{eqnarray*} The rest of the paper is organized as follows. Some basic definitions are given in Section 2. We characterize the condition for $ f(x) $ with the form (\ref{f(x)}) such that $ f(x) $ is an APN function over $ \mathbb{F}_{2^{n}} $, $ n=2m $. In Section 3, we investigate the APN property of the functions with the form (\ref{f(x)}) by letting $ F $, $ G $ are both Gold functions or both quadratic binomials. We can find a new infinite family of APN quadrinomials, and generalize the two infinite families of APN functions found by Budaghyan et al. in \cite{Budaghyan-Helleseth-Kaleyski-2020}. We can find two infinite families of APN hexanomials, which computationally proved that they belong to family F12 over $ \mathbb{F}_{2^{10}} $. We can also find (at least) two new APN instances over $ \mathbb{F}_{2^{10}} $. A few concluding remarks are given in Section 4. \section{Preliminaries} Let $\mathbb{F}_{2^{n}}$ be the finite field consisting of $2^{n}$ elements, then the group of units of $\mathbb{F}_{2^{n}}$, denoted by $\mathbb{F}^{\ast}_{2^{n}}$, is a cyclic group of order $2^{n}-1$. Let $\alpha \in \mathbb{F}_{2^n} .$ It is called a {\it cube} in $\mathbb{F}_{2^n} $, if $ \alpha=\beta^3 $ for some $\beta \in \mathbb{F}_{2^n} $; otherwise, it is called a {\it non-cube}. Let $m$ and $n$ be two positive integers satisfying $m~|~n$, we use ${\rm Tr}^{n}_{m}(\cdot)$ to denote the {\it trace function} form $\mathbb{F}_{2^{n}}$ to $\mathbb{F}_{2^{m}}$, i.e., $ {\rm Tr}^{n}_{m}(x)=x+x^{2^m}+x^{2^{2m}}+\cdots+x^{2^{(n/m-1)m}}.$ Let $ f(x) $ be a function over $ \mathbb{F}_{2^n} $. Then it can be uniquely represented as $ f(x)=\sum^{2^n-1}_{i=0}a_{i}x^i $. This is the {\it univariate~representation} of $ f $. Let $ 0 \leq i\leq 2^n-1 $. The {\it binary~weight} of $ i $ is $ w_{2}(i)=\sum^{n-1}_{s=0}i_{s}$, where $ i=\sum^{n-1}_{s=0}i_{s}2^s $, $ i_{s}\in \{0,1\} $. The {\it algebraic~degree} of $ f $, denoted by $ {\rm deg}(f) $, is the largest binary weight of an exponent $ i $ with $ a_{i}\neq 0 $ in the univariate representation of $ f $. Functions of algebraic degree one, and two are called {\it affine}, {\it quadratic}, respectively. Given an $ (n,n) $-function $ F $, we denote by $ \Delta_{F}(a,b) $ the number of solutions to the equation $ D_{a}F(x)=b $, where $ D_{a}F(x)=F(x)+F(x+a) $ is the \emph{derivative} of $ F $ in direction $ a\in \mathbb{F}_{2^n} $. $ F $ is called \emph{differentially $ \delta $-uniform} if the largest value of $ \Delta_{F}(a,b) $ equals to $\delta$, for every nonzero $ a $ and every $ b $. If $ F $ is differentially 2-uniform, we say that $ F $ is \emph{almost perfect nonlinear}~(APN). Two $(n,m)$-functions $F$ and $G$ are called {\it extended affine equivalent} (EA-equivalent) if there exist some affine permutation $L_1$ over $\mathbb{F}_{2^n}$ and some affine permutation $L_2$ over $\mathbb{F}_{2^m}$, and some affine function $A$ such that $F=L_2\circ G\circ L_1+A$. They are called {\it Carlet-Charpin-Zinoviev equivalent} (CCZ-equivalent) if there exists some affine automorphism $L=(L_1, L_2)$ of $\mathbb{F}_{2^n}\times \mathbb{F}_{2^m}$, where $L_1: \mathbb{F}_{2^n}\times \mathbb{F}_{2^m}\rightarrow \mathbb{F}_{2^n}$ and $L_2: \mathbb{F}_{2^n}\times \mathbb{F}_{2^m}\rightarrow \mathbb{F}_{2^m}$ are affine functions, such that $y=G(x)$ if and only if $L_2(x, y)=F\circ L_1(x, y)$. It is well known that EA-equivalence is a special kind of CCZ-equivalence, and that CCZ-equivalence preserves the differential uniformity \cite{CCZ}. Proving CCZ-inequivalence of functions can be very difficult in general, and this is resolved through code isomorphism. Let $ \alpha $ be the primitive element in $ \mathbb{F}_{2^n} $. Then two $ (n,n) $-functions functions $ F $ and $ G $ are CCZ-equivalent if and only if $\mathcal{C}_{F}$, $\mathcal{C}_{G}$ are isomorphic \cite{Bracken-Byrne-Markin-McGuire-2008}, where $\mathcal{C}_{F}$ is the linear code corresponding to $ F $ with the generating matrix as follows. \begin{equation*} \mathcal{C}_{F}=\left( \begin{array}{cccc} 1 & 1 & \cdots & 1\\ 0 & \alpha & \cdots & \alpha^{2^n-1}\\ F(0) & F(\alpha) & \cdots & F(\alpha^{2^n-1})\\ \end{array} \right) \end{equation*} Let $ f $ be a quadratic function over $ \mathbb{F}_{2^n} $ with $f(0)=0 $. Denote \begin{equation*} \Delta_{d,f}(x):=f(dx)+f(dx+d)+f(d). \end{equation*} Then it is well known that $ f $ is APN if and only if for every $ d\neq 0 $, $ \Delta_{d,f}(x)=0$ only has trivial solutions in $ x $, i.e., only $ x\in \mathbb{F}_{2}$ can be a solution to $ \Delta_{d,f}(x)=0$. In the following, we determine the APN-ness of the functions with the form (\ref{f(x)}). \begin{lem}\label{fundamental-lemma} Let $ n=2m $, and $ q=2^m $. Let $ F $, $ G $ be quadratic functions over $ \mathbb{F}_{2^n} $ satisfying that $ F(0)=0 $, and $ G(0)=0 $. Let $ f(x)=a{\rm Tr}^{n}_{m}(F(x))+a^q{\rm Tr}^{n}_{m}(G(x)),$ where $ a\in \mathbb{F}_{2^n} $ such that $ a+a^q\neq 0 $. Then $ f(x) $ is APN over $ \mathbb{F}_{2^n} $, if and only if the following system \begin{eqnarray}\label{fundamental} \begin{cases} \Delta_{d,F}(x) \in \mathbb{F}_{2^m} &\\ \Delta_{d,G}(x) \in \mathbb{F}_{2^m} & \end{cases} \end{eqnarray} only has $ x=0, 1 $ as its solutions for any $ d \neq 0 \in\mathbb{F}_{2^n} $. \end{lem} \begin{proof} Since $ f(x) $ is quadratic with $ f(0)=0 $, it is equivalent to showing that the following equation only has $ x= 0,1 $ as its solutions for any $d\neq 0 $ \begin{equation}\label{f-1} \Delta_{d,f}(x)=f(dx)+f(dx+d)+f(d)=0. \end{equation} We have \begin{equation}\label{f-2} \Delta_{d,f}(x)=a{\rm Tr}^{n}_{m}(\Delta_{d,F}(x))+a^q{\rm Tr}^{n}_{m}(\Delta_{d,G}(x))=0. \end{equation} In the following, we shall show that (\ref{f-2}) holds if and only if \begin{equation*} {\rm Tr}^{n}_{m}(\Delta_{d,F}(x))={\rm Tr}^{n}_{m}(\Delta_{d,G}(x))=0. \end{equation*} The sufficiency is clear. Let us show the necessity. Raising (\ref{f-2}) to its $ q $-th power, we have \begin{equation}\label{f-3} a^q{\rm Tr}^{n}_{m}(\Delta_{d,F}(x))+a{\rm Tr}^{n}_{m}(\Delta_{d,G}(x))=0. \end{equation} Adding (\ref{f-2}) and (\ref{f-3}), \begin{equation*} (a+a^q){\rm Tr}^{n}_{m}(\Delta_{d,F}(x))+(a+a^q){\rm Tr}^{n}_{m}(\Delta_{d,G}(x))=0, \end{equation*} which infers, since $ a+a^q\neq 0$, that \begin{equation}\label{f-4} {\rm Tr}^{n}_{m}(\Delta_{d,F}(x))={\rm Tr}^{n}_{m}(\Delta_{d,G}(x)). \end{equation} Substituting (\ref{f-4}) into (\ref{f-2}), we can obtain \begin{equation*} {\rm Tr}^{n}_{m}(\Delta_{d,F}(x))={\rm Tr}^{n}_{m}(\Delta_{d,G}(x))=0, \end{equation*} which is exactly the system (\ref{fundamental}). Therefore, $ f(x) $ is APN, if and only if the system (\ref{fundamental}) only has trivial solutions $ x=0,1 $, for any $ d\neq 0 $. \end{proof} \begin{table}[h] \centering \caption{Known infinite families of quadratic APN polynomials over $ \mathbb{F}_{2^n} $} \label{table2} \centering \begin{tabular}{|m{20pt}|m{150pt}|m{170pt}|m{35pt}|} \hline ID & Functions & Conditions & Source \\ \hline F1-F2 & $x^{2^s+1}+u^{2^k-1}x^{2^{ik}+2^{mk+s}}$ & $ n=pk $, $ {\rm gcd}(k,p)={\rm gcd}(s,pk)=1 $, $ p\in \{3,4\} $, $ i=sk~{\rm mod}~p $, $ m=p-i $, $ n\geq 12 $, $ u $ primitive in $ \mathbb{F}^{\ast}_{2^n} $& \cite{Budaghyan-Carlet-Leander-2008}\\ \hline F3 & $ sx^{q+1}+x^{2^i+1}+x^{q(2^i+1)}+dx^{2^iq+1}+d^qx^{2^i+q} $ & $ n=2m $, $ q=2^m $, $ {\rm gcd}(i,m)=1 $, $ d\in \mathbb{F}_{2^n} $, $ s\in \mathbb{F}_{2^n}\backslash\mathbb{F}_{2^m}$, $ X^{2^i+1}+dX^{2^i}+d^qX+1 $ has no solution $ x $ s.t. $ x^{q+1}=1 $& \cite{Budaghyan-Carlet-2008,Budaghyan-Calderini-Villa-2020}\\ \hline F4 & $ x^3+a^{-1}{\rm Tr}^{n}_{1}(a^3x^9)$ & $ a\neq 0 $ & \cite{Budaghyan-Carlet-Leander-2009}\\ \hline F5 & $ x^3+a^{-1}{\rm Tr}^{n}_{3}(a^3x^9+a^6x^{18})$ & $ 3~|~n $, $ a\neq 0 $ & \cite{Budaghyan-Carlet-Leander-2009-w}\\ \hline F6 & $ x^3+a^{-1}{\rm Tr}^{n}_{3}(a^6x^{18}+a^{12}x^{36})$ & $ 3~|~n $, $ a\neq 0 $ & \cite{Budaghyan-Carlet-Leander-2009-w}\\ \hline F7-F9 & $ ux^{2^s+1}+u^{2^k}x^{2^{-k}+2^{k+s}}+vx^{2^{-k}+1}+\omega u^{2^k+1}x^{2^{s}+2^{k+s}}$ & $n=3k$, $ {\rm gcd}(k,3)={\rm gcd}(s,3k)=1$, $v$, $\omega \in \mathbb{F}_{2^k}$, $v\omega \neq 1$, $ 3~|~(k+s) $, $ u $ primitive in $ \mathbb{F}^{\ast}_{2^n} $& \cite{Bracken-Byrne-Markin-McGuire-2008,Bracken-Byrne-Markin-McGuire-2011}\\ \hline F10 & $ cx^{q+1}+dx^{2^i+1}+d^qx^{q(2^i+1)}+\sum^{m-1}_{s=1}\gamma_{s}x^{2^s(q+1)} $ & $ n=2m $, $ q=2^m $, $ {\rm gcd}(i,m)=1 $, $ i $, $ m $ odd, $ \gamma_{s}\in \mathbb{F}_{q} $, $c \notin \mathbb{F}_{q}$, $ d $ not a cube & \cite{Bracken-Byrne-Markin-McGuire-2008}\\ \hline F11 & $ (x+x^q)^{2^k+1}+u^{\prime}(ux+u^qx^q)^{(2^k+1)2^i}+u(x+x^q)(ux+u^qx^q) $ & $ n=2m $, $m\geq 2$ even, $ {\rm gcd}(k,m)=1 $, $ q=2^m $, and $ i\geq 2 $ even, $ u $ primitive in $ \mathbb{F}^{\ast}_{2^n} $, $ u^{\prime}\in \mathbb{F}_{2^m} $ not a cube & \cite{Zhou-Pott-2013}\\ \hline F12 & $ u(u^qx+ux^q)(x+x^q)+(u^qx+ux^q)^{2^{2i}+2^{3i}}+\alpha(u^qx+ux^q)^{2^{2i}}(x+x^q)^{2^i}+\beta(x+x^q)^{2^{i}+1}$ & $ n=2m $, $q=2^m$, $ {\rm gcd}(i,m)=1 $, $ u $ primitive in $ \mathbb{F}^{\ast}_{2^n} $, $ \alpha $, $ \beta\in \mathbb{F}_{2^m} $, and $ X^{2^i+1}+\alpha X+\beta $ has no solution in $ \mathbb{F}_{2^m} $& \cite{Taniguchi-2019}\\ \hline F13 & $ L(x)^{2^i}x+L(x)x^{2^i} $ & $ n=km $, $ m\geq 2 $, $ {\rm gcd}(n,i)=1 $, $ L(x)=\sum^{k-1}_{j=0}a_{j}x^{2^{jm}} $ satisfies the conditions in Theorem 6.3 of \cite{Budaghyan-Calderini-Carlet-Coutter-Villa-2020} & \cite{Budaghyan-Calderini-Carlet-Coutter-Villa-2020} \\ \hline F14 & $ x^3+\omega x^{2^s+1}+\omega^2x^{3q}+x^{(2^s+1)q} $ & $ n=2m $, $q=2^m$, $ m$ odd, $ 3 \nmid m $, $\omega $ primitive in $ \mathbb{F}^{\ast}_{2^2} $, $ s=m-2 $, $ (m-2)^{-1}~{\rm mod}~n $ & \cite{Budaghyan-Helleseth-Kaleyski-2020}\\ \hline F15 & $ a{\rm Tr}^{n}_{m}(bx^3)+a^q{\rm Tr}^{n}_{m}(b^3x^9) $ & $ n=2m $, $ m$ odd, $q=2^m$, $ a \notin \mathbb{F}_{q} $, $ b $ not a cube & new \\ \hline \end{tabular} \end{table} \section{Three infinite families of APN functions} We want to find new APN functions of the form (\ref{f(x)}). In the following two subsections, the functions $ F $ and $ G $ were chosen very carefully to satisfy the conditions characterized in Lemma \ref{fundamental-lemma}. This will yield a new infinite family of APN quadrinomails, two infinite families of APN hexanomials, and (at least) two sporadic APN functions CCZ-inequivalent to any other known APN functions over $ \mathbb{F}_{2^{10}} $. \subsection{F, G are both of Gold type} We need the following two lemmas, which will be used in the proof of Theorem \ref{thm 3.1}. \begin{lem}\label{lemma} Let $ n=2m $ for $ m $ odd, $ q=2^m $. Suppose that for some $ c\in \mathbb{F}_{2^n} $ we have \begin{equation*} c^3(c+c^2+c^4)^q\in \mathbb{F}_{2^m}. \end{equation*} Then $ c $ is a cube in $ \mathbb{F}_{2^n} $. \end{lem} \begin{proof} Since $ {\rm gcd}(3, 2^m-1)=1$, any element of $ \mathbb{F}_{2^m} $ is a cube. In the following, we assume that $ c\notin \mathbb{F}_{2^m} $. Noting that $ c^3(c+c^2+c^4)^q=c^{(q+1)+2}+c^{2(q+1)+1}+c^{3(q+1)+q} $, we have $ c^{q+1}(c+c^q)^2+c^{2(q+1)}(c+c^q)+c^{3(q+1)}(c+c^q)=0 $ by the assumption that $ c^3(c+c^2+c^4)^q \in \mathbb{F}_{2^m} $. Since $ c+c^q\neq 0 $, we have $ c^{q+1}(c+c^q)+c^{2(q+1)}+c^{3(q+1)}=0 $, and hence $ c+c^q=c^{q+1}+c^{2(q+1)} $. Note that any nonzero element $ c$ of $ \mathbb{F}_{2^n} $ has a unique polar decomposition of the form $ c=vk $, where $ k^{q+1}=1 $, and $ v^{q-1}=1 $. Substituting $ c=vk $ into $ c+c^q=c^{q+1}+c^{2(q+1)} $, we have $ k+k^{-1}=v+v^3 $. By assumption that $ c\notin \mathbb{F}_{2^m} $, we have $ k\neq 1 $. Then according to \cite[Theorem 7]{KimMesnager2020}, we have that $ k $ is a cube in $ U:=\{x\in \mathbb{F}_{2^n}~|~x^{q+1}=1\} $. Therefore, $ c=vk $ is a cube in $ \mathbb{F}_{2^n} $. \end{proof} Let $ s $ be a positive integer with $ {\rm gcd}(s,n)=1 $. Let $ x\in \mathbb{F}_{2^n} $. It is clear that $ x+x^{2^s} \neq 0 $, if and only if $ x\neq 0,1 $. We have the following lemma. \begin{lem}\label{lemma-2} Let $ n=2m $ for $ m $ odd with $ {\rm gcd}(3,m)=1 $. Let s be a positive integer such that $ 3s \equiv 1 ~{\rm mod}~n $. Suppose that for some $ x\in \mathbb{F}_{2^n}\backslash \{0,1\} $, we have \begin{equation*} \frac{x+x^2}{(x+x^{2^s})^{2^{2s}-2^s+1}} \in \mathbb{F}_{2^m}. \end{equation*} Then $x+x^{2^s} $ is a cube. \end{lem} \begin{proof} Let $d=x+x^{2^s} $. Then $ d\neq 0 $, since $ x\neq 0, 1 $, and $ {\rm gcd}(s,n)=1 $. We can express $ x+x^2=d+d^{2^s}+d^{2^{2s}}$. Then \begin{equation*} \frac{x+x^2}{(x+x^{2^s})^{2^{2s}-2^s+1}}=\frac{d+d^{2^s}+d^{2^{2s}}}{d^{2^{2s}-2^s+1}}=d^{-2^s(2^s-1)}+d^{-(2^s-1)^2}+d^{2^s-1}=A^{-2^s}+A^{-2^s+1}+A, \end{equation*} where $ A=d^{2^s-1} $. Then the condition of this lemma is equivalent to that $ A^{-2^s}+A^{-2^s+1}+A+1\in \mathbb{F}_{2^m}, $ which is exaclty \begin{equation*} \frac{(A+1)^{2^s+1}}{A^{2^s}}\in \mathbb{F}_{2^m}. \end{equation*} If $ A=1 $, i.e., $ d^{2^s-1}=1 $, then $ d=1 $, and hence $x+x^{2^s}=1$ is a cube. In fact, since $ {\rm gcd}(2^s-1,2^n-1)=1 $, $ g(x)=x^{2^s-1} $ is a permutation of $ \mathbb{F}_{2^n} $. Then by $g(d)=g(1)=1 $, we have $ d=1 $. If $ A\neq 1 $, then there exists some $\alpha\in \mathbb{F}^{\ast}_{2^m}$ such that $ A^{2^s}=(A+1)^{2^s+1}\alpha $. Since $ s $ is odd, $ 3 ~|~2^s+1 $, we have $ A^{2^s+1}\alpha $ is a cube, and hence $ A^{2^s} $ is a cube, that is, $ A $ is a cube. However, note that $ {\rm gcd}(3,2^s-1)=1 $, we have that $ d $ is a cube, when $A=d^{2^s-1} $ is. \end{proof} In the following theorem, we investigate the APN property of the functions with the form (\ref{f(x)}) by letting $ F(x)=bx^3 $, and $ G(x)=cx^{2^s+1} $. This allows us to find a new infinite family of APN quadrinomials $f(x)=a{\rm Tr}^{n}_{m}(bx^3)+a^q{\rm Tr}^{n}_{m}(b^3x^9) $, where $ b $ is a non-cube in $ \mathbb{F}_{2^n} $. \begin{thm}\label{thm 3.1} Let $n=2m$ with $ m \geq 1 $ odd, and $ q=2^m $. Let $ a\in \mathbb{F}_{2^n} $, and $ f_{s}(x)=a{\rm Tr}^n_{m}(bx^3)+ a^q{\rm Tr}^n_{m}(cx^{2^{s}+1})$ with $ a\notin \mathbb{F}_{q} $, $ bc\neq 0 $, $ s $ odd. Then $ f_{s}(x) $ is APN over $ \mathbb{F}_{2^n} $, if $s, b, c$ satisfy the following i) $ s=m-2 $, $ b $ not a cube, $ \frac{c^4}{b}\in \mathbb{F}_{2^m} $; or ii) $ s=(m-2)^{-1}~{\rm mod}~n$, $ b $ not a cube, $ \frac{c^{2^s-1}}{b^{2^{2s}}}\in \mathbb{F}_{2^m} $; or iii) $ s=3 $, $ b $ not a cube, $ \frac{c}{b^3}\in \mathbb{F}_{2^m} $; or iv) ${\rm gcd}(3,m)=1$, $ 3s \equiv 1 {\rm ~mod~} n$, $ b $ not a cube, $ \frac{c}{b^{2^{2s}-2^s+1}}\in \mathbb{F}_{2^m} $; or v) $ s=m$, $ b $ not a cube, $ c \notin \mathbb{F}_{2^m};$ or vi) $ s=m+2 $, $ b $ not a cube, $ bc\in \mathbb{F}_{2^m} $; or vii) $ s=n-1$, $\frac{c^2}{b}\notin \mathbb{F}_{2^m} $. \end{thm} \begin{proof} Let $ F(x)=bx^3 $, $ G(x)=cx^{2^s+1} $. Then \begin{equation*} \Delta_{d,F}(x)=d^3b(x^2+x), ~{\rm and }~ \Delta_{d,G}(x)=d^{2^s+1}c(x^{2^s}+x). \end{equation*} According to Lemma \ref{fundamental-lemma}, proving $ f_{s}(x) $ is an APN function over $ \mathbb{F}_{2^n} $ is equivalent to showing that the system: $ \Delta_{d,F}(x) \in \mathbb{F}_{2^m}$, and $ \Delta_{d,G}(x)\in \mathbb{F}_{2^m} $ can only has trivial solutions $ x=0, 1 $ for any $ d\neq 0 $. Assume, to the contrary, that $ f_{s}(x) $ is not an APN function, when $ s, b, c $ satisfy the conditions of one item in this theorem. Then the following system \begin{eqnarray}\label{3} \begin{cases} d^3b(x^2+x)=\alpha ,&\\ d^{2^s+1}c(x^{2^s}+x)=\beta. & \end{cases} \end{eqnarray} has a non-trivial solution $ x\notin \mathbb{F}_{2} $ for some $ d\neq 0 $, where $ \alpha,~\beta \in \mathbb{F}_{2^m} $ with $ \alpha \neq 0 $. Since $ m $ is odd, $ {\rm gcd}(3,2^m-1)=1 $, we have that $ \alpha=e^3 $ for some $ e\in \mathbb{F}^{\ast}_{2^n} .$ Dividing both sides of the first equation in (\ref{3}) by $ e^3 $, we obtain that $ (d/e)^3b(x^2+x)=1 $. Dividing both sides of the second equation in (\ref{3}) by $ e^{2^s+1} $, we have $ (d/e)^{2^s+1}c(x^{2^s}+x)=\beta e^{-(2^{s}+1)} $. Since $ s $ is odd, we have $ 3~|~2^s+1 $, and $ e^{2^s+1} \in \mathbb{F}_{2^m}.$ Therefore, the system (\ref{3}) has a non-trivial solution $ x\notin \{0,1\} $ if and only if the system \begin{eqnarray}\label{4} \begin{cases} d^3b(x^2+x)=1 ,&\\ d^{2^s+1}c(x^{2^s}+x)=\beta. & \end{cases} \end{eqnarray} has a solution for some $ d\in \mathbb{F}^{\ast}_{2^n}$ and $\beta \in \mathbb{F}_{2^m}. $ $ i) $ $ s=m-2 $, $ b $ is a non-cube in $ \mathbb{F}_{2^n} $ and $ \frac{c^4}{b}\in \mathbb{F}^{\ast}_{2^m} $. Raising the second equation in (\ref{4}) to its fourth power, we have $ d^{q+4}c^4(x^q+x^4)=\beta^4 $. From the first equation, we have $ d^3=\frac{1}{b(x^2+x)} $. Substituting this relation into the previous equation, we have $ d^{q+1} \frac{c^4}{b} \frac{x^q+x^4}{x^2+x}\in \mathbb{F}_{2^m} $. Since $ d^{q+1}\in \mathbb{F}^{\ast}_{2^m}$, and $ \frac{c^4}{b}\in \mathbb{F}^{\ast}_{2^m} $ by assumption, we have $ \frac{x^q+x^4}{x+x^2}\in \mathbb{F}_{2^m} $. By \cite[Lemma 1]{Budaghyan-Helleseth-Kaleyski-2020}, we have $ x+x^2 $ is a cube in $ \mathbb{F}_{2^n} $, and hence $ b $ is a cube by $ d^3b(x^2+x)=1 $, a contradiction to the assumption that $ b $ is a non-cube. $ ii) $ $ s=(m-2)^{-1}~{\rm mod}~n $, $ b $ is a non-cube in $ \mathbb{F}^{\ast}_{2^n} $ and $ \frac{c^{2^s-1}}{b^{2^{2s}}}\in \mathbb{F}^{\ast}_{2^m} $. It can be seen from the proof of Theorem 2 in \cite{Budaghyan-Helleseth-Kaleyski-2020} that the critical conditions ensuring the APN-ness of this $ f_{s}(x) $ are exactly that $ b $ is a non-cube in $ \mathbb{F}_{2^n} $ and $ \frac{c^{2^s-1}}{b^{2^{2s}}}\in \mathbb{F}^{\ast}_{2^m} $. We invite the readers to check it, and we omit the arguments here. $ iii) $ $ s=3 $, $ b $ is a non-cube in $ \mathbb{F}_{2^n} $ and $ \frac{c}{b^3}\in \mathbb{F}^{\ast}_{2^m} .$ It can be seen that in this case (\ref{4}) becomes \begin{eqnarray*} \begin{cases} d^3b(x^2+x)=1 ,&\\ d^{9}c(x^{8}+x)=\beta. & \end{cases} \end{eqnarray*} Substituting $ d^3=\frac{1}{b(x+x^2)} $ into the second equation of the above system, we have \begin{eqnarray*} \frac{c}{b^3}\cdot \frac{x+x^8}{(x+x^2)^3}=\beta, \end{eqnarray*} which infers that $ \frac{x+x^8}{(x+x^2)^3}\in \mathbb{F}_{2^m} $, since $ \frac{c}{b^3}\in \mathbb{F}^{\ast}_{2^m} $ by assumption. It implies that $ (x+x^2)^3(x+x^8)^q\in \mathbb{F}_{2^m} $. Denoting $ e=x+x^2 $, we have $ x+x^8=e+e^2+e^4 $, and hence $ e^3(e+e^2+e^4)^q\in \mathbb{F}_{2^m} $. Now, according to Lemma \ref{lemma}, $ e=x+x^2 $ is a cube. Then $ b $ is a cube by $ d^3b(x+x^2)=1 $, which contradicts to the assumption that $ b $ is a non-cube. $ iv) $ ${\rm gcd}(3,m)=1$, $ 3s \equiv 1 {\rm ~mod~} n$, $ b $ is a non-cube in $ \mathbb{F}_{2^n} $ and $ \frac{c^{2^{2s}-2^s+1}}{b}\in \mathbb{F}^{\ast}_{2^m} $. Since $ {\rm gcd}(2^s-1,2^n-1)=2^{{\rm gcd}(s,n)}-1=1 $, we have that $ x+x^{2^s}\neq 0 $, when $ x\neq 0,1 $. Then (\ref{4}) becomes \begin{eqnarray*} \begin{cases} d^{2^{3s}+1}b(x+x^2)=1 ,&\\ d^{2^s+1}c(x+x^{2^s})=\beta, & \end{cases} \end{eqnarray*} where $ \beta\in \mathbb{F}_{2^m} $ with $ \beta\neq 0 $, since $ x+x^{2^s}\neq 0 $. By the second equation, we have $ d^{2^s+1}=\frac{\beta}{c(x+x^{2^s})} $. Substituting this relation into the first equation, noting that $ 2^{3s}+1=(2^s+1)(2^{2s}-2^s+1) $, we have \begin{eqnarray*} \frac{b}{c^{2^{2s}-2^s+1}}\cdot \frac{x+x^2}{(x+x^{2^s})^{2^{2s}-2^s+1}}\in \mathbb{F}_{2^m}, \end{eqnarray*} which infers, since $ \frac{b}{c^{2^{2s}-2^s+1}} \in \mathbb{F}^{\ast}_{2^m}$ by assumption, that \begin{eqnarray}\label{5} \frac{x+x^2}{(x+x^{2^s})^{2^{2s}-2^s+1}}\in \mathbb{F}^{\ast}_{2^m}. \end{eqnarray} Now, by the assumption that $ b $ is a non-cube in $ \mathbb{F}_{2^n} $ and $ \frac{c^{2^{2s}-2^s+1}}{b}\in \mathbb{F}^{\ast}_{2^m} $, we have that $ c $ is a non-cube. On the other hand, by (\ref{5}) and Lemma \ref{lemma-2}, we have that $ x+x^{2^s} $ is a cube, which infers that $ c $ is a cube from the second equation $ d^{2^s+1}c(x+x^{2^s})=\beta $ of the above system, a contradiction. $ v) $ $ s=m$, $ b $ is a non-cube in $ \mathbb{F}_{2^n} $, and $ c \notin \mathbb{F}_{2^m}.$ It can be seen that (\ref{4}) becomes \begin{eqnarray*} \begin{cases} d^{3}b(x+x^2)=1 ,&\\ d^{2^m+1}c(x+x^{2^m})=\beta, & \end{cases} \end{eqnarray*} where $ \beta\in \mathbb{F}_{2^m} $. Since $ c \notin \mathbb{F}_{2^m}$, and $ d^{2^m+1}\in \mathbb{F}^{\ast}_{2^m},$ $ x+x^{2^m}\in \mathbb{F}_{2^m} $ for any $ d\neq 0 $, $ x\in \mathbb{F}_{2^n}$, by the second equation, we have $ \beta $ must equal to zero, which infers that $ x\in \mathbb{F}_{2^m} $. Then by the fact that any element of $ \mathbb{F}_{2^m} $ is a cube, we have $ d^3(x+x^2) $ is a cube in $ \mathbb{F}^{\ast}_{2^n} $, which implies that $ b $ is a cube in $ \mathbb{F}^{\ast}_{2^n} $, a contradiction to the assumption that $ b $ is a non-cube. $ vi) $ $ s=m+2 $, $ b $ is a non-cube in $ \mathbb{F}_{2^n} $ and $ bc\in \mathbb{F}^{\ast}_{2^m} $. It can be seen (\ref{4}) becomes \begin{eqnarray*} \begin{cases} d^{3}b(x+x^2)=1 ,&\\ d^{4(q+1)-3}c(x+x^{4q})=\beta, & \end{cases} \end{eqnarray*} where $ \beta\in \mathbb{F}_{2^m} $ with $ \beta\neq 0 $ since $ x+x^{4q}\neq 0 $ when $ x\neq 0, 1$. Since $ d^{3}b(x+x^2)=1 $, we have $ d^{3}=\frac{1}{b(x+x^2)} $. Substituting this relation into the second equation, we have \begin{eqnarray*} d^{4(q+1)}bc(x+x^2)(x+x^{4q})=\beta. \end{eqnarray*} Then by the assumption that $ bc\in \mathbb{F}^{\ast}_{2^m} $, we have $ (x+x^2)(x+x^{4q})\in \mathbb{F}_{2^m} $. According to \cite[Lemma 1]{Budaghyan-Helleseth-Kaleyski-2020}, we have $ x+x^2\neq 0$ is a cube, which infers that $ b $ is a cube by $ d^{3}b(x+x^2)=1 $, a contradiction to the assumption that $ b $ is a non-cube. $ vii) $ $ s=n-1$, $\frac{c^2}{b}\notin \mathbb{F}_{2^m} $. Since $ {\rm gcd}(2^s-1,2^n-1)=2^{{\rm gcd}(s,n)}-1=1 $, we have that $ x+x^{2^s}\neq 0 $, if $ x\neq 0,1 $. It can be seen that (\ref{4}) becomes \begin{eqnarray*} \begin{cases} d^{3}b(x+x^2)=1 ,&\\ d^{2^s+1}c(x+x^{2^s})=\beta, & \end{cases} \end{eqnarray*} where $ \beta\in \mathbb{F}_{2^m}$ with $ \beta \neq 0 $. Squaring the second equation, we have $ d^{3}c^2(x+x^2)=\beta^2 $. Comparing with the first equation, we have $ \frac{c^2}{b}=\beta^2 \in \mathbb{F}_{2^m}$, which contradicts with the assumption that $ \frac{c^2}{b}\notin \mathbb{F}_{2^m} .$ \end{proof} \begin{rmk} Code isomorphism tests described in Section 2 suggest that all the polynomials from the same item of Theorem \ref{thm 3.1} are all CCZ-equivalent; the APN function $x^3+\omega x^{2^s+1}+\omega^2 x^{3q}+x^{(2^s+1)q} $ discovered in \cite{Budaghyan-Helleseth-Kaleyski-2020} is CCZ-equivalent to all the functions in i), ii), respectively, for $ s=m-2 $, and $ s=(m-2)^{-1}~{\rm mod}~n $, if $ {\rm gcd}(3,m)=1 $; the polynomials $ f_{s}(x) $ for $ s=m+2 $ in vi) are equivalent to the ones for $ s=m-2 $ in i); the polynomials $ f_{s}(x) $ for $ s=m $ in v) are equivalent to some functions in family F10 from Table \ref{table2}, see also the arguments in Remark \ref{rmk-f_{m}} below; the polynomial $ f_{s}(x) $ for $ s=n-1 $ in vii) is CCZ-equivalent to $ x^3 $. The remaining value of $ s=3$ in iii) yields APN quadrinomials $ f_{3}(x)$, which are CCZ-inequivalent to any currently known APN function over $ \mathbb{F}_{2^{10}} $. By the arguments above that all the polynomials in the same item are all CCZ-equivalent, we only take a representative of iii). We let $ f_{3}(x)=\omega {\rm Tr}^{n}_{m}(bx^3)+\omega^2{\rm Tr}^{n}_{m}(b^3x^9) $, where $ b $ is a non-cube, $ \omega \in \mathbb{F}_{2^2} \backslash \mathbb{F}_{2} $. We use this $ f_{3}(x) $ to compare against representatives from all the known infinite families including $ f_{s}(x)$, $s=m-2$, $(m-2)^{-1}~{\rm mod}~n $ in i), ii) which are essentially due to Budaghyan, Helleseth, and Kaleyski~(\cite{Budaghyan-Helleseth-Kaleyski-2020}). Note that, Budaghyan et al. had presented a table listing all the representatives, except family F12, of all the known CCZ-inequivalent APN functions over $\mathbb{F}_{2^{10}}$, see Table III of \cite{Budaghyan-Helleseth-Kaleyski-2020}. To complete the work of code isomorphism test, we have to find all the representatives of F12 over $ \mathbb{F}_{2^{10}} $. Thanks to the nice work \cite{Kaspers-Zhou-2020}, we can obtain these representatives. In fact, let $ \gamma $ be a primitive element in $ \mathbb{F}^{\ast}_{2^5} $, according to \cite[Theorem 4.5]{Kaspers-Zhou-2020}, there are exactly 6 of CCZ-inequivalent Taniguchi APN functions from F12: $ i=1 $, take $ \alpha=1 $, $ \beta=1,~\gamma^7,~\gamma^{11} $; $ i=2 $, take $ \alpha=1 $, $ \beta=1,~\gamma^3, ~\gamma^{15} $. The notations $ i,~\alpha,~\beta $ used here are the same as the ones used in family F12 of Table \ref{table2}. \end{rmk} \begin{rmk} Let $ n=2m $ with $ m $ odd, and $ {\rm gcd}(m,3)=1 $. Let $ q=2^m $. Let $ z $ be a primitive element in $ \mathbb{F}^{\ast}_{2^n} $, and $ \omega=z^{\frac{2^n-1}{3}} $. Then $\omega$ is a primitive element in $ \mathbb{F}_{2^2} $. Let $ s=m-2 $ or $ (m-2)^{-1} {\rm ~mod}~n $. Then $ g_{s}(x)=x^3+\omega x^{2^s+1}+\omega^2 x^{3q}+x^{(2^s+1)q} $ is an APN function (\cite{Budaghyan-Helleseth-Kaleyski-2020}). It can be seen that $ g_{s}(x) $ can be covered by our theorem. In fact, noting that $ \omega^{2^s}=\omega^2 $ for any odd $ s $, $ g_{s}(x)=\omega{\rm Tr}^{n}_{m}(\omega^2 x^3)+\omega^2{\rm Tr}^{n}_{m}(\omega^2 x^{2^s+1})=a{\rm Tr}^{n}_{m}(b x^3)+a^q{\rm Tr}^{n}_{m}(c x^{2^s+1})$, where $ a=\omega, b=c=\omega^2$. It is clear that $ a+a^q=1\neq 0 $, and $ b=\omega$ is a non-cube since $ {\rm gcd}(m,3)=1 $, and $ \frac{c^4}{b}=1=\frac{c^{2^t-1}}{b^{2^{2t}}} $, where $ t=(m-2)^{-1} {\rm ~mod}~n$. Then by $ i) $, $ ii) $ of the above theorem, we have that $ g_{s}(x) $ is APN over $ \mathbb{F}_{2^n} $, for $ s=m-2 $, and $ (m-2)^{-1} {\rm ~mod}~n $, respectively. \end{rmk} \begin{rmk}\label{f(m-2)} Let $ n=2m $ with $ m $ odd. Let us investigate the APN property of $ f_{m-2}(x) $ further. A pair ($ b,c $) is said to satisfy property $\mathbf{P}_{m-2}$, if $ b $ is a cube in $ \mathbb{F}^{\ast}_{2^n} $, and $ c\in \mathbb{F}^{\ast}_{2^n} $ such that the following assertion holds: \begin{center} For any $ x\in \mathbb{F}_{2^n}$ with $x\neq 0,1 $, $ x+x^2 $ is a non-cube in $ \mathbb{F}_{2^n}$, if $ \frac{c^4}{b}\cdot \frac{x^q+x^4}{x+x^2} \in \mathbb{F}_{2^m}$. \end{center} \noindent Then $ f_{m-2}(x) $ is APN over $ \mathbb{F}_{2^n}$ for these $ b $, $ c $. In fact, this assertion can be seen from the proof of $ i) $ in the above theorem. With the help of computer, we find that when $ m=5$, $7 $, there exist a lot of pairs ($ b,c $) satisfying $\mathbf{P}_{m-2}$. More precisely, let $m=5$ or $ 7 $, $ z $ be a primitive element in $ \mathbb{F}^{\ast}_{2^{2m}} $, $ j=\frac{(2^m+1)}{3} $, and $ U=\{(z^j)^i~|~{\rm gcd}(3,i)=1,~1\leq i\leq 2^n-1\}$. Then any pair ($ b,c $) with $ b\neq 0 $ a cube, and $\frac{c^4}{b}\in U$ satisfies $\mathbf{P}_{m-2}$. However, when $ m=9$, $11$, there does not exist such ($ b,c $). We therefore propose the following:\\ {\bf Open Problem 1.}~~Does there exist infinite odd integer $ m \geq 1 $ such that $\mathbf{P}_{m-2}$ holds? \end{rmk} \begin{rmk}\label{rmk-f_{m}} Let $ n=2m $ with $ m $ odd, and $ q=2^m $. Let us revisit the function $ f_{m}(x)=a{\rm Tr}^{n}_{m}(bx^{3})+a^q{\rm Tr}^{n}_{m}(cx^{2^m+1}) $ investigated in $ v) $. Replacing $ bx^3 $ by $ bx^{2^i+1} $, we let $ f(x)=a{\rm Tr}^{n}_{m}(bx^{2^i+1})+a^q{\rm Tr}^{n}_{m}(cx^{2^m+1})$, where $ i $ is an odd positive integer with $ {\rm gcd}(i,m)=1 $. With similar arguments, by $ 3~|~2^i+1 $ and $ {\rm gcd}(i,m)=1 $, we can obtain that $ f(x) $ is APN, if $ b $ is a non-cube in $ \mathbb{F}_{2^n}$, and $ c\notin \mathbb{F}_{2^m} $. Note that $ \frac{1}{a}f(x)=dx^{2^m+1}+{\rm Tr}^{n}_{m}(bx^{2^i+1})$, where $ d=a^{q-1}(c+c^q) $ can be chosen as any element in $ \mathbb{F}_{2^n}\backslash\mathbb{F}_{2^m} $, since $ a,~c\notin \mathbb{F}_{q} $, we have that $ f(x) $ in fact are exactly the functions in family F10 up to EA-equivalence. This observation suggests that it is worthy to finding APN functions with the following form: \begin{eqnarray}\label{f_{i,s}} f_{i,s}(x)=a{\rm Tr}^{n}_{m}(bx^{2^i+1})+a^q {\rm Tr}^{n}_{m}(cx^{2^s+1}),~\text{where~} a\in \mathbb{F}_{2^n}~such~that~ a+a^q\neq 0,~n=2m~\text{is a positive integer}. \end{eqnarray} \end{rmk} \begin{rmk} It is noted that there does not exist elements satisfying the conditions in $ iv) $. However, we decide to preserve this item, because we feel that the technique used in the proof may provide some insights for the constructions of APN functions. \end{rmk} \subsection{F, G are both quadratic binomials} Let us consider more general case. Let $ n=2m $ with $ m $ a positive integer. Let \begin{eqnarray}\label{h_{i,s,b,c,d,e}} h_{i,s, b,c,g,e}(x)=a{\rm Tr}^n_{m}(bx^{2^i+1}+cx^{2^{i+m}+1})+ a^q{\rm Tr}^n_{m}(gx^{2^{s}+1}+ex^{2^{s+m}+1}) , \end{eqnarray} where $ a\in \mathbb{F}_{2^n} $ such that $ a+a^q\neq 0 $, $ b,c,g,e\in \mathbb{F}_{2^n} $. In this subsection, we want to find APN functions of the form (\ref{h_{i,s,b,c,d,e}}). We remark first that the APN polynomials considered in family F3 can be covered by $ h_{i,s, b,c,g,e}(x) $. In fact, let $ i=m$, $ b\notin \mathbb{F}_{2^m} $, $c=0 $, $ g=1 $, then (\ref{h_{i,s,b,c,d,e}}) becomes $ a^{q-1}(b+b^q)x^{q+1}+x^{2^s+1}+x^{(2^s+1)q}+ex^{2^sq+1}+e^qx^{2^s+q} $, which are exactly the functions in F3, since $a^{q-1}(b+b^q)$ can be choosen as any elements in $\mathbb{F}_{2^n}\backslash \mathbb{F}_{2^m}$. We can find two infinite families of APN functions with the above form (\ref{h_{i,s,b,c,d,e}}), and computationally prove that they are CCZ-inequivalent to any APN power functions over $ \mathbb{F}_{2^{10}} $, and we can find a new sporadic instance of APN functions over $ \mathbb{F}_{2^{10}} $. \begin{thm}\cite{Williams}\label{Williams} Let $n=2m$, and $a\in \mathbb{F}^{\ast}_{2^n}$. Let $t_{1}$ be one solution in $\mathbb{F}_{2^n}$ of $t^2+at+1=0$ (if $ {\rm Tr}^{n}_{1}\Big(\frac{1}{a^2}\Big)=0 $). Let $f(x)=x^3+x+a$, then $\bullet$ $ f $ has no zeros in $\mathbb{F}_{2^n}$ if and only if ${\rm Tr}^{n}_{1}\Big(\frac{1}{a^2}\Big)=0$, and $t_{1}$ is not a cube in $\mathbb{F}_{2^n}$. $\bullet$ $ f $ has three zeros in $\mathbb{F}_{2^n}$ if and only if ${\rm Tr}^{n}_{1}\Big(\frac{1}{a^2}\Big)=0$, and $t_{1}$ is a cube in $\mathbb{F}_{2^n}$. \end{thm} We need the following theorem, which will be used for generating APN functions (see Corollary \ref{corollary}). Let $ n=2m $ with $ m $ being an odd positive integer, and $ q=2^m $. Let $ x\in \mathbb{F}_{2^n}$ with $ x\neq 0, 1 $. Then fix the following notations for this given element $x$. \begin{eqnarray*} &&r:=x^{q+1};~h:=x+x^q;~c:=x+x^2;\\ &&D:=A(A^{q+1}+B^{q+1});~ H:=A^2(A^qB^3+AB^{3q}+B^{2+2q}), \end{eqnarray*} where $ A, B $ are some elements determined by $ x $. By a routine work, we have that \begin{eqnarray*} h+h^2=c+c^q. \end{eqnarray*} The following result can not only give rise to APN functions of the form (\ref{h_{i,s,b,c,d,e}}) but can also yield Budaghyan-Carlet APN hexanomials (family F3), and hence it has its own importance and we state it as a theorem. The proof can be seen in the appendix. \begin{thm}\label{vital} Let $ n=2m $ with $ m $ being an odd positive integer. Let $x$ be any given element in $\mathbb{F}_{2^n} \backslash \{0,1\}$. Use the notations given as above. Let \begin{eqnarray}\label{key-1} f(y)=Ay^3+By^2+B^qy+A^q=0. \end{eqnarray} Then equation (\ref{key-1}) has no solutions in $\mathbb{F}_{2^n}$, if A, B, c satisfy 1) $ A=c^{2-2q}(h+c+c^2) $, $ B=c+c^2 $, and $ c=x+x^2$ is a non-cube in $ \mathbb{F}_{2^n}$; or 2) $ A=\frac{h+c+c^2}{c^q} $, $ B=1+c$, and $ c=x+x^2$ is a non-cube in $ \mathbb{F}_{2^n}$. \end{thm} \begin{rmk} Let $ n=2m $, and $ q=2^m $. Recall first that the condition needed in family F3 is that \begin{eqnarray}\label{F3} y^{2^i+1}+dy^{2^i}+d^qy+1=0 \end{eqnarray} has no solutions in $ U=\{x\in \mathbb{F}_{2^n}~|~x^{q+1}=1\}$. Here $ i $ is a positive integer with $ {\rm gcd}(i,m)=1 $. When $ i=1 $, this condition is exactly that $y^{3}+dy^{2}+d^qy+1=0$ has no solutions in $ U $. With the same notations as in Theorem \ref{vital}. Let $ A $ be the elements given in 1) or 2). Let $\Gamma=\{A\in \mathbb{F}^{\ast}_{2^m}~|~x\in \mathbb{F}_{2^n}\backslash \mathbb{F}_{2^m},~c=x+x^2~\text{not cube}\}$. Numerical experiments suggest that $ \Gamma $ is always nonempty for any odd $ m $. This can yield Budaghyan-Carlet APN functions in family F3. In fact, let $A\in \Gamma$, then (\ref{key-1}) becomes \begin{eqnarray*} y^3+dy^2+d^qy+1=0,~d=\frac{B}{A}. \end{eqnarray*} According to Theorem \ref{vital}, the above equation has no solutions in $ \mathbb{F}_{2^n} $. Therefore, this theorem can be used to yield APN functions in family F3. It is noted that the existence of the coefficients $ d $ such that the equation (\ref{F3}) has no solutions in $ U $ (or $ \mathbb{F}_{2^n} $) for a given positive integer $ i $ had also been studied in \cite{Bluher-2013,Bracken-Tan-Tan-2014}.We expect that $\Gamma$ does indeed empty for any odd positive integer $ m $, and hence propose the following: {\bf Open problem 2.} Let $ n=2m $ with $ m $ odd. Show that $ \Gamma $ is always nonempty. \noindent It is also interesting and important to consider the following question. {\bf Open problem 3.} Let $ n=2m $ with $ m $ a positive integer, $ q=2^m $. Let $ i $ be a positive with $ {\rm gcd}(m,i)=1 $. Find more exponents $ i $, and elements $ A, B $ such that the following equation has no solutions in $ \mathbb{F}_{2^n} $. \begin{eqnarray*} Ay^{2^i+1}+By^{2^i}+B^qy+A^q=0. \end{eqnarray*} \end{rmk} In the following, we investigate the APN property of the functions with the form (\ref{h_{i,s,b,c,d,e}}) by letting $ i=1, c=0$. We does indeed find two infinite families of APN functions. But, astonishingly enough, the function obtained happened to be CCZ-equivalent to some functions in family F12 with a completely different from that of Taniguchi. \begin{corollary}\label{corollary} Let $n=2m$ be a positive integer with $ m $ odd, and $ q=2^m $. Let $ h_{s}(x)=a{\rm Tr}^n_{m}(bx^3)+ a^q{\rm Tr}^n_{m}(gx^{2^{s}+1}+ex^{2^{s+m}+1})$ with $ a\notin \mathbb{F}_{q} $, $ bge\neq 0 $. Then $ h_{s}(x) $ is APN over $ \mathbb{F}_{2^n} $, if $ s, b, g, e $ satisfy \begin{eqnarray*} &1)&~~~s=2,~b{\rm~is~not~a~cube},~g=1,~ e=\frac{1}{b^{2q-2}}; {~~\rm or} \\ &2)&~~~s=2,~b{\rm~is~not~a~cube},~g=e=b. \end{eqnarray*} \end{corollary} \begin{proof} 1) $ s=2 $,~$ b ${\rm~is~not~a~cube},~$ g=1 $,~ $ e=\frac{1}{b^{2q-2}} $. Let $ F(x)=bx^3 $, $ G(x)=x^{2^s+1}+ex^{2^{s+m}+1} $. Then we have \begin{eqnarray*} \Delta_{d,F}=d^3b(x+x^2), ~ \Delta_{d,G}=d^{2^s+1}(x+x^{2^s})+d^{2^{s+m}+1}e(x+x^{2^{s+m}}). \end{eqnarray*} According to Lemma \ref{fundamental-lemma}, we have that $ h_{s}(x) $ is APN if the following system \begin{eqnarray*} \begin{cases} d^3b(x+x^2)=\alpha&\\ d^{2^s+1}(x+x^{2^s})+d^{2^{s+m}+1}e(x+x^{2^{s+m}})=\beta & \end{cases} \end{eqnarray*} only has $ x=0, 1 $ as its solutions for any $ d\neq 0 $, where $ \alpha$,~$\beta \in \mathbb{F}_{2^m} .$ Assume, to the contrary, that there exists some $ d\neq 0 $, $ x\neq 0,1 $ such that the above system holds. Now let $ s=2 $, $ b $ is a non-cube, $ e=\frac{1}{b^{2q-2}} $. Then $ \alpha\neq 0 $, $ b=\frac{\alpha}{d^3(x+x^2)} $, $ e=b^{-(2q-2)}=d^{6q-6}(x+x^2)^{2q-2} $ (note that $ \alpha^{2q-2}=1 $). Substituting it into the second equation of the above system, we have \begin{eqnarray*} d^5(x+x^4)+d^{10q-5}(x+x^2)^{2q-2}(x+x^{4q})=\beta, \end{eqnarray*} which is equivalent to \begin{eqnarray}\label{h-1} d^5(x+x^4)+d^{10q-5}(x+x^2)^{2q-2}(x+x^{4q})+\Big(d^5(x+x^4)+d^{10q-5}(x+x^2)^{2q-2}(x+x^{4q})\Big)^q=0. \end{eqnarray} Let $u=d^5$. Then the above equation becomes \begin{eqnarray}\label{h-2} u(x+x^4)+u^{2q-1}(x+x^2)^{2q-2}(x+x^{4q})+\Big(u(x+x^4)+u^{2q-1}(x+x^2)^{2q-2}(x+x^{4q})\Big)^q=0. \end{eqnarray} Note that any nonzero element $ u $ of $ \mathbb{F}_{2^n} $ has a unique polar decomposition of the form $ u=vk $, where $ v^{q+1}=1 $, and $ k^{q-1}=1 $. Substituting $ u=vk $ into (\ref{h-2}), then (\ref{h-2}) can be reduced as \begin{eqnarray*} v(x+x^4)+v^{2q-1}(x+x^2)^{2q-2}(x+x^{4q})+\Big(v(x+x^4)+v^{2q-1}(x+x^2)^{2q-2}(x+x^{4q})\Big)^q=0. \end{eqnarray*} Multiplying both sides by $ v^3 $ of the above equation, by the fact that $ v^{q}=v^{-1} $, we have \begin{eqnarray*} Ay^3+By^2+B^qy+A^q=0, \end{eqnarray*} where $ y=v^2\in \mathbb{F}_{2^n}$, and $ A $, $ B $ are given in 1) of Theorem \ref{vital}. Now, according to 1) of Theorem \ref{vital}, we obtian that the element $ x+x^2 $ is a cube, and hence $ b $ is a cube from the first equation $ d^3b(x+x^2)=\alpha $ of the system, since $ \alpha \in \mathbb{F}^{\ast}_{2^m} $ is a cube. This derives a contradiction to the assumption that $ b $ is a non-cube. 2) $ s=2 $,~$ b ${\rm~is~not~a~cube},~$ g=e=b $. Let $F(x)=bx^3$ and $G(x)=bx^5+bx^{4q+1}$. We have \begin{align*} \Delta_{d,F}(x)=d^3b(x+x^2)\hspace{0.2cm} {\rm and}\hspace{0.2cm} \Delta_{d,G}(x)=d^5b(x+x^4)+d^{4q+1}b(x+x^{4q}). \end{align*} By Lemma 2.1, $h_{s}(x)$ is APN if and only if the following system \begin{align*} \begin{cases} d^3b(x+x^2)=\alpha\\ d^5b(x+x^4)+d^{4q+1}b(x+x^{4q})=\beta \end{cases} \end{align*} only has trivial solutions $x\in\mathbb{F}_2$ for any $d\in\mathbb{F}_{2^n}^*$ and $\alpha, \beta\in\mathbb{F}_{2^m}$. Assume now that there exist some $d\in\mathbb{F}_{2^n}^*$, $ \alpha \in\mathbb{F}_{2^m} $, $\beta\in\mathbb{F}_{2^m}$ such that the system has non-trivial solutions $x\in\mathbb{F}_{2^n}\backslash\mathbb{F}_2$. Then $ \alpha\neq 0 $. By the first equation, we have $b=\frac{\alpha}{d^3(x+x^2)}$. Substituting this relation into the second equation, we have \begin{align*} \frac{d^2(x+x^4)}{x+x^2}+\frac{d^{4q-2}(x+x^{4q})}{x+x^2}=\frac{\beta}{\alpha}, \end{align*} which implies that \begin{align*} \frac{d^2(x+x^4)}{x+x^2}+\frac{d^{4q-2}(x+x^{4q})}{x+x^2}+\bigg(\frac{d^2(x+x^4)}{x+x^2}+\frac{d^{4q-2}(x+x^{4q})}{x+x^2}\bigg)^q=0, \end{align*} since $\alpha,~\beta\in\mathbb{F}_{2^m}$. Let $\mu=d^2$. We have \begin{align}\label{eq1} \frac{\mu(x+x^4)}{x+x^2}+\frac{\mu^{2q-1}(x+x^{4q})}{x+x^2}+\bigg(\frac{\mu(x+x^4)}{x+x^2}+\frac{\mu^{2q-1}(x+x^{4q})}{x+x^2}\bigg)^q=0. \end{align} To complete the proof, it suffices to show that $x+x^2$ is a cube of $\mathbb{F}_{2^n}$, which will derive that $ b $ is a cube from the first equation of the above system and this will yield a contradiction to the assumption that $ b $ is a non-cube. Let $\mu=\nu k$, where $\nu^{q+1}=1$ and $k\in\mathbb{F}_{2^m}^*$, and substitute $\mu=\nu k$ into \eqref{eq1}, we have \begin{align*} \frac{\nu(x+x^4)}{x+x^2}+\frac{\nu^{2q-1}(x+x^{4q})}{x+x^2}+\bigg(\frac{\nu(x+x^4)}{x+x^2}+\frac{\nu^{2q-1}(x+x^{4q})}{x+x^2}\bigg)^q=0. \end{align*} Multiplying both sides of the above equation by $\nu^3$, we have \begin{align*} Ay^3+By^2+B^qy+A^q=0, \end{align*} where $y=\nu^2$, $A=\Big(\frac{x+x^{4q}}{x+x^2}\Big)^q$ and $B=\frac{x+x^4}{x+x^2}=1+x+x^2$. According to 2) of Theorem \ref{vital}, $ x+x^2 $ is a cube in $ \mathbb{F}_{2^n} $, otherwise, the above equation has no solutions in $ \mathbb{F}_{2^n} $. \end{proof} {\bf Example 1}. Besides the two infinite classes of APN functions presented in Corollary \ref{corollary}, we can also find a new instance of APN functions over $ \mathbb{F}_{2^{10}} $ CCZ-inequivalent to any other known APN functions. Let $ z $ be a primitive element in $ \mathbb{F}^{\ast}_{2^{10}} $. Then \begin{eqnarray*}\label{h_{1,2, b,0,d,e}-instance} h_{s}(x)=a{\rm Tr}^n_{m}(bx^{3})+ a^q{\rm Tr}^n_{m}(gx^{5}+ex^{4q+1}) \end{eqnarray*} is an APN function over $ \mathbb{F}_{2^{10}} $, where $ b=1 $, $ g=z $, $ e=z^{369}$. \begin{table}[h] \centering \caption{All Known CCZ-inequivalent APN functions over $ \mathbb{F}_{2^{10}} $, $ q=2^5 $} \label{table3} \centering \begin{tabular}{|m{150pt}|m{117pt}|m{55pt}|} \hline Function & Conditions & Family \\ \hline $ x^{2^i+1} $ & $ i=1, 3 $ & Gold \\ \hline $ x^{57} $ & $ -$ & Kasami \\ \hline $ x^{339} $ & $ -$ & Dobbertin \\ \hline $ x^{6}+x^{33}+\alpha^{31}x^{192} $ & $ \alpha $ primitive in $ \mathbb{F}^{\ast}_{2^{10}} $ & F3 \\ \hline $ x^{33}+x^{72}+\alpha^{31}x^{258} $ &$ \alpha $ primitive in $ \mathbb{F}^{\ast}_{2^{10}} $ & F3 \\ \hline $ x^3+{\rm Tr}^{10}_{1}(x^9) $ & $ -$ & F4 \\ \hline $ x^3+\alpha^{-1} {\rm Tr}^{10}_{1}(\alpha ^3x^9) $ & $ \alpha $ primitive in $ \mathbb{F}^{\ast}_{2^{10}} $ & F4 \\ \hline \tabincell{l}{$ u(u^qx+ux^q)(x+x^q)+$\\ $(u^qx+ux^q)^{2^{2i}+2^{3i}}+$\\ $\alpha(u^qx+ux^q)^{2^{2i}}(x+x^q)^{2^i}+$\\ $\beta(x+x^q)^{2^{i}+1}$ } & \tabincell{l}{ $ u $ primitive in $\mathbb{F}^{\ast}_{2^{10}}$, \\ $z$ primitive in $ \mathbb{F}^{\ast}_{2^5} $, \\ $i=1$, $\alpha=1 $, $ \beta=1 , z^7, z^{11}$;\\ $i=2$, $\alpha=1 $, $ \beta=1 , z^3, z^{15}$} & F12 \\ \hline $ B(x)=x^3+\alpha^{341}x^{36} $ & $-$ & sporadic, see \cite{Edel-Kyureghyan-Pott-2006} \\ \hline \tabincell{l}{$ x^3+\omega x^{2^s+1}+$$\omega^2x^{3q}+x^{(2^s+1)q} $} & \tabincell{l}{$s=3,7,$ $\omega$ primitive in $ \mathbb{F}^{\ast}_{2^2} $\\} & F14 \\ \hline \tabincell{l}{$ \alpha{\rm Tr}^{n}_{m}(\alpha x^3)+\alpha^q {\rm Tr}^{n}_{m}(\alpha^{3}x^9) $}& \tabincell{l}{ $ \alpha $ primitive in $ \mathbb{F}^{\ast}_{2^{10}}$ } & F15 \\ \hline \tabincell{l}{$ \alpha{\rm Tr}^{n}_{m}(x^3)+\alpha^q {\rm Tr}^{n}_{m}(\alpha^{11}x^9) $}& \tabincell{l}{ $ \alpha $ primitive in $ \mathbb{F}^{\ast}_{2^{10}}$ } &\tabincell{l}{ sporadic, see\\ Remark \ref{f(m-2)}}\\ \hline \tabincell{l}{$ \alpha{\rm Tr}^{n}_{m}( x^3)+\alpha^q {\rm Tr}^{n}_{m}(\alpha x^5+\alpha^{369}x^{4q+1}) $}& \tabincell{l}{ $ \alpha $ primitive in $ \mathbb{F}^{\ast}_{2^{10}}$ } & \tabincell{l}{ sporadic, see\\ Example 1} \\ \hline \end{tabular} \end{table} \section{Conclusions} Let $ n=2m $, and $ q=2^m $. We studied a class of quadratic functions with the form $ f(x)=a{\rm Tr}^{n}_{m}(F(x))+a^q{\rm Tr}^{n}_{m}(G(x))$, where $ F $, $ G $ are quadratic functions. We found a new infinite family of APN quadrinomials over $ \mathbb{F}_{2^n} $, $ a\in \mathbb{F}_{2^n} $, $ n=2m $ with $ m $ odd as follows. \begin{eqnarray*} f_{1}(x)=a{\rm Tr}^{n}_{m}(bx^3)+a^q{\rm Tr}^{n}_{m}(b^3x^9), ~b \text{~not~a~cube},~a\notin \mathbb{F}_{q}. \end{eqnarray*} We generalized the two infinite families of APN functions obtained in \cite{Budaghyan-Helleseth-Kaleyski-2020} to a broader condition on $ m $, that is, the assumption that $ {\rm gcd}(3,m)=1 $ needed in \cite{Budaghyan-Helleseth-Kaleyski-2020} can be removed, up to CCZ-equivalence. We also found two infinite families of APN functions over $ \mathbb{F}_{2^{2m}} $ for odd $ m $, which turned out to be in family F12, that is, the the Taniguchi APN functions when $ m=5 $, as follows. \begin{eqnarray*} f_{2}(x)=a{\rm Tr}^{n}_{m}(bx^3)+a^q{\rm Tr}^{n}_{m}(x^5+\frac{1}{b^{2q-2}}x^{4q+1}), ~b \text{~not~a~cube},~a \in \mathbb{F}_{2^n} \backslash \mathbb{F}_{2^m}, \end{eqnarray*} and \begin{eqnarray*} f_{3}(x)=a{\rm Tr}^{n}_{m}(bx^3)+a^q{\rm Tr}^{n}_{m}(bx^5+bx^{4q+1}), ~b \text{~not~a~cube},~a \in \mathbb{F}_{2^n} \backslash \mathbb{F}_{2^m}. \end{eqnarray*} Code isomorphism tests showed that $ f_{2} $ and $ f_{3}$ are CCZ-inequivalent to each other over $ \mathbb{F}_{2^{10}}$. We found two new instances of APN functions over $ \mathbb{F}_{2^{10}} $. We also proposed three open problems, and we cordially invite the readers to attack these open problems.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The study of the metallic hydrogen's properties has been lasting for over seventy years. In 1935, Wigner and Huntington for the first time suggested, that under the influence of high pressure ($p$) the hydrogen should transform into the molecular metallic phase [1]. The later theoretical results set the metallization of hydrogen in the pressures range from $300$ GPa to $400$ GPa \cite{Zhang}, \cite{Stadele}. It is worth mentioning, that the understanding of the high-pressure properties of hydrogen seems to be substantial due to the fact, that this element in the metallic state (both molecular and atomic) is appearing inside the planets of the Jovian type \cite{Fortney}. The next step was made by Ashcroft who suggested, that the metallic hydrogen could be potential high-temperature superconductor \cite{Ashcroft}. Since that moment, the constant interest in the properties of the hydrogen's superconducting state has been dated. In particular, the numerical results predict that in the range of the "lower" pressures (up to $500$ GPa) the critical temperature ($T_{C}$) is of the order ($80$-$300$) K \cite{Zhang}, \cite{Richardson}, \cite{Caron}, \cite{Cudazzo}. For the extremely high pressure ($2000$ GPa) the superconducting state in the atomic metallic hydrogen has been studied in the papers \cite{Maksimov}, \cite{Szczesniak1}. It has been shown that the critical temperature decreases from $631$ K to $413$ K for $\mu_{C}^{*}\in(0.1,0.5)$, where $\mu_{C}^{*}$ denotes the critical value of the Coulomb pseudopotential. In the considered case the other thermodynamic parameters diverge from the BCS values \cite{BCS} e.g.: the dimensionless ratio $r_{1}\equiv \Delta C\left(T_{C}\right)/C^{N}\left(T_{C}\right)$ is changing from $1.82$ to $1.68$ together with the Coulomb pseudopotential's growth, whereas the minimum value of $r_{2}\equiv T_{C}C^{N}\left(T_{C}\right)/H^{2}_{C}\left(0\right)$ is equal to $0.162$ \cite{Szczesniak1}. The symbols defining the ratios $r_{1}$ and $r_{2}$ have following meaning: $\Delta C\left(T_{C}\right)$ denotes the specific heat difference between the superconducting and normal state at the critical temperature, $C^{N}\left(T_{C}\right)$ represents the specific heat of the normal state, while $H_{C}\left(0\right)$ is the value of the thermodynamic critical field at the temperature of zero Kelvin. In the literature the specific heat and the thermodynamic critical field were not determined for the molecular metallic hydrogen. Due to the large values of the electron-phonon constant ($\left[\lambda\right]_{p_{1}}=0.93$ and $\left[\lambda\right]_{p_{2}}=1.2$) it has to be presumed, that above quantities should be calculated in the framework of the Eliashberg formalism \cite{Eliashberg}. In the paper, we take into consideration the following values of the pressure: $p_{1}=347$ GPa and $p_{2}=428$ GPa. In this case the molecular metallic hydrogen crystallizes in the {\it Cmca} structure \cite{Zhang}, \cite{Cudazzo1}. \section{THE ELIASHBERG EQUATIONS} The BCS theory is based on the Hamiltonian, which models the pairing interaction in the simplest effective way. We notice that the BCS Hamiltonian can be derived from the more realistic Fr\"{o}hlich's operator ($H_{F}$), which describes the electron-phonon coupling in the open form \cite{Frohlich}, \cite{TransKanon}. The Eliashberg equations are derived directly from $H_{F}$ with an use of the thermodynamic Green functions \cite{Elk}. As a result one can obtain \cite{Eliashberg}: \begin{widetext} \begin{equation} \label{r1} Z_{n}=1+\frac{1}{\omega_{n}}\frac{\pi}{\beta}\sum_{m=-M}^{M} \lambda\left(i\omega_{n}-i\omega_{m}\right) \frac{\omega_{m}Z_{m}} {\sqrt{\omega_m^2Z^{2}_{m}+\phi^{2}_{m}}} \end{equation} and \begin{equation} \label{r2} \phi_{n}= \frac{\pi}{\beta}\sum_{m=-M}^{M} \left[\lambda\left(i\omega_{n}-i\omega_{m}\right)-\mu_{C}^{*}\theta\left(\omega_{c}-|\omega_{m}|\right)\right] \frac{\phi_{m}} {\sqrt{\omega_m^2Z^{2}_{m}+\phi^{2}_{m}}}. \end{equation} \end{widetext} The solutions of the Eliashberg equations are two functions defined on the imaginary axis: the wave function renormalization factor ($Z_{n}\equiv Z\left(i\omega_{n}\right)$) and the order parameter function ($\phi_{n}\equiv\phi\left(i\omega_{n}\right)$); $\omega_{n}\equiv \left(\pi / \beta\right)\left(2n-1\right)$ is the $n$-th Matsubara frequency, where $\beta\equiv\left(k_{B}T\right)^{-1}$ ($k_{B}$ denotes the Boltzmann constant). In the framework of the Eliashberg formalism the order parameter is defined as: $\Delta_{n}\equiv \phi_{n}/Z_{n}$. The symbol $\lambda\left(z\right)$ represents the pairing kernel: \begin{equation} \label{r3} \lambda\left(z\right)\equiv 2\int_0^{\Omega_{\rm{max}}}d\Omega\frac{\Omega}{\Omega ^2-z^{2}}\alpha^{2}F\left(\Omega\right). \end{equation} The Eliashberg functions for the pressures $p_{1}$ and $p_{2}$ ($\alpha^{2}F\left(\Omega\right)$) were determined in the paper \cite{Zhang}. The symbol $\Omega_{\rm{max}}$ denotes the maximum phonon frequency, where $\left[\Omega_{\rm{max}}\right]_{p_{1}}=477$ meV and $\left[\Omega_{\rm{max}}\right]_{p2}=508$ meV. The depairing correlations, appearing between electrons, are modeled with the help of the Coulomb pseudopotential $\mu_{C}^{*}$; the symbol $\theta$ denotes the Heaviside unit function and $\omega_{c}$ is the cut-off frequency ($\omega_{c}=3\Omega_{\rm{max}}$). In the paper we have assumed low value of the Coulomb pseudopotential for both considered pressures ($\mu^{*}_{C}=0.1$). The assumption above can be justified by referring to the Bennemann-Garland formula \cite{Bennemann}: $\mu_{C}^{*}\sim 0.26\rho\left(0\right)/\left[1+\rho\left(0\right)\right]$, where the symbol $\rho\left(0\right)$ indicates the value of the electronic density of states at the Fermi energy. In particular, we have: $\rho_{1}\left(0\right)=0.4512$ states/Ry/spin for $p_{1}$ and $\rho_{2}\left(0\right)=0.4885$ states/Ry/spin for $p_{2}$ \cite{Zhang}. Thus, $\left[\mu_{C}^{*}\right]_{p1}$ and $\left[\mu_{C}^{*}\right]_{p2}$ amounts $\sim 0.081$ and $\sim 0.085$ respectively. From the mathematical point of view the Eliashberg set is composed of the strongly non-linear algebraic equations with the integral kernel $\lambda\left(z\right)$. In order to achieve stable solutions one needs to take into account adequately large number of the equations. In the paper we have assumed $M=800$, what assured stability of the solutions beginning from the temperature of $T_{0}=11.6$ K ($1$ meV). The Eliashberg equations were solved by using the iterative method presented in the papers \cite{Szczesniak2} and \cite{Szczesniak3}. \section{THE NUMERICAL RESULTS} The solutions of the Eliashberg equations for the selected temperatures have been presented in Figs. \fig{f1} and \fig{f2}. It can be easily noticed, that the functions $Z_{m}$ and $\Delta_{m}$ decrease together with the Matsubara frequencies' growth. However, $Z_{m}$ saturates considerably slower than $\Delta_{m}$. The applied pressure significantly influences on the values of wave function renormalization factor and the order parameter. From the physical point of view the above fact means, that together with the increasing of $p$ increases the electron effective mass ($m^{*}_{e}\sim Z_{m=1}$) and the value of critical temperature ($\left[T_{C}\right]_{p_{1}}=108.2$ K, $\left[T_{C}\right]_{p_{2}}=162.7$ K). Analyzing the dependence of $Z_{m}$ and $\Delta_{m}$ on temperature it has been stated, that the solutions of the Eliashberg equations very unlikely evolve with $T$. In Fig. \fig{f3} we have plotted the functions $Z_{m=1}\left(T\right)$ and $\Delta_{m=1}\left(T\right)$. The presented results show, that the wave function renormalization factor is weakly dependent on the temperature and takes its maximum for $T=T_{C}$. In contrast, the temperature dependence of the order parameter is strong and can be modeled by using the formula: $\Delta_{m=1}\left(T\right)=\Delta_{m=1}\left(T_{0}\right)\sqrt{1-\left(\frac{T}{T_{C}}\right)^{\beta}}$, where: $\left[\Delta_{m=1}\left(T_{0}\right)\right]_{p_{1}}=18.15$ meV, $\left[\Delta_{m=1}\left(T_{0}\right)\right]_{p_{2}}=29.12$ meV, $\left[\beta\right]_{p_{1}}=3.58$ and $\left[\beta\right]_{p_{2}}=3.61$. \begin{figure}[h]% \includegraphics*[scale=0.35]{Rys1} \caption{The wave function renormalization factor on the imaginary axis for selected values of the temperature. The figure (A) shows results for $p_{1}$, the figure (B) for $p_{2}$.} \label{f1} \end{figure} \begin{figure}[h]% \includegraphics*[scale=0.35]{Rys2} \caption{The order parameter on the imaginary axis for selected values of the temperature. The figure (A) shows results for $p_{1}$, the figure (B) for $p_{2}$.} \label{f2} \end{figure} \begin{figure}[h]% \includegraphics*[scale=0.35]{Rys3} \caption{ (A) The dependence of the wave function renormalization factor for the first Matsubara frequency on the temperature. (B) The dependence of the order parameter for the first Matsubara frequency on the temperature. In both cases the results for $p_{1}$ and $p_{2}$ are presented.} \label{f3} \end{figure} The thermodynamic properties of the molecular metallic hydrogen can be explicitly determined on the basis of the free energy difference between the superconducting and normal state ($\Delta F$) \cite{Bardeen}: \begin{equation} \label{r4} \frac{\Delta F}{\rho\left(0\right)}=-\frac{2\pi}{\beta}\sum_{m=1}^{M} \left(\sqrt{\omega^{2}_{m}+\Delta^{2}_{m}}- \left|\omega_{m}\right|\right) (Z^{{\rm S}}_{m}-Z^{N}_{m}\frac{\left|\omega_{m}\right|} {\sqrt{\omega^{2}_{m}+\Delta^{2}_{m}}}), \end{equation} where the functions $Z^{S}_{m}$ and $Z^{N}_{m}$ denote the wave function renormalization factors for the superconducting (S) and normal (N) state respectively. In the first step, on the basis of Eq. \eq{r4}, we have calculated the specific heat difference between the superconducting and normal state $\left(\Delta C\equiv C^S-C^N\right)$: \begin{equation} \label{r5} \frac{\Delta C}{k_{B}\rho\left(0\right)} =-\frac{1}{\beta}\frac{d^{2}\left[\Delta F/\rho\left(0\right)\right]} {d\left(k_{B}T\right)^{2}}. \end{equation} Next, the specific heat in the normal state has been calculated with an use of the formula: \begin{equation} \label{r6} \frac{C^{N}}{ k_{B}\rho\left(0\right)}=\frac{\gamma}{\beta}, \end{equation} where $\gamma\equiv\frac{2}{3}\pi^{2}\left(1+\lambda\right)$. In Fig. \ref{f4} we have plotted the temperature dependence of the specific heat for the superconducting and normal state. Assuming previously given values of the electronic density of states it can be shown, that together with the growth of $p$ the specific heat's jump at the critical temperature very strongly increases. In particular, we have: $\left[\Delta C\left(T_{C}\right)\right]_{p_{2}}/\left[\Delta C\left(T_{C}\right)\right]_{p_{1}}\simeq 2.33$. \begin{figure}[h]% \includegraphics*[scale=0.35]{Rys4} \caption{ The dependence of the specific heat in the superconducting and normal state on the temperature. The figure (A) shows results for $p_{1}$, the figure (B) for $p_{2}$. The vertical line indicates a position of the specific heat jump that occurs at $T_{C}$.} \label{f4} \end{figure} Below, we have calculated the values of the thermodynamic critical field (cgs units): \begin{equation} \label{r7} \frac{H_{C}}{\sqrt{\rho\left(0\right)}}=\sqrt{-8\pi \left[\Delta F/\rho\left(0\right)\right]}. \end{equation} In Fig. \ref{f5} we have presented the dependence of $H_{C}/\sqrt{\rho\left(0\right)}$ on the temperature. On the basis of obtained results we can see, that the value of the thermodynamic critical field near the temperature of zero Kelvin ($H_{C}\left(0\right)\simeq H_{C}\left(T_{0}\right)$) also strongly increases with the pressure: $\left[H_{C}\left(0\right)\right]_{p_{2}}/\left[H_{C}\left(0\right)\right]_{p_{1}}\simeq 1.74$. \begin{figure}[h]% \includegraphics*[scale=0.35]{Rys5} \caption{ The thermodynamic critical field as a function of the temperature. The figure (A) shows results for $p_{1}$, the figure (B) for $p_{2}$.} \label{f5} \end{figure} On the basis of determined thermodynamic functions one can calculate two fundamental ratios: $r_{1}$ and $r_{2}$. Let us notice, that in the framework of BCS model these quantities have the universal values ($\left[r_{1}\right]_{\rm BCS}=1.43$ and $\left[r_{2}\right]_{\rm BCS}=0.168$) \cite{BCS}. For the molecular metallic hydrogen following results were obtained: \begin{equation} \label{r8} \left[r_{1}\right]_{p_{1}}=1.91,\qquad \left[r_{1}\right]_{p_{2}}=2.39 \end{equation} and \begin{equation} \label{r9} \left[r_{2}\right]_{p_{1}}=0.152,\qquad \left[r_{2}\right]_{p_{2}}=0.140. \end{equation} It is easy to notice that the calculated ratios significantly diverge from the values predicted by the BCS theory. Additionally it should be underlined, that $r_{1}$ is increasing together with the pressure's growth, whereas the ratio $r_{2}$ is decreasing. \section{SUMMARY} In the paper the free energy difference between the superconducting and normal state for the molecular metallic hydrogen was calculated. The pressure values $p_{1}=347$ GPa and $p_{2}=428$ GPa were taken into consideration. On the basis of achieved results it has been shown, that the specific heat's jump at the critical temperature and the thermodynamic critical field near the temperature of zero Kelvin strongly increase together with the pressure's growth ($\left[\Delta C\left(T_{C}\right)\right]_{p2}/\left[\Delta C\left(T_{C}\right)\right]_{p1}\simeq 2.33$ and $\left[H_{C}\left(0\right)\right]_{p2}/\left[H_{C}\left(0\right)\right]_{p1}\simeq 1.74$). The obtained thermodynamic quantities enable the determination of the fundamental ratios: $r_{1}$ and $r_{2}$. It has been proven, that the ratios $r_{1}$ and $r_{2}$ very considerably differ from the values predicted by the BCS model. In particular, $r_{1}$ is increasing from $1.91$ to $2.39$ together with the pressure's growth; whereas $r_{2}$ is decreasing from $0.152$ to $0.140$. \begin{acknowledgments} The authors wish to thank Prof. K. Dzili{\'n}ski for providing excellent working conditions and the financial support. We also thank A.P. Durajski and D. Szcz{\c{e}}{\'s}niak for the productive scientific discussion that improved the quality of the presented paper. All numerical calculations were based on the Eliashberg function sent to us by: {\bf L. Zhang}, Y. Niu, Q. Li, T. Cui, Y. Wang, {\bf Y. Ma}, Z. He and G. Zou for whom we are also very thankful. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A {\it Lie system} is a $t$-dependent system of first-order ordinary differential equations whose general solution can be expressed as an autonomous function, a {\it superposition rule}, depending on a generic finite family of particular solutions and some constants to be related to initial conditions. The simplest non-trivial examples of Lie systems are Riccati equations and most of their generalisations \cite{CL2011,GL2013,Win1983}. The Lie–Scheffers theorem says that a Lie system is equivalent to a $t$-dependent vector field taking values in a finite-dimensional Lie algebra of vector fields, a so-called {\it Vessiot--Guldberg Lie algebra}. This results shows that being a Lie system is the exception rather than the rule, although Lie systems have numerous and relevant physical and mathematical applications (see \cite{CL2011} and references therein). Lie systems admitting a Vessiot--Guldberg (VG) Lie algebra of Hamiltonian vector fields relative to different geometric structures have been deeply studied in recent years \cite{LS2020}. In particular, \cite{BBHLS13,BCHLS13,Car2019,CLS13,Car2000,Ru10} analyse Lie systems possessing a VG Lie algebra of Hamiltonian vector fields relative to a Poisson structure, see \cite{CLS13} for the symplectic case. Meanwhile, in \cite{Car2014} a no-go theorem was proved showing that Lie--Hamilton systems cannot be used to describe certain Lie systems and one has to consider their VG Lie algebra to consist of Hamiltonian vector fields relative to a Dirac structure. Additionally, $k$-symplectic Lie systems were analysed in \cite{LV15}, and multisymplectic Lie systems, along with a certain case of multisymplectic reduction, were studied in \cite{GLMV19,GLRRV22}. It is quite interesting that finding Lie systems with VG Lie algebras of Hamiltonian vector fields led to a bloom in the description of new applications of Lie systems, despite being differential equations satisfying more restrictive conditions than standard Lie systems \cite{BHLS15,Car2019,LL18}. It is remarkable that geometric structures allow for the construction of superposition rules, constants of motion, and the analysis of relevant properties of Lie systems without relying on the analysis/solution of systems of partial or ordinary differential equations \cite{CL2011,Car2000,CGM07,Win1983}. Geometric techniques also provide new viewpoints to the nature and properties of superposition rules \cite{BCHLS13} and mathematical/physical problems \cite{Car2019,LL18}. In this context, this work investigates Lie systems possessing a VG Lie algebra of Hamiltonian vector fields relative to a contact structure. Contact Lie systems can be considered as a particular case of Jacobi--Lie systems (see \cite{AHKR22,AHR22,HLS15}), which where first introduced in \cite{HLS15}, but that work just contained one example of contact Lie systems, without analysis those properties typical for contact Lie systems. In fact, \cite{HLS15} was mostly dealing with Jacobi--Lie systems on one and two-dimensional manifolds, which do not retrieve contact Lie systems, apart from the trivial ones given by a one-dimensional VG Lie algebra on $\mathbb{R}$. It is remarkable that this work introduces certain Liouville theorems, Marsden--Weinstein reductions and Gromov non-squeezing theorems for the study of contact Lie systems, which can be considered as pioneering in the use of such techniques in the literature of Lie systems. Moreover, it is remarkable that the literature on contact systems is mostly focused on dissipative systems \cite{Bra2017b,DeLeo2019b,Gas2019}. Meanwhile, this work treats mainly Hamiltonian systems not related to dissipation, but having relevant physical applications. Indeed, we analyse types of Liouville theorems for contact Lie systems. We also show that contact Lie systems are naturally related to symplectic Lie systems. This fact is employed to study several properties on the space of solutions of contact Lie systems, e.g. the evolution leaves invariant the volume of solutions and non-squeezing theorems can be applied, which can be understood as a new type of results in the study of contact Lie systems and other Lie systems with related geometric structures. Finally, Willet's reduction of contact manifolds \cite{Wil2002} is applied to the reduction of contact Lie systems. This is more general than some other momentum maps reductions appearing in the literature \cite{DeLeo2019}. It is worth noting that types of Marsden--Weinstein reductions have been applied to Lie systems in \cite{GLRRV22} for multisymplectic Lie systems. The structure of the work goes as follows. In Section 2, a review on contact geometry and contact Hamiltonian systems is provided and Willet's reduction of contact structures is sketched. Section 3 is the theoretical core of the article, introducing the notion of contact Lie system and providing some results concerning the existence of underlying geometric structures for Lie systems. In addition, a Gromov's non-squeezing theorem for conservative contact Lie systems. Section 4 is devoted to present three examples: the Brockett control system, the Schwarz equation and a quantum contact Lie systems. \section{Review on contact mechanics} From now on, all the manifolds and mappings are assumed to be smooth and connected, unless otherwise stated. This will be used to simplify our presentation while stressing its main points. Let us provide a brief introduction to contact geometry (see \cite{Ban2016,Gei2008,Kho2013} for details). \begin{definition} A \textit{contact structure} on a $(2n+1)$-dimensional manifold $M$ is a one-codimensional distribution $\xi$ on $M$. Note that $\xi$ can be locally described as the kernel of a one-form $\eta\in\Omega^1(M)$ such that $\eta\wedge(\mathrm{d}\eta)^n$ is a volume form on $M$. If there exists a global one-form $\eta$ such that $\xi = \ker\eta$, the \textit{contact manifold} $(M,\eta)$ is said to be \textit{co-orientable} and $\eta$ is a {\it contact form} on $M$. \end{definition} Since we are interested in local properties of contact systems, we will hereafter restrict ourselves to co-oriented contact manifolds. To simplify the notation, the co-oriented term will be skipped. Note that if $\eta$ is a contact one-form on $M$, then $f\eta$ is also a contact form on $M$ for every non-vanishing function $f\in\mathscr{C}^\infty(M)$. Moreover, $\eta\wedge(\mathrm{d}\eta)^n$ is a volume form if and only if $\eta$ induces a decomposition of the tangent bundle $\mathrm{T} M = \ker\eta\oplus\ker\mathrm{d}\eta$. A contact manifold $(M,\eta)$ determines a unique vector field $R\in\mathfrak{X}(M)$, called the {\it Reeb vector field}, such that $i(R)\mathrm{d}\eta = 0$ and $i(R)\eta = 1$. Then, $\mathscr{L}_R\eta = 0$ and, therefore, $\mathscr{L}_R\mathrm{d}\eta = 0$. \begin{theorem}{\bf (Darboux theorem)} Given a contact manifold $(M,\eta)$, where $\dim M=2n+1$, every point $p\in M$ admits a local open coordinate neighbourhood with coordinates $\{x^i, y_i, z\}$ and $i = 1,\dotsc,n$, called {\it Darboux coordinates}, such that $$ \eta = \mathrm{d} z - y_i\mathrm{d} x^i\,. $$ In these coordinates, $R = \tparder{}{z}$. \end{theorem} \begin{example}{\bf(Canonical contact manifold)} Consider the product manifold $M = \mathrm{T}^\ast Q\times\mathbb{R}$, where $Q$ is any manifold. The manifold $T^*Q$ admits adapted coordinates $\{q^1,\ldots,q^n,p_1,\ldots,p_n\}$ on ${\rm T}^*Q$, which in turn give rise to natural coordinates $\{q^1,\ldots,q^n,p_1,\ldots,p_n, s\}$ on ${\rm T}^*Q\times \mathbb{R}$. The one-form $\eta = \mathrm{d} s - \theta$, where $\theta$ is the pull-back of the Liouville one-form $\theta_\circ\in\Omega^1(\mathrm{T}^\ast Q)$ relative to the projection $\mathrm{T}^\ast Q\times \mathbb{R}\to\mathrm{T}^\ast Q$. In the chosen coordinates, one may define $$ \eta = \mathrm{d} s - p_i\mathrm{d} q^i\,,\qquad R=\parder{}{s}\,.$$ The coordinates $\{q^i,p_i,s\}$ are Darboux coordinates on $M$. It is remarkable that $\theta_\circ$, and thus $\eta$, is independent of the coordinates $\{q^1,\ldots,q^n\}$ chosen in the first place. \end{example} The previous example is a particular case of {\it contactification} of a symplectic manifold. Given an {\it exact symplectic manifold}, namely a symplectic manifold whose symplectic form is exact, e.g. $(N,-\mathrm{d}\theta)$, the product manifold $M = N\times\mathbb{R}$ is a contact manifold with the contact form $\eta = \mathrm{d} s - \theta$, where the variable $s$ stands for the canonical coordinate in $\mathbb{R}$. Let $(M,\eta)$ be a contact manifold. There exists a vector bundle isomorphism $\flat:\mathrm{T} M\to\mathrm{T}^\ast M$ given by $$ \flat(v) = i(v)\mathrm{d}\eta + (i(v)\eta)\eta\,,\qquad \forall v\in {\rm T}M\,. $$ This isomorphism can be extended to a $\mathscr{C}^\infty(M)$-module isomorphism $\flat:\mathfrak{X}(M)\to\Omega^1(M)$ in the natural manner. Taking into account this isomorphism, $R = \flat^{-1}(\eta)$. A {\it contact Hamiltonian system} \cite{Bra2017b,DeLeo2019b,Gas2019} is a triple $(M,\eta,H)$, where $(M,\eta)$ is a contact manifold and $H\in\mathscr{C}^\infty(M)$. Given a contact Hamiltonian system $(M,\eta,H)$, there exists a unique vector field $X\in\mathfrak{X}(M)$, called the {\it contact Hamiltonian vector field} of $H$, satisfying the following equivalent conditions \begin{enumerate}[(1)] \item $\ i(X)\mathrm{d}\eta = \mathrm{d} H - (\mathscr{L}_R H)\eta\quad$ and $\quad i(X)\eta = -H$, \item $\ \mathscr{L}_X\eta = -(\mathscr{L}_R H)\eta\quad$ and $\quad i(X)\eta = -H$, \item $\ \flat(X) = \mathrm{d} H - (\mathscr{L}_RH + H)\eta$. \end{enumerate} Unlike in the case of symplectic mechanics, the Hamiltonian function is not preserved. More precisely, $$ \mathscr{L}_XH = -(\mathscr{L}_RH)H\,. $$ In Darboux coordinates, the contact Hamiltonian vector field $X$ reads $$ X = \parder{H}{p_i} - \left( \parder{H}{q^i} + p_i\parder{H}{s} \right)\parder{}{p_i} + \left( p_i\parder{H}{p_i} - H \right)\parder{}{s}\,. $$ Its integral curves, $\gamma(t) = (q^i(t), p_i(t), s(t))$ satisfy the system of differential equations $$ \frac{\mathrm{d} q^i}{\mathrm{d} t} = \parder{H}{p_i}\,,\qquad \frac{\mathrm{d} p_i}{\mathrm{d} t} = - \left( \parder{H}{q^i} + p_i\parder{H}{s} \right)\,,\qquad \frac{\mathrm{d} s}{\mathrm{d} t} = p_j\parder{H}{p_j} - H\,,\qquad i = 1,\dotsc,n\,. $$ \begin{example} Consider the contact Hamiltonian system $(\mathrm{T}^\ast Q\times\mathbb{R}, \eta, H)$, where $Q = \mathbb{R}^n$ with linear coordinates $\{q^1,\ldots,q^n\}$ and $\eta = \mathrm{d} s - p_i\mathrm{d} q^i$ and $$ H = \frac{p^2}{2m} + V(q) + \gamma s\,, $$ where $p = \sqrt{p_1^2 + \dotsb + p_n^2}$, $m$ is the mass of a particle, $V(q)$ is a potential and $\gamma$ is a constant. This Hamiltonian function describes a mechanical system consisting of a particle under the influence of a potential $V(q)$ and with a friction force proportional to the momenta. The integral curves of the contact Hamiltonian vector field satisfy the system of equations $$ \frac{\mathrm{d} q^i}{\mathrm{d} t} = \frac{p_i}{m}\,,\qquad \frac{\mathrm{d} p_i}{\mathrm{d} t} = -\parder{V}{q^i}(q) - \gamma p_i\,,\qquad \frac{\mathrm{d} s}{\mathrm{d} t} = \frac{p^2}{2m} - V(q) - \gamma s\,,\qquad i = 1,\dotsc,n\,.$$ Combining the first two equations, one gets $$ m\frac{\mathrm{d}^2 q^i}{\mathrm{d} t^2} + \gamma m \frac{\mathrm{d} q^i}{\mathrm{d} t} + \parder{V}{q^i}(q) = 0\,,\qquad i=1,\ldots,n. $$ \end{example} The formalism presented in this section has a Lagrangian counterpart \cite{Gas2019,PhDThesisXRG}. Let us describe the reduction contact theory \cite{Wil2002}. \begin{definition} Let $\Phi:G\times M\rightarrow M$ be a Lie group acting preserving the contact form, $\eta$, of a contact manifold $(M,\eta)$. A \textit{contact momentum map} associated with the action is $J\colon M\to\mathfrak{g}^\ast$ defined by $$ \langle J(x), \xi\rangle = i_{\tilde\xi_x}\eta_x\,,\qquad \forall x\in M\,, $$ where $\tilde\xi\in\mathfrak{X}(M)$ is the fundamental vector field corresponding to $\xi\in\mathfrak{g}$. \end{definition} The contact momentum map is coad-equivariant \cite{Gei2008}. Note that the momentum map $J$ gives rise to a comomentum map $\lambda\colon\xi\in\mathfrak{g}\mapsto J_\xi\in\mathscr{C}^\infty(M)$ defined by $J_\xi(x) = \langle J(x),\xi\rangle$ for every $x\in M$ and $\xi\in\mathfrak{g}$. \begin{proposition} Let $\Phi:G\times M\rightarrow M$ be a Lie group acting properly on $M$ and preserving the contact form $\eta$ on a contact form $(M,\eta)$. Consider its associated contact momentum map $J\colon M\to\mathfrak{g}^\ast$. Then, \begin{enumerate}[(1)] \item The level sets of the momentum map $J$ are invariant under the action of the flow of the Reeb vector field. \item For every $x\in M$, $v\in\mathrm{T}_x M$ and $\xi\in\mathfrak{g}$, one has $$ \mathrm{d} J_\xi = -i_{\tilde\xi}\mathrm{d}\eta\,. $$ \item If $J(x) = 0$, then $\mathrm{T}(G\cdot x)$ is an isotropic subspace of the symplectic vector space $(\ker\eta_x, \mathrm{d}_x\eta)$. \item $\Ima(\mathrm{d} J(x))^\circ = \{ \xi\in\mathfrak{g}\mid \tilde\xi_x\in\ker\mathrm{d}_x\eta \}$. \end{enumerate} \end{proposition} \begin{definition} Let $\Phi:G\times M\rightarrow M$ be a Lie group acting properly on a contact manifold $(M,\eta)$ preserving the contact form $\eta$. Consider its associated contact momentum map $J\colon M\to\mathfrak{g}^\ast$ and $\mu\in\mathfrak{g}^\ast$. The \textit{kernel group} of $\mu$ is the unique connected Lie subgroup of $G_\mu$ with Lie algebra $\mathfrak{k}_\mu = \ker \restr{\mu}{\mathfrak{g}_\mu}$, where $\mathfrak{g}_\mu$ is the Lie algebra of the isotropy group $G_\mu$. We denote by $K_\mu$ the kernel group of $\mu$. The \textit{contact quotient}, or \textit{contact reduction} of $M$ by $G$ at $\mu$ is $$ M_\mu = J^{-1}(\mathbb{R}^+\mu) / K_\mu\,. $$ \end{definition} \begin{theorem}\label{thm:reduction-willet} Let $G$ be a Lie group acting on a contact manifold $(M,\eta)$ preserving the one-form $\eta$, and let $J:M\to\mathfrak{g}^\ast$ be its associated contact momentum map. Take $\mu\in\mathfrak{g}^\ast$ and let $K_\mu$ be the connected Lie subgroup of $G_\mu$ with Lie algebra $\mathfrak{k}_\mu = \ker \restr{\mu}{\mathfrak{g}_\mu}$. If \begin{enumerate}[(i)] \item $K_\mu$ acts properly on $J^{-1}(\mathbb{R}^+\mu)$, \item $J$ is transverse to $\mathbb{R}^+\mu$, \item $\ker\mu + \mathfrak{g}_\mu = \mathfrak{g}$, \end{enumerate} then the quotient $M_\mu = J^{-1}(\mathbb{R}^+\mu)/K_\mu$, if a manifold, is naturally a contact manifold, i.e. $$ \ker\eta\cap \mathrm{T}\left(J^{-1}(\mathbb{R}^+\mu)\right) $$ gives rise to a contact structure on the quotient $M_\mu$. \end{theorem} \section{Contact Lie systems} A {\it Lie algebra} is a vector space $V$ with a bilinear antisymmetric bracket $[\cdot,\cdot]\colon V\times V\to V$, a {\it Lie bracket}. We denote a Lie algebra by $(V,[\cdot,\cdot])$, or just by $V$ if the Lie bracket is clear from the context. Given two subsets $\mathcal{A},\mathcal{B}\subset V$, we denote by $[\mathcal{A},\mathcal{B}]$ the real vector space generated by the Lie brackets between the elements of $\mathcal{A}$ and $\mathcal{B}$. Then, $\mathrm{Lie}(\mathcal{A},V,[\cdot,\cdot])$, or simply $\mathrm{Lie}(\mathcal{A})$, stands by the smallest Lie subalgebra in the sense of inclusion of $V$ containing $\mathcal{A}$. A {\it $t$-dependent vector field} on $M$ is a map $X\colon\mathbb{R}\times M\to\mathrm{T} M$ such that, for every $t\in\mathbb{R}$, the map $X_t = X(t,\cdot)\colon M\to\mathrm{T} M$ is a vector field. An {\it integral curve} of $X$ is an integral curve $\gamma\colon \mathbb{R}\to\mathbb{R}\times M$ of the {\it autonomisation} of $X$, namely $\tparder{}{t} + X(t,x)\in\mathfrak{X}(\mathbb{R}\times M)$. Every $t$-dependent vector field on $M$ gives rise to its, referred to as {\it associated system} of $X$, of the form $$ \frac{\mathrm{d} x}{\mathrm{d} t} = X(t,x)\,. $$ The curves $\gamma:t\in \mathbb{R}\mapsto (t,x(t))\in \mathbb{R}\times M$, where $x(t)$ is a solution of the above system of differential equations, is called an {\it integral curve} of $X$. Conversely, every system of first-order differential equations in normal form describes the integral curves of a unique $t$-dependent vector field $X$. Hence, this allows us to identify $X$ with its associated system and to use $X$ to refer to both. Note that this will not lead to contradiction, as it will be clear from context what we mean by $X$ in each case. The {\it minimal Lie algebra} of a $t$-dependent vector field $X$ is the Lie algebra $V^X = \mathrm{Lie}(\{X_t\}_{t\in\mathbb{R}})$. A {\it Lie system} on a manifold $M$ is a $t$-dependent vector field $X$ on $M$ whose smallest Lie algebra $V^X$ is finite-dimensional. Notice that $X$ admits a Vessiot--Guldberg Lie algebra if, and only if, $V^X$ is finite-dimensional. A \textit{locally automorphic Lie system} is a triple $(N,X,V)$ such that $V$ is a Vessiot--Guldberg Lie algebra of $X$ whose associated generalised distribution, $\mathcal{D}^V$, is equal to $\mathrm{T} N$. \begin{example}{\rm (Riccati equations)} Consider the differential equation \begin{equation}\label{eq:Riccati} \frac{\mathrm{d} x}{\mathrm{d} t} = a_0(t) + a_1(t)x + a_2(t)x^2\,, \end{equation} where $a_0(t),a_1(t),a_2(t)$ are arbitrary $t$-dependent functions. Note that system \eqref{eq:Riccati} is the associated system of the $t$-dependent vector field $$ X(t,x) = a_0(t)X_0(x) + a_1(t)X_1(x) + a_2(t)X_2(x)\,, $$ where $$ X_0 = \parder{}{x}\,,\quad X_1 = x\parder{}{x}\,,\quad X_2 = x^2\parder{}{x}\,. $$ Since $$ [X_0,X_1] = X_0\,,\quad [X_0,X_2] = 2X_1\,,\quad [X_1,X_2] = X_2\,, $$ it follows that $X_1,X_2,X_3$ span a Lie algebra isomorphic to $\mathfrak{sl}_2$. Thus, $X$ defines a Lie system on $\mathbb{R}$ with Vessiot--Guldberg Lie algebra $\langle X_1,X_2,X_3\rangle\simeq\mathfrak{sl}_2$. \end{example} \begin{definition} A {\it contact Lie system} is a triple $(M,\eta,X)$ where $\eta$ is a contact form on $M$ and $X$ is a Lie system on $M$ whose smallest Lie algebra $V^X$ is a finite-dimensional real Lie algebra of contact Hamiltonian vector fields relative to $\eta$. A {\it contact Lie system} is called \textit{conservative} if the Hamiltonian functions of the vector fields in $V^X$ are fist-integrals of the Reeb vector field of $(M,\eta)$. \end{definition} \begin{example}{\bf (A simple control system)}\label{ex:simple-control} Consider the following system of differential equations in $\mathbb{R}^3$ of the form \begin{equation}\label{eq:simple-control} \left\{\begin{gathered} \frac{\mathrm{d} x}{\mathrm{d} t} = b_1(t)\,,\\ \frac{\mathrm{d} y}{\mathrm{d} t} = b_2(t)\,,\\ \frac{\mathrm{d} z}{\mathrm{d} t} = b_2(t)x\,, \end{gathered}\right. \end{equation} where $b_1(t),b_2(t)$ are two arbitrary functions of time. The relevance of this system is due to its occurrence in control problems \cite{Ra11}. The solutions to \eqref{eq:simple-control} are the integral curves of the $t$-dependent vector field \begin{equation}\label{eq:t-field-simple-control} X = b_1(t)X_1 + b_2(t)X_2\,, \end{equation} where $$ X_1 = \parder{}{x}\,,\quad X_2 = \parder{}{y} + x\parder{}{z}\,. $$ The vector fields $X_1, X_2$, along with the vector field $X_3 = \partial/ \partial {z}$, close a three-dimensional Vessiot--Guldberg Lie algebra $V^X = \langle X_1,X_2,X_3\rangle\simeq\mathfrak{h}_3$ of $X$ on $\mathbb{R}^3$. Note that $\mathfrak{h}_3$ is the so-called three-dimensional Heisenberg Lie algebra. Indeed, the commutations relations for $X_1,X_2,X_3$ read $$ [X_1,X_2] = X_3\,,\quad [X_1,X_3] = 0\,,\quad [X_2,X_3] = 0\,. $$ The vector fields $X_1,X_2,X_3$ are contact Hamiltonian vector fields with respect to the contact form on $\mathbb{R}^3$ given by $$ \eta = \mathrm{d} z - y\,\mathrm{d} x\,, $$ with Hamiltonian functions $$ h_1 = y\,,\quad h_2 = -x\,,\quad h_3 = -1\,, $$ respectively. Hence, the $t$-dependent Hamiltonian for \eqref{eq:simple-control} is given by $$ h(t)=b_1(t)y-b_2(t)x\,. $$ Thus, $(\mathbb{R}^3,\eta,X)$ is a contact Lie system. Note that $\eta$ gives rise to a volume form $\Omega_\eta=\eta\wedge \mathrm{d}\eta$ on $\mathbb{R}^3$. Since $h_1,h_2,h_3$ are first-integrals of the Reeb vector field of $\eta$, namely $X_3$, the evolution of \eqref{eq:simple-control} leaves $\Omega_\eta$ invariant. \end{example} \bigskip The next proposition is a no-go result for the existence of a Poisson structure turning the vector fields of a Vessiot--Guldberg Lie algebra of a Lie system into Hamiltonian vector fields. \begin{proposition} Every locally automorphic Lie system $(N,X,V^X)$ on an odd-dimensional manifold $N$ is not Hamiltonian relative to any Poisson structure. \end{proposition} \begin{proof} Let us prove the proposition by reductio ad absurdum. The characteristic distribution of a Poisson bivector on a manifold is a generalised distribution whose rank is even, but not necessarily constant, at every point of the manifold. Hence, all Hamiltonian vector fields must be contained in a generalised distribution that must have even rank at every point. Meanwhile, the vector fields of a locally automorphic Lie system on an odd-dimensional manifold span an odd-dimensional distribution whose rank matches the dimension of the manifold. If all the vector fields of the Vessiot--Guldberg Lie algebra of the locally automorphic Lie system are Hamiltonian, then the unique distribution containing all of them is odd-dimensional. But then, they cannot be contained in a generalised distribution of even rank at every point. This is a contradiction and locally automorphic Lie systems are not Hamiltonian relative to any Poisson structure. \end{proof} Let us analyse the existence of a contact form making a locally automorphic Lie system into a contact Lie system. \begin{proposition} Let $(N,X,V^X)$ be locally automorphic Lie system. If $\eta$ is a differential form on $N$ such that $\mathcal{L}_X\eta=0$ for every $X\in V^X$, then the value of $\eta$ at a point $N$ determines the value of $\eta$ on the whole $N$. \end{proposition} Let us study the behaviour of the volume form, $\Omega_\eta=\eta\wedge (d\eta)^n$, induced by a contact $(2n+1)$-dimensional manifold $(M,\eta)$ relative to the dynamics of a contact Lie system on $M$ relative to contact form $\eta$. \begin{proposition}\label{Prop:ConLiou} Let $(N,\eta,X)$ be a conservative contact Lie system, then $$ \mathcal{L}_{X_t}\Omega_\eta=0,\qquad \forall t\in \mathbb{R}. $$ \end{proposition} \begin{proof} Recall that the vector fields of the Vessiot--Guldberg Lie algebra of a contact Lie system are such that $$ \mathscr{L}_{X_f}\Omega_\eta=\mathscr{L}_{X_f}(\eta\wedge (\mathrm{d}\eta)^n)=(\mathscr{L}_{X_f}\eta)\wedge (\mathrm{d}\eta)^n+n\eta\wedge \mathrm{d}\mathcal{L}_{X_f}\eta\wedge (\mathrm{d}\eta)^{n-1}=-(n+1)(Rf)\Omega_{\eta}, $$ since $\mathscr{L}_{X_f}\eta=-(Rf)\eta$. Hence, $$ \mathscr{L}_{X_f}\Omega_{\eta}=-(n+1)(Rf)\Omega_\eta. $$ And using $Rf=0$, the result follows. \end{proof} \begin{proposition}\label{prop:conservative-contact-project-symplectic} If $(N,\eta,X)$ is a conservative contact Lie system and the space of integral curves of the Reeb vector field $R$, let us say $N/R$, is a manifold and $\pi_R:N\rightarrow N/R$ is the canonical projection, then $(N/R,\Omega,\pi_*X)$, where $\pi_R^*\Omega=\mathrm{d}\eta$, is a Lie--Hamilton system relative to the symplectic form $\Omega$. \end{proposition} \begin{proof} Since $(N,\eta,X)$ is conservative, then the Lie derivative of the Reeb vector field $R$ with Hamiltonian vector fields, e.g. the elements of $V^X$, is zero. Therefore, all the elements of $V^X$ are projectable onto $N/R$. Moreover, $\mathscr{L}_R\mathrm{d}\eta=0$ and $\iota_R\mathrm{d}\eta=0$. Hence, $\mathrm{d}\eta$ can be projected onto $N/R$. In other words, there exists a two-form, $\Omega$, on $N/R$ such that $\pi^*\Omega=\mathrm{d}\eta$. Moreover, if $\iota_Y\Omega=0$ for a vector field $Y$ on $N/R$, then there exists a vector field $\widetilde{Y}$ on $N$ projecting onto $N/R$. Then, $\pi^*\iota_Y\Omega=\iota_{\widetilde{Y}}\mathrm{d}\eta=0$. Hence, $\widetilde{Y}$ takes values in the kernel of $\mathrm{d}\eta$ and its proportional to $R$. Hence, $\pi_*Y=0$ and $\Omega$ is non-degenerate. Since $\Omega$ is closed, it becomes a symplectic form and the vector fields of $\pi_*V^X$ span a finite-dimensional Lie algebra of Hamiltonian vector fields relative to $\Omega$. Therefore, the $t$-dependent vector field $\pi_*X$ becomes a Lie-Hamilton system relative to $\Omega$. \end{proof} \begin{theorem} {\bf (Gromov's non-squeezing theorem)}. Let $(M,\omega)$ be a symplectic manifold and let $\{x^1,\ldots,x^n,p_1,\ldots,p_n\}$ be Darboux coordinates on an open subset $U\subset M$. Given the set of points $$ B(r)=\left\{(x,p)\in U:\sum_{i=1}^n\left[(x^i-x^i_0)^2+(p_i-p_i^0)^2\right]\leq r^2\right\}, $$ where $(x_0^1,\ldots,x_0^n,p_1^0,\ldots,p_n^0)\in U$, if the image of $B(r)$ under a symplectomorphism $\phi:M\rightarrow M$ is such that $\phi(B(r))\subset C_R$, where $$ C_R=\left\{(x,p)\in U:(x^1-x^1_0)^2+(p_1^0-p_1^0)^2\leq R^2\right\}, $$ then $r\geq R$. \end{theorem} The interest of the Gromov's non-squeezing theorem is due to the fact that it applies to the Hamiltonian system relative to a symplectic form appearing as a projection of a conservative contact Lie system $(N,\eta,X)$ onto $N/R$. Let us recall that a multisymplectic Lie system is triple $(N,\Omega,X)$ where $X$ is a Lie system on $N$ admitting a Vessiot--Guldberg Lie algebra of vector fields relative to the multisymplectic form $\Omega$ (see \cite{GLMV19,GLRRV22} for details). The following proposition, whose prove is immediate, links the conservative contact Lie systems with multisymplectic Lie systems. \begin{corollary} If $(N,\eta,X)$ is a contact Lie system, then $(N,\Omega_\eta,X)$ is a multisymplectic Lie system. \end{corollary} \section{Examples} \subsection{The Brockett control system} Let us consider a second example of a contact Lie system. The Brockett control system in $\mathbb{R}^3$ is given by \cite{Ra11} \begin{equation}\label{eq:Brocket-system} \begin{dcases} \frac{\mathrm{d} x}{\mathrm{d} t} = b_1(t)\,,\\ \frac{\mathrm{d} y}{\mathrm{d} t} = b_2(t)\,,\\ \frac{\mathrm{d} z}{\mathrm{d} t} = b_2(t)x - b_1(t)y\,, \end{dcases} \end{equation} where $b_1(t)$ and $b_2(t)$ are arbitrary $t$-dependent functions. System \eqref{eq:Brocket-system} is associated with the $t$-dependent vector field $$ X = b_1(t)X_1 + b_2(t)X_2\,, $$ where $$ X_1 = \parder{}{x} - y\parder{}{z}\,,\quad X_2 = \parder{}{y} + x\parder{}{z}\,, $$ along with the vector field $X_3 = 2\dparder{}{z}$, close a three-dimensional Vessiot--Guldberg Lie algebra $V^X = \langle X_1,X_2,X_3\rangle$ with commutation relations $$ [X_1,X_2] = X_3\,,\quad [X_1,X_3] = 0\,,\quad [X_2,X_3] = 0\,. $$ As in Example \ref{ex:simple-control}, the $\langle X_1,X_2,X_3\rangle$ is a Vessiot--Guldberg Lie algebra is isomorphic to the three-dimensional Heisenberg Lie algebra. The Lie algebra of symmetries of $V^X$, let us say $\mathrm{Sym}(V^X)$, is spanned by the vector fields $$ Y_1 = \parder{}{x} + y\parder{}{z}\,,\quad Y_2 = \parder{}{y} - x\parder{}{z}\,,\quad Y_3 = 2\parder{}{z}\,, $$ with commutation relations $$ [Y_1,Y_2] = -Y_3\,,\quad [Y_1,Y_3] = 0\,,\quad [Y_2,Y_3] = 0\,, $$ The dual base to $Y_1,Y_2,Y_3$ is $$ \eta_1 = \mathrm{d} x\,,\quad \eta_2 = \mathrm{d} y\,,\quad \eta_3 = \frac{1}{2}(\mathrm{d} z - y\mathrm{d} x + x\mathrm{d} y)\,. $$ It is clear that $\mathrm{d}\eta_3 = \mathrm{d} x\wedge\mathrm{d} y$. Since $\eta_3\wedge\mathrm{d}\eta_3 = \frac{1}{2}\mathrm{d} x\wedge\mathrm{d} y\wedge\mathrm{d} z\neq 0$, we have that $\eta_3$ is a contact form in $\mathbb{R}^3$. It is easy to check that $X_1,X_2,X_3$ are contact Hamiltonian vector fields with respect to the contact structure given by $\eta_3$ with Hamiltonian functions $$ H_1 = y\,,\quad H_2 = -x\,,\quad H_3 = -1 $$ respectively. Thus, the triple $(\mathbb{R}^3,\eta_3,X)$ is a contact Lie system with Vessiot--Guldberg Lie algebra $\langle X_1,X_2,X_3\rangle\simeq \mathfrak{h}_3$. Moreover, one sees that the Reeb vector field is given by $Y_3$. The projection of the original Hamiltonian contact system to $\mathbb{R}^2$ reads \begin{equation}\label{Eq:ProjCon} \frac{\mathrm{d} x}{\mathrm{d} t}=b_1(t)\,,\qquad \frac{\mathrm{d} y}{\mathrm{d} t}=b_2(t)\,, \end{equation} which, as foreseen by Proposition \ref{prop:conservative-contact-project-symplectic}, is Hamiltonian relative to the symplectic form $\Omega=\mathrm{d} x\wedge \mathrm{d} y$ that is determined by the condition $\mathrm{d}\eta=\pi^*\Omega$ for $\pi:\mathbb{R}^3\rightarrow \mathbb{R}^2$. It is worth noting that the Liouville theorem for $\Omega$ on $\mathbb{R}^2$ tells us that the evolution of \eqref{Eq:ProjCon} on $\mathbb{R}^2$ leaves invariant the area of a surface, but since $\{x,y\}$ are Darboux coordinates for $\Omega$, the non-squeezing theorem also says that given a ball in $\mathbb{R}^2$ centred at a point of radius $r$, then if the image of such a ball under the dynamics of \eqref{Eq:ProjCon} is inside a ball in $\mathbb{R}^2$ of radius $R$, then $R\geq r$. In fact, it is easy to see that the evolution of \eqref{Eq:ProjCon} is given by $$ x' = x+\int_0^tb_1(t')\mathrm{d} t'\,,\qquad y' = y+\int_0^tb_2(t')\mathrm{d} t'\,. $$ Then, the image of a ball with center at a point $(x,y)$ at the time $t_0=0$ evolved relative to the evolution given by \eqref{Eq:ProjCon} until $t$ is a new ball with center at $(x',y')$ and the same radius. It is worth noting that, by the Liouville theorem for conservative contact Lie systems, one has that the volume of a space of solutions in $\mathbb{R}^3$ does not vary on time. It is worth noting that \eqref{eq:Brocket-system} is then a Hamiltonian system relative to a multisymplectic form $\Omega_\eta$, and therefore different methods developed in \cite{GLRRV22} can be applied to study its properties. \subsection{Schwarz equation} Consider a Schwarz equation \cite{Ber2007, Ovs2009} of the form \begin{equation}\label{eq:Schwarz-equation} \frac{\mathrm{d}^3 x}{\mathrm{d} t^3} = \frac{3}{2}\left(\frac{\mathrm{d} x}{\mathrm{d} t}\right)^{-1}\left(\frac{\mathrm{d}^2 x}{\mathrm{d} t^2}\right)^2 + 2 b_1(t)\frac{\mathrm{d} x}{\mathrm{d} t}\,, \end{equation} where $b_1\in\mathscr{C}^\infty(\mathbb{R})$ is a non-constant function. Equation \eqref{eq:Schwarz-equation} is of great relevance since it appears when dealing with Ermakov systems \cite{Lea2008} and the Schwarzian derivative \cite{Car2014}. It is well known that equation \eqref{eq:Schwarz-equation} is a higher-order Lie system \cite{Car2012}, i.e. the associated first-order system $$ \frac{\mathrm{d} x}{\mathrm{d} t} = v\,,\quad \frac{\mathrm{d} v}{\mathrm{d} t} = a\,,\quad \frac{\mathrm{d} a}{\mathrm{d} t} = \frac{3}{2}\frac{a^2}{v} + 2b_1(t)v\,, $$ is a Lie system. Indeed, this system is associated with the vector field $X = X_3 + b_1(t) X_1$ defined in $\mathcal{O} = \{ (x,v,a)\in\mathbb{R}^3\mid v\neq 0 \}$ where $$ X_1 = 2v\parder{}{a}\,,\quad X_2 = v\parder{}{v} + 2a\parder{}{a}\,,\quad X_3 = v\parder{}{x} + a\parder{}{v} + \frac{3}{2}\frac{a^2}{v}\parder{}{a}\,. $$ These vector fields satisfy the commutation relations $$ [X_1,X_2] = X_1\,,\quad [X_1,X_3] = 2X_2\,,\quad [X_2,X_3] = X_3\,, $$ and thus span a three-dimensional Lie algebra $V = \langle X_1,X_2,X_3\rangle\simeq\mathfrak{sl}_2$. The Schwarz equation admits a Lie algebra of Lie symmetries, denoted $\mathrm{Sym}(V)$ spanned by the vector fields $$ Y_1 = \parder{}{x}\,,\quad Y_2 = x\parder{}{x} + v\parder{}{v} + a\parder{}{a}\,,\quad Y_3 = x^2\parder{}{x} + 2vx\parder{}{v} + 2(ax + v^2)\parder{}{a}\,. $$ These Lie symmetries satisfy the commutation relations $$ [Y_1,Y_2] = Y_1\,,\quad [Y_1,Y_3] = 2Y_2\,,\quad [Y_2,Y_3] = Y_3\,, $$ and thus $V \simeq\mathrm{Sym}(V)$. The basis $Y_1,Y_2,Y_3$ admits a dual basis $\eta_1,\eta_2,\eta_3$ given by $$ \eta_1 = \mathrm{d} x - \frac{x(ax + 2v^2)}{2v^3}\mathrm{d} v + \frac{x^2}{2v^2}\mathrm{d} a\,,\quad \eta_2 = \frac{ax + v^2}{v^3}\mathrm{d} v - \frac{x}{v^2}\mathrm{d} a\,,\quad \eta_3 = -\frac{a}{2v^3}\mathrm{d} v + \frac{1}{2v^2}\mathrm{d} a\,. $$ Since $$ \eta_2\wedge\mathrm{d}\eta_2 = \frac{1}{v^3}\mathrm{d} x\wedge\mathrm{d} v\wedge\mathrm{d} a\,, $$ we have that $(\mathcal{O},\eta_2)$ is a contact manifold. It is easy to check that $X_1,X_2,X_3$ are contact Hamiltonian vector fields with Hamiltonian functions $$ H_1 = \frac{2x}{v}\,,\quad H_2 = \frac{ax - v^2}{v^2}\,,\quad H_3 = \frac{a(ax-2v^2)}{2v^3} $$ respectively. Thus, we have found that $(\mathcal{O},\eta_2, X)$ is a contact Lie system and its Reeb vector field is $Y_2$. \subsubsection*{Darboux coordinates} However, coordinates $(x,v,a)$ are not Darboux-like coordinates. Consider a new coordinate set given by $$ q = \frac{a}{v}\,,\qquad p = \frac{x}{v}\,,\qquad z = \ln v\,. $$ Using these coordinates, we have $\eta_2 = \mathrm{d} z - p\mathrm{d} q$, the Reeb vector field $Y_2$ becomes $\tparder{}{z}$, and $$ X_1 = 2\parder{}{q}\,,\qquad X_2 = q\parder{}{q} - p\parder{}{p} + \parder{}{z}\,,\qquad X_3 = \frac{q^2}{2}\parder{}{q} + (1-pq)\parder{}{p} + q\parder{}{z}\,. $$ In Darboux coordinates, the Lie symmetries $Y_1,Y_2,Y_3$ read $$ Y_1 = \frac{1}{e^z}\parder{}{p}\,,\qquad Y_2 = \parder{}{z}\,,\qquad Y_3 = e^z\left( 2\parder{}{q} - p^2\parder{}{p} + 2p\parder{}{z} \right)\,. $$ Of course, the vector fields $X_1,X_2,X_3$ are Hamiltonian relative to the contact form $\eta_2$ and the Hamiltonian functions $$ H_1 = 2p\,,\qquad H_2 = pq-1\,,\qquad H_3 = \frac{1}{2}q^2 p - q\,, $$ respectively. Using the Darboux coordinates $(q,p,z)$, we have that $$ X = X_3 + b_1(t)X_1 = \left(\frac{q^2}{2} + 2b_1(t)\right)\parder{}{q} + (1-pq)\parder{}{p} + q\parder{}{z}\,, $$ defining the system of ordinary differential equations \begin{equation}\label{eq:schwarz-darboux} \begin{dcases} \frac{\mathrm{d} q}{\mathrm{d} t} = \frac{q^2}{2} + 2b_1(t)\,,\\ \frac{\mathrm{d} p}{\mathrm{d} t} = 1 - pq\,,\\ \frac{\mathrm{d} z}{\mathrm{d} t} = q\,. \end{dcases} \end{equation} The phase portrait of the system \eqref{eq:schwarz-darboux} is depicted in Figure \ref{fig:Schwarz-darboux}. It is a well-known result in contact dynamics \cite{DeLeo2019b,Gas2019} that the evolution of the Hamiltonian function along a solution is given by $$ \mathscr{L}_{X_H}H = -(\mathscr{L}_R H)H\,, $$ where $R$ denotes the Reeb vector field. Since our Reeb vector field is $Y_2 = \tparder{}{z}$ and the Hamiltonian functions $H_1,H_2,H_3$ do not depend on the coordinate $z$, we have that our system preserves the energy along the solutions. It is said to be conservative. \begin{figure}[ht] \centering \includegraphics[width=0.3\textwidth]{KumSchPlan1.jpeg} \includegraphics[width=0.3\textwidth]{KumSchPlan2.jpeg} \includegraphics[width=0.3\textwidth]{KumSchPlan3.jpeg} \caption{Phase portrait of the system \eqref{eq:schwarz-darboux} from three different perspectives} \label{fig:Schwarz-darboux} \end{figure} Note that the system \eqref{eq:schwarz-darboux} can be projected onto $\mathcal{O}/Y_2\simeq \mathbb{R}^2$. The projected system reads \begin{equation}\label{Eq:ProSc} \frac{\mathrm{d} q}{\mathrm{d} t}=\frac{q^2}2+2b_1(t),\qquad \frac{\mathrm{d} p}{\mathrm{d} t}=1-pq, \end{equation} which is Hamiltonian relative to the symplectic form $\Omega=\mathrm{d} q\wedge \mathrm{d} p$. Indeed, its Hamiltonian function reads $$ K(t,q,p)=\frac 12q^2p+2b_1(t)p $$ The system \eqref{Eq:ProSc} has no equilibrium points for $b_1(t)\geq0$ and two equilibrium points at $$ q = \pm 2\sqrt{-b_1(t)}\,,\quad p = \frac{\pm 1}{2\sqrt{-b_1(t)}} $$ for $b_1(t) < 0$. Taking $b_1(t) = -1/4$, the system \eqref{Eq:ProSc} has the form \begin{equation}\label{Eq:ProSc-part} \frac{\mathrm{d} q}{\mathrm{d} t}=\frac{q^2}2-\frac{1}{2},\qquad \frac{\mathrm{d} p}{\mathrm{d} t}=1-pq, \end{equation} and has equilibrium points $(1,1)$ and $(-1,-1)$. It is easy to check that both equilibria are saddle points. The phase portrait for the system \eqref{Eq:ProSc-part} is depicted in Figure \begin{figure}[ht] \centering \includegraphics[width=0.3\textwidth]{KumSchPlan4.jpeg} \caption{Phase portrait of the reduced Schwarz system. One can see the two saddle points at $(-1,-1)$ and $(1,1)$. \eqref{Eq:ProSc}} \label{fig:phase-portrait-schwarz-reduced} \end{figure} As commented in previous section, the volume of the ball is constant, as can be seen in Figure \ref{fig:evolution-ball-schwarz}, but if the initial ball has radius $r$, then the evolution of the ball cannot be immersed in a ball of radius smaller than $r$. \begin{figure}[ht] \centering \includegraphics[width=0.3\textwidth]{ProjecKum.jpeg} \caption{Evolution of a ball under the reduced Schwarz system \eqref{Eq:ProSc}. One can see that although the ball is deformed, its area is preserved.} \label{fig:evolution-ball-schwarz} \end{figure} \subsection{A quantum contact Lie system} Let us illustrate how the contact reduction can be used to reduce contact Lie systems. Consider the linear space over the real numbers, $\mathfrak{W}=\langle i\widehat{H}_1,\ldots,i\widehat{H}_5\rangle $ spanned by the basis of skew-Hermitian operators $$ i\widehat{H}_1:=i\hat{x}\,,\quad i\widehat{H}_2:=i\hat{p}_x = \parder{}{x}\,,\quad i\widehat{H}_3:=i\hat{y}\,,\quad i\widehat{H}_4:=i\hat{p}_y = \parder{}{y}\,,\quad i\widehat{H}_5:=i\mathrm{Id}\,, $$ where the only non-vanishing commutation relations between the elements of the basis read $$ [i\widehat{H}_1,i\widehat{H}_2] = -i\widehat{H}_5\,,\quad [i\widehat{H}_3,i\widehat{H}_4] = -i\widehat{H}_5\,. $$ The Lie algebra $\mathfrak{W}$ appears in many quantum mechanical problems. Let us consider the Lie algebra morphism $\rho:\mathfrak{W}\mapsto \mathfrak{X}(\mathbb{R}^5)$ satisfying that \begin{gather*} \rho(i\widehat{H}_1) =: X_1 = \parder{}{x_1}\,,\qquad\rho(i\widehat{H}_2) =: X_2 = \parder{}{x_2} - x_1\parder{}{x_5}\,,\qquad \rho(i\widehat{H}_3) =: X_3 = \parder{}{x_3}\,,\\ \rho(i\widehat{H}_4) =: X_4 = \parder{}{x_4} - x_3\parder{}{x_5}\,,\qquad \rho(i\widehat{H}_5) =: X_5 = \parder{}{x_5}\,. \end{gather*} Consider the Lie system on $\mathbb{R}^5$ associated with the $t$-dependent vector field $$ X^Q(t,x)=\sum_{\alpha=1}^5b_\alpha(t)X_\alpha(x)\,,\qquad \forall t\in \mathbb{R}\,,x\in \mathbb{R}^5\,, $$ with arbitrary $t$-dependent functions $b_1(t),\ldots,b_5(t)$, which has a Vessiot--Guldberg Lie algebra $V^Q=\langle X_1,\ldots,X_5\rangle$. The Lie algebra of symmetries of $V^Q$, i.e. the vector fields on $\mathbb{R}^5$ commuting with all elements of $V^Q$, is spanned by the vector fields $$\begin{gathered} Y_1 = \parder{}{x_1} - x_2\parder{}{x_5}\,,\qquad Y_2 = \parder{}{x_2}\,,\qquad Y_3 = \parder{}{x_3} - x_4\parder{}{x_5}\,,\\ Y_4 = \parder{}{x_4}\,,\qquad Y_5 = \parder{}{x_5}\,. \end{gathered} $$ Since $Y_1,\wedge \ldots \wedge Y_5\neq 0$, there exists a basis of dual differential one-forms to $Y_1,\ldots,Y_5$ given by $$ \eta_1 = \mathrm{d} x_1\,,\quad\eta_2 = \mathrm{d} x_2\,,\quad\eta_3 = \mathrm{d} x_3\,,\quad\eta_4 = \mathrm{d} x_4\,,\quad\eta_5 = \mathrm{d} x_5 + x_2\mathrm{d} x_1 + x_4\mathrm{d} x_3\,, $$ i.e. $\eta_i(X_j)=\delta_{ij}$, for $i,j=1,\ldots,5$, where $\delta_{ij}$ is the Kronecker's delta function. Then, $\eta_5\wedge (\mathrm{d}\eta_5)^2=2\mathrm{d} x_1\wedge\mathrm{d} x_2\wedge\mathrm{d} x_3\wedge\mathrm{d} x_4\wedge\mathrm{d} x_5$ is a volume form on $\mathbb{R}^5$ and thus $\eta_5$ becomes a contact form on $\mathbb{R}^5$. Moreover, $X_1,X_2,X_3,X_4,X_5$ are contact Hamiltonian vector fields with Hamiltonian functions $$ h_1 = -x_2\,,\quad h_2 = x_1\,,\quad h_3 = -x_4\,,\quad h_4 = x_3\,,\quad h_5 = -1, $$ respectively. Thus, $(\mathbb{R}^5, \eta_5, X^Q)$ admits a Vessiot--Guldberg Lie algebra $V^Q$ of Hamiltonian vector field relative to $\eta_5$ and $(\mathbb{R}^5, \eta_5, X^Q)$ becomes a contact Lie system. The Reeb vector field of $\eta_5$ is given by $X_5$. Since the Hamiltonian functions $h_1,\ldots,h_5$ are first-integral of the Reeb vector field, it is said that $(\mathbb{R}^5,\eta_5,X)$ is a conservative contact Lie system. It is relevant that many important techniques for studying contact Lie system will be available only for convervative contact Lie systems. Let us consider a Lie algebra of symmetries of $V^Q$ given by $$ V^S = \langle Y_1,Y_2,Y_5\rangle. $$ This Lie algebra is isomorphic to the Heisenberg three-dimensional Lie algebra $\mathfrak{h}_3$. Moreover, the vector fields of $V^S$ are also Hamiltonian relative the contact structure $\eta_5$. The momentum map $J:\mathbb{R}^5\rightarrow \mathfrak{h}_3^*$ associated with $V^S$ is such that $\iota_{X_i}\eta_5=J^i$ for $i=1,2,5$, where $$ J^1=x_2,\qquad J^2=-x_1,\qquad J^5=-1,\qquad $$ Note that $J$ is not a submersion, but its tangent map has constant rank. By the Constant Rank Theorem, $J^{-1}(\mu)$ is a submanifold for every $\mu\in \mathfrak{h}_3^*$ and the tangent space to one of its points is given by the kernel of $\mathrm{T}_pJ$ for every $\mu\in \mathfrak{h}_3^*$. By Theorem \ref{thm:reduction-willet}, the submanifold $J^{-1}(\mathbb{R}_+\mu)$ is invariant relative to the evolution of the contact Lie system. Let us integrate the action \begin{align*} & X_1\,,\qquad x_1'=x_1+\lambda_1\,,&& x_2'=x_2\,,&& x_5'=x_5\,,\\ & X_2\,,\qquad x_1'=x_1\,,&& x_2'=x_2+\lambda_2\,,&& x_5'=x_5-\lambda_2x_1\,,\\ & X_5\,,\qquad x_1'=x_1\,,&& x_2'=x_2\,,&& x_5'=x_5+\lambda_3\,. \end{align*} Therefore, $\mathscr{L}_{X_5}{ J}=0$ and $ \lambda(\mu_1,\mu_2,-1)=(\lambda\mu_1,\lambda \mu_2,-\lambda)\notin {\rm Im}J$ unless $\lambda=1$. Then, $J^{-1}(\mathbb{R}_+\mu)=J^{-1}(\mu)=\{x_1,x_2\}\times \mathbb{R}^3.$ Moreover, $$ J^{-1}(\mu)/G_5=\{x_1,x_2\}\times \mathbb{R}^3. $$ Note that the projection of the initial contact Lie system reads $$ \bar X^Q(t,x)=\sum_{\alpha=3}^5b_\alpha(t)X_\alpha(x),\qquad \forall t\in \mathbb{R}\,,\qquad x\in \mathbb{R}^3 $$ while the projection of the initial Vessiot--Guldberg Lie algebra is given by \begin{gather*} X_3 = \parder{}{x_3}\,,\qquad X_4 = \parder{}{x_4} - x_3\parder{}{x_5}\,,\qquad X_5 = \parder{}{x_5}\,. \end{gather*} These are Hamiltonian vector fields relative to the contact form $\mathrm{d} x_5 + x_4\mathrm{d} x_3$ with Hamiltonian functions $$ \bar h_3 = -x_4\,,\qquad \bar h_4 = x_3\,,\qquad \bar h_5 = -1\,. $$ Since $X_5$ is the Reeb vector fields on $\mathbb{R}^3$ relative to $\mathrm{d} x_5+x_4\mathrm{d} x_3$, the reduced contact Lie system is also conservative. In fact, it could be projected onto $\mathbb{R}^3/X_5\simeq \mathbb{R}^2$, giving rise to a Lie--Hamilton system on $\mathbb{R}^2$ of the form $$ \frac{\mathrm{d} x_3}{\mathrm{d} t}=b_3(t),\qquad \frac{\mathrm{d} x_4}{\mathrm{d} t}=b_4(t) $$ relative to $\omega=\mathrm{d} x_4\wedge \mathrm{d} x_3$. \section{Conclusions and further research} In this paper we have introduced the notion of contact Lie system: systems of first-order differential equations describing the integral curves of a t-dependent vector field taking values in a finite-dimensional Lie algebra of Hamiltonian vector fields relative to a contact structure. In particular, we have studied families of conservative contact Lie systems. We have also developed Liouville theorems, a contact reduction and a Gromov non-squeezing theorems for contact Lie systems. In order to illustrate these results, we have worked out several examples, such as the Brockett control system, the Schwarz equation and a quantum contact Lie system. The reduction procedures developed by Willet \cite{Wil2002} and Albert \cite{Alb1989} and the one introduced in this paper open the door to develop an energy-momentum method \cite{Mars1988} for contact systems, both conservative and non-conservative. This will allow to study the relative equilibria of these systems. \section*{Acknowledgements} X. Rivas acknowledges the financial support of the Ministerio de Ciencia, Innovaci\'on y Universidades (Spain), project PGC2018-098265-B-C33. J. de Lucas and X. Rivas acknowledge partial finantial support from the Novee Idee 2B-POB II project PSP: 501-D111-20-2004310 funded by the ``Inicjatywa Doskonałości - Uczelnia Badawcza'' (IDUB) program. \bibliographystyle{abbrv}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The aim of this paper is to describe oscillatory properties of sequences of gradients of bi-Lipschitz maps in the plane that preserve the orientation, i.e., the gradients of which have a positive determinant. Such mappings naturally appear in non-linear hyperelasticity where they act as \emph{deformations}. Although there are more general definitions of a {deformation}, i.e.\ a function $y: \Omega \to {\mathbb R}^n$ that maps each point in the reference configuration to its current position, we confine ourselves to the one by P.G.~Ciarlet \cite[p.~27]{ciarlet} which requires injectivity in the domain $\Omega\subset{\mathbb R}^n$, sufficient smoothness and orientation preservation. Here, ``sufficient smoothness'' will mean that a considered deformation will be a \emph{homeomorphism} in order to prevent cracks or cavitation and its (weak) deformation gradient will be integrable, i.e.\ $y\in W^{1,p}(\O;{\mathbb R}^n)$ with $1 < p \le +\infty$. Clearly, a deformation is an invertible map but, in our modeling, we put an additional requirement on $y^{-1}$ ---namely, it should again qualify as a deformation, which is motivated by the fact that we aim to model \emph{the elastic response of the specimen}. In the elastic regime, the specimen returns to its original shape after all loads are released and so, since the r\^{o}les of the reference and the deformed configuration can be exchanged, we would like to understand the releasing of loads as applying a new loading, inverse to the original one, in the deformed configuration and the ``return'' of the specimen as the corresponding deformation. Thus, we define the following set of deformations \begin{align} W^{1,p,-p}_+(\Omega;\mathbb{R}^n) = \Big\{&y: \Omega \mapsto y(\Omega) \text{ an orientation preserving homeomorphism}; \nonumber \\ &y\in W^{1,p}(\O;{\mathbb R}^n) \text { and } y^{-1}\in W^{1,p}(y(\Omega);{\mathbb R}^n) \Big\}\ . \label{deformations} \end{align} Although invertibility of deformations is a fundamental requirement in elasticity it is still often omitted in modeling due to the lack of appropriate mathematical tools \color{black} to handle it. \color{black} However, let us mention that some ideas of incorporating invertibility of the deformation already appeared e.g.~in \cite{ball81,ciarlet-necas,fonseca-ganbo,currents,tang,MullerQi,MullerSpector,Henao} and very recently e.g.~in \cite{iwaniec1,daneri-pratelli}. Stable states of the specimen are found by minimizing \begin{eqnarray}\label{motivation} J(y)=\int_\O W(\nabla y(x))\,{\rm d} x\ , \end{eqnarray} where $W \colon {\mathbb R}^{n\times n}\to{\mathbb R}$ is the stored energy density, i.e.\ the potential of the first Piola-Kirchhoff stress tensor, over the set of admissible deformations \eqref{deformations}; possibly with respect to a Dirichlet boundary condition $y = y_0$ on $\partial \Omega$. A natural, still open, question is under which minimal conditions on a continuous $W$ satisfying $W(A)=+\infty$ if $\det A\le 0$ and \begin{equation}\label{blowup} W(A) \to +\infty \qquad \text{whenever} \qquad \det\, A \to 0_+ \end{equation} we can guarantee that $J$ is weakly lower-semicontinuous on \eqref{deformations}. In fact, Problem 1 in Ball's paper \cite{ball-puzzles}: ``\emph{Prove the existence of energy minimizers for elastostatics for quasiconvex stored-energy functions satisfying \eqref{blowup}}'' is closely related. Here we answer this question for the special case of bi-Lipschitz mappings in the plane; i.e.\ we restrict our attention to the setting $p=\infty, n=2$. It is natural to conjecture that the sought equivalent characterization of weak* lower semicontinuity will lead to a suitable notion of quasiconvexity. We confirm this conjecture and show that $J$ is weakly* lower semicontinuous on $W^{1,\infty,-\infty}_+(\Omega;\mathbb{R}^2)$ if and only if it is \emph{bi-quasiconvex} in the sense of Definition \ref{biqc-Def}. \begin{remark}[Quasiconvexity] We say that $W$ is quasiconvex if \begin{eqnarray}\label{quasiconvexity} |\O|W(A)\le\int_\O W(A+\nabla\varphi(x))\,{\rm d} x\ \end{eqnarray} holds for all $\varphi\in W^{1,\infty}_0(\O;{\mathbb R}^n)$ and all $A\in{\mathbb R}^{n\times n}$ \cite{morrey}. It is well known \cite{dacorogna} that if $W$ takes only finite values and is quasiconvex then $J$ in \eqref{motivation} is weakly* lower semicontinuous on $W^{1,\infty}(\Omega;\mathbb{R}^n)$ and so, in particular, also on $W^{1,\infty,-\infty}_+(\Omega;\mathbb{R}^2)$. Nevertheless, as we shall see, classical quasiconvexity is too restrictive in the bi-Lipschitz setting; indeed, since we narrowed the set of deformations it can be expected that a larger class of energies will lead to weak* lower semicontinuity of $J$. This can be also understood from a mechanical point of view: quasiconvex materials are described by energies having the property that among all \emph{deformations} with affine boundary data the affine ones are stable. Thus, since we now restricted the set of deformations it seems natural to verify \eqref{quasiconvexity} only for bi-Lipschitz functions; this is indeed the sought after convexity notion which we call bi-quasiconvexity (cf. Def. \ref{biqc-Def}). \end{remark} To prove our main result, we completely and explicitly characterize gradient Young measures generated by sequences in $W^{1,\infty,-\infty}_+(\Omega;\mathbb{R}^2)$ (cf. Section \ref{results}). Young measures extend the notion of solutions from Sobolev mappings to parametrized measures \cite{ball3,fonseca-leoni,pedregal,r,schonbek,tartar,tartar1,y}. The idea is to describe the limit behavior of $\{J(y_k)\}_{k\in{\mathbb N}}$ along a minimizing sequence $\{y_k\}_{k\in{\mathbb N}}$. Actually, one needs to work with the so-called gradient Young measures because it is the gradient of the deformation entering the energy in \eqref{motivation}. Their explicit characterization is due to Kinderlehrer and Pedregal \cite{k-p1,k-p}; however, it does not take into account any constraint on determinants or invertibility of the generating mappings. In spite of this drawback, gradient Young measures are massively used in literature to model solid-to-solid phase transitions as appearing in, e.g., shape memory alloys; cf.~\cite{ball-james1,mueller,kruzik-luskin,pedregal,r}. Yet, not excluding matrices with a negative determinant may add non-realistic phenomena to the model. Indeed, it is well-known that the modeling of solid-to-solid phase transitions via Young measures is closely related to the so-called quasiconvex envelope of $W$ which must be convex along rank-one lines, i.e.\ lines whose elements differ by a rank-one matrix. Not excluding matrices with negative determinants, however, adds many non-physical rank-one lines to the problem. Notice, for instance, that {\it any} element of SO$(2)$ is on a rank-one line with {\it any} element of $\mathrm{O}(2)\setminus{\rm SO}(2)$. Consequently, the determinant must inevitably change its sign on such line. The first attempt to include constraints on the sign of the determinant of the generating sequence appeared in \cite{quasiregular} where quasi-regular generating sequences in the plane were considered; however injectivity of the mappings could only be treated in the homogeneous case. Then, in \cite{bbmkgpYm} the characterization of gradient Young measures generated by sequences whose gradients are invertible matrices for the case where gradients as well as their inverse matrices are bounded in the $L^\infty$-norm was given. Very recently, Koumatos, Rindler, and Wiedemann \cite{krw} characterized Young measures generated by orientation preserving maps in $W^{1,p}$ for $1<p<n$; however they did not account for the restriction that deformations should be injective. Therefore, this contribution (to our best knowledge) presents the first characterization of Young measures that are generated by sequences that are orientation-preserving and \emph{globally invertible} and so qualify to be admissible deformations in elasticity. Generally speaking, the main difficulty in characterizing sets of Young measures generated by deformations (or, at least, mappings having constraints on the invertibility and/or determinant of the deformation gradient) is that this constraint is \emph{non-convex}. Thus, many of the standardly used techniques such as smoothing by a mollifier kernel are not applicable. In our context, we need to be able to modify the generating sequence on a vanishingly small set near the boundary to have the same boundary conditions as the limit; i.e.\ to construct a cut-off technique. It can be seen from \eqref{quasiconvexity}, that standard proofs of characterizations of gradient Young measures \cite{k-p1,k-p} or weak lower semicontinuity of quasiconvex functionals \cite{dacorogna} \emph{will rely} on such techniques since the test functions in \eqref{quasiconvexity} have fixed boundary data. Usually, the cut-off is realized by convex averaging which is, of course, ruled out here. Novel ideas in \cite{bbmkgpYm,krw} are to solve differential inclusions near the boundary to overcome this drawback. This allows to impose restrictions on the determinant of the generating sequence in several ``soft-regimes''; nevertheless, such techniques have not been generalized to more rigid constraints like the global invertibility. Here we follow a different approach and, for bi-Lipschitz mappings in the plane, we obtain the result by exploiting bi-Lipschitz extension theorems \cite{daneri-pratelli-extension,tukia-ext}. Thus, by following a strategy inspired by \cite{daneri-pratelli} we modify the generating sequence (on a set of gradually vanishing measure near the boundary) first on a one-dimensional grid and then extend it. The main reason why we confine ourselves to the bi-Lipschitz case and do not work in $W^{1,p,-p}_+(\Omega;\mathbb{R}^2)$ with $p< \infty$ is the fact that our technique relies on the extension theorem or, in other words, a full characterization of traces of bi-Lipschitz functions. To our best knowledge, such a characterization is at the moment completely open in $W^{1,p,-p}_+(\Omega;\mathbb{R}^2)$ with $p< \infty$. Still, let us point out its importance for finding minimizers of $J$ over \eqref{deformations}: in fact, constructing an extension theorem allows to precisely characterize the set of Dirichlet boundary data admissible for this problem. Notice that this question appears also in the existence proof for polyconvex materials and usually one assumes there that the set of admissible deformations is nonempty; \cite{ciarlet}. \begin{remark}[Growth conditions] Even though in this paper we restrict our attention to bi-Lipschitz functions, let us point under which growth of the energy we can guarantee that the minimizing sequence of $J$ lies in $W^{1,p,-p}_+(\Omega;\mathbb{R}^n)$. Namely, it follows from the works of J.M. Ball \cite{ball77, ball81} that it suffices to require that $W$ is finite only on the set of matrices with positive determinant and (``cof'' stands for the cofactor in dimension 2 or 3) \begin{equation} C\Big(|A|^p + \frac{1}{\det\,A} + \frac{|\mathrm{cof}(A)|^p}{\det\, A^{p-1}} - 1\Big) \leq W(A) \leq C\Big(|A|^p + \frac{1}{\det\, A} + \frac{|\mathrm{cof}(A)|^p}{\det\, A^{p-1}} + 1\Big), \label{growth-improved} \end{equation} as well as fix suitable boundary data (for example bi-Lipschitz ones).\footnote{As pointed above, since the traces of functions in $W^{1,p,-p}_+(\Omega;\mathbb{R}^2)$ are not precisely characterized to date, it is hard to decide what ``suitable boundary data'' are. In any case, in the plane bi-Lipschitz boundary data are sufficient.} Polyconvexity, i.e.\ convexity in all minors of $A$, is fully compatible with such growth conditions (they are themselves polyconvex) whence if $W$ is polyconvex minimizers of \eqref{motivation} \emph{over $W^{1,p}(\O;{\mathbb R}^n)$, $p>n$} are indeed deformations; i.e.\ are globally invertible and elements of $W^{1,p,-p}_+(\Omega;\mathbb{R}^n)$. We refer, e.g.~, to \cite{ciarlet,dacorogna} for various generalizations of this result. However, while polyconvexity is a sufficient condition it is not a \emph{necessary one}. On the other hand, classical results on quasiconvexity yielding existence of minimizers \cite{dacorogna} are compatible with neither the growth conditions proposed in this remark nor \eqref{blowup}. In fact, existence of a minimizer of \eqref{motivation} on $W^{1,p}(\O;{\mathbb R}^n)$ for quasiconvex $W$ can be, to date, proved only if \begin{eqnarray}\label{gr-cond} c(-1+|A|^p)\le W(A)\le \tilde c(1+|A|^p)\ . \end{eqnarray} The reason why the current proofs of existence of minimizers for quasiconvex cannot be extended to \eqref{growth-improved} is exactly the non-convexity detailed above. \end{remark} The plan of the paper is as follows. We first introduce necessary definitions and tools in Section~\ref{Start}. Then we state the main results in Section~\ref{results}. Proofs are postponed to Section~\ref{Proofs} while the novel cut-off technique is presented in Section \ref{sect-cutOff}. \bigskip \section{Preliminaries}\label{Start} Before stating our main theorems in Section~\ref{results}, let us summarize, at this point, the notation as well as background information that we shall use later on. We define the following subsets of the set of invertible matrices: \begin{align} \label{Rvarrho} {R_\varrho^{2 \times 2}}&=\{A\in{\mathbb R}^{2\times 2}\mbox{ invertible};\ |A^{-1}|\leq \varrho\ \&\ |A| \leq \varrho \}\ , \\ \label{Rvarrho+ }\!\!\!\! {R_{\varrho+}^{2 \times 2}}&=\{A\in {R_\varrho^{2 \times 2}};\ {\rm det}\ A>0\}\, \end{align} for $1\leq\varrho<\infty$. Note that both ${R_\varrho^{2 \times 2}}$ and ${R_{\varrho+}^{2 \times 2}}$ are compact. Set $$ {\mathbb R}^{2 \times 2}_\mathrm{inv} = \bigcup_\varrho {R_\varrho^{2 \times 2}} \qquad \qquad {\mathbb R}^{2 \times 2}_\mathrm{inv+} = \bigcup_\varrho {R_{\varrho+}^{2 \times 2}}. $$ We assume that the matrix norm used above is sub-multiplicative, i.e.\ that $|AB|\leq|A||B|$ for all $A,B\in{\mathbb R}^{2\times 2}$ and such that the norm of the identity matrix is one. This means that if $A\in{R_{\varrho+}^{2 \times 2}}$ then $\min(|A|,|A^{-1}|)\geq 1/\varrho$. \begin{definition} A mapping $y:\O\to{\mathbb R}^2$ is called L-bi-Lipschitz (or shortly bi-Lipschitz) if there is $L\ge 1$ such that for all $x_1,x_2\in\O$ \begin{equation}\label{bi-li} \frac{1}{L}|x_1-x_2|\le |y(x_1)-y(x_2)|\le L|x_1-x_2|\ . \end{equation} The number $L$ is called the bi-Lipschitz constant of $y$. \end{definition} This means that $y$ as well as its inverse $y^{-1}$ are Lipschitz continuous, hence $y$ is homeomorphic. Notice that $\frac{1}{L}\le|\nabla y(x)|\le L$ for almost all $x\in\O$. \begin{definition} We say that $\{y_k\}_{k\in{\mathbb N}}\subset W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ is bounded in $W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ if the bi-Lipschitz constants of $y_k$, $k\in{\mathbb N}$, are uniformly bounded and $\{y_k\}_{k\in{\mathbb N}}$ is bounded in $W^{1,\infty}(\O;{\mathbb R}^n)$. Moreover, we say that $y_k \stackrel{*}{\rightharpoonup} y$ in $W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ if the sequence is bounded and $y_k \stackrel{*}{\rightharpoonup} y$ in $W^{1,\infty}(\O;{\mathbb R}^2)$. \end{definition} We would like to stress the fact that $W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ is not a linear function space. \begin{remark}\label{convergence} Notice that if $y_k \stackrel{*}{\rightharpoonup} y$ in $W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$, we can give a precise statement on how the inverses of $ \{y_k\}$ converge if the target domain is fixed throughout the sequence; i.e.\ if $y_k: \Omega \to \widetilde{\Omega}$ for all $k \in \mathbb{N}$. This can be achieved for example by fixing Dirichlet boundary data through the sequence. In such a case it is easy to see that $y_k^{-1} \stackrel{*}{\rightharpoonup} y^{-1}$ in $W^{1,\infty}(\widetilde{\Omega}, \Omega)$: Since the gradients of the inverses $\nabla y_k^{-1}$ are uniformly bounded by the uniform bi-Lipschitz constants, we may select at least a subsequence converging weakly* in $W^{1,\infty}(\widetilde{\Omega}, \Omega)$ and thus strongly in $L^{\infty}(\widetilde{\Omega}, \Omega)$. Nevertheless, the latter allows us to pass to the limit in the identity $y_k^{-1}(y_k(x)) = x$ for any $x \in \Omega$ and therefore to identify the weak* limit as $y^{-1}$; in other words, the weak limit is identified independently of the selected subsequence which assures that the whole sequence $ \{y_k^{-1}\}_{k\in \mathbb{N}}$ converges weakly* to $y^{-1}$. \end{remark} \bigskip Let us now summarize the theorems on invertibility, extension from the boundary in the bi-Lipschitz case and on approximation by smooth functions needed below. \begin{theorem}[Taken from \cite{ball81}]\label{jmball} Let $\O\subset{\mathbb R}^n$ be a bounded Lipschitz domain. Let $u_0:\overline{\O}\to{\mathbb R}^n$ be continuous in $\overline{\O}$ and one-to-one in $\O$ such that $u_0(\O)$ is also bounded and Lipschitz. Let $u\in W^{1,p}(\O;{\mathbb R}^n)$ for some $p>n$, $u(x)=u_0(x)$ for all $x\in\partial\O$, and let $\det\nabla u>0$ a.e.~in $\O$. Finally, assume that for some $q>n$ \begin{equation} \int_\O|(\nabla u(x))^{-1}|^q\det\nabla u(x)\,{\rm d} x<+\infty\ . \label{integral-Cond} \end{equation} Then $u(\overline{\O})=u_0(\overline{\O})$ and $u$ is a homeomorphism of $\O$ onto $u_0(\O)$. Moreover, the inverse map $u^{-1}\in W^{1,q}(u_0(\O);{\mathbb R}^n)$ and $\nabla u^{-1}(z)=(\nabla u(x))^{-1}$ for $z=u(x)$ and a.a.~$x\in\O$. \end{theorem} \begin{remark} Let us point out that the original statement of the Theorem~\ref{jmball} requires that $u_0(\O)$ satisfies the so-called cone condition and that $\O$ is strongly Lipschitz. These conditions hold if $\O$ and $u_0(\O)$ are bounded and Lipschitz domains; cf.~\cite[p.~83-84]{adams-fournier}. \end{remark} \begin{theorem}[Square bi-Lipschitz extension theorem due to \cite{daneri-pratelli-extension} and previously \cite{tukia-ext}] \label{extensionTheorem} There exists a geometric constant $C\leq 81\cdot 63600$ such that every $L$ bi-Lipschitz map $u: \partial \mathcal{D}(0,1) \mapsto \mathbb{R}^2$ (with $\mathcal{D}(0,1)$ the unit square) admits a $C L^4$ bi-Lipschitz extension $v: \mathcal{D}(0,1) \mapsto \Gamma$ where $\Gamma$ is the bounded closed set such that $\partial \Gamma = u(\partial \mathcal{D}(0,1))$. \end{theorem} \begin{remark}[Rescaled squares] Let us note, that the theorem above holds with the \emph{same geometric constant} $C$ also for rescaled squares $\mathcal{D}(0, \epsilon)$ with some $\epsilon > 0$, possibly small. Indeed, for $u: \partial \mathcal{D}(0, \epsilon) \mapsto \mathbb{R}^2$, we define the rescaled function $\tilde{u}: \partial \mathcal{D}(0,1) \mapsto \mathbb{R}^2$ through $\tilde{u}(x) = \epsilon u(x/\epsilon)$; note that both functions have the same bi-Lipschitz constant. This function is then extended to obtain $\tilde{v}: \mathcal{D}(0,1) \mapsto \mathbb{R}^2$ as in the above theorem. Again we rescale $\tilde{v}$, under preservation of the bi-Lipschitz constant, to $v: \mathcal{D}(0, \epsilon) \mapsto \mathbb{R}^2$ $v=\frac{1}{\epsilon} \tilde v(\epsilon x)$. So, $v$ is $C L^4$ bi-Lipschitz and, since $\tilde{u}$ coincides with $\tilde{v}$ on the boundary of the unit square, $v$ coincides with $u$ on $ \partial \mathcal{D}(0, \epsilon)$. \end{remark} \begin{theorem}[Smooth approximation \cite{iwaniec1} and in the bi-Lipschitz case also by \cite{daneri-pratelli}] \label{SmoothApprox} Let $\O\subset{\mathbb R}^2$ be bounded open and $y\in W^{1,p}(\Omega; {\mathbb R}^2)$ ($1 < p < \infty$) be an orientation preserving homeomorphism. Then it can be, in the $W^{1,p}$-norm, approximated by diffeomorphisms having the same boundary value as $y$. Moreover, if $y$ is bi-Lipschitz, then there exists a sequence of diffeomorphisms $\{y_k\}$ having the same boundary value as $y$ and $y_k$, $y_k^{-1}$ approximate $y$, $y^{-1}$ in $W^{1,p}$-norm with $1 < p < \infty$, respectively. \end{theorem} \subsection{Young measures} We denote by ``${\rm rca}(S)$'' the set of Radon measures on a set $S$. Young measures on a bounded domain $\O\subset\R^{n}$ are weakly* measurable mappings $x\mapsto\nu_x:\O\to {\rm rca}({\mathbb R}^{n\times n})$ with values in probability measures; the adjective ``weakly* measurable'' means that, for any $v\in C_0({\mathbb R}^{n\times n})$, the mapping $\O\to{\mathbb R}:x\mapsto\A{\nu_x,v}=\int_{{\mathbb R}^{n\times n}} v(s)\nu_x(\d s)$ is measurable in the usual sense. Let us remind that, by the Riesz theorem, ${\rm rca}({\mathbb R}^{n\times n})$, normed by the total variation, is a Banach space which is isometrically isomorphic with $C_0({\mathbb R}^{n\times n})^*$, where $C_0({\mathbb R}^{n\times n})$ stands for the space of all continuous functions ${\mathbb R}^{n\times n}\to{\mathbb R}$ vanishing at infinity. Let us denote the set of all Young measures by ${\cal Y}(\O;{\mathbb R}^{n\times n})$. It is known (see e.g.~\cite{r}) that ${\cal Y}(\O;{\mathbb R}^{n\times n})$ is a convex subset of $L^\infty_{\rm w}(\O;{\rm rca}({\mathbb R}^{n\times n}))\cong L^1(\O;C_0({\mathbb R}^{n\times n}))^*$, where the subscript ``w'' indicates the aforementioned property of weak* measurability Let $S\subset{\mathbb R}^{n\times n}$ be a compact set. A classical result \cite{tartar} states that for every sequence $\{Y_k\}_{k\in{\mathbb N}}$ bounded in $L^\infty(\O;{\mathbb R}^{n\times n})$ such that $Y_k(x)\in S$ there exists a subsequence (denoted by the same indices for notational simplicity) and a Young measure $\nu=\{\nu_x\}_{x\in\O}\in{\cal Y}(\O;{\mathbb R}^{n\times n})$ satisfying \begin{eqnarray}\label{jedna2} \forall v\in C(S):\ \ \ \ \lim_{k\to\infty}v(Y_k)=\int_{{\mathbb R}^{n\times n}}v(s)\nu_x(\d s)\ \ \ \ \ \ \ \mbox{ weakly* in }L^\infty(\O)\ . \end{eqnarray} Moreover, $\nu_x$ is supported on $\bar S$ for almost all $x\in\O$. On the other hand, if $\mu=\{\mu_x\}_{x\in\O}$, $\mu_x$ is supported on $S$ for almost all $x\in\O$ and $x\mapsto\mu_x$ is weakly* measurable then there exist a sequence $\{Z_k\}_{k\in{\mathbb N}}\subset L^\infty(\O;{\mathbb R}^{n\times n})$, $Z_k(x)\in S$ and \eqref{jedna2} holds with $\mu$ and $Z_k$ instead of $\nu$ and $Y_k$, respectively. Let us denote by ${\cal Y}^\infty(\O;{\mathbb R}^{n\times n})$ the set of all Young measures which are created in this way, i.e.\ by taking all bounded sequences in $L^\infty(\O;{\mathbb R}^{n\times n})$. Moreover, we denote by ${\cal GY}^\infty(\O;{\mathbb R}^{n\times n})$ the subset of ${\cal Y}^\infty(\O;{\mathbb R}^{n\times n})$ consisting of measures generated by gradients of $\{y_k\}_{k\in{\mathbb N}}\subset W^{1,\infty}(\O;{\mathbb R}^n)$, i.e.\ $Y_k=\nabla y_k$ in \eqref{jedna2}. The following result is due to Kinderlehrer and Pedregal \cite{k-p1,k-p} (see also \cite{mueller,pedregal}): \begin{theorem}[adapted from \cite{k-p1,k-p}] Let $\Omega$ be a bounded Lipschitz domain. Then the parametrized measure $\nu \in {\cal Y}^\infty(\O;{\mathbb R}^{n\times n})$ is in ${\cal GY}^\infty(\O;{\mathbb R}^{n\times n})$ if and only if \begin{enumerate} \item there exists $z \in W^{1,\infty}(\Omega; {\mathbb R}^n)$ such that $\nabla z(x) = \int_{{\R^{n\times n}}} A \nu_x(\d A)$ for a.e. \ $x \in \Omega$, \item $\psi(\nabla z(x)) \leq \int_{{\R^{n\times n}}}\psi(A) \nu_x(\d A) $ for a.e. \ $x \in \Omega$ and for all $\psi$ quasiconvex, continuous and bounded from below, \item $\mathrm{supp} \ \nu_x \subset K$ for some compact set $K \subset {\R^{n\times n}}$ for a.e. \ $x \in \Omega$. \end{enumerate} \end{theorem} \bigskip \section{Main results} \label{results} We shall denote, for $\varrho \geq 1$, \begin{align*} \mathcal{GY}_\varrho^{+\infty,-\infty}(\O;\R^{n\times n}) = \big\{ \nu &\in \mathcal{Y}^\infty(\Omega; {\mathbb R}^{2 \times 2}) \\ &\text { that are generated by $\varrho$-bi-Lipschitz, orientation preserving maps} \big\}, \end{align*} and $$ \mathcal{GY}_+^{\infty,-\infty}(\O;\R^{2\times 2}) = \bigcup_{\varrho \geq 1} \mathcal{GY}_\varrho^{+\infty,-\infty}(\O;\R^{n\times n}). $$ As already pointed out in the introduction we seek for an explicit characterization of $\mathcal{GY}_+^{\infty,-\infty}(\O;\R^{2\times 2})$; it can be expected that, when compared to \cite{k-p1}, we shall restrict the support of the Young measure as in \cite{quasiregular,bbmkgpYm,krw} but also alter the Jensen inequality by changing the notion of quasiconvexity. \begin{definition} \label{biqc-Def} Suppose $v:{\mathbb R}^{2\times 2}\to{\mathbb R}\cup\{+\infty\}$ is bounded from below and Borel measurable. Then we denote $$ Z v(A)=\inf_{\varphi\in W^{1,\infty, -\infty}_{A}(\O;{\mathbb R}^2)}|\O|^{-1}\int_\O v(\nabla \varphi(x))\,{\rm d} x\ , $$ with $$ W^{1,\infty,-\infty}_{A} (\O;{\mathbb R}^2) = \begin{cases} \Big\{y \in W^{1,\infty, -\infty}_+(\O;{\mathbb R}^2); y(x)=Ax \text{ if $x\in\partial \Omega$} \Big\} & \text{if $\det\,A > 0$,} \\ \emptyset &\text{else.} \end{cases} $$ and say that $v$ is bi-quasiconvex on ${\mathbb R}^{2 \times 2}_\mathrm{inv+}$ if $Zv(A) = v(A)$ for all $A \in {\mathbb R}^{2 \times 2}_\mathrm{inv+}$. Here we set $\inf \emptyset=+\infty$. \end{definition} \begin{remark} \mbox{} \label{biQCProp} \begin{enumerate} \item Notice that actually $Zv(A)\le v(A)$ if det $A>0$ and $ Zv(A) = +\infty$ otherwise, so that $Zv\not\le v$ in general. Moreover, the infimum in the definition of $Zv(A)$ is, generically, not attained.\item Any $v$ as in Definition \ref{biqc-Def} bi-quasiconvex if and only if \begin{eqnarray}\label{bi-def} |\O|v(A)\le\int_\O v(\nabla\varphi(x))\,{\rm d} x\ \end{eqnarray} for all $\varphi \in W^{1,\infty,-\infty}_+(\Omega;\mathbb{R}^2)$, $\varphi = Ax$ on $\partial \Omega$ and all $A\in{\mathbb R}^{2 \times 2}_\mathrm{inv+}$. Indeed, clearly if v is bi-quasiconvex then \eqref{bi-def} holds. On the other hand, if \eqref{bi-def} holds, we have that $v(A)\le Zv(A)$ for $A\in {\mathbb R}^{2 \times 2}_\mathrm{inv+}$ by taking the infimum in \eqref{bi-def}. Moreover, $Zv(A)\le v(A)$ for such $A$, so that $Zv(A)=v(A)$. \item We recall that the condition of bi-quasiconvexity is less restrictive than the usual quasiconvexity and there obviously exist bi-quasiconvex functions on ${\mathbb R}^{2\times 2}$ which are not quasiconvex (for example, take $v:{\mathbb R}\to{\mathbb R}$ with $v(0)=1$ and $v(A)=0$ if $A\ne 0$.). Also, we can allow for the growth \eqref{blowup}. \item It is interesting to investigate whether, for any $v$ as from Definition \ref{biqc-Def}, $Zv(A)$ is already a bi-quasiconvex function. If one wants to follow the standard approach known from the analysis of classical quasiconvex function \cite{dacorogna}, this consists in showing that $Zv$ can be actually replaced by $Z'v$ defined through $$ Z' v(A)=\inf_{\varphi\in W^{1,\infty, -\infty}_{A}(\O;{\mathbb R}^2)\text{ piecewise affine }}|\O|^{-1}\int_\O v(\nabla \varphi(x))\,{\rm d} x\ , $$ and that the latter is bi-quasiconvex. To do so, one relies on the density of piecewise affine function which, in our case, is available through Theorem \ref{SmoothApprox}. Moreover, to employ the density argument, one needs to show that $Z'v$ is rank-1 convex on ${\mathbb R}^{2 \times 2}_\mathrm{inv+}$ and hence continuous. This is done by constructing a sequence of faster and faster oscillating laminates that are altered near the boundary to meet the boundary condition. Now, since an appropriate cut-off technique becomes available through this work, it seems that this approach should be feasible. Nevertheless, the details are beyond the scope of the present paper and we leave them for future work. Let us remark that an alternative to the above methods may be possible along the lines of the recent work \cite{Conti}. \end{enumerate} \end{remark} The main result of our paper is the following characterization theorem. \begin{theorem}\label{THM1} Let $\Omega \subset {\mathbb R}^2$ be a bounded Lipschitz domain. Let $\nu\in \mathcal{Y}^\infty(\O;{\mathbb R}^{2\times 2})$. Then $\nu \in \mathcal{GY}_+^{\infty,-\infty}(\O;\R^{2\times 2})$ if and only if the following three conditions hold: \begin{gather} \exists \varrho \geq 1 \text{ s.t. }\, \mathrm{supp}\, \nu_x\subset {R_{\varrho+}^{2 \times 2}} \mbox{ for a.a.~$x\in\O$}\ , \label{supp} \\ \exists\ u\in W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)\ :\ \nabla u(x)=\int_{{\R^{2\times 2}_{\rm inv+}}} A \nu_x(\d A)\ \ ,\label{firstmoment0} \\ \intertext{$\exists \bar{c}(\varrho) > \varrho$ such that for a.a.~$x\in\O$, all $\tilde\varrho\in[\bar{c}(\varrho);+\infty]$, and all $v\in \mathcal{O}(\tilde\varrho)$ the following inequality is valid} Z v(\nabla u(x))\le\int_{{\R^{2\times 2}_{\rm inv+}}} v(A)\nu_x({\rm d} A)\ , \label{qc0} \end{gather} with \begin{equation} \mathcal{O}(\varrho)=\{ v:{\mathbb R}^{2\times 2}\to{\mathbb R}\cup\{+\infty\};\ v\in C({R_\varrho^{2 \times 2}})\ ,\ v(A)=+\infty \mbox { if $A\in{\mathbb R}^{2\times 2}\setminus{R_{\varrho+}^{2 \times 2}}$}\}\ . \label{Orho} \end{equation} \end{theorem} An easy corollary is the following: \begin{corollary}\label{wlsc} Let $\Omega \subset {\mathbb R}^2$ be a bounded Lipschitz domain. Let $v$ be in $\mathcal{O}(+\infty)$. Let $\{y_k\}_{k \in {\mathbb N}} \subset W^{1,\infty,-\infty}_+(\Omega;\mathbb{R}^2)$ and suppose that $y_k \stackrel{*}{\rightharpoonup} y$ in $W^{1,\infty,-\infty}_+(\Omega;\mathbb{R}^2)$. Then $v$ is bi-quasiconvex if and only if $y\mapsto I(y)=\int_\O v(\nabla y(x))\,\d x$ is sequentially weakly* lower semicontinuous with respect to the convergence above. \end{corollary} Finally, as an application we can state the following statement about the existence of minimizers. \begin{proposition}\label{minimizer} Let $\Omega \subset {\mathbb R}^2$ be a bounded Lipschitz domain and let $0\le v \in\mathcal{O}(+\infty)$ be bi-quasiconvex. Let further $\varepsilon>0$ and define $I_\varepsilon:W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)\to{\mathbb R}$ $$ I_\varepsilon (u)= \int_\O v(\nabla u(x))\,\d x +\varepsilon(\|\nabla u\|_{L^\infty(\O;{\mathbb R}^{2\times 2})}+\|\nabla u^{-1}\|_{L^\infty(u(\O);{\mathbb R}^{2\times 2})})\ .$$ Let $u_0\in W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ and $$\mathcal{A}=\{ u\in W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2);\, u=u_0\mbox{ on }\partial\O\}\ .$$ Then there is a minimizer of $I_\varepsilon$ on $\mathcal{A}$. \end{proposition} \begin{remark} \mbox{} \begin{enumerate} \item Note that, we needed in Theorem \ref{THM1} that $\tilde\varrho>\varrho$ so that boundedness of $\int_\Omega v(\nabla y_k) {\rm d} x$ does not yield the right $L^\infty$-constraint of the gradient of the minimizing sequence. This is actually a known fact in the $L^\infty$-case \cite{k-p1} and is usually overcome by assuming that the generating sequence does not need to be Lipschitz but is only bounded in some $W^{1,p}(\Omega; {\mathbb R}^2)$ space. Alternatively, one can use Proposition~\ref{minimizer} stated above. \item It will follow from the proof that the constant $\bar{c}(\varrho)$ is actually determined by the extension Theorem \ref{extensionTheorem}. \item Note that if one can show that $Zv$ is already a bi-quasiconvex function (cf. Remark \ref{biQCProp}(4)) then \eqref{qc0} can be replaced by requiring that \begin{equation} v(\nabla u(x))\le\int_{{\R^{2\times 2}_{\rm inv+}}} v(A)\nu_x({\rm d} A) \label{qc0-alt} \end{equation} is fulfilled for all bi-quasiconvex $v$ in $\mathcal{O}(\tilde\varrho)$. Indeed, \eqref{qc0-alt} follows directly from \eqref{qc0} if $v$ is bi-quasiconvex. On the other hand, if \eqref{qc0-alt} holds and if we knew that $Zv$ is bi-quasiconvex, we know that $$ Zv(\nabla u(x))\le\int_{{\R^{2\times 2}_{\rm inv+}}} Zv(A)\nu_x({\rm d} A) \leq \int_{{\R^{2\times 2}_{\rm inv+}}} v(A)\nu_x({\rm d} A)\ , $$ where the second inequality is due to Remark \ref{biQCProp}(1). \end{enumerate} \end{remark} \bigskip \section{Proofs}\label{Proofs} Here we prove Theorem~\ref{THM1}. Actually, we follow in large parts \cite{k-p1,pedregal} since, as pointed out in the introduction, the main difficulty lies in constructing an appropriate cut-off which we do in Section \ref{sect-cutOff}; so, we mostly just sketch the proof and refer to these references. \subsection{Proof of Theorem \ref{THM1} - Necessity} Condition \eqref{supp} follows from \cite[Propositions~2.4 and 3.3]{bbmkgpYm} and from the fact that any Young measure generated by a sequence bounded in the $L^\infty$ norm is supported on a compact set. In order to show \eqref{firstmoment0}, realize that it expresses the fact that the first moment of $\nu$ is just the weak* limit of a generating sequence $\{\nabla y_k\}\subset L^\infty(\O;{\mathbb R}^{2\times 2})$. The sequence $\{y_k\}$ is also bounded in $W_+^{1,\infty,-\infty}(\O;{\mathbb R}^2)$ and $\{y_k\}$ converges strongly to some $y\in W^{1,\infty}(\O;{\mathbb R}^2)$. Passing to the limit in \eqref{bi-li} written for $y_k$ instead of $y$ shows that $y$ is bi-Lipschitz. The $L^\infty$- weak* convergence of $\det\nabla y_k$ to $\det\nabla y$ finally implies that $y\in W_+^{1,\infty,-\infty}(\O;{\mathbb R}^2)$ as a bi-Lipschitz map cannot change sign of its Jacobian on $\O$. To prove \eqref{qc0} we follow a standard strategy, e.g., as in \cite{pedregal}. First, we show that almost every individual measure $\nu_x$ is a homogeneous Young measure generated by bi-Lipschitz maps with affine boundary data. The latter fact is implied by Theorem~\ref{cut-off}. Then \eqref{qc0} stems from the very definition of bi-quasiconvexity. \begin{lemma}\label{localization} Let $\nu\in \mathcal{GY}_\varrho^{+\infty,-\infty}(\O;\R^{n\times n})$. Then $\mu=\{\nu_a\}_{x\in\O}\in\mathcal{GY}_\varrho^{+\infty,-\infty}(\O;\R^{n\times n})$ for a.e. \ $a\in \O$. \end{lemma} \bigskip \noindent {\it Proof.} Note that the construction in the proof of \cite[Th.~7.2]{pedregal} does not affect orientation-preservation nor the bi-Lipschitz property. Namely, if gradients of a bounded sequence $\{u_k\}\subset W_+^{1,\infty,-\infty}(\O;{\mathbb R}^2)$ generate $\nu$ then for almost all $a\in\O$ one constructs a localized sequence $\{ju_k(a+x/j)\}_{j,k\in{\mathbb N}}$ (note that this function is clearly injective if $u_k$ was; since the norm of the gradient is just shifted this yields the bi-Lipschitz property) whose gradients generate $\mu$ as $j,k\to\infty$. \hfill $\Box$ \bigskip \begin{proposition}\label{proposition:jensen} Let $\nu\in\mathcal{GY}_+^{\infty,-\infty}(\O;\R^{2\times 2})$, supp$\,\nu\subset{R_\varrho^{2 \times 2}}$ be such that $\nabla y(x)= \int_{R_\varrho^{2 \times 2}} A\nu_x({\rm d} A)$ for almost all $x\in\O$, where $y\in W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$. Then for all $\tilde\varrho\in[\bar{c}(\varrho);+\infty]$, almost all $x\in\O$ and all $v\in\mathcal{O}(\tilde\varrho)$ we have \begin{eqnarray} \int_{\R^{2\times 2}_{\rm inv}} v(A)\nu_x({\rm d} A)\ge Z v(\nabla y(x))\ .\end{eqnarray} \end{proposition} \bigskip \noindent{\it Proof.} We know from Lemma~\ref{localization} that $\mu=\{\nu_a\}_{x\in\O}\in\mathcal{GY}_\varrho^{+\infty,-\infty}(\O;\R^{n\times n})$ for a.e.~$a\in\O$, so there exits its generating sequence $\{\nabla u_k\}_{k\in{\mathbb N}}$ such that $\{u_k\}_{k\in{\mathbb N}}\subset W_+^{1,\infty,-\infty}(\O;{\mathbb R}^2)$ and for almost all $x\in\O$ and all $k\in{\mathbb N}$ $\nabla u_k(x)\in{R_\varrho^{2 \times 2}}$. Moreover, $\{u_k\}_{k\in{\mathbb N}}$ weakly* converges to the map $x \mapsto (\nabla y(a))x$ which is bi-Lipschitz. Using Corollary~\ref{corollary-cut-off}, we can, without loss of generality, suppose that $u_k$ is $\tilde{\varrho}$-bi-Lipschitz for all $k\in{\mathbb N}$ and $u_k(x)=\nabla y(a)x$ if $x\in\partial\O$. Therefore, we have $$ |\O|\int_{\R^{2\times 2}_{\rm inv+}} v(A)\nu_a({\rm d} A) = \lim_{k\to\infty} \int_\O v(\nabla u_k(x))\,{\rm d} x \ge |\O|Z v(\nabla y(a)) \ .$$ \hfill $\Box$ \bigskip \subsection{Proof of Theorem~\ref{THM1} - sufficiency} We need to show that conditions \eqref{supp},\eqref{firstmoment0}, and \eqref{qc0} are also sufficient for $\nu \in\mathcal{Y}^\infty(\O;{\mathbb R}^{2\times 2})$ to be in ${\mathcal{GY}^{+\infty,-\infty}(\O;{\mathbb R}^{2\times 2})}$. Put \begin{eqnarray}\mathcal{U}^\varrho_A=\{y\in W_{A}^{1,\infty,-\infty}(\O;{\mathbb R}^2);\ \nabla y(x)\in {R_{\varrho+}^{2 \times 2}}\,\mbox{for a.a.~$x\in\O$}\}\ ;\end{eqnarray} In other words this is the set of $\varrho$-bi-Lipschitz functions with affine boundary values equal to $x\mapsto Ax$. Consider for $A\in{\mathbb R}^{2\times 2}_\mathrm{inv}$ the set \begin{eqnarray}\mathcal{M}^\varrho_A=\{\overline{\delta_{\nabla y}};\ y\in\mathcal{U}^\varrho_A\}\ ,\end{eqnarray} where $\overline{\delta_{\nabla y}}\in {\rm rca}({\mathbb R}^{2\times 2})$ is defined \color{black} for all $v\in C_0({\mathbb R}^{2\times 2})$ \color{black} as $\left\langle \overline{\delta_{\nabla y}}, v\right\rangle=|\O|^{-1}\int_\O v(\nabla y(x))\,{\rm d} x$; $\overline{\mathcal{M}^\varrho_A}$ will denote its weak$^*$ closure. \begin{lemma}\label{convexity} Let $A\in{R_{\varrho+}^{2 \times 2}}$. Then the set $\mathcal{M}^\varrho_A$ is nonempty and convex. \end{lemma} \begin{proof} To show that $\mathcal{M}^\varrho_A\ne\emptyset$ is trivial because $x\mapsto y(x)=Ax$ is an element of this set as $A$ has a positive determinant. To show that $\mathcal{M}^\varrho_A$ is convex we follow \cite[Lemma~8.5]{pedregal}. We take $y_1,y_2\in \mathcal{U}^\varrho_A$ and, for a given $\lambda\in(0,1)$, we find a subset $D\subset\O$ such that $|D|=\lambda|\O|$. There are two countable disjoint families of subsets of $D$ and $\O\setminus D$ of the form $$ \{a_i+\epsilon_i\O;\ a_i\in D,\ \epsilon_i>0,\ a_i+\epsilon_i\O\subset D\}$$ and $$ \{b_i+ \rho_i\O;\ b_i\in \O\setminus D,\ \rho_i>0,\ b_i+\rho_i\O\subset\O \setminus D\}\ $$ such that $$ D=\bigcup_{i}(a_i+\epsilon_i\O)\cup N_0 \ , \qquad \O\setminus D=\bigcup_{i}(b_i+\rho_i\O)\cup N_1 \ , $$ where the Lebesgue measure of $N_0$ and $N_1$ is zero. We define $$ y(x)= \begin{cases} \epsilon_iy_1\left(\frac{x-a_i}{\epsilon_i}\right)+Aa_i & \mbox{ if $x\in a_i+\epsilon_i\O$, }\\ \rho_iy_2\left(\frac{x-b_i}{\rho_i}\right)+Ab_i & \mbox{ if $x\in b_i+\rho_i\O$, }\\ Ax &\mbox{ otherwise,} \end{cases} \quad \mbox{yielding} \quad \nabla y(x)= \begin{cases} \nabla y_1\left(\frac{x-a_i}{\epsilon_i}\right)& \mbox{ if $x\in a_i+\epsilon_i\O$, }\\ \nabla y_2\left(\frac{x-b_i}{\rho_i}\right) & \mbox{ if $x\in b_i+\rho_i\O$, }\\ A &\mbox{ otherwise.} \end{cases} $$ We must show that $y$ is $\varrho$-bi-Lipschitz; actually, as $\nabla y(x) \in {R_{\varrho+}^{2 \times 2}}$ a.e., we only need to check the injectivity of the mapping. To this end, we apply Theorem~\ref{jmball}. Notice that \eqref{integral-Cond} clearly holds for any $q \in (1, \infty)$ due to the a.e. \ bounds on $\nabla y$. Moreover, we have affine boundary data, $y(x)=Ax$, so that indeed the boundary data form a homeomorphism and, since $\Omega$ was a bounded Lipschitz domain, so will be $A\Omega=\{Ax;\, x\in\O\}$. Thus we conclude that, indeed, $y$ is $\varrho$-bi-Lipschitz. In particular, $y\in{\mathcal U}_A^\varrho$ and $\overline{\delta_{\nabla y}}=\lambda\overline{\delta_{\nabla y_1}}+(1-\lambda)\overline{\delta_{\nabla y_2}}\ .$ \end{proof} The following homogenization lemma can be proved the same way as \cite[Th.~7.1]{pedregal}. The argument showing that a generating sequence of $\overline{\nu}$ comes from bi-Lipschitz orientation preserving maps comes from Theorem~\ref{jmball} the same way as in the proof of Lemma~\ref{convexity}. \bigskip \begin{lemma}\label{homogenization} Let $\{u_k\}_{k\in{\mathbb N}}\subset W^{1,\infty, -\infty}_A(\Omega;{\mathbb R}^2)$ be a bounded sequence in $W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$. Let the Young measure $\nu\in \mathcal{GY}_+^{\infty,-\infty}(\O;\R^{2\times 2})$ be generated by $\{\nabla u_k\}_{k\in{\mathbb N}}$. Then there is a another bounded sequence $\{w_k\}_{k\in{\mathbb N}}\subset W^{1,\infty, -\infty}_A(\Omega;{\mathbb R}^2)$ that generates a homogeneous (i.e. independent of $x$) measure $\bar\nu$ defined through \begin{eqnarray}\label{homog} \int_{{R_{\varrho+}^{2 \times 2}}} v(s)\bar\nu({\rm d} s)= \frac{1}{|\O|}\int_{\O}\int_{{R_{\varrho+}^{2 \times 2}}} v(s)\nu_x({\rm d} s)\,{\rm d} x\ , \end{eqnarray} for any $v\in C({R_{\varrho+}^{2 \times 2}})$ and almost all $x\in\O$. Moreover, $\bar\nu \in \mathcal{GY}_+^{\infty,-\infty}(\O;\R^{2\times 2})$. \end{lemma} \bigskip \begin{proposition}\label{homocase} Let $\mu$ be a probability measure supported on a compact set $K\subset {\mathbb R}^{2\times 2}_{\alpha+}$ for some $\alpha \geq 1$ and let $ A=\int_K s\mu({\rm d} s)$. Let $\varrho>\alpha$ and let \begin{eqnarray}\label{jensen} Z v(A)\le \int_{K} v(s)\mu({\rm d} s)\ ,\end{eqnarray} for all $v\in \mathcal{O}(\varrho)$. Then $\mu\in \mathcal{GY}_+^{\infty,-\infty}(\O;\R^{2\times 2})$ and it is generated by gradients of mappings from $\mathcal{U}^\varrho_A$. \end{proposition} \bigskip \begin{proof} First, notice that $|A|\le \alpha<\varrho<+\infty $. Secondly, the set of measures $\mu$ in the statement of the proposition is convex and contains $\mathcal{M}^\varrho_A$ as its convex and non void subset due to Lemma~\ref{convexity}. We show that no fixed $\mu$ satisfying \eqref{jensen} can be separated from the weak* closure of $\mathcal{M}^\varrho_A$ by a hyperplane. We argue by a contradiction argument. Then by the Hahn-Banach theorem, assume \color{black} that there is $\tilde v\in C_0({\mathbb R}^{2\times2})$ \color{black} that separates $\mathcal{M}^\varrho_A$ from $\mu$. In other words, there exists a constant $\tilde{c}$ such that $$ \left\langle \nu,\tilde v\right\rangle \geq \tilde{c} \text{ for all $\nu\in \mathcal{M}^\varrho_A$} \qquad \text{and} \qquad \left\langle \mu,\tilde v\right\rangle < \tilde{c} $$ However, since we are working with probability measures, we may use $\tilde{v}-\tilde c$ instead of $\tilde{v}$. In this way, we can put $\tilde{c} =0$. Hence, without loss of generality, we assume that $$ 0\le \left\langle \nu,\tilde v\right\rangle=\int_{R_\varrho^{2 \times 2}} \tilde v(s)\nu({\rm d} s) =|\O|^{-1}\int_\O \tilde v(\nabla y(x))\,{\rm d} x\ , $$ for all $\nu \in \mathcal{M}^\varrho_A$ (and hence all $y\in\mathcal{U}_A^\varrho$) and $0> \left\langle \tilde \mu,\tilde v\right\rangle$. Now, the function $$ v(F) = \begin{cases} \tilde {v}(F) &\text{if $F \in {R_{\varrho+}^{2 \times 2}}$},\\ +\infty &\text{else}, \end{cases} $$ is in $\mathcal{O}(\varrho)$. Notice that it follows from (\ref{jensen}) that $Z v(A)$ is finite. Thus, $Z v(A)=\inf_{\mathcal{U}_A^\varrho}|\O|^{-1}\int_\O v(\nabla y(x))\,{\rm d} x$. Hence, $Z v(A) \geq 0$ and, by \eqref{jensen}, $0\le Zv(A)\le \int_{K} v(s)\mu({\rm d} s)=\int_{K} \tilde v(s)\mu({\rm d} s)$. \color{black} As this holds for all hyperplanes, $\mu\in \overline{\mathcal{M}^\varrho_A}$, a contradiction. \color{black} As \color{black} $C_0({\mathbb R}^{2\times 2})$ \color{black} is separable, the weak* topology on bounded sets in \color{black} its dual, ${\rm rca}({\mathbb R}^{2\times 2})$, \color{black} is metrizable. Hence, there is a sequence $\{u_k\}_{k\in{\mathbb N}}\subset \mathcal{U}_A^\varrho$ such that for all $v\in C({R_{\varrho+}^{2 \times 2}})$ (and all $v\in\mathcal{O}(\varrho)$) \begin{eqnarray}\label{lim1} \lim_{k\to\infty}\int_\O v(\nabla u_k(x))\,{\rm d} x= |\O|\int_{{R_{\varrho+}^{2 \times 2}}} v(s)\mu({\rm d} s)\ ,\end{eqnarray} and $\{u_k\}_{k\in{\mathbb N}}$ is bounded in $W^{1,\infty,-\infty}_+(\O;{\mathbb R}^{2\times 2})$. Let $\nu$ be a Young measure generated by $\{\nabla u_k\}$ (or a subsequence of it). Then we have for $v$ as above \begin{eqnarray}\label{lim2} \lim_{k\to\infty}\int_\O v(\nabla u_k(x))\,{\rm d} x =\int_\O\int_{{R_{\varrho+}^{2 \times 2}}} v(s)\nu_x({\rm d} s)\,{\rm d} x=|\O|\int_{{R_{\varrho+}^{2 \times 2}}} v(s)\mu({\rm d} s) \ . \end{eqnarray} As $u_k(x)=Ax$ for $x\in\partial\O$ we apply Lemma~\ref{homogenization} to get a new sequence $\{\tilde u_k\}$ bounded in $W^{1,\infty,-\infty}_+(\O;{\mathbb R}^{2\times 2})$ with $\tilde u_k(x)=Ax$ for $x\in\partial\O$. The sequence $\{\nabla \tilde u_k\}$ generates a homogeneous Young measure $\bar\nu$ given by \eqref{homog}, so that in view of \eqref{lim2} we get for $g\in L^1(\O)$ $$ \lim_{k\to\infty}\int_\O g(x)v(\nabla \tilde u_k(x))\,{\rm d} x= \int_\O g(x)\,{\rm d} x\frac{1}{|\O|}\int_{\O}\int_{{R_{\varrho+}^{2 \times 2}}} v(s)\nu_x({\rm d} s)\,{\rm d} x =\int_\O\int_{{R_{\varrho+}^{2 \times 2}}} g(x)v(s)\mu({\rm d} s)\,{\rm d} x\ .$$ \end{proof} \begin{lemma}\label{auxiliary} (see \cite[Lemma~7.9]{pedregal} for a more general case) Let $\O\subset{\mathbb R}^n$ be an open domain with $|\partial\O|=0$ and let $N\subset\O$ be of the zero Lebesgue measure. For $r_k:\O\setminus N\to (0,+\infty)$ and $\{f_k\}_{k\in{\mathbb N}}\subset L^1(\O)$ there exists a set of points $\{a_{ik}\}\subset \O\setminus N$ and positive numbers $\{\epsilon_{ik}\}$, $\epsilon_{ik}\le r_k(a_{ik})$ such that $\{a_{ik}+\epsilon_{ik}\bar\O\}$ are pairwise disjoint for each $k\in{\mathbb N}$, $\bar\O=\cup_i \{a_{ik}+\epsilon_{ik}\bar\O\}\cup N_k$ with $|N_k|=0$ and for any $j\in{\mathbb N}$ and any $g\in L^\infty(\O)$ $$ \lim_{k\to\infty}\sum_i f_j(a_{ik})\int_{a_{ik}+\epsilon_{ik}\O}g(x)\,{\rm d} x= \int_\O f_j(x)g(x)\,{\rm d} x\ .$$ \end{lemma} \color{black} In fact, the points $\{a_{ik}\}$ can be chosen from the intersection of sets of Lebesgue points of all $f_j$, $j\in{\mathbb N}$. Notice that this intersection has the full Lebesgue measure. Here for each $j\in{\mathbb N}$, $f_j$ is identified with its precise representative \cite[p.~46]{evans-gariepy}. We adopt this identification below whenever we speak about a value of an integrable function at a particular point. \color{black} \ \bigskip \noindent {\it Proof of Theorem~\ref{THM1} - sufficiency.} \mbox{} Some parts of the proof follow \cite[Proof of Th.~6.1]{k-p1}. We are looking for a sequence $\{u_k\}_{k\in{\mathbb N}}\subset W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ satisfying $$ \lim_{k\to\infty} \int_\O v(\nabla u_k(x))g(x)\,{\rm d} x=\int_{\O} \int_{{\mathbb R}^{2\times 2 n}}v(s)\nu_x({\rm d} s)g(x)\,{\rm d} x\ $$ for all $g\in \Gamma$ and any $v\in S$, where $\Gamma$ and $S$ are countable dense subsets of $C(\bar\O)$ and $C({R_{\varrho+}^{2 \times 2}})$, respectively. First of all notice that, as $u\in W^ {1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ from \eqref{firstmoment0} is differentiable in $\O$ outside a set of measure zero called $N$, we may find for every $a\in\O\setminus N$ and every $k > 0$ some $1/k>r_k(a)>0$ such that for any $0<\epsilon < r_k(a)$ we have for every $y\in\O$ \begin{eqnarray}\label{derivative} \frac1\epsilon | u(a+\epsilon y)-u(a)-\epsilon \nabla u(a)y|\le \frac1k\ . \end{eqnarray} Applying Lemma~\ref{auxiliary} and using its notation, we can find $a_{ik}\in\O\setminus N$, $\epsilon_{ik}\le r_k(a_{ik})$ such that for all $v\in S$ and all $g\in \Gamma$ \begin{eqnarray}\label{79} \lim_{k\to\infty}\sum_i \bar V(a_{ik})g(a_{ik}) |\epsilon_{ik} \O|= \int_\O \bar V(x)g(x)\,{\rm d} x\ ,\end{eqnarray} where $$\bar V(x)=\int_{{\R^{2\times 2}_{\rm inv}}} v(s)\nu_x({\rm d} s)\ .$$ In view of Lemma~\ref{homocase}, we see that $\{\nu_{a_{ik}}\}_{x\in\O}\in\mathcal{GY}^{+\infty,-\infty}(\O;\R^{n\times n})$ is a homogeneous gradient Young measure and we call $\{\nabla y^{ik}_j\}_{j\in{\mathbb N}}\subset W^ {1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ its generating sequence. We know that we can consider $\{y^{ik}_j\}_{j\in{\mathbb N}}\subset \mathcal{U}^{\tilde\varrho}_{\nabla u(a_{ik})}$ for arbitrary $+\infty >\tilde\varrho>\varrho$. Hence \begin{eqnarray}\label{imp14} \lim_{j\to\infty} \int_\O v(\nabla y_j^{ik}(x))g(x)\,{\rm d} x=\bar V(a_{ik})\int_\O g(x)\,{\rm d} x\ \end{eqnarray} and, in addition, $y^{ik}_j$ weakly$^*$ converges to the map $x \mapsto \nabla u(a_{ik})x$ for $j\to\infty$ in $W^ {1,\infty}(\O;{\mathbb R}^2)$ and due to the Arzela-Ascoli theorem also uniformly on $C(\bar\O;{\mathbb R}^2)$. Further, consider for $k\in\mathbb{N}$ $y_k\in W^{1,\infty}(a_{ik}+\epsilon_{ik}\O;{\mathbb R}^2)$ defined for $x\in a_{ik}+\epsilon_{ik}\O$ by $$ y_k(x): = u(a_{ik})+\epsilon_{ik}y^{ik}_j\left(\frac{x-a_{ik}}{\epsilon_{ik}}\right) $$ where $j=j(k, i)$ will be chosen later. Note that the above formula defines $y_k$ almost everywhere in $\O$. We write for almost every $x\in a_{ik}+\epsilon_{ik}\O$ that \begin{eqnarray}\label{upscale} |u(x)-y_k(x)|&\leq&\left |u(x)-u(a_{ik})-\epsilon_{ik}\nabla u(a_{ik})\left(\frac{x-a_{ik}}{\epsilon_{ik}}\right)\right|\nonumber\\ &+&\epsilon_{ik}\left|\nabla u(a_{ik})\left(\frac{x-a_{ik}}{\epsilon_{ik}}\right)-y^{ik}_j\left(\frac{x-a_{ik}}{\epsilon_{ik}}\right)\right|\leq \frac{2\epsilon_{ik}}{k}\ , \end{eqnarray} if $j$ is large enough. The first term on the right-hand side is bounded by $\epsilon_{ik}/k$ because of \eqref{derivative} while the second one due to the \color{black} uniform \color{black} convergence of $y^{ik}_j\to x \mapsto \nabla u(a_{ik})x$. Notice that $y_k$ as well as $u$ are bi-Lipschitz and orientation preserving on $a_{ik}+\epsilon_{ik}\O$. If $x\in a_{ik}+\epsilon_{ik}\O$ we set $\tilde x= (x-a_{ik})/\epsilon_{ik}\in\O$ and define $\tilde u(\tilde x)=\epsilon_{ik}^{-1}u(a_{ik}+\epsilon_{ik}\tilde x)$ and $\tilde y_k(\tilde x)= \epsilon_{ik}^{-1}y_k(a_{ik}+\epsilon_{ik}\tilde x)$ so that we get by \eqref{upscale} for all $x\in\O$ $$ |\tilde u(\tilde x)-\tilde y_k(\tilde x)|\le \frac2k\ $$ Additionally, note that the bi-Lipschitz constant of $\tilde y_k$, $k\in{\mathbb N}$ is again $L$. Hence, we can take $k>0$ large enough that $\|\tilde u-\tilde y_k\|_{C(\bar\O;{\mathbb R}^2)}$ is arbitrarily small. Therefore, we can use Theorem~\ref{corollary-cut-off} and modify $\tilde y_k$ so that it has the same trace as $\tilde u$ on the boundary of $\O$. Let us call this modification $\tilde u_k$, i.e., $$ \tilde u_k(\tilde x)=\begin{cases} \tilde y_k(\tilde x) &\mbox{ if $x\in\O$},\\ \tilde u(\tilde x) & \mbox{otherwise}. \end{cases} $$ Then we proceed in the opposite way to define for $x=a_{ik}+\epsilon_{ik}\tilde x$: $u_k(x)= \epsilon_{ik}\tilde u_k(\tilde x)$. Then, since $\{u_k\}_{k\in{\mathbb N}}$ is bounded in $W^{1,\infty}(\O;{\mathbb R}^2)$, we may assume the weak$^*$ convergence of $u_k$ to $u$. It remains to show that every $u_k$ is bi-Lipschitz. To do so, we again apply Theorem~\ref{jmball}. We see that for every $k\in\mathbb{N}$ $\det\nabla u_k>0$. Further, $\sup_{k\in{\mathbb N}}|(\nabla u_k)^{-1}|<+\infty$ follows from construction of the sequence, and $u_k=u$ on $\partial\O$, so that $u_k$ is indeed bi-Lipschitz. For $k,i$ fixed we take $j=j(k,i)$ so large that for all $(g,v)\in \Gamma\times S$ $$ \left|\epsilon_{ik}^2\int_\O g(a_{ik}+\epsilon_{ik}y)v(\nabla u_j^{ik}(y))\,{\rm d} y -\bar V(a_{ik})\int_{a_{ik}+\epsilon_{ik}\O}g(x)\,{\rm d} x\right|\le\frac{1}{2^ik}\ .$$ Using this estimate and (\ref{imp14}) we get for any $(g,v)\in \Gamma\times S$ \begin{eqnarray*} \lim_{k\to\infty}\int_\O g(x)v(\nabla u_k(x))\,{\rm d} x &=& \lim_{k\to\infty}\sum_i\epsilon_{ik}^n\int_\O g(a_{ik}+\epsilon_{ik}y)v(\nabla u_j^{ik}(y))\,{\rm d} y\\ &=& \lim_{k\to\infty}\sum_i\bar V(a_{ik})\int_{a_{ik}+\epsilon_{ik}\O}g(x)\,{\rm d} x= \int_\O \bar V(x)g(x)\,{\rm d} x \\ &=& \int_\O\int_{{\mathbb R}^{2\times 2}} v(s)\nu_x({\rm d} s)g(x)\, {\rm d} x \ . \end{eqnarray*} \hfill$\Box$ \bigskip \subsection{Proofs of Corollary~\ref{wlsc} and Proposition~\ref{minimizer}} {\it Proof of Corollary~\ref{wlsc}.} For showing the weak lower semicontinuity, we realize that the sequence $\{\nabla y_k\}_{k \in \mathbb{N}}$ generates a measure in $\mathcal{GY}_+^{\infty,-\infty}(\O;\R^{2\times 2})$ and so if $v$ is bi-quasiconvex we easily have from \eqref{qc0} $$ \int_\Omega v(\nabla y(x)) {\rm d} x = \int_\Omega Z v(\nabla y(x)) {\rm d} x \le\int_\Omega \int_{{\R^{2\times 2}_{\rm inv+}}} v(s)\nu_x({\rm d} s) \d x= \liminf_{k \to \infty} \int_\Omega v(\nabla y_k) {\rm d} x. $$ On the other hand, we realize that every $y\in W^{1,\infty,-\infty}_A(\O;{\mathbb R}^2)$ defines a homogeneous Young measure $\nu\in\mathcal{GY}^{\infty,-\infty}_+(\O;{\mathbb R}^{2\times 2})$ by setting $$\int_{{\mathbb R}^{2\times 2}}f(s)\nu(\d s)=|\O|^{-1}\int_\O f(\nabla y(x))\,\d x\ $$ for every $f$ continuous on matrices with positive determinant. Notice that the first moment of $\nu$ is $A$. Let $\{\nabla y_k\}_{k \in \mathbb{N}}$ be a generating sequence for $\nu$ which can be taken such that $\{y_k\}_{k \in \mathbb{N}} \subset W^{1,\infty,-\infty}_A(\O;{\mathbb R}^2)$. Moreover, the weak* limit of $\nabla y_k$ is $A$. As we assume that $I(y)=\int_\O v(\nabla y(x))\,\d x$ and that $I$ is weakly$^*$ lower semicontinuous on $W^{1,\infty,-\infty}_A(\O;{\mathbb R}^2)$ we get $$|\O|v(A)\le \liminf_{k\to\infty}I(y_k)=\int_\Omega \int_{{\mathbb R}^{2\times 2}}v(s)\nu(\d s) \d x = \int_\O v(\nabla y(x))\,\d x\ ,$$ which shows that $v$ is bi-quasiconvex. \hfill$\Box$ \bigskip {\it Proof of Proposition~\ref{minimizer}.} Notice that $u_0\in\mathcal{A}$ so that the admissible set is nonempty. Let $\{u_k\}_{k\in{\mathbb N}}\subset\mathcal{A}$ be a minimizing sequence for $I_\varepsilon$, i.e., $\lim_{k\to\infty} I_\varepsilon(u_k)=\inf_\mathcal{A}I_\varepsilon\ge 0$. Hence, $\|\nabla u\|_{L^\infty(\O;{\mathbb R}^{2\times 2})}\le C$ and $\|\nabla u^{-1}\|_{L^\infty(u_0(\O);{\mathbb R}^{2\times 2})}\le C$ for some finite $C>0$. Applying a Poincar\'{e} inequality we get that $\{u_k\}$ is bounded in $W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$. Therefore, there is a subsequence converging weakly* to some $u\in W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$. Compactness of the trace operator ensures that $u=u_0$ on the boundary of $\O$. Consequently, $u\in\mathcal{A}$ and weak* lower semicontinuity of $I_\varepsilon$ finishes the argument. Indeed, as $v$ is bi-quasiconvex the weak* lower semicontinuity of the first two terms is obvious. The last term is weak* lower semicontinuous in view of Remark~\ref{convergence}. \hfill $\Box$ \section{Cut-off technique preserving the bi-Lipschitz property} \label{sect-cutOff} One of the main steps in the characterization of gradient Young measures \cite{k-p1,pedregal} is to show that having a bounded sequence $\{y_k\}_{k\in{\mathbb N}}\subset W^{1,\infty}(\O;{\mathbb R}^2)$, such that it converges weakly$^*$ to $y(x):\O\mapsto{\mathbb R}^2$ and $\{\nabla y_k\}$ generates a Young measure $\nu$, then there is a modified sequence $\{u_k\}_{k\in{\mathbb N}}\subset W^{1,\infty}(\O;{\mathbb R}^2)$, $u_k(x)=y(x)$ for $x\in\partial\O$ and $\{\nabla u_k\}$ still generates $\nu$. Standard proofs of this fact use a cut-off technique based on convex combinations near the boundary; due to the non-convexity of our constraints, however, this could destroy the bi-Lipschitz property, so it is not at all suitable for our purposes. Therefore, we resort to a different approach borrowing from recent results by S.~Daneri and A.~Pratelli \cite{daneri-pratelli-extension,daneri-pratelli}. More precisely, the following theorem is a main ingredient of our approach. \begin{theorem}\label{cut-off} Let $\O\subset{\mathbb R}^2$ be a bounded Lipschitz domain, let $\mathrm{diam}\ \Omega>>\delta > 0$ and $L\ge 1$ be fixed. Then there exists $\varepsilon > 0$ that is only dependent on $\delta$ and $L$ such that if $\tilde y, y \in W^{1,\infty, -\infty}_+(\O;{\mathbb R}^2)$ are L-bi-Lipschitz maps satisfying $$ \|\tilde y-y\|_{C(\overline{\Omega};{\mathbb R}^2)} \leq \varepsilon(\delta, L), $$ then we can find another $\bar{c}(L)$-bi-Lipschitz map $u \in W^{1,\infty, -\infty}_+(\O;{\mathbb R}^2)$ satisfying $u=y$ on $\partial\O$ and $|\{x\in\O;\, \nabla u(x)\ne\nabla \tilde y(x)\}| \leq \delta$. \end{theorem} The following corollary allows us to modify convergent sequences at the boundary of $\O$. \begin{corollary}\label{corollary-cut-off} Assume that $\{y_k\}_{k\in{\mathbb N}}\subset W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ is a sequence of $L$-bi-Lipschitz maps and $y_k\stackrel{*}{\rightharpoonup} y$ in $W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ as $k\to\infty$. Then there is a subsequence of $\{y_{k_n}\}_{n\in{\mathbb N}}$ and $\{u_{k_n}\}_{n\in{\mathbb N}}\subset W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ bounded such that $u_{k_n}\stackrel{*}{\rightharpoonup} y$ in $W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ as $n\to\infty$, for all $n\in{\mathbb N}$ $u_{k_n}=y$ on $\partial\O$ and $\lim_{n\to\infty}|\{x\in\O;\, \nabla u_{k_n}\ne\nabla y_{k_n}\}|\to 0$. In particular, the sequences $\{\nabla y_{k_n}\}$ and $\{\nabla u_{k_n}\}$ generate the same Young measure. \end{corollary} \bigskip \begin{proof} Let $\{\delta_n\}_{n\in{\mathbb N}}$ be a sequence of positive numbers converging to zero as $n\to\infty$. We apply Theorem~\ref{cut-off} and uniform convergence of $\{y_k\}_{k\in{\mathbb N}}$ to $y$ in $C(\bar\O;{\mathbb R}^2)$ to find $\{\varepsilon_n(\delta_n,L)\}_{n\in{\mathbb N}}$ and $\{y_{k_n}\}_{n\in{\mathbb N}}$ such that $\|y_{k_n}-y\|_{C(\bar\O;{\mathbb R}^2)}\le \varepsilon_n(\delta_n,L)$. Use Theorem~\ref{cut-off} with $\tilde y:=y_{k_n}$ to obtain $u_{k_n}\in W^{1,\infty,-\infty}_+(\O;{\mathbb R}^2)$ with the mentioned properties. \end{proof} {\it Proof of Thm.~\ref{cut-off}} { We devote the rest of this section to proving Theorem \ref{cut-off}, large parts of the proof, collected in its third section, are rather technical. Therefore, we start with an overview of the proof: \vspace{2ex} \noindent \textit{\textbf{Section 1 of the proof: Overview}}\\ We define the open set $$ \Omega^\delta = \big\{x \in \Omega: \mathrm{dist}(x,\partial\O) < \delta \big\}; $$ now, we find $r = r(\delta)$ and a corresponding, suitable \emph{$r(\delta)$-tiling of $\Omega^\delta$}, i.e.\ a \emph{finite} collection of closed squares \begin{equation} \Omega_r = \bigcup_{i=1}^N \mathcal{D}(z_i, r) \qquad \quad \text{with $z_i \in \Omega^\delta$} \label{tiling} \end{equation} that satisfies that $\Omega_r \subsetneq \Omega^\delta$ and that two squares have in common \emph{only} either a whole edge or a vertex. Furthermore, we require the tiling to be fine enough so that there exists a \emph{collection of edges $\Gamma$} satisfying the following properties: \begin{itemize} \item every continuous path connecting two points $x_1$ and $x_2$ such that $x_1 \in \partial\Omega$ and $x_2 \in \partial \Omega^\delta \setminus \partial \Omega$ crosses $\Gamma$, \item $\Gamma \subset \mathrm{int} \, \Omega_r$. \end{itemize}} This setting is best imagined in the case when $\Omega$ is simply connected. Then, $\Omega_r$ forms a thin strip of squares near the boundary and $\Gamma$ is a closed curve consisting of edges \emph{in the interior} of this strip. We will refer to the special case of a simple connected domain for a better imagination of the introduced concepts at several places bellow; nevertheless, simple connectivity of $\Omega$ is never explicitly used and, in fact, not needed. Further, we separate $\Omega$ into three parts: $$ \Omega = \Omega_\mathrm{bulk} \cup \Omega_r \cup \Omega_\mathrm{bound}, $$ where \begin{align*} \Omega_\mathrm{bulk} &= \{x \in \Omega \setminus \Omega_r: \text{every continuous path from $x$ to $\partial \Omega$ crosses $\Omega_r$}.\} \\ \Omega_\mathrm{bound} &= \Omega^\delta\setminus(\Omega_\mathrm{bulk} \cup \Omega_r). \end{align*} Let us again, for a moment, think of a simply connected $\Omega$. Then, $\Omega_\mathrm{bulk}$ forms the interior of the domain, $\Omega_r$ is the thin strip of squares and $\Omega_\mathrm{bound}$ is also a strip that reaches up to $\partial \Omega$ and is not tiled. With these basic notations set, we explain how we construct the cut-off. Let us choose $\varepsilon = \frac{r(\delta)}{12L^3}$ so that we have that \begin{equation} \|\tilde y-y\|_{C(\overline{\Omega};{\mathbb R}^2)} \leq \frac{r(\delta)}{12L^3}. \label{LargeK} \end{equation} Now, we alter $\tilde y$ on $\Omega_r$ to obtain the function $u_\delta: \Omega_r \to \mathbb{R}^2$ that has the property that $[u_\delta]_{\mid_{\partial \Omega_r\cap \partial \Omega_\mathrm{bulk}}} = \tilde y$ and $[u_\delta]_{\mid_{\partial \Omega_r\cap \partial \Omega_\mathrm{bound}}}=y$. If we think once more of simple connected $\Omega$, this means that on the inner boundary of $\Omega_r$ we obtain the function $\tilde y$ while on the outer boundary we already have the sought boundary data. We will give a precise definition of $u_\delta$ in the next section of the proof. In fact, in view of the available extension Theorem \ref{extensionTheorem}, it is sufficient to give a definition of $u_\delta$ on all the edges in $\Omega_r$, which we will exploit. Namely, on the edges the ``fitting'' of $\tilde y$ to $y$ is essentially one-dimensional and hence our technique will be essentially a linear interpolation. In the third section of the proof, which is the most technical one, we then show that $u_\delta$, thus so far defined only on the edges, is $18L$-bi-Lipschitz (cf. \eqref{biL}) and so extending it to $\Omega_r$ via Theorem \ref{extensionTheorem} will yield a $\bar{c}(L)$-bi-Lipschitz function $u_\delta: \Omega_r \to \mathbb{R}^2$ having the above described properties. \color{black} Indeed, $\partial u_\delta(\mathcal{D}(z_i,r))= u_\delta(\partial \mathcal{D}(z_i,r))$ for all admissible $i$, so that $u_\delta:\O_r\to{\mathbb R}^2$ is injective. \color{black} Therefore, we may define $$ u(x) = \begin{cases} \tilde y(x) & \text{on $\Omega_\mathrm{bulk},$} \\ u_\delta(x) & \text{on $\Omega_r$} ,\\ y(x) & \text{on $\Omega_\mathrm{bound}$,} \end{cases} $$ It is obvious that the obtained mapping is Lipschitz and satisfies \color{black} $|\nabla u(x)^{-1}| > c(L)$ \color{black} a.e.~on $\Omega$. The injectivity of \color{black} $u$ \color{black} follows from the fact that $u(\Omega_\mathrm{bulk}), u(\Omega_\mathrm{bound})$ and $u(\Omega_r)$ are mutually disjoint, which is a consequence of the ``fitting'' boundary data through $[u_\delta]_{\mid_{\partial \Omega_r\cap \partial \Omega_\mathrm{bulk}}} = \tilde y$ and $[u_\delta]_{\mid_{\partial \Omega_r\cap \partial \Omega_\mathrm{bound}}}=y$. Thus, the mapping $u$ is globally bi-Lipschitz and hence orientation preserving since it preserves orientation on $\Omega_\mathrm{bulk}$. \begin{figure}[h!] \centering \subfigure[r-tiling of the set $\Omega^\delta$]{\includegraphics[width = 0.35\paperwidth]{Tiling.pdf}} \subfigure[Detail of cross on $\Gamma$]{\includegraphics[width = 0.35\paperwidth]{Cross.png}} \caption{Tiling near boundary and detail of one cross} \label{Fig_CutOFF} \end{figure} \vspace{2ex} \noindent \textit{\textbf{Section 2 of the proof: Partitioning of the grid and definition of $u_\delta$}}\\ In this section we give a precise definition of $u_\delta(x)$ on the \emph{grid} of the tiling $\Omega_r$, denoted $\mathcal{Q}$, which consists of all edges of $\Omega_r$; in other words, $$ \mathcal{Q} = \bigcup_{i=1}^N \partial \mathcal{D}(z_i, r) \qquad \quad \text{with $z_i$ as in \eqref{tiling}}. $$ Clearly, $\Gamma \subset \mathcal{Q}$ and we divide $\mathcal{Q}$ into two other parts $$ \mathcal{Q} = \mathcal{Q}^\mathrm{outer} \cup \Gamma \cup \mathcal{Q}^\mathrm{inner}, $$ defined through \begin{align} \mathcal{Q}^\mathrm{inner} &=\big \{x \in \mathcal{Q}\setminus \Gamma; \text{every continuous path connecting $x$ to $\partial \Omega$ crosses $\Gamma$}\}, \\ \mathcal{Q}^\mathrm{outer} &= \mathcal{Q}\setminus(\Gamma \cup \mathcal{Q}^\mathrm{inner}). \end{align} The names of these two other parts are lent from the situation when $\Omega$ is simply connected; namely, then $\mathcal{Q}^\mathrm{inner}$ corresponds to those edges that are ``further away'' from the boundary than $\Gamma$ and so in the ``interior'' while $\mathcal{Q}^\mathrm{outer}$ are the edges in the exterior. Nevertheless, as already stressed above, simple-connectivity of $\Omega$ is not needed. For further convenience, we shall fix some notation (in accord with \cite{daneri-pratelli}); see also Figure \ref{Fig_CutOFF}(b). We shall denote \begin{itemize} \item $w_\alpha$ any vertex of the grid $\mathcal{Q}$ that lies on $\Gamma$, \item for any $w_\alpha$ we denote $w_\alpha^i$ all vertices that are at distance of $r$ to $w_\alpha$; note that from construction there always exist 4 such vertices (as $w_\alpha$ cannot lie on the boundary of $\Omega_r$), \item for any $w_\alpha$ the largest numbers $\xi_\alpha^i > 0$ that satisfy \begin{align*} \big|\tilde y\big(w_\alpha+\xi_\alpha^i(w_\alpha^i - w_\alpha)\big)- y(w_\alpha) \big| &= \frac{r}{4L} & &\text{if the edge $w_\alpha w_\alpha^i \subset \mathcal{Q}^\mathrm{inner}$}, \\ \big|y\big(w_\alpha+\xi_\alpha^i(w_\alpha^i - w_\alpha)\big)- y(w_\alpha) \big| &= \frac{r}{4L} & &\text{else}; \end{align*} \item we call the ``boundary cross'' the set $$ Z_\alpha = \bigcup_{i=1}^{4} \big\{w_\alpha+ t (w_\alpha^i-w_\alpha) : 0 \leq t \leq \xi_\alpha^i \big\} $$ and denote the extremals of this cross $p_\alpha^1 \ldots p_\alpha^4$. \end{itemize} It is due to the $L$-bi-Lipschitz property of $\tilde y$ and $y$ as well as \eqref{LargeK} that all the concepts above are well defined. In particular, we can assure that \begin{equation} \text{the numbers $\xi_\alpha^i$ can be found in the interval $[1/(6L^2),1/3]$,} \label{numbers_xi} \end{equation} so that the boundary crosses are mutually disjoint. We postpone the proof of \eqref{numbers_xi} until the end of this section. Now, we are in the position to define the sequence $u_{k\delta}(x)$ on $\mathcal{Q}$ as follows: first, we define $u_{\delta}(x)$ everywhere in $\mathcal{Q}$ except for the boundary crosses: $$ u_\delta(x) = \begin{cases} \tilde y(x) & \text{if $x \in \mathcal{Q}^\mathrm{inner}\setminus (\bigcup_\alpha Z_\alpha),$} \\ y(x) & \text{if $x \in (\mathcal{Q}^\mathrm{outer}\cup \Gamma)\setminus (\bigcup_\alpha Z_\alpha);$} \end{cases} $$ while on the cross the $u_\delta$ will be continuous and piecewise affine, i.e. $$ u_\delta(w_\alpha + t(w_\alpha^i- w_\alpha)) = \begin{cases} y(w_\alpha) + \frac{t}{\xi_\alpha^i }\Big(\tilde y(p_\alpha^i)- y(w_\alpha) \Big) & \text{if $w_\alpha w_\alpha^i \subset \mathcal{Q}^\mathrm{inner}$ and $t \in [0,\xi_\alpha^i ]$,} \\ y(w_\alpha) + \frac{t}{\xi_\alpha^i }\Big(y(p_\alpha^i)- y(w_\alpha) \Big) & \text{if $w_\alpha w_\alpha^i \not \subset \mathcal{Q}^\mathrm{inner}$ and $t \in [0,\xi_\alpha^i ].$} \end{cases} $$ The rough idea behind this construction is that the matching, or the cut-off, actually happens on the boundary crosses where we, on each edge, replace $\tilde y$ as well as $y$ by an affine map. By adjusting the slopes of these affine replacements we get a continuous piecewise affine, and hence bi-Lipschitz, map on the cross. What we need to show are then, essentially, the following two properties of such a replacement: it connects in a bi-Lipschitz way to $u_{\delta}$ along the endpoints of the boundary cross and the adjustment of the slopes needed to obtain continuity is just small so that the overall $L$-bi-Lipschitz property is not affected much. For the former, we mimic the strategy of S.~Daneri and A.~Pratelli \cite{daneri-pratelli} who were also able to connect an affine replacement of a bi-Lipschitz function to the original map. The latter is due to the fact that $\tilde y$ and $y$ are suitably close to each other (as expressed by the property \eqref{LargeK}) which assures that the change of slope on the cross needed for the cut-off will depend just on $L$. We will show in the next section that $u_\delta$ is $18L$-bi-Lipschitz on $\mathcal{Q}$; cf.\ \eqref{biL}. Therefore, we can apply Theorem~\ref{extensionTheorem} to extend $u_\delta$ from $\mathcal{Q}$ (without changing the notation) to each square of the tiling. As for every square $\mathcal{D}(z_i,r)$ of the tiling we have that $\partial u_\delta(\mathcal{D}(z_i,r))= u_\delta(\partial \mathcal{D}(z_i,r))$ we see that the extended mapping is globally injective on $\O_r$. \vspace{1ex} \noindent \emph{Proof of \eqref{numbers_xi}:} For $w_\alpha w_\alpha^i \subset \mathcal{Q}^\mathrm{inner}$, we notice that the function $t \mapsto \big|\tilde y\big(w_\alpha+t(w_\alpha^i - w_\alpha)\big)- y(w_\alpha) \big| $ is continuous on $[0,1]$ and, owing to \eqref{LargeK}, smaller or equal than $\frac{r}{12L^3}$ in $0$ while in $t=1$ we have that $$ \Big|\tilde y(w_\alpha^i) - \tilde y(w_\alpha)+ \tilde y(w_\alpha)- y(w_\alpha) \Big| \geq \Big|\frac{r}{L}-\frac{r}{12L^3} \Big| \geq \frac{r}{4L}; $$ which yields the existence of $\xi_\alpha^i \in [0,1]$ such that $$ \big|\tilde y\big(w_\alpha+t(w_\alpha^i - w_\alpha)\big)- y(w_\alpha) \big| = \frac{r}{4L}. $$ To establish the bounds on $\xi_\alpha^i$, we note that \begin{align*} \frac{r}{4L}&=\Big|\tilde y\big(w_\alpha+\xi_\alpha^i(w_\alpha^i - w_\alpha)\big)- y(w_\alpha) \Big| = \Big|\tilde y\big(w_\alpha+\xi_\alpha^i(w_\alpha^i - w_\alpha)\big) + \tilde y(w_\alpha)- \tilde y(w_\alpha) - y(w_\alpha) \Big| \\ &\leq L\xi_\alpha^i r + \frac{r}{12L^3} \leq L\xi_\alpha^i r + \frac{r}{12L}, \end{align*} i.e.\ $\xi_\alpha^i \geq 1/(6L^2)$. On the other hand we have that \begin{align*} \frac{r}{4L}&=\Big|\tilde y\big(w_\alpha+\xi_\alpha^i(w_\alpha^i - w_\alpha)\big)- y(w_\alpha) \Big| = \Big|\tilde y\big(w_\alpha+\xi_\alpha^i(w_\alpha^i - w_\alpha)\big) + \tilde y(w_\alpha)- \tilde y(w_\alpha) - y(w_\alpha) \Big| \\& \geq \frac{r}{L}\big(\xi_\alpha^i - \frac{1}{12L^2}\big) \geq \frac{r}{L}\big(\xi_\alpha^i - \frac{1}{12}\big), \end{align*} which is satisfied if $0\leq \xi_\alpha^i \leq 1/3$. In the case when $w_\alpha w_\alpha^i \not\subset \mathcal{Q}^\mathrm{inner}$, we proceed in a similar way and rely just on the bi-Lipschitz property of $y$; exploiting \eqref{LargeK} is not necessary. \vspace{2ex} \noindent \textit{\textbf{Section 3 of the proof: Bi-Lipschitz property of $u_\delta$:}}\\ The function $u_\delta$ defined in the previous section is continuous on the grid $\mathcal{Q}$ and we claim that it is even bi-Lipschitz, i.e.\ (as long as \eqref{LargeK} holds true) \begin{equation} 18 L |z-z'| \geq |u_\delta(z)-u_\delta(z')| \geq \frac{1}{18L}|z-z'| \qquad \quad \forall z,z' \in \mathcal{Q} \label{biL} \end{equation} The proof of this claim is the content of this section and will be performed in several steps. \noindent \emph{Step 1 of the proof of \eqref{biL}: Suppose that $z$ and $z'$ lie in $Z_\alpha$.} \\ Let us first consider the situation when both $z, z'$ lie on the same edge; i.e.\ $z,z' \in w_\alpha w_\alpha^i$ for some $i=1\ldots4$. In this a case $u_\delta$ is affine and we have that \begin{align*} \frac{|u_\delta(z)-u_\delta(z')|}{|z-z'|} &= \frac{\big|u_\delta(w_\alpha) - u_\delta(p_\alpha^i)\big|}{\xi_\alpha^i r} \\&\!\!\!\!\!\!\!\!= \begin{cases} \frac{\big|\tilde y(p_\alpha^i) - \tilde y(w_\alpha) + \tilde y(w_\alpha)-y(w_\alpha)\big|}{\xi_\alpha^i r} \geq \frac{1}{L} - \frac{1}{\xi_\alpha^i r} \frac{r}{12L^3} \geq \frac{1}{L} - \frac{6 L^2}{r} \frac{r}{12L^3} \geq \frac{1}{2L} & \!\!\!\!\text{if $w_\alpha w_\alpha^i \subset \mathcal{Q}^\mathrm{inner}$}\\ \frac{\big|y(p_\alpha^i)-y(w_\alpha)\big|}{\xi_\alpha^i r} \geq \frac{1}{L} \geq \frac{1}{2L} &\!\!\!\! \text{if $w_\alpha w_\alpha^i \not \subset \mathcal{Q}^\mathrm{inner}$.} \end{cases} \end{align*} Similarly, \begin{align*} \frac{|u_\delta(z)-u_\delta(z')|}{|z-z'|} &= \frac{\big|u_\delta(w_\alpha) - u_\delta(p_\alpha^i)\big|}{\xi_\alpha^i r} \\ &\!\!\!\!\!\!\!\!= \begin{cases} \frac{\big|\tilde y(p_\alpha^i) - \tilde y(w_\alpha) + \tilde y(w_\alpha)-y(w_\alpha)\big|}{\xi_\alpha^i r} \leq L + \frac{1}{\xi_\alpha^i r} \frac{r}{12L^3} \leq L + \frac{6 L^2}{r} \frac{r}{12L^3} \leq 2L & \text{if $w_\alpha w_\alpha^i \subset \mathcal{Q}^\mathrm{inner}$}\\ \frac{\big|y(p_\alpha^i)-y(w_\alpha)\big|}{\xi_\alpha^i r} \leq L \leq 2L & \text{if $w_\alpha w_\alpha^i \not \subset \mathcal{Q}^\mathrm{inner}$.} \end{cases} \end{align*} If $z$ and $z'$ are not on the same edge let, for example, $z \in w_\alpha p_\alpha^1$ and $z' \in w_\alpha p_\alpha^2$. Moreover, we may assume, without loss of generality, that $$ |u_\delta(z)-y(w_\alpha)| \leq |u_\delta(z') - y(w_\alpha)| $$ and, hence, define $z''$ in the segment $w_\alpha z'$ such that $$ |u_\delta(z)-y(w_\alpha)|=|u_\delta(z'')-y(w_\alpha)|. $$ Then, as the points $u_\delta(z)$, $u_\delta(z'')$ and $u_\delta(z')$ form a triangle that is obtuse at $u_\delta(z'')$ (cf. also Figure \ref{obtuse}) we may apply Remark \ref{ObtuseTriangle} to obtain \begin{align} |u_\delta(z) - u_\delta(z')| &\geq \frac{1}{\sqrt{2}}\Big(|u_\delta(z) - u_\delta(z'')| + |u_\delta(z') - u_\delta(z'')| \Big) \nonumber\\ &\geq \frac{1}{\sqrt{2}}\Big(|u_\delta(z) - u_\delta(z'')| + \frac{1}{2L}|z'-z''| \Big) \label{Step1_1} \end{align} since the points $z'$, $z''$ lie on the same edge where we already proved the bi-Lipschitz property. Further, by the fact that $u_\delta$ is piecewise affine on the cross, \footnote{Notice that on any the segment $w_\alpha p_\alpha^i$ we can write $u_\delta(t) = u_\delta(w_\alpha) + t (u_\delta(p_\alpha^i) -u_\delta(w_\alpha))$. Therefore, the points $z$,$z''$ correspond to such $t$, $t''$ that $t |u_\delta(p_\alpha^1) -u_\delta(w_\alpha)| = t''|u_\delta(p_\alpha^2) -u_\delta(w_\alpha)|$. By definition, however, $|u_\delta(p_\alpha^1) -u_\delta(w_\alpha)| = |u_\delta(p_\alpha^2) -u_\delta(w_\alpha)| = \frac{r}{4L}$ so that $t=t''$.} \begin{figure}[h!] \centering \includegraphics[width = 0.5\paperwidth]{obtuse.pdf} \caption{The obtuse triangle formed by $u_\delta(z)$, $u_\delta(z'')$ and $u_\delta(z')$ in the image of the boundary cross as needed in Step 1. Notice that since $u_\delta$ is piecewise affine on the cross, each segment of the cross forms again a part of a straight line.} \label{obtuse} \end{figure} \begin{align*} \frac{|u_\delta(z) - u_\delta(z'')|}{|z-z''|} &= \frac{|u_\delta(p_\alpha^1) - u_\delta(p_\alpha^2)|}{|p_\alpha^1-p_\alpha^2|} \\ & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!= \begin{cases} \frac{|\tilde y(p_\alpha^1) - \tilde y(p_\alpha^2)|}{|p_\alpha^1-p_\alpha^2|} \geq \frac{1}{L} & \text{if both $p_\alpha^1$, $p_\alpha^2$ lie in $\mathcal{Q}^\mathrm{inner}$} \\ \frac{|y(p_\alpha^1) - y(p_\alpha^2)|}{|p_\alpha^1-p_\alpha^2|} \geq \frac{1}{L} & \text{if neither $p_\alpha^1$ nor $p_\alpha^2$ lies in $\mathcal{Q}^\mathrm{inner}$} \\ \frac{|\tilde y(p_\alpha^1) - \tilde y(p_\alpha^2) + \tilde y(p_\alpha^2) - y(p_\alpha^2)|}{|p_\alpha^1-p_\alpha^2|} \geq \frac{1}{L} - \frac{1}{|p_\alpha^1-p_\alpha^2|}\frac{r}{12L^3} \geq \frac{1}{2L}, & \text{$p_\alpha^1 \in \mathcal{Q}^\mathrm{inner}$ $p_\alpha^2, \notin \mathcal{Q}^\mathrm{inner}$} \end{cases} \end{align*} where we realized that $|p_\alpha^1-p_\alpha^2| \geq \frac{r}{6L^2}$ because the triangle formed by the points $p_\alpha^1$,$w_\alpha$, $p_\alpha^2$ is right angled or a line. Notice also that the situation when $p_\alpha^1 \notin \mathcal{Q}^\mathrm{inner}$, $p_\alpha^2 \in \mathcal{Q}^\mathrm{inner}$ is completely symmetrical to the already covered case. So, returning to \eqref{Step1_1}, we have by the triangle inequality $$ |u_\delta(z) - u_\delta(z')| \geq \frac{\sqrt{2}}{4L} |z-z'|\ . $$ On the other hand, by exploiting that the triangle formed by the points $z$,$z'$ and $w_\alpha$ is either right angled or a line, we get that $$ |u_\delta(z) - u_\delta(z')| \leq |u_\delta(z) - y(w_\alpha) + y(w_\alpha) - u_\delta(z')| \leq 2L\big(|z-w_\alpha|+|z'-w_\alpha|\big) \leq 2L \sqrt{2} |z-z'|. $$ \vspace{2ex} \noindent \emph{Step 2 of the proof of \eqref{biL}: Suppose that $z \notin Z_\alpha$ and $z' \notin Z_\beta$ for all $\alpha, \beta$.} \\ Notice that we only have to investigate the case when $z \in \mathcal{Q}^\mathrm{inner}$ and $z' \notin \mathcal{Q}^\mathrm{inner}$ for the other options are trivial. Then, however, we have that $|z-z'| \geq \frac{r}{6L^2}$ and so the Lipschitz property follows immediately as $$ |u_\delta(z)-u_\delta(z')| \leq |y(z)-\tilde y(z)+\tilde y(z)-\tilde y(z')| \leq \frac{r}{6L^2}\,\frac{1}{2L} + L|z-z'| \leq 2L|z-z'|. $$ On the other hand, $$ \frac{|u_\delta(z)-u_\delta(z')|}{|z-z'|} = \frac{|y(z)-\tilde y(z)+\tilde y(z)-\tilde y(z')|}{|z-z'|} \geq \frac{1}{L} - \frac{r}{12L^2 |z-z'|} \geq \frac{1}{2L} $$\vspace{2ex} \noindent \emph{Step 3 of the proof of \eqref{biL}: Suppose that $z \in Z_\alpha$ and $z' \notin Z_\beta$ for all $\beta$.} \\ To obtain the lower bound in \eqref{biL} we rely on Remark \ref{BallArgument}; indeed the choice of $z$, $z'$ is such that $u_\delta(z')$ lies outside the ball $\mathcal{B}(y(w_\alpha); \frac{r}{4L})$ while $u_\delta(z) \in \mathcal{B}(y(w_\alpha); \frac{r}{4L})$. In particular, we may assume that $u_\delta(z) $ lies on the segment $y(w_\alpha) u_\delta(p_\alpha^1)$ (recall that $u_\delta$ is affine on the cross). So, $$ |u_\delta(z) - u_\delta(z')| \geq \frac{|u_\delta(p_\alpha^1)-u_\delta(z)| + |u_\delta(p_\alpha^1)-u_\delta(z')|}{3} \ . $$ Clearly, we only have to care about the latter term on the right hand side. Employing \eqref{LargeK} and the triangle inequality, we get that \begin{align*} \frac{|u_\delta(p_\alpha^1) - u_\delta(z')|}{|p_\alpha^1-z'|} \geq \begin{cases} \frac{|\tilde y(p_\alpha^1) - \tilde y(z')|}{|p_\alpha^1-z'|} \geq \frac{1}{L} & \text{if $p_\alpha^1, z' \in \mathcal{Q}^\mathrm{inner}$} \\ \frac{|y(p_\alpha^1) - y(z')|}{|p_\alpha^1-z'|} \geq \frac{1}{L} & \text{if $p_\alpha^1, z' \notin \mathcal{Q}^\mathrm{inner}$} \\ \frac{|\tilde y(p_\alpha^1) - \tilde y(z') + \tilde y(z') - y(z')|}{|p_\alpha^1-z'|} \geq \frac{1}{L}-\frac{6L^2}{12L^3} \geq \frac{1}{2L} & \text{if $p_\alpha^1 \in \mathcal{Q}^\mathrm{inner}$ and $z' \notin \mathcal{Q}^\mathrm{inner}$;} \end{cases} \end{align*} where, in the last case, $p_\alpha^1$ and $z'$ necessarily lie in different edges and so $|p_\alpha^1-z'| \geq \frac{r}{6L^2}$. Notice that since the r\^{o}le of $p_\alpha^1$ and $z'$ is symmetric we really exhausted all possibilities belonging to this step. Summing up, $$ |u_\delta(z) - u_\delta(z')| \geq \frac{|z-z'|}{6L}. $$ To obtain the upper bound, we first realize that if $z' $ is at the boundary to the cross, i.e.\@ $z' = p_\alpha^i$ for some $i=1\ldots4$, the procedure from Step 2 applies in verbatim. Therefore, we may restrict our attention to the situation in which $z' $ is strictly in the interior of the cross; then, since all $p_\alpha^i$ are at distance at most $r/3$ from $w_\alpha$ and since $z' \notin w_\alpha p_\alpha^i$ $\forall i$, at least one of these $p_\alpha^i$ has to satisfy that the triangle $z, p_\alpha^i, z'$ has an obtuse (or right) angle at $p_\alpha^i$ (see Figure \ref{triangle}) -- let it for notational convenience be $ p_\alpha^1$. So, we are in the position to apply Remark \ref{ObtuseTriangle} below and estimate \begin{align*} &|u_\delta(z) - u_\delta(z')|=|u_\delta(z) - u_\delta(p_\alpha^1) + u_\delta(p_\alpha^1) - u_\delta(z')| \\ &\qquad \leq \left \{ \begin{array}{ll} |u_\delta(z) - \tilde y(p_\alpha^1) + \tilde y(p_\alpha^1) - \tilde y(z')| & \\ \quad \leq 2\sqrt{2} L \big(|z-p_\alpha^1|+|p_\alpha^1-z'|\big) \leq 4L|z-z'|\big) & \text{if $z' \in \mathcal{Q}^\mathrm{inner}$ and $p_\alpha^1 \in \mathcal{Q}^\mathrm{inner}$} \\ |u_\delta(z) - y(p_\alpha^1) + \tilde y(p_\alpha^1) - \tilde y(z') + y(p_\alpha^1) - \tilde y(p_\alpha^1)| & \\ \quad \leq 2 \sqrt{2} L \big(|z - p_\alpha^1| + |p_\alpha^1 - z'| \big) + \frac{r}{6L^2}\, \frac{1}{2L} \leq 5L|z-z'| & \text{if $z' \in \mathcal{Q}^\mathrm{inner}$ and $p_\alpha^1 \notin \mathcal{Q}^\mathrm{inner}$} \\ |u_\delta(z) - y(p_\alpha^1) + y(p_\alpha^1) - y(z')| & \\ \quad \leq 2\sqrt{2} L \big(|z-p_\alpha^1|+|p_\alpha^1-z'|\big) \leq 4L|z-z'|\big) & \text{if $z' \notin \mathcal{Q}^\mathrm{inner}$ and $p_\alpha^1 \notin \mathcal{Q}^\mathrm{inner}$} \\ |u_\delta(z) - y(p_\alpha^1) + \tilde y(p_\alpha^1) - \tilde y(z') + y(p_\alpha^1) - \tilde y(p_\alpha^1) | & \\ \quad \leq 2 \sqrt{2} L \big(|z - p_\alpha^1| + |p_\alpha^1 - z'| \big) + \frac{r}{6L^2}\, \frac{1}{2L} \leq 5L|z-z'| & \text{if $z' \notin \mathcal{Q}^\mathrm{inner}$ and $p_\alpha^1 \in \mathcal{Q}^\mathrm{inner}$} \\ \end{array} \right. \end{align*} where we used that we already proved the bi-Lipschitz property inside the cross $Z_\alpha$ and in the second and fourth case we used that $\frac{r}{6L^2} \leq |p_\alpha^1-z'|$ since, in this cases, $p_\alpha^1$ and $z'$ have to lie on different edges. \begin{figure}[h!] \centering \includegraphics[width = 0.3 \paperwidth]{triangle.png} \caption{The obtuse triangle formed by $z, z', p_\alpha^1$ as needed in Step 2} \label{triangle} \end{figure} \vspace{2ex} \noindent \emph{Step 4 of the proof of \eqref{biL}: Suppose that $z\in Z_\alpha$, $z' \in Z_\beta$ with $\alpha \neq \beta$}. \\ The last case we need to consider is when $z$, $z'$ lie in two crosses corresponding to two different vertices, respectively. In such a case $|w_\alpha - w_\beta|\geq r$ and also, from definition, $|u_\delta(z')-y(w_\beta)| \leq \frac{r}{4L}$ (as $z'$ belongs to the cross). Therefore, $$ |y(w_\alpha) - u_\delta(z')| = |y(w_\beta) - y(w_\alpha) + u_\delta(z') - y(w_\beta)| \geq \frac{r}{L}-\frac{r}{4L} > \frac{r}{4L}; $$ i.e.\ $u_\delta(z') \notin \mathcal{B}(y(w_\alpha) ; \frac{r}{4L})$ and we may apply Remark \ref{BallArgument} to get (with $p_\alpha^1$ being the extremal of $Z_\alpha$ lying on the same edge as $z$) $$ |u_\delta(z') - u_\delta(z)| \geq \frac{|u_\delta(p_\alpha^1)-u_\delta(z)| + |u_\delta(p_\alpha^1)-u_\delta(z')|}{3}. $$ Similarly, also $u_\delta(p_\alpha^1) \notin \mathcal{B}(y(w_\beta) ; \frac{r}{4l})$ as $$ |y(w_\beta) - y(w_\alpha) - u_\delta(p_\alpha^1) + y(w_\alpha)| \geq \frac{r}{L}-\frac{r}{4L} > \frac{r}{4L}; $$ and hence, again relying on Remark \ref{BallArgument} ($p_\beta^2$ denotes the extremal of $Z_\beta$ lying on the same edge as $z'$) \begin{align*} |u_\delta(z') - u_\delta(z)| &\geq \frac{|u_\delta(p_\alpha^1)-u_\delta(z)| + |u_\delta(p_\alpha^1)-u_\delta(p_\beta^2)| + |u_\delta(p_\beta^2)-u_\delta(z')|}{9} \\ &\geq \frac{1}{18L} \Big(|p_\alpha^1-z| + |p_\alpha^1-p_\beta^2| + |p_\beta^2-z'| \Big) \geq \frac{|z-z'|}{18L}, \end{align*} by applying the triangle inequality. Moreover, we exploited that $|u_\delta(p_\alpha^1)-u_\delta(z)| \geq \frac{|p_\alpha^1-z|}{2L}$ as $p_\alpha^1$ and $z$ lie on the same edge within the same cross (cf.~Step 1); similarly also for $|u_\delta(p_\beta^2)-u_\delta(z')|$. Finally, we can see that $|u_\delta(p_\alpha^1)-u_\delta(p_\beta^2)| \geq \frac{|p_\alpha^1-p_\beta^2|}{2L}$ by the same procedure as employed in Step 3. It, finally, remains to prove the upper bound in \eqref{biL}. But this follows from the fact that, since $z$,$z'$ belong to different crosses, there has to exist a point $p \in \mathcal{Q}$ that does not belong to any cross such that the triangle $z p z'$ is obtuse (or right) at $p$. Here, we admit also the extreme case in which $z p z'$ lie on a straight line; in this case, we understand the angle at $p$ to be $\pi$ and hence obtuse. Therefore, exploiting \eqref{ObtuseTriangle}, readily gives $$ |u_\delta (z) - u_\delta (p) + u_\delta (p)-u_\delta (z')| \leq 5L \big(|z-p|+|z'-p|\big) \leq \frac{10L}{\sqrt{2}}(|z-z'|) \leq 18L|z-z'|\ . $$ \hfill $\Box$ \bigskip \begin{remark}[Obtuse triangle inequality] \label{ObtuseTriangle} Let us consider a triangle formed by three points $z, p_1, z' \in \mathbb{R}^2$ such that the angle $\gamma$ at $p_1$ is obtuse or right (= larger or equal to $\pi/2$). Then it follows from the cosine law \begin{align} |z-z'| = \sqrt{|z-p_1|^2+|z'-p_1|^2 - 2|z-p_1|\, |z'-p_1|\cos(\gamma)} \geq \sqrt{|z-p_1|^2+|z'-p_1|^2}\nonumber\\ \geq \frac{\sqrt{2}}{2} \big( |z-p_1|+|z'-p_1|\big)\ . \end{align} \end{remark} \begin{remark}[Ball separation inequality] \label{BallArgument} Let us consider a ball centered at $w$ with radius $\xi$ and a point $a$ lying inside this ball on the segment $w b$ with $|b-w|=\xi$. Moreover, let $c$ be a point lying outside this ball. Then, since $b$ is the nearest to $a$ lying on the boundary of the mentioned ball it has to hold that $|a-b| \leq |a-c|$ and so by the triangle inequality\footnote{Indeed $|b-c|\leq |a-b|+|a-c| \leq 2 |a-c|$ and so $|a-b|+|b-c| \leq 3|a-c|$ as desired.} $$ |a-c| \geq \frac{|a-b|+|b-c|}{3}. $$ \end{remark} \bigskip \bigskip {\bf Acknowledgment:} We are indebted to two anonymous referees for many remarks, many useful suggestions, and for the extremely careful reading of the manuscript. This work was supported by the GA\v{C}R grants P201/10/0357, P201/12/0671, P107/12/0121, 14-15264S, 14-00420S (GA\v{C}R), and the AV\v{C}R-DAAD project CZ01-DE03/2013-2014. \bigskip\bigskip \vspace*{1cm} \bigskip
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Neutron stars with very strong magnetic fields are known as magnetars \cite{duncan,usov,pacz}. Recent observations suggest that anomalous x-ray pulsars (AXPs) and soft $\gamma$-ray repeaters (SGRs) are candidates for magnetars \cite{kouv,hurley,mareghetti}. The magnetic field at the surface of the magnetars may be as strong as $10^{14}$- $10^{15}$ G and magnetars are warm, young stars, $\sim~ 1$ kyear old. On the other hand, it is estimated that the interior field in neutron stars may be as large as $10^{18}$ G \cite{shap83}. Ferrario and Wickammasinghe \cite{ferrario} suggest that the extra strong magnetic field of the magnetars results from their stellar progenitor with a high magnetic field core. Iwazaki \cite{iwazaki} proposed that the huge magnetic field of the magnetars is some color ferromagnetism of quark matter. Recently, Vink and Kuiper \cite{vink} have suggested that the magnetars orginate from rapid rotating protoneutron stars. Motivated by the existence of strong magnetic fields in neutron stars, theoretical studies on the effects of extremely large fields on dense matter and neutron stars have been carried out by many authors \cite{chakrabarty,broderick,cardall,Mao03,aziz08}. For densities below twice normal nuclear matter density $(\rho \sim~0.153$ fm$^{-3})$, the matter consists only of nucleons and leptons. However, for baryon densities above $2\rho_0$, the equation of state (EOS) and the composition of matter is much less certain and the strangeness degrees of freedom should be taken into account either through the inclusion of hyperons, kaon condensation or a deconfinement phase transition into strange quark matter. The inclusion of hyperons and kaon condensation \cite{broderick1,suh,yue,yue1,aziz09}, tends to soften the EOS at high densities In \cite{broderick1}, it was shown that the threshold densities of hyperons can be significantly altered by strong magnetic fields. Similar conclusions were obtained in \cite{yue} where the strangeness was included through an antikaon condensation or in \cite{yue1} where not only hyperons but also the strange mesons $\sigma^*$ and $\phi$ were included in the EOS. The effects of the magnetic field on the structure and composition of a neutron star allowing quark--hadron phase transition has been studied in \cite{pal}. A strong magnetic field makes the overall EOS softer. However, due to the positive contribution of the magnetic field pressure to the total EOS, an increase is of the maximum mass is predicted \cite{broderick1,hpais,aziz09,njl}. Protoneutron stars appear as the outcome of the gravitational collapse of a massive star. During its early evolution, the protoneutron star, with an entropy per baryon of the order of 1 (units of the Bolztmann constant), contains trapped neutrinos. This stage is followed by a deleptonization period, during which the core is heated up and reaches an entropy per particle ${\mathfrak{s}}\sim 2$, before cooling occurs. During the cooling stage, exotic degrees of freedom such as hyperons or a kaon condensate, will appear \cite{prakash97}. In this paper, we focus on the properties of warm stellar matter under a strong magnetic field which is composed of a chemically equilibrated and charge neutral mixture of nucleons, hyperons and leptons. We will consider both neutrino free matter and matter with trapped neutrinos. The effect of the magnetic field on the composition of warm stellar matter, both with trapped neutrinos and neutrino free, and the properties of the equation of state (EOS) will be discussed. Both the Landau quantization which affects the charged particles and the incorporation of the nucleon anomalous magnetic moments (AMM) for field strengths $B> 10^5 B_e^c$ ($B_e^c=4.414\times 10^{13}$ G is the electron critical field) have important effects. This work is organised as follows: In section II, we derive the equation of state for hadronic matter at finite temperature with magnetic field. We present the results in section III. Finally, the conclusions are summarized in section IV. \section{Hadron matter equation of state} For the description of the EOS of stellar matter, we employ a field-theoretical approach in which the baryons interact via the exchange of $\sigma-\omega-\rho$ mesons in the presence of a static magnetic field $B$ along the $z$-axis \cite{bb,gm91,glen00}. The Lagrangian density of the non-linear Walecka model (NLWM) we consider has the form \cite{gm91} \begin{equation} {\cal L}= \sum_{b}{\cal L}_{b} + {\cal L}_{m}+ \sum_{l}{\cal L}_{l}. \label{lan} \end{equation} The baryon, lepton ($l$=$e$, $\mu$), and meson ($\sigma$, $\omega$ and $\rho$) Lagrangians are given by \cite{bb,glen00} \begin{widetext} \begin{eqnarray} {\cal L}_{b}&=&\bar{\Psi}_{b}\left(i\gamma_{\mu}\partial^{\mu}- q_{b}\gamma_{\mu}A^{\mu}- m_{b}+g_{\sigma b}\sigma -g_{\omega b}\gamma_{\mu}\omega^{\mu}-g_{\rho b}\tau_{3_{b}} \gamma_{\mu}\rho^{\mu}-\frac{1}{2}\mu_{N}\kappa_{b}\sigma_{\mu \nu} F^{\mu \nu}\right )\Psi_{b} \nonumber\\ {\cal L}_{l}&=& \bar{\psi}_{l}\left(i\gamma_{\mu}\partial^{\mu}-q_{l}\gamma_{\mu}A^{\mu} -m_{l}\right )\psi_{l} \nonumber\\ {\cal L}_{m}&=&\frac{1}{2}\partial_{\mu}\sigma \partial^{\mu}\sigma -\frac{1}{2}m^{2}_{\sigma}\sigma^{2}-U\left(\sigma \right) +\frac{1}{2}m^{2}_{\omega}\omega_{\mu}\omega^{\mu} -\frac{1}{4}\Omega^{\mu \nu} \Omega_{\mu \nu} \cr &-&\frac{1}{4} F^{\mu \nu}F_{\mu \nu} +\frac{1}{2}m^{2}_{\rho}\boldsymbol{\rho}_{\mu}\boldsymbol{\rho}^{\mu} -\frac{1}{4} \mathbf{R}^{\mu\nu}\mathbf{R}_{\mu\nu} \label{lagran} \end{eqnarray} \end{widetext} where $\Psi_{b}$ and $\psi_{l}$ are the baryon and lepton Dirac fields, respectively. The index $b$ runs over the eight lightest baryons $n$, $p$, $\Lambda$, $\Sigma^-$, $\Sigma^0$, $\Sigma^+$, $\Xi^-$ and $\Xi^0$, and the sum on $l$ is over electrons and muons ($e^{-}$ and $\mu^{-}$). $\sigma$, $\omega$, and $\rho$ represent the scalar, vector, and vector-isovector meson fields, which describe the nuclear interaction and $A^\mu=(0,0,Bx,0)$ refers to a external magnetic field along the z-axis. The baryon mass and isospin projection are denoted by $m_{b}$ and $\tau_{3_b}$, respectively. The mesonic and electromagnetic field tensors are given by their usual expressions: $\Omega_{\mu \nu}=\partial_{\mu}\omega_{\nu}-\partial_{\nu}\omega_{\mu}$, $\boldsymbol{R}_{\mu \nu}=\partial_{\mu}\boldsymbol{\rho}_{\nu}- \partial_{\nu}\boldsymbol{\rho}_{\mu}$, and $F_{\mu \nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$. The baryon AMM are introduced via the coupling of the baryons to the electromagnetic field tensor with $\sigma_{\mu \nu} =\frac{i}{2}\left[\gamma_{\mu}, \gamma_{\nu}\right] $ and strength $\kappa_{b}$. The electromagnetic field is assumed to be externally generated (and thus has no associated field equation), and only frozen-field configurations will be considered. The interaction couplings are denoted by $g$, the electromagnetic couplings by $q$ and the baryons, mesons and leptons masses by $m$. The scalar self-interaction is taken to be of the form \begin{equation} U\left(\sigma \right)=\frac{1}{3}b m_n \left(g_{\sigma N}\sigma \right)^3+ \frac{1}{4}c \left(g_{\sigma N}\sigma \right)^4. \end{equation} From the Lagrangian density in Eq.~(\ref{lan}), we obtain the following meson field equations in the mean-field approximation \begin{eqnarray} m^{2}_{\sigma} \sigma +\frac{\partial U\left(\sigma \right)}{\partial\sigma} &=&\sum_{b}g_{\sigma b}\rho^{s}_{b}=g_{\sigma N}\sum_{b}x_{\sigma b} \rho^{s}_{b} \label{mes1} \\ m^{2}_{\omega} \omega^{0} &=& \sum_{b}g_{\omega b}\rho^{v}_{b}= g_{\omega N}\sum_{b}x_{\omega b}\rho^{v}_{b} \label{mes2} \\ m^{2}_{\rho} \rho^{0} &=&\sum_{b}g_{\rho b}\tau_{3_{b}}\rho^{v}_{b}= g_{\rho N}\sum_{b}x_{\rho b}\tau_{3_{b}}\rho^{v}_{b} \label{mes3} \end{eqnarray} where $\sigma=\left\langle \sigma \right\rangle,\; \omega^{0}= \left\langle \omega^{0} \right\rangle\;\hbox{and}\; \rho= \left\langle\rho^{0} \right\rangle$ are the nonvanishing expectation values of the mesons fields in uniform matter, and $\rho^v_b$ and $\rho^s_b$ are, respectively, the baryon vector and scalar densities. The Dirac equations for baryons and leptons are, respectively, given by \begin{widetext} \begin{eqnarray} \big[i\gamma_{\mu}\partial^{\mu}-q_{b}\gamma_{\mu}A^{\mu}-m^{*}_{b} -\gamma_{0}\left(g_{\omega}\omega^{0} +g_{\rho}\tau_{3_{b}}\rho^{0}\right) -\frac{1}{2}\mu_{N}\kappa_{b}\sigma_{\mu \nu} F^{\mu \nu}\big] \Psi_{b} &=&0 \label{MFbary}\\ \left(i\gamma_{\mu}\partial^{\mu}-q_l\gamma_{\mu}A^{\mu}-m_l \right) \psi_{l}&=&0 \label{MFlep} \end{eqnarray} \end{widetext} where the effective baryon masses are given by \begin{equation} m^{*}_{b}=m_{b}-g_{\sigma}\sigma \label{effmass}, \end{equation} and $\rho^{s}_b$ and $\rho^{v}_b$ are the scalar density and the vector density, respectively. For a stellar matter consisting of a $\beta$-equilibrium mixture of baryons and leptons, the following equilibrium conditions must be imposed: \begin{eqnarray} \mu_b=q_b \, \mu_n - q_l \, \mu_e,\label{free}\end{eqnarray} \begin{equation} \mu_{\mu}=\mu_{e}, \label{beta} \end{equation} where $\mu_i$ is the chemical potential of species $i$. The electric charge neutrality condition is expressed by \begin{equation} \sum_{b} q_{b} \rho^{v}_{b}+\sum_{l}q_{l} \rho^{v}_{l}=0, \label{neutra} \end{equation} where $ \rho^{v}_{i}$ is the number density of particle $i$. If trapped neutrinos are included, we replace $\mu_{e}\rightarrow \mu_{e}-\mu_{\nu_e}$ in the above equations, \begin{eqnarray} \mu_b&=&q_b \, \mu_n - q_l \, \left(\mu_e-\mu_{\nu_e}\right).\\ \mu_{\mu}-\mu_{\nu_\mu}&=&\mu_{e}-\mu_{\nu_e},\label{trap} \end{eqnarray} where $\mu_{\nu_e}$ is the electron neutrino chemical potential. The introduction of additional variables, the neutrino chemical potentials, requires additional constraints, which we supply by fixing the lepton fraction $Y_{Le}=Y_{e}+Y_{\nu_{e}}=0.4$ \cite{prakash97,burrows86}. Since no muons are present before and during the supernova explosion, the constraint $Y_{L\mu}=Y_{\mu}+Y_{\nu_{\mu}}=0$ must be imposed. However, because the muon fraction is very small in matter with trapped neutrinos, we only include muons in neutrino free matter. The energy spectra for charged baryons, neutral baryons and leptons (electrons and muons) are, respectively, given by \begin{eqnarray} E^{b}_{\nu, s}&=&\tilde \varepsilon^b_{\nu, s}+g_{\omega b} \omega^{0} +\tau_{3_{b}}g_{\rho b}\rho^{0} \label{enspc1}\\ E^{b}_{s}&=&\tilde \varepsilon^b_{s}+g_{\omega b} \omega^{0}+ \tau_{3_{b}}g_{\rho b}\rho^{0}\label{enspc2} \\ E^{l}_{\nu, s}&=&\tilde \varepsilon^l_{\nu, s}= \sqrt{\left(k^{l}_{\parallel}\right)^{2}+m_{l}^{2}+ 2\nu |q_{l}| B}\label{enspc3} \end{eqnarray} where, \begin{eqnarray} \tilde\varepsilon^{b}_{\nu, s}&=&\sqrt{\left(k^{b}_{\parallel}\right)^{2}+ \left(\sqrt{m^{* 2}_{b}+2\nu |q_{b}|B}-s\mu_{N}\kappa_{b}B \right)^{2}} \\ \tilde \varepsilon^{b}_{s}&=& \sqrt{\left(k^{b}_{\parallel}\right)^{2} + \left(\sqrt{m^{* 2}_{b}+\left(k^{b}_{\bot}\right)^{2} }-s\mu_{N}\kappa_{b}B \right)^{2}} \end{eqnarray} and $\nu=n+\frac{1}{2}-sgn(q)\frac{s}{2}=0, 1, 2, \ldots$ enumerates the Landau levels (LL) of the fermions with electric charge $q$, the quantum number $s$ is $+1$ for spin up and $-1$ for spin down cases and $k_\parallel, \, k_\bot$ are, respectively, the momentum components parallel and perpendicular to the magnetic field. At finite temperature the occupation number distribution functions are given, respectively for charged and neutral baryons, by \begin{widetext} \begin{eqnarray} f^b_{k,\nu, s} &=& \frac{1}{1+\exp\left[\beta(\tilde\varepsilon^b_{\nu, s} -\mu^{*}_{b}) \right]} \qquad \bar{f}^b_{k,\nu, s} = \frac{1}{1+\exp\left[\beta(\tilde\varepsilon^b_{\nu, s} +\mu^{*}_{b} )\right]},\\ f^b_{k, s} &=& \frac{1}{1+\exp\left[\beta(\tilde\varepsilon^b_{s} -\mu^{*}_{b}) \right]} \qquad \bar{f}^b_{k, s} = \frac{1}{1+\exp\left[\beta(\tilde\varepsilon^b_{s} +\mu^{*}_{b} )\right]}, \end{eqnarray} and for the charged leptons \begin{eqnarray} f^l_{k,\nu, s} &=& \frac{1}{1+\exp\left[\beta(\tilde\varepsilon^l_{\nu, s} -\mu_{l}) \right]} \qquad \bar{f}^l_{k,\nu, s} = \frac{1}{1+\exp\left[\beta(\tilde\varepsilon^l_{\nu, s} +\mu_{l} )\right]}, \end{eqnarray} \end{widetext} where the baryon effective chemical potential $(\mu_b)^*$ is given by \begin{eqnarray} \mu^{*}_{b}&=&\mu_{b}-g_{\omega b}\omega^{0}-g_{\rho b}\tau_{3_{b}}\rho^{0}. \end{eqnarray} For the charged baryons, the scalar and vector densities are, respectively, given by \begin{eqnarray} \rho^{s}_{b}&=&\frac{|q_{b}|Bm^{*}_{b}}{2\pi^{2}}\sum_{\nu, s} \int^{\infty}_{0}\frac{dk^b_{\parallel}}{\sqrt{(k^{b}_{\parallel})^2 +(\bar m^c_{b})^2}}\left( f^b_{k,\nu, s}+\bar{f}^b_{k, \nu, s}\right), \cr \rho^{v}_{b}&=&\frac{|q_{b}|B}{2\pi^{2}}\sum_{\nu, s}\int^{\infty}_{0} dk^b_{\parallel}\left( f^b_{k,\nu, s}-\bar{f}^b_{k,\nu, s}\right), \end{eqnarray} where we have introduced the effective mass \begin{equation} \bar m^c_{b}=\sqrt{m^{* 2}_{b}+2\nu |q_{b}|B}-s\mu_{N}\kappa_{b}B. \label{mc} \end{equation} the expressions of For the neutral baryons, the scalar and vector densities of the neutral baryon $b$ are, respectively, given by \begin{widetext} \begin{eqnarray} \rho^{s}_{b}&=&\frac{1}{2\pi^{2}}\sum_{s}\int^{\infty}_{0} k^{b}_{\bot}dk^{b}_{\bot}\left(1-\frac{s\mu_{N}\kappa_{b}B} {\sqrt{m^{* 2}_{b}+\left(k^{b}_{\bot}\right)^{2}}} \right) \int^{\infty}_{0}\, dk^{b}_{\parallel} \frac{m^*_b}{\tilde \varepsilon^{b}_{s}}\left( f^b_{k, s}+\bar{f}^b_{k, s}\right), \cr \rho^{v}_{b}&=&\frac{1}{2\pi^{2}}\sum_{s}\int^{\infty}_{0}k^{b}_{\bot} dk^{b}_{\bot} \int^{\infty}_{0}\, dk^{b}_{\parallel} \left( f^b_{k, s}-\bar{f}^b_{k, s}\right). \end{eqnarray} \end{widetext} The vector density of the charged leptons is given by \begin{equation} \rho^{v}_{l}=\frac{|q_{l}|B}{2\pi^{2}}\sum_{\nu, s}\int^{\infty}_{0} dk^{l}_{\parallel}\left( f^l_{k,\nu, s}-\bar{f}^l_{k,\nu, s}\right), \end{equation} and for neutrinos by \begin{equation} \rho^{v}_{\nu_e} = \frac{1}{2 \pi^2}\int^{\infty}_{0}k^2 dk \left( f^\nu_{k, s}-\bar{f}^\nu_{k, s}\right). \end{equation} We solve the coupled Eqs.~(\ref{mes1})-(\ref{MFlep}) self-consistently at a given baryon density $\rho=\sum_{b}\rho^{v}_{b}$ in the presence of a strong magnetic field. The energy density of stellar matter is given by \begin{equation} \varepsilon_m=\sum_{b} \varepsilon_{b}+\sum_{l=e,\mu}\varepsilon_{l}+\frac{1} {2}m^{2}_{\sigma}\sigma^{2}+U\left(\sigma \right) + \frac{1}{2}m^{2}_{\omega}\omega^{2}_{0}+\frac{1}{2}m^{2}_{\rho}\rho^{2}_{0}. \label{ener} \end{equation} where the energy densities of charged baryons $\varepsilon_b^c$, neutral baryons $\varepsilon_b^n$, and leptons $\varepsilon_l$ have, respectively, the following forms \begin{widetext} \begin{eqnarray} \varepsilon^c_{b}&=&\frac{|q_{b}|B}{2\pi^ {2}}\sum_{\nu, s}\int^{\infty}_{0} dk^b_{\parallel}\sqrt{(k^{b}_{\parallel})^2+(\bar m^c_{b})^2}\left( f^b_{k,\nu, s}+\bar{f}^b_{k,\nu, s}\right) ,\cr \varepsilon^n_{b}&=&\frac{1}{2\pi^ {2}}\sum_{s}\int^{\infty}_{0}k^{b}_{\bot} dk^{b}_{\bot} \int^{\infty}_{0} dk^{b}_{\parallel} \sqrt{\left(k^{b}_{\parallel}\right)^{2} +\left(\sqrt{m^{* 2}_{b}+ \left(k^{b}_{\bot}\right)^{2} }-s\mu_{N}\kappa_{b}B \right)^{2}} \left( f^b_{k, s}+\bar{f}^b_{k, s}\right) \cr \varepsilon_{l}&=&\frac{|q_{l}|B}{2\pi^ {2}}\sum_{\nu, s}\int^{\infty}_{0} dk^{l}_{\parallel}\sqrt{(k^{l}_{\parallel})^2+m_{l}^{2}+2\nu |q_{l}| B} \left( f^l_{k, \nu, s}+\bar{f}^l_{k,\nu, s}\right ). \end{eqnarray} \end{widetext} The thermodynamical grand potential and the free energy density are defined as \begin{equation} \Omega= {\cal F}-\sum_{b}\mu_{b}\rho^{v}_{b},\quad\quad {\cal F}=\varepsilon_m -T{\cal S}, \end{equation} where the entropy density $\cal S$ is given by \begin{equation} {\cal S}= \sum_b {\cal S}_b+\sum_l {\cal S}_l \end{equation} with, \begin{widetext} \begin{eqnarray} {\cal S}^c_b&=&-\frac{|q_{b}|B}{2\pi^ {2}}\sum_{\nu, s}\int^{\infty}_{0} dk^b_{\parallel}\left\lbrace f^b_{k,\nu, s}\log f^b_{k,\nu, s}+(1-f^b_{k,\nu, s})\log(1- f^b_{k,\nu, s})+\bar{f}^b_{k,\nu, s}\log\bar{f}^b_{k,\nu, s}+(1-\bar{f}^b_{k,\nu, s})\log(1- \bar{f}^b_{k, \nu, s})\right\rbrace \cr {\cal S}^n_b&=& -\frac{1}{2\pi^ {2}}\sum_{s}\int^{\infty}_{0}k^{b}_{\bot} dk^{b}_{\bot} \int^{\infty}_{0} dk^{b}_{\parallel} \left\lbrace f^b_{k, s}\log f^b_{k, s}+(1-f^b_{k, s})\log(1- f^b_{k, s})+ \bar{f}^b_{k, s}\log\bar{f}^b_{k, s}+(1-\bar{f}^b_{k, s})\log(1- \bar{f}^b_{k, s}) \right\rbrace \cr {\cal S}_l &=&-\frac{|q_{l}|B}{2\pi^ {2}}\sum_{\nu, s}\int^{\infty}_{0} dk^{l}_{\parallel}\left\lbrace f^l_{k, \nu, s}\log f^l_{k, \nu, s}+(1-f^l_{k, \nu, s}) \log(1- f^l_{k, \nu, s})+\bar{f}^\nu_{k, \nu, s}\log\bar{f}^\nu_{k, \nu, s}+(1-\bar{f}^\nu_{k, \nu, s}) \log(1-\bar{f}^\nu_{k, \nu, s}) \right\rbrace\nonumber\\ \end{eqnarray} \end{widetext} The pressure of neutron star matter is given by \begin{equation} P_{m}=- \Omega=\mu_{n}\sum_{b} \rho^{v}_{b} -\varepsilon_{m}+ T{\cal S}, \label{press} \end{equation} where the charge neutrality and $\beta$-equilibrium conditions are used to get the last equality. If the stellar matter contains neutrinos trapped, their energy density, pressure, and entropy contributions, respectively, \begin{eqnarray} \varepsilon_{\nu_e}&=&\frac{1}{2\pi^ {2}} \int^{\infty}_{0}k^3 dk \left\lbrace f^\nu_{k, s}+\bar{f}^\nu_{k, s}\right\rbrace \cr P_{\nu_e}&=&\frac{1}{6\pi^ {2}} \int^{\infty}_{0}k^3 dk \left\lbrace f^\nu_{k, s}+\bar{f}^\nu_{k, s}\right\rbrace\cr {\cal S}^{\nu_e}_l &=&-\frac{1}{2\pi^ {2}} \int^{\infty}_{0}(k^{l})^2 dk^{l} \left\lbrace f^\nu_{k, s}\log f^\nu_{k, s}+(1-f^\nu_{k, s})\log(1- f^\nu_{k, s}) + \bar{f}^\nu_{k, s}\log\bar{f}^\nu_{k, s}+(1-\bar{f}^\nu_{k, s})\log(1-\bar{f}^\nu_{k, s}) \right\rbrace, \end{eqnarray} should be added to the stellar matter energy and pressure. \section{Results and discussion} \begin{figure}[th] \vspace{1.5cm} \centering \includegraphics[width=0.75\linewidth,angle=0]{figure1.eps}\\ \caption{(Color online) Matter pressure as a function of the baryonic density, for several values of magnetic field ($B^*=0$, $B^{*}=10^5,\; 2\times 10^5$) without (left panels) and with AMM (right panels) for an entropy per baryon ${\mathfrak{s}}=0$, and 2, and for neutrino free matter.} \label{eosnf} \end{figure} \begin{figure}[th] \vspace{1.5cm} \centering \includegraphics[width=0.75\linewidth,angle=0]{figure2.eps} \caption{(Color online) The same as Fig.~\ref{eosnf} but for matter with trapped neutrinos and lepton fraction $Y_{Le}=0.4$.} \label{eosnt} \end{figure} We now study the stellar matter at finite temperature with magnetic fields. We include the baryonic octet in the EOS and choose the GM1 parameter set~\cite{gm91} for our calculation. The static properties of the baryons were given in previous works~\cite{broderick,aziz09}. The parameters of the model are the nucleon mass $m_n=939$ MeV, the masses of mesons $m_\sigma$, $m_\omega$, $m_\rho$ and the coupling constants. The meson-hyperon couplings are assumed to be fixed fractions of the meson-nucleon couplings, $g_{i H}=x_{i H} g_{i N}$, where for each meson $i$, the values of $x_{i H}$ are assumed equal for all hyperons $H$. The values of $x_{i H}$ are chosen to reproduce the binding energy of the $\Lambda$ at nuclear saturation as suggested in~\cite{gm91}, and given in Table~\ref{table2}. A different choice could have been done in order to consider that the optical potential of the $\Sigma^-$ in nuclear matter is repulsive as was done in~\cite{chiap09}. However, there is very little experimental information that can be used to fix the $\Sigma^-$ interaction. More, the main difference that would occur would be the onset of the $\Sigma^-$ at larger densities and the $\Xi^-$ at lower densities. We consider that the external magnetic field is constant. The magnetic field will be defined in units of the critical field $B_e^c=4.414\times 10^{13}$ G so that $B=B^*B_e^c$. \begin{table*} \caption{The parameter set GM1 \cite{gm91}used in the calculation.} \label{table2} \begin{ruledtabular} \begin{tabular}{ c c c c c c c c c c cc} $\rho_{0}$& -B/A & &$g_{\sigma N}/m_{\sigma}$ &$g_{\omega N}/m_{\omega}$&$g_{\rho N}/m_{\rho}$& & & & &\\ ($fm^{-3}$) &(MeV)& $M^{*}/M$ & (fm) & (fm) & (fm) &$x_{\sigma H}$&$x_{\omega H}$ & $x_{\rho H}$ &b &c \\ \hline 0.153&16.30&0.70& 3.434 & 2.674 & 2.100 & 0.600 & 0.653 & 0.600 &0.002947 &-0.001070 \\ \end{tabular} \end{ruledtabular} \end{table*} In Fig.~\ref{eosnf} and Fig.~\ref{eosnt} the pressure is plotted versus density for $B^*=0,\, 10^5$ and $2\times 10^5$ for neutrino free matter (Fig.~\ref{eosnf}) and matter with trapped neutrinos (Fig.~\ref{eosnt}), without and with the inclusion of the AMM, respectively, in left and right panels. We consider an entropy per baryon ${\mathfrak{s}}=0$ and 2. The kink on each EOS curve identifies the onset of hyperons. The effects of the AMM are only noticeable for a strong magnetic field above $B^*=10^5$ as already discussed in~\cite{broderick}. \begin{figure} \vspace{1.5cm} \centering \includegraphics[width=0.75\linewidth,angle=0]{figure3.eps}\\ \caption{(Color online) Matter pressure as a function of the baryonic density, for $B^*=2\times 10^5$: comparison between both neutrino free and neutrino trapped matter for several values of the entropy per baryon $\mathfrak{s}$.} \label{eos3} \end{figure} We will start by discussing neutrino free matter, shown in Fig.~\ref{eosnf}. The strong magnetic field makes the EOS softer at lower densities and harder at higher densities due to the Landau quantization which affects charged particles~\cite{broderick,aziz09}. However, at finite temperature these effects are partially washed out and, we may see in the left panel of Fig. \ref{eosnf} that the EOS for $B^*=2\times 10^5$ is not much softer than the corresponding EOS for $B^*=0$ in the lower range of densities plotted. The effects of temperature are even stronger when the AMM is included (right panel). As discussed in~\cite{broderick}, in the presence of a strong magnetic field the extra hardness due to the inclusion of AMM is mainly due to an increase of the neutron pressure degeneracy due to spin polarization of the neutrons. Temperature raises partially this degeneracy and, consequently, gives rise to a softening of the EOS: at large densities the three EOS plotted for ${\mathfrak{s}}=2$ with $B^*=0, 10^5, 2\times 10^5$ almost coincide. While for zero magnetic field the EOS becomes harder with the inclusion of temperature, the opposite may occur for a strong magnetic field. One of the main effects of temperature on the EOS of matter under a strong magnetic field is to wash out the effects of the Landau quantization and spin polarization. In Fig.~\ref{eosnt} we show results for matter with trapped neutrinos and a lepton fraction $Y_{Le}=0.4$. In this case, at low densities, above $\rho=0.5\rho_0$, the EOS becomes harder for a finite external magnetic field. This is mainly due to the larger electron fraction below the onset of strangeness. However, just as before, the effect of the field is not so strong for a finite entropy per baryon. Above $\rho=2\rho_0$ the equation becomes softer in the presence of a magnetic field both taking into account AMM or not if ${\mathfrak{s}}=2$, although for ${\mathfrak{s}}=0$ the opposite occurs. This is due to the fact that above this density for $B^*=2\times 10^5$ the proton fraction becomes comparable or even larger than the neutron fraction, and therefore the kinetic contribution to the pressure reduces. When the AMM is included there is not much difference at ${\mathfrak{s}}=2$, however for ${\mathfrak{s}}=0$ the EOS is a slightly harder than in the case without AMM due to a larger electron fraction and smaller hyperon fraction. In Fig.~\ref{eos3} we have plotted for $B^*=2\times 10^5 $ and ${\mathfrak{s}}=0,\, 1$ and 2, the pressure for neutrino free matter and neutrino trapped matter, without (left panel) and with (right panel) AMM in order to compare the effect of a strong magnetic field in matter with and without neutrinos. At zero magnetic field the EOS of neutrino free matter becomes harder at finite temperature for the lower densities~\cite{prakash97}. However, the onset of strangeness occurs at lower densities for finite temperature and this will soften the EOS~\cite{prakash97,menezes03,panda10}. As a consequence, for a range of intermediate densities it may occur that the EOS at finite temperature is softer~\cite{prakash97,panda10}. Trapped neutrinos increase (decrease) the proton (neutron) fraction at low density and this softens the EOS relatively to the neutrino free case due to an overall smaller baryonic kinetic pressure. However, at larger densities trapped neutrinos hinder the onset of hyperons, therefore shifting the softening of the EOS due to strangeness to larger densities. For $B^*=2\times 10^5$ and zero temperature neutrino trapped matter is not softer than neutrino free matter, because the decrease on the neutron fraction due to trapped neutrinos is much smaller and the overall decrease on the baryonic kinetic pressure does not compensate the increase of the total lepton contribution to the EOS. At finite temperature, however, the onset of hyperons at lower densities and the crossing of the neutron with the proton fraction also at a lower density gives rise to a softening of the EOS for densities $2\rho_0<\rho<3\rho_0$. The inclusion of the AMM makes the nucleonic fraction of neutrino trapped matter closer to the ones of neutrino free matter, and, therefore, the softening that occurs due to a more equilibrated distribution of protons and neutrons does not compensate the larger lepton contribution in neutrino trapped matter. However, finite temperature washes LL filling effects and spin polarization effects and a softer EOS results. In Fig.~\ref{effe}, the nucleon effective mass is shown as function of the baryon density for ${\mathfrak{s}}=0$ and ${\mathfrak{s}}=2$, and for $B^*=0,\; \hbox{and}\; 2\times 10^5$ without and with AMM. Figs.~\ref{effe} (a), (b) correspond to neutrino free matter, and Figs.~\ref{effe} (c) and (d) to matter with trapped neutrinos and a lepton fraction $Y_{Le}=0.4$. For ${\mathfrak{s}}=0$, if the AMM is not taken into account, the effective mass reduces faster with an increase of the density at finite magnetic field while the opposite occurs when AMM is included,~\cite{broderick}. However, at finite temperature, even taking into account the AMM, the nucleon effective mass decreases at a given density when $B^*$ increases because spin polarization effects are partially washed out with temperature. We will see later that this has an important effect on the temperature of matter with a fixed entropy per baryon. \begin{figure}[ht] \vspace{1.5cm} \centering \includegraphics[width=0.85\linewidth,angle=0]{figure4.eps} \caption{(Color online) Nucleon effective mass as a function of the baryonic density, for $B^*=0,\; \hbox{and}\; 2\times 10^5$, without (left panels) and with AMM (right panels), for neutrino free matter (top panel) and matter with trapped neutrinos (bottom panel).} \label{effe} \end{figure} \begin{figure}[ht] \vspace{1.5cm} \centering \begin{tabular}{c} \includegraphics[width=0.85\linewidth,angle=0]{figure5.eps}\\ \vspace{1.5cm}\\ \includegraphics[width=0.85\linewidth,angle=0]{figure6.eps} \end{tabular} \caption{(Color online) Particle fractions as a function of the baryonic density, for $B^*=0$ and $2\times 10^5$, without (left panels) and with AMM (right panels), for neutrino free matter (top panel) and matter with trapped neutrinos (bottom panel). } \label{frac1} \end{figure} In Fig.~\ref{frac1} the particle fraction $Y_i=\rho_i/\rho$ for baryons and leptons is plotted as a function of the baryon density for $B^*=2\times 10^5$ (thick lines) and $B^*=0$ (thin lines) for two values of the entropy per particle, ${\mathfrak{s}}=1$ and 2. Neutrino free matter is represented in the top panel and matter with trapped neutrinos in the bottom panel. At zero magnetic field, the main effect of temperature is to move the onset of hyperons to lower densites~\cite{prakash97}. This feature is still true for a finite magnetic field. If matter is under the effect of a strong magnetic field we find the same effects at finite temperature as at zero temperature: a) if the AMM is not included the onset of the $\Sigma^-$ is shifted to larger densities, the onset of $\Sigma^+$ to smaller densities, and the neutral hyperons are not much affected; b) including the AMM the onset $\Sigma^-$ is shifted to larger densities but less than in the previous case, the one of $\Sigma^+$ is shifted to even smaller densities, and in this case the neutral hyperons are affected with $\Lambda$ behaving like $\Sigma^-$ and $\Sigma^0$ like $\Sigma^+$. The onset of the cascate $\Xi^-$ occurs below $6\rho_0$ for ${\mathfrak{s}}=1$ and 2, and behaves like $\Sigma^-$ with the magnetic field. As discussed in~\cite{broderick1,aziz10} the behaviour observed among the different hyperons is mainly due to a decrease of the neutron chemical potential, due to a smaller isospin asymmetry, and a decrease of the electron chemical potential due to Landau quantization. In fact, at low densities, the fraction of proton and leptons are significantly affected by the magnetic field. The Landau quantization increases the proton abundance, and, therefore, the electron abundance due to the charge neutrality condition. The inclusion of AMM reduces the chemical potential of all the hyperons and a complicated balance between the different terms, including the magnitude of the AMM, will define whether the onset is shifted to larger or smaller densities. For instance the AMM of $\Sigma^0$, $\Sigma^+$ is much larger than the one of $\Sigma^-$, $\Lambda$ and $\Xi^-$ (see Fig.~\ref{frac1} bottom panel) and this may explain the different behavior of these hyperons when AMM is included. At $B=0$ and $T=0$, the presence of neutrinos shifts the onset of hyperons to larger densities because the neutron chemical potential is smaller and the neutrino chemical potential finite, with temperature making this effect less effective. For warm matter under a strong magnetic field we conclude: a) if AMM is not taken into account, a smaller chemical potential of both neutrons and neutrinos explains a smaller (larger) shift of the negatively (positively) charged baryons to larger densities, and a smaller neutron chemical potential gives rise to a shift to larger densites of the neutral baryons onset; b) including AMM may change these trends mainly the ones associated with the baryons with a larger AMM, because the hyperon chemical potentials decrease and, therefore the onset of all hyperons will occur at smaller densities than in the absence of AMM. \begin{figure}[ht] \vspace{1.5cm} \centering \begin{tabular}{c} \includegraphics[width=0.8\linewidth,angle=0]{figure7.eps}\\ \vspace{1.5cm}\\ \includegraphics[width=0.8\linewidth,angle=0]{figure8.eps} \end{tabular} \caption{(Color online) Strangeness fraction as a function of the baryonic density, for severals values of the magnetic field B and ${\mathfrak{s}}=0,1,2$, without and with AMM, for GM1 model. Top panel: the strangeness fraction at a fixed entropy for different values of $B$; bottom panel: the strangeness fraction at a fixed $B$ for different values of the entropy per particle ${\mathfrak{s}}$.} \label{strange1} \end{figure} The effect of the magnetic field on the total strangeness fraction is better seen in Fig.~\ref{strange1}, where we show the total strangeness fraction for different values of magnetic field and different values of the entropy per baryon $\mathfrak{s}$ with and without AMM. The fraction of the strangeness in the system is given by \begin{equation} r_{\mathfrak{s}}=\frac{\sum_bq_s^b\rho_b}{3\rho}, \end{equation} where $q_s^b$ is the strange charge of baryon $b$. At zero entropy the strangeness onset occurs respectively, below $2\rho_0$ and $3\rho_0$ for neutrino free and neutrino trapped matter. At $\rho=6\rho_0$ neutrino trapped matter has a strangeness fraction 0.03-0.04 smaller due to the larger electron fraction. The magnetic field lowers the strangeness fraction for neutrino free matter and almost does not affect the strangeness fraction for neutrino trapped matter. This effect was already discussed in \cite{aziz10} and is due to the fact that with the presence of neutrinos the neutron chemical potential does not suffer such a large reduction as in neutrino free matter. The top panel of Fig.~\ref{strange1} allows a comparison of the strangeness fraction at a fixed entropy and different $B$ intensities. For a finite entropy the effect of $B$ in neutrino free matter is equivalent to the one already obtained at ${\mathfrak{s}}=0$, there is a reduction of strangeness with the increase of $B$. However, in neutrino trapped matter the opposite may occur and for ${\mathfrak{s}}=2$ the larger the magnetic field the larger is strangeness fraction. This is true whether AMM is taken into account or not. In neutrino free matter the effect of temperature is not strong enough to oppose the shift of the onset of strangeness to larger densities due to the decrease of the neutron chemical potential. In the bottom panel of Fig.~\ref{strange1} we compare the strangeness fraction for a fixed magnetic field and different entropies. Except for the larger field it is verified the general trend of temperature that shifts the onset of strangeness to lower densities for both neutrino trapped and neutrino free matter~\cite{menezes04,panda10}. However, in neutrino trapped matter at large densities this trend may fail if the temperature is not high enough. \begin{figure}[ht] \vspace{1.5cm} \centering \includegraphics[width=0.6\linewidth,angle=0]{figure9.eps} \caption{(Color online) Neutrino fraction as a function of the baryonic density, for severals values of of the magnetic field B and entropy per baryon ${\mathfrak{s}}$, without (left panel) and with (right panel) AMM.} \label{fneutrino} \end{figure} It is also interesting to analyse the effect of $B$ on the neutrino fraction. shown in Fig. \ref{fneutrino} as a function of the baryonic density for several values of magnetic field and the entropy. In \cite{aziz10} it was discussed that, for zero temperature, the magnetic field gives rise to a strong neutrino suppression at small densities, as seen in the top panels. This was attributed to the large proton and, therefore, also electron fractions. At finite temperature the suppression at low densities persists, although, the fluctuations due to the filling of the Landau levels disappear. It is seen that for a finite entropy also at high densities a strong magnetic field gives rise to a decrease of neutrino fraction, due to the larger proton fraction that favors a larger electron fraction. A smaller neutrino fraction may imply a slower cooling of the star core, since the core cools essentially by neutrino emission \cite{prakash97}. \begin{figure}[ht] \vspace{1.5cm} \centering \includegraphics[width=0.75\linewidth,angle=0]{figure10.eps} \caption{(Color online) Temperature as a function of the baryonic density, for severals values of of the magnetic field B, without (left panels) and with (right panels) AMM for neutrino free matter (top panels) and matter with trapped neutrinos (bottom panels). The thin line is for ${\mathfrak{s}}=1$ and the thick line for ${\mathfrak{s}}=2$.} \label{temp1} \end{figure} In Fig.~\ref{temp1}, the temperature of the system is plotted for a entropy per baryon ${\mathfrak{s}}=1$ and $2$, respectively, for neutrino free matter and matter with trapped neutrinos. As expected, for a larger entropy per baryon, higher temperatures are reached. In a strong magnetic field these temperatures can be even larger. At low densities, before the onset of hyperons, temperature rises slower for the larger magnetic fields because the proton (neutron) fraction increases (decreases) with $B$ and the nucleonic degrees of freedom become more equally distributed. The kink in all the curves identifies the onset of hyperons, and as discussed before, occurs at lower densities for lower values of $B$. This is true both for ${\mathfrak{s}}=1$ and 2, with or without AMM. At high densities the temperature becomes approximately constant: this is clearly seen for ${\mathfrak{s}}=1$; for ${\mathfrak{s}}=2$ the temperature saturation occurs at higher densities because the hyperon fractions increase during a larger range of densities until they attain a saturation fraction. If AMM is included the temperature raises to larger values at finite $B$ probabily due to the larger lepton fraction. For matter with trapped neutrinos, we have a similar situation. However, because the proton fraction is larger due to the fixed lepton fraction constraint, the effect of $B$ in reducing the neutron fraction at low densities is not so large and the dependence of $T$ on the baryonic density does not depend on $B$ until the onset of hyperons, which occurs in a smaller range of densities in matter with trapped neutrinos. Also a smaller overall neutron fraction favors lower temperatures as compared with neutrino free matter. The differences occuring below $2\rho$ for the description with AMM is mainly due to the increase of the neutrino fraction and a faster decrease of the effective mass with the baryonic density at finite temperature and finite $B$. Since, to date, there is no information available on the interior magnetic field of the star, we will assume that the magnetic field is baryon density-dependent as suggested by Ref.~\cite{chakrabarty}. The variation of the magnetic field $B$ with the baryons density $\rho$ from the center to the surface of a star is parametrized~\cite{chakrabarty, Mao03} by the following form \begin{equation} B\left(\frac{\rho}{\rho_0}\right) =B^{\hbox{surf}} + B_0\left[1- \exp\left\lbrace-\beta\left( \frac{\rho}{\rho_0}\right)^\gamma \right\rbrace\right], \label{brho} \end{equation} where $\rho_0$ is the saturation density, $B^{\hbox{surf}}$ is the magnetic field at the surface taken equal to $10^{15}$G in accordance with the values inferred from observations and $B_0$ represents the magnetic field at large densities. The parameters, $\beta $ and $\gamma$ are chosen in such way that the field decreases with the density from the centre to the surface. In this work, we will use the set of values $\beta=0.05$ and $\gamma=2$, allowing a slowly varying field with the density. The magnetic field will be considered in units of the critical field $B^c_e=4.414 \times 10^{13}$~G, so that $B_0=B^*_0 \, B^c_e$. We take $B^*_0$ as a free parameter to check the effect of magnetic fields on stellar matter. Here, we consider the EOS by taking the hyperon-meson coupling constants $x_\sigma=0.6$. Hadron star properties are obtained from the EOS studied, for several values of magnetic field, by solving the Tolman-Oppenheimer-Volkoff equations resulting from the Einstein's general relativity equations for spherically symmetric static stars. This is as approximation since the magnetic field distroys the spherical symmetry, and, therefore, we interpret the obtained results as average values. We do not allow the magnetic field at the centre of the star to exceed $\sim 3\times 10^{18}$ G according to the results of~\cite{broderick1} which indicate that stable stars do not occur with a larger central magnetic field. In Fig.~\ref{massgrav1} we show the family of stars corresponding to the maximum mass configuration given in Tables~\ref{table3} and~\ref{table4}. Two main conclusions may be drawn a) warm stars with trapped neutrinos have larger masses and radii. However, the differences get smaller for the most massive stars if a strong magnetic field exists; b) a strong magnetic field makes the star radius larger, as well as the mass of the maximum mass star configuration. In table~\ref{table3} the maximum gravitational and baryonic masses of stable stars, their radii, central energy densities and magnetic field at the center, are given for neutrino free matter. In table \ref{table4} the same quantities are displayed for matter with trapped neutrinos. The main conclusions we draw from the tables are: a) the maximum baryonic mass of the star always decreases with ${\mathfrak{s}}$, independently of the existence of a magnetic field or trapped neutrinos; b) for a finite magnetic field the maximum gravitational mass decreases slightly with ${\mathfrak{s}}$, however, the opposite occurs for $B=0$; c) in the presence of a huge magnetic field the central baryonic density of the star is smaller and the radius larger. Neutrinos diffuse out of the core after a first period when they are trapped in the core of a proto-neutron, and the star reachs an entropy per particle of ${\mathfrak{s}}\sim 1$ (in units of the Boltzmann constant). During the deleptonization period the core is heated up and reaches an entropy per particle ${\mathfrak{s}}\sim 2$. After the core deleptonizes exotic degrees of freedoms such as hyperons will appear. We will discuss how the magnetic field may influence the evolution of the star from a stage with ${\mathfrak{s}}=1$ and trapped neutrinos, to a stage of warm neutrino free matter with ${\mathfrak{s}}=2$ and finally a cold neutrino free star with ${\mathfrak{s}}=0$. During this evolution the gravitational mass of the star decreases but its baryonic mass will stay constant. At zero magnetic field, the maximum baryonic mass of a star with trapped neutrinos and ${\mathfrak{s}}=1$ is 2.27 $M_\odot$. This star will deleptonize and heat up. However, since the maximum baryonic mass of a neutrino free star with ${\mathfrak{s}}=2$ is 0.27 $M_\odot$ smaller (equal do 2.00 $M_\odot$ the star will evolve into a low-mass black hole \cite{prakash97}). This is due to the softening that the EOS suffers with the appearence of hyperons and is well illustrated in Fig. \ref{massgrav2} by the full lines: for $B=0$ all configurations with trapped neutrinos and a baryonic mass above the maximum of the neutrinos free ${\mathfrak{s}}=2$ configurations, 2.024 $M_\odot$, will evolve into a black hole. We will now consider that the decay of the magnetic field will occur in longer timescale than the deleptonization phase, and therefore, during the star evolution the magnetic field remains constant. If we consider $B^*=10^5$, the maximum baryonic mass of a star with trapped neutrinos and ${\mathfrak{s}}=1$ is 2.69 $M_\odot$. The maximum mass of a neutrino free star with ${\mathfrak{s}}=2$ is smaller but the difference is much smaller that the one occuring at $B=0$: a maximum mass of 2.60 $M_\odot$ corresponds only to a 0.09 $M_\odot$ difference. The set of stars that will evolve to a low mass black hole is much smaller. In Fig. \ref{massgrav2}, the dashed curves correspond to $B^*_0=10^5$, from top to bottom ${\mathfrak{s}}=1$ with trapped neutrinos, neutrino free ${\mathfrak{s}}=2$ and ${\mathfrak{s}}=0$. The configurations of star with trapped neutrinos and ${\mathfrak{s}}=1$ above the maximum of the neutrino free ${\mathfrak{s}}=2$ curve, 2.597 $M_\odot$, will evolve to a blackhole. However, if we consider $B^*_0=2\times 10^5$ all configurations with trapped neutrinos and ${\mathfrak{s}}=1$ will evolve to stable ${\mathfrak{s}}=2$ and afterwards ${\mathfrak{s}}=0$ neutrino free star configurations. The maximum mass star configuration with trapped neutrinos and ${\mathfrak{s}}=1$ is smaller than the maximum mass neutrino free star with ${\mathfrak{s}}=2$, respectively 3.07 $M_\odot$ and 3.15 $M_\odot$. No evolution to a low mass blackhole will occur. This could be expected since we have seen discussed that the magnetic field hinders the appearance of hyperons. However, it is important to notice that if the star cools down in a stable star keeping the magnetic field configuration described by Eq. (\ref{brho}) with $B^*=2\times 10^5$ it may still decay into a low mass blackhole during the magnetic field decay. \begin{table*}[htb] \caption{Properties of the stable baryon star with maximum mass, for several values of magnetic field using [see Eq. (\ref{brho})] with the parametrisation. $M_{max}$, $M^b_{max}$, R, $E_{0}$, $\rho^c$, $B_{c}$, and $T_c$ are, respectively, the gravitational and baryonic masses, the star radius, the central energy density, the central baryonic density, and the values of the magnetic field and the temperature at the centre. Neutrino-free matter.} \label{table3} \begin{ruledtabular} \begin{tabular}{ccccccccc} $B^{*}_{0}$ & ${\mathfrak{s}}$ & $M_{max} [M_\odot]$ & $M^b_{max} [M_\odot]$ & R [km] & $E_{0}[\hbox{fm}^{-4}]$ & $\rho_c(\hbox{fm}^{-3})$ & B$_{c}$(G) & $T_c$(MeV) \\ \hline $B = 0$ & 0 & 1.790 & 2.033 & 11.527 & 5.939 & 0.985 & - & - \\ & 1 & 1.794 & 2.024 & 11.717 & 5.854 & 0.967 & - & 19.24 \\ & 2 & 1.808 & 2.004 & 12.467 & 5.656 & 0.922 & - & 40.79 \\ $B^{*}_{0} = 10^{5}$ & 0 & 2.372 & 2.672 & 12.694 & 4.846 & 0.688 & 2.812$\times 10^{18}$ & -\\ & 1 & 2.368 & 2.652 & 12.796 & 4.860 & 0.687 & 2.808$\times 10^{18}$ & 17.96 \\ & 2 & 2.358 & 2.597 & 13.269 & 4.837 & 0.674 & 2.746$\times 10^{18}$ & 37.43 \\ $B^{*}_{0} = 2\times 10^{5}$& 0 & 2.926 & 3.234 & 14.509 & 3.629 & 0.454 & 3.150$\times 10^{18}$ & - \\ & 1 & 2.919 & 3.207 & 14.630 & 3.616 & 0.451 & 3.117$\times 10^{18}$ & 15.92 \\ & 2 & 2.902 & 3.147 & 15.032 & 3.612 & 0.445 & 3.040$\times 10^{18}$ & 33.54 \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*}[htb] \caption{Properties of the stable baryon star with maximum mass, for several values of magnetic field using [see Eq. (\ref{brho})] with the parametrisation. $M_{max}$, $M^b_{max}$, R, $E_{0}$, $\rho^c$, $B_{c}$, and $T_c$ are, respectively, the gravitational and baryonic masses, the star radius, the central energy density, the central baryonic density, and the values of the magnetic field and the temperature at the centre. Neutrino-trapped matter.} \label{table4} \begin{ruledtabular} \begin{tabular}{ccccccccc} $B^{*}_{0}$ & ${\mathfrak{s}}$ & $M_{max} [M_\odot]$ & $M^b_{max} [M_\odot]$ & R [km] & $E_{0}[\hbox{fm}^{-4}]$ & $\rho_c(\hbox{fm}^{-3})$ & B$_{c}$(G) & $T_c$(MeV) \\ \hline $B = 0$ & 0 & 2.046 & 2.293 & 12.455 & 5.420 & 0.856 & - & - \\ & 1 & 2.040 & 2.271 & 12.529 & 5.395 & 0.847 & - & 16.21 \\ & 2 & 2.036 & 2.226 & 13.198 & 5.192 & 0.808 & - & 34.78 \\ $B^{*}_{0} = 10^{5}$ & 0 & 2.454 & 2.694 & 13.123 & 4.819 & 0.660 & 2.678$\times 10^{18}$ & -\\ & 1 & 2.449 & 2.686 & 13.122 & 4.842 & 0.662 & 2.682$\times 10^{18}$ & 16.76 \\ & 2 & 2.441 & 2.641 & 13.622 & 4.731 & 0.644 & 2.593$\times 10^{18}$ & 34.72 \\ $B^{*}_{0} = 2\times 10^{5}$& 0 & 2.889 & 3.063 & 14.547 & 3.886 & 0.464 & 3.261$\times 10^{18}$ & - \\ & 1 & 2.890 & 3.071 & 14.621 & 3.834 & 0.461 & 3.228$\times 10^{18}$ & 16.81 \\ & 2 & 2.897 & 3.058 & 15.094 & 3.720 & 0.451 & 3.117$\times 10^{18}$ & 33.83 \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{figure}[ht] \vspace{1.5cm} \centering \begin{tabular}{ccc} \includegraphics[width=0.45\linewidth,angle=0]{figure11.eps} \end{tabular} \caption{(Color online) Mass-radius curve of neutron stars for several values of the magnetic field, using a density dependent magnetic field $B$ given by Eq.~(\ref{brho}). Thin lines correspond to neutrino-free matter and thick lines to trapped neutrino matter with lepton fraction $Y_{Le}= 0.4$.} \label{massgrav1} \end{figure} \begin{figure}[ht] \vspace{1.5cm} \centering \begin{tabular}{ccc} \includegraphics[width=0.75\linewidth,angle=0]{figure12.eps} \end{tabular} \caption{(Color online) Gravitational mass as a function of the baryonic mass of neutron stars for several values of the magnetic field, using a density dependent magnetic field $B$ given by Eq.~(\ref{brho}). } \label{massgrav2} \end{figure} \section{Conclusions} In the present work, we have studied the effect of a very strong magnetic field on the EOS and properties of warm stars. We have used a relativistic mean field model with the GM1 parameter set \cite{gm91} and considered stellar matter both neutrino free and with trapped neutrinos. Previously, it was shown that there is a strong neutrino suppression at low densities for finite magnetic fields \cite{aziz10}. For a finite entropy also at high densities a strong magnetic field gives rise to a decrease of the neutrino fraction, due to the larger proton fraction that favors a larger electron fraction. A smaller neutrino fraction may imply a slower cooling of the star core, since the core cools essentially by neutrino emission \cite{prakash97}. Another important effect of the magnetic field for a finite entropy per baryon is the faster reduction of the effective mass with density even when AMM is taken into account. For zero and finite temperature the effect of $B$ in neutrino free matter is mainly a reduction of the strangeness with the increase of $B$. However, in matter with trapped neutrinos, the opposite may occur and for ${\mathfrak{s}}=2$ the larger the magnetic field the larger is the strangeness fraction. In neutrino free matter the effect of temperature is not strong enough to oppose the shift of the onset of strangeness to larger densities due to the magnetic field. It has been shown that a strong magnetic field increases the mass and radius of the most massive cold stable star configuration \cite{broderick1}. This is still true for warm stars. The radius of these stars increases with $\mathfrak{s}$, just as it occurs for $B=0$ \cite{prakash97,menezes03}, but in a much smaller rate. On the other side, their baryonic masses decrease with the entropy per particle, and the larger decrease occurs for the larger magnetic field. For stronger magnetic fields the contribution of the magnetic field to the total EOS is larger giving rise to a stiffer EOS. As a result, the star central energy density and baryon density decreases as the magnetic field increases. The mass of the observed neutron stars may set an upper limit on the possible magnetic field acceptable in the interior of a star. Of course it may also occur that the most massive stars decay into low mass blackholes when the magnetic field in there interior decays. For a lower value of the magnetic field, the EOS becomes softer due to the onset of hyperons and a smaller maximum baryonic mass results. For $B=0$ an hybrid star may evolve to a low mass blackhole because the maximum baryonic mass of a warm star with trapped neutrinos and an entropy per particle ${\mathfrak{s}}\sim 1$ is larger than the maximum mass of a warm deleptonized star with ${\mathfrak{s}}=2$ or a cold deleptonized star \cite {prakash97,menezes04}. In the present study, it was shown that for a strong enough magnetic field, the star would cool down as a stable compact star, if the magnetic field does not decay during the deleptonization phase. However, the decay of the magnetic field may cause star instability and, consequently, the formation of a blackhole. \begin{acknowledgments} This work was partially supported by FCT/FEDER under Projects PTDC/FIS/113292/2009 and CERN/FP/116366/2010, and by COMPSTAR, an ESF Research Networking Programme. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{INTRODUCTION} \label{intro} The answer to the very fundamental question of ``Why we do what we do?'' has always been a challenging one in the field of psychology. There are intense debates as to what really motivates us to achieve our goals or to drive our behavior. A proper understanding of goals or motives can be vital in our understanding of human behavior~\cite{mietzel2005wege}. Human motivation is the experience of our desires to get something or our tendency to avoid something. The hierarchical organization of the human motivation system leads to the self-regulation of interaction and behavior~\cite{siegert2004toward}. Ordering of motives or goals based on the priority of our needs is crucial in understanding our actions, reactions or expressions~\cite{mcleod2007maslow}. In other words, human-human interaction is highly influenced by the hierarchical structure of the motivational system. It is important to know the degree of satisfaction of a motive to assess the achievement of our goals. This is known as valence. The degree or intensity of a stimulus (for example, how exciting or thrilling a stimulus is to a human) also contributes to the appraisal of emotions~\cite{ortony1988cognitive}. This is called arousal. The major aspects of experience coming out of emotional appraisal include feelings, bodily responses, expressive behaviors and sense of purpose~\cite{keltner1999functional}. With the advancement in the field of robotics, Human Robot Interaction (HRI) has become a focal point of research. The inclusion of robots in human environments requires a thorough understanding of the behavioral changes involved during an interaction and the robot's adaptability to various scenarios~\cite{wilcox2013optimization}. A key aspect of social robots is to perceive the interaction partner's behavior and provide a suitable affective response~\cite{breazeal2001affective}. This leads to the necessity of developing appropriate robot control architectures for the generation of behavior. The core component of such architectures comprises of a motivational and an appraisal system responsible for generating an internal emotional state for a robot. The existing perception system~\cite{al2016perception} of ROBIN can evaluate a large set of stimuli called ``percepts'' of an interaction partner. The reactions are mainly triggered by these visual percepts. The highest level of perception task that the robot is able to perform is the recognition of human personality traits~\cite{zafar2018real} based on non-verbal cues, making the interaction process more diverse. A large set of gestures and facial expressions have been implemented on the robot to deal with various situations during interaction. The robot's actions or reactions have been pseudo-randomized based on a given emotional state of the robot~\cite{paplu2020pseudo}, ensuring behavioural variability. However, it is observed that a rigid percept-driven interaction often leads to more of a reactive than an adaptive behavior of the robot. Therefore, there is a need for an appraisal and motivation mechanism in the robot so as to assess emotional states on-the-fly. The major focus of this paper is to evaluate interaction partners in diverse scenarios and generate an internal emotional state of a robot based on a two-dimensional (i.e., arousal and valence) appraisal mechanism. The internal emotional state is not restricted to only 6 basic emotions~\cite{ekman1999basic}. For a technical system, the realization of the world or the generation of a mental model is different in comparison with humans. In this work, we have also formalized the definitions of the dimensions of appraisal system in the context of the robot used, ensuring a robot-centered emotion appraisal. \section{LITERATURE SURVEY} \label{related_work} Many theories have been proposed over the decades to model emotions. Russel proposed the Circumplex Model~\cite{russell1980circumplex} in which emotions are placed on the circumference of a circle. Various human-centered experiments were conducted to prove that the placement of emotions on the circumference was correct. Fig.~\ref{Circumplex_model} shows the ordering of the emotions in a 2D space where x-axis is pleasure-displeasure i.e., valence and y-axis is the degree of arousal. \begin{figure}[ht] \centering \includegraphics[width = 0.5 \textwidth]{circumplex_model.PNG} \caption{Circumplex model of emotions~\cite{russell1980circumplex}} \label{Circumplex_model} \end{figure} Arousal and valence are the responses to certain stimuli presented to a group of participants. According to this model, the emotion words are not discretely separated in the 2D space, rather the points on the circumference represent the instance where the emotion is the strongest. As one point moves away towards the other, the membership of the emotion decreases from the emotion at the point of origin and increases for the emotion at the point of destination. For example, when one moves from pleased to happy, the emotion becomes~\textit{less pleased} and~\textit{more happy}. A total of 28 emotion words, after being evaluated by the participants, found to fall meaningfully on the circle. The model proposes that the space in the middle of the circle is the ``neutral state''. The area of the circumference depends on the implementation and interpretation of the arousal and valence dimensions. Mehrabian \textit{et al.}~\cite{mehrabian1980basic} proposed a 3D emotion space model with \textit{pleasure}, arousal and dominance being the dimensions. Breazeal~\cite{breazeal2001affective} discussed the affective space for the proposed emotion model for a robot called KISMET. Arousal(A), Valence(V) and Stance (S) used as the dimensions in the emotion space. Releasers are activated when a percept activates it above a certain threshold. The releasers are tagged with some affective information [A,V,S] where each tag has an associated intensity for the AVS dimensions. As a result, the emotion arbitration associates an emotion which in turn gets a specific emotion-based behavior from the behavior system. Finally, a behavior is executed in the motor systems. Hirth~\cite{Hirth12b} proposed a robot control architecture for social robots. Three dimensional appraisal system was used with arousal, valence and stance being the dimensions. With the motives or goals defined, the robot tries to achieve all the motives at any given time. The motive with the highest satisfaction is selected to influence the behavior of the robot. This approach was applied only for a gaming scenario. Moreover, the appraisal systems explained above triggers mainly six basic goal-directed emotions. In addition, the combination of speech, gesture and facial expressions to represent an affective behaviour was less explored. To deal with the situational as well as scenario-oriented interactions, there is a need for a diverse set of emotional states and the display of appropriate behaviour based on the emotional state. The use of emotional space of the Circumplex Model~\cite{russell1980circumplex} can broaden the possibilities for a more natural and reliable interaction between a human and a robot. \section{ROBOT AND FRAMEWORK USED} \label{robo_framework} The robot used for the experiment, called ROBIN, has an ASUS Xtion Pro RGB-D Kinect sensor mounted on the chest and a built-in RGB camera on the head, forming the perception system of the robot. The robot is equipped with arms and hands, with 14 Degrees of Freedom (DoF) in each hand. A backlit projected face ensures display of various facial expressions based on facial action units. Moreover, there is a dialog system implemented in the system, establishing interaction with humans. A C++ based robotic framework called Finroc~\cite{finroc} is employed to implement applications on the robot. This work utilizes a behavior architecture called Integrated Behavior-Based Control (iB2C)~\cite{proetzsch2010development}. The basic building block of iB2C architecture is the ``Behavior module''. A behavior module represents a single behavior in an the architecture. A key component of this architecture is the ability to combine simple behaviors to generate complex behaviors. \section{PROPOSED DEFINITIONS} In order to map meaning of the perceived stimuli of the robot onto the emotion space of the Circumplex Model~\cite{russell1980circumplex}, the following definitions have been proposed: \subsection{Arousal} Arousal can be defined as how arousing a stimulus is to the robot. Percepts of a robot can broadly be categorised into visual, audio and physical percepts. Visual percepts comprise of anything perceived from the visual system of the robot. It may be facial expressions, body posture/ gestures, gaze, location etc. It is proposed that arousal is directly influenced by the intensity of the visual percepts and the proximity of located objects/ interlocutor. Additionally, the speed of motion influences the arousal value, e.g., a very slow motion leads to negative arousal. \begin{equation} A_v = w_1*RF + w_2*PR + w_2*SM \end{equation} $A_v$ is the Arousal value for visual percepts (normalized between $0$ and $1$), $RF$ stands for ``recognised features'', $PR$ is the proximity of an interlocutor, $SM$ is speed or degree of movement of the percept. $w_1, w_2 \;and\; w_3$ are the weights. \subsection{Valence} Valence is directly dependent on the satisfaction of a motive, in which a higher satisfaction value leads to a high Valence value and vice versa. It is argued that the attribute pleasant or unpleasant of any percept is dependent on the current motive/goal of the robot. \begin{equation} V = f(S) \end{equation} where, $V$ denotes valence, $S$ denotes the motive's satisfaction value. For example, if a person does not pay attention to the conversation process with the robot, the satisfaction level of the Interact motive will gradually go down. At one stage, it reaches a threshold, triggering an unsatisfied state. This will eventually lead to a shift from this motive. \subsection{Gestures vs. Behaviors} \label{gesture_behavior_def} There is a fine line between gestures and behaviours. For the sake of clarity and consistency, we define ``gesture'' as the movement of body parts to convey a specific sentiment or message and ``behavior'' as a combination of gestures, facial expressions and speech in response to a specific stimulus. \section{Motivation System} \label{motivation_system} The goals for humans' actions are represented by motives. The perceived stimuli from the perception system are fed into each ``motive'' as inputs. The output from each ``motive'' is its satisfaction value used to determine the valence in the appraisal system. The motives are implemented in a hierarchical fashion. The bottom-most motive has the highest priority, with the top-most motive having the lowest priority. Each motive inhibits the subsequent motives with low priorities. For example, the motive ``Self Preservation'', when active would inhibit the motives ``Social Motives'' and ``Self Entertainment''. \subsection{Motive: Obey Humans} The motive ``Obey Human'' is responsible for incorporating the rule that the robot should obey all commands issued by the human. This rule is popularly known as Asimov's second law~\cite{anderson2008asimov} of robotics. There may always be a possibility where a human has to control the robot manually or issue commands for the robot. This motive had the highest priority and inhibits all other motives. \subsection{Motive: Self Preservation} The motive ``Self Preservation'' tries to emulate the safety needs that are observed in human beings as discussed in Maslow's work~\cite{boeree2006abraham}. The safety needs vary based on the situation a person faces. However, the core idea is that there exists a need to ``find safe circumstances, stability, protection''. Taking this into account, it is proposed that there exists a need for the robot to protect and preserve itself from external harm. The robot should behave in a way that it draws attention or seek assistance from the interaction partner if the current circumstance or action poses a threat to the robot. If an interaction partner comes too close to the robot, this motive gets activated and the robot is in an unsatisfied state. The motive reaches satisfaction when the interaction partner moves to a safe distance zone. No minimum satisfaction threshold is used in this case as the robot needs to seek attention to the threat as long it exists. The robot moves away from the ``Self Preservation'' motive only when it determines that the interaction partner is at a safe distance. \subsection{Motive: Social Motives} \label{social_motive} The goal of the robot within this motive is to interact with an interaction partner and engage in a conversation. To achieve this, the motive ``Social Motives'' is split into 3 smaller motives or goals namely ``Capture Skeleton Information'', ``Greeting'' and ``Interact''. ROBIN needs skeleton information in order to successfully operate its perception system. Usually, the skeleton information of the interaction partner is detected very fast and does not need any manual intervention. However, at times there have been instances where the interaction partner had to move his/her hands or position himself/herself at various distances and postures in order for ROBIN to detect the skeleton. This Motive is responsible to guide the interaction partner till ROBIN acquires the skeleton information. When ROBIN detects the face of an interaction partner, with no skeleton information of the interlocutor available, the motive is activated but in an unsatisfied state. The motive reaches satisfaction when skeleton information is available. No minimum satisfaction threshold is defined in this case as the motive needed to be active as long as there is no skeleton information. Once the skeleton information is available, the next step is to greet. As is the case in normal human-human interaction, people usually begin their interaction with a greeting. The greeting can be a simple hand gesture, a verbal greeting or a combination of both. If an interaction partner is detected and the Greeting motive has not been activated before, this motive gets activated with a low satisfaction value. The \textit{``first time''} flag is used in our implementation to record this information. This ensures that an interaction partner is greeted only once after being identified and not multiple times, triggering more realistic interaction. Once the interaction partner greets back, the motive gains high score on satisfaction. As ROBIN does not have the capability to utilize any audio information, the ``greeting back'' gesture is realized based on the recognition of a set of relevant hand gestures. The motive ``Interact'' is responsible to make interaction between the robot and the interaction partner possible. The goal of the robot withing this motive is to get engaged with the interaction partner and display behaviors in a natural human-like manner. When there is an interaction partner available, the motive gets activated but with an unsatisfied state. To determine if the human is interested in the interaction, the perception system observes if the person is attentive and looking forward. In case, the interlocutor looks away or looks back or does anything suggesting that he/she is not interested, the Satisfaction value decreases by ``Neg\_Step''. The Satisfaction value increases by ``Pos\_Step'' when the person seems interested to interact. The values of ``Pos\_Step'' and ``Neg\_Step'' are used to control the rate by which the Satisfaction value changed over time, and the above mentioned values are experimentally found to keep the interaction natural. This is shown in table~\ref{interact_par}. \begin{table}[ht] \caption{Parameters for Motive Interact} \label{interact_par} \begin{center} \begin{tabular}{|p{3.9cm}|p{3cm}|} \hline \textbf{Property} & \textbf{Value} \\ \hline Motive Type & Event Based \\ \hline Triggering Events & Human present \\ \hline Satisfying Events & Looking forward \\ \hline Maximum Satisfaction Threshold & 0.9 \\ \hline Minimum Satisfaction Threshold & -0.8 \\ \hline Pos\_Step & 0.003 \\ \hline Neg\_Step & -0.02 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Motive: Self Entertainment} \label{self_entertainment} This motive is responsible to engage the robot in random activities, provided there is a lack of perceptual stimuli. By performing various activities such as singing or acting, it is attempted to emulate the behavior of self entertainment often observed in humans. In this way, the robot is also able to attract attention of any potential interaction partners in the vicinity and thereby gain a chance to interact. The moment a human is detected, the motive switches from very low to full satisfaction. \subsection{Fusion of Motives} \label{fusion_motive} The next step is to integrate the implemented motives in a hierarchical fashion. A fusion behavior module from the iB2C architecture is used to cater the fusion. All the ``inhibition'', ``target rating'' and ``output'' signals are fed into the fusion behavior. The final output is chosen based on the \textit{``winner takes it all''} principle and only one set of output signal is passed on to the Emotion Appraisal system. The fusion behavior is responsible for maintaining the hierarchical architecture of all the motives discussed. It filters out the output signals from inactive motives and allow only the output signals from the active motive to go through. \section{EMOTION APPRAISAL} \label{appraisal_system} \subsection{Perception of Stimuli} \label{perception_system} OpenNI and NiTE libraries enable us to detect humans by utilizing depth and Infra Red (IR) sensor of ASUS Xtion. The algorithms not only detect humans but also track them efficiently. They extract human skeleton joints using NiTE library and convert them into angles. Feature vectors are generated using angles between each joint and classify them with Support Vector Machines (SVMs). The system uses low-level perception features to understand high-level perception behaviors, e.g., head gesture recognition~\cite{saleh2015nonverbal}, facial expression recognition~\cite{al2016action}, body posture~\cite{zafar2018real} etc. These nonverbal features lie in low-level perception and can be used to recognize high-level perception behavior, when analyzed over a period of time. Movements performed by the limbs of a human play an important role in the recognition of activity. \subsection{Calculation of Arousal} \label{calc_arousal} We tagged each percept (facial expression, hand gestures, head gestures and body postures) with an intensity value ranging from 0 to 1, with 0 being the lowest intensity value and 1 being the highest. The values are set empirically. The intensity values of the perceived stimuli vary based upon the proximity from the robot and the activity or movement associated with the stimuli. For example, waving with one hand has lower stimulus intensity than waving with both hands. The distance zones proposed by E.T. Hall~\cite{hall1910hidden} have been used. A variable \textit{zone intensity} is defined and set to 1. If a person is in the social zone, the intensity of the perceived stimulus is directly reflected as the overall intensity, whereas if the person is in the personal zone, the intensity of the stimulus has 50\% weight and the \textit{zone intensity} has 50\%. If a person is in the intimate space, the stimulus intensity has 0\% weight and \textit{zone intensity} has 100\%. For public zone, the stimuli intensity has a weight of 25\% and the \textit{zone intensity} is set to 0 in public zone. The algorithm~\ref{algoArousal} have been applied to calculate arousal, where \textit{step} is set to 0.25 and \textit{weight} is set to 1. \textit{Step} value increases the level of arousal. $Weight$ indicates how fast the arousal value increases. \begin{figure}[!ht] \let\@latex@error\@gobble \begin{algorithm}[H] \SetAlgoLined \KwResult{Arousal($A_t$)} \uIf{$change\;in(overall\;intensity) = 0$} {$A_t\;=\;A_{t-1}\;-\;step;\;$} \Else {$A_t = weight\;\cdot\;overall\;intensity$} limit $A_t$ to the range of [-1,1]; \caption{Calculation of Arousal} \label{algoArousal} \end{algorithm} \end{figure} \subsection{Calculation of Valence} The satisfaction of a motive is calculated based on the triggering and satisfying events for a motive, where $pos\;step$ and $neg\;step$ determine the rate at which the satisfaction value increases or decreases. These values need to be be selected based on the motive in question. The algorithm \ref{algoValance} is used to calculate the valance($V$) for a motive. \begin{figure}[!ht] \let\@latex@error\@gobble \begin{algorithm}[H] \SetAlgoLined \KwResult{Valance(V)} $V_{t-1}= 0$\; {$S = Satisfaction \;Value \;of \;the\;active\;motive$\;} $step = min(|S - V_{t-1}|, weight)$\; \uIf{$S > V{t-1}$} {$V_t = V_{t-1} + step;$} \Else{$V_t = V_{t-1} - step;$} limit $V_t$ to the range of [-1,1]; \caption{Calculation of Valance(V)} \label{algoValance} \end{algorithm} \end{figure} $weight$ is the maximum rate by which the Valence($V$) value changes and $step$ is used to increase or decrease the Valence value in a step-wise manner. Two threshold values, $S_{max}$ and $S_{min}$, of \textit{Satisfaction} are used to determine the activity of a motive. $S_{max}$ denotes the maximum \textit{Satisfaction} value of the motive at which the motive is satisfied and should in turn switch to being inactive. Similarly, $S_{min}$ denotes the minimum \textit{Satisfaction} value of the motive to decide the activity. \begin{subnumcases}{a=} 1 & $S_{min} < S < S_{max}$ \label{positive-subnum} \\ 0 & $otherwise$ \label{negative-subnum} \end{subnumcases} \subsection{Emotion from Arousal \& Valence} \label{emo_from_AV} After the values for \textit{arousal} and \textit{valence} are calculated, they are used to determine an \textit{emotion state} of the robot. The 28 emotion words from Russel's work~\cite{russell1980circumplex} are used as \textit{emotion State} in this work. The emotion words fall meaningfully on the circle with the following degree values: \begin{multicols}{2} \begin{itemize} \item happy : 7.8\degree \item delighted : 24.9\degree \item Excited : 48.6\degree \item Astonished : 69.8\degree \item Aroused : 73.8\degree \item Tense : 92.8\degree \item Alarmed : 96.5\degree \item Angry : 99\degree \item Afraid : 116\degree \item Annoyed : 123\degree \item Distressed : 138\degree \item Frustrated : 141\degree \item Miserable : 188.7\degree \item Sad : 207.5\degree \item Gloomy : 209\degree \item Depressed : 211\degree \item Bored : 242\degree \item Droopy : 256.7\degree \item Tired : 267.7\degree \item Sleepy : 271.9\degree \item Calm : 316.2\degree \item Relaxed : 318\degree \item Satisfied : 319\degree \item At ease : 321\degree \item Content : 323\degree \item Serene : 328.6\degree \item Glad : 349\degree \item Pleased : 353.2\degree \end{itemize} \end{multicols} Emotion Space consists of two dimensions with \textit{arousal} and \textit{valence} being the dimensions. The X-axis and Y-axis represent the \textit{valence} and \textit{arousal} values respectively. The range for both the axes is $[-1,+1]$. The membership of the \textit{emotion states} are defined as sectors in an unit circle. As described by Russel, the specific degree values represented the points in the ``emotion space'' where the membership of the emotion word is maximum. Taking this as the basis, the specific degree value is taken as the midpoint of the arc of the sector for each \textit{emotion state}. For example, ``happy'' has a degree value of 7.8\degree. So, 7.8\degree \;is taken as the midpoint of an arc of the sector. The end points of the arc are calculated as the mid-point between the degree values of ``Pleased-Happy'' and ``Happy-Delighted'' respectively which are 0.5\degree\; and 16.35\degree. To calculate $\theta$ in the AV 2D space, the following function is used to convert the 2D coordinate point into degrees. \begin{equation} \theta = arctan2(\frac{y}{x}) \end{equation} $y$ and $x$ are the \textit{arousal} and \textit{valence} values respectively. Depending on the $\theta$ value obtained, the corresponding \textit{emotion state} is calculated. For example, any point in AV coordinate system that resulted in a $\theta$ value between 0.5\degree\; to 16.35\degree\; is assigned an \textit{emotion state} of ``happy''. \section{EXPERIMENTATION \& EVALUATION } Based on the emotion derived from the proposed appraisal system, robot behavior is generated in the form of gestures, facial expressions and dialogues. Separate lists of behaviours, comprising of these three channels, have been created and integrated with the existing XML-based dialog system of the robot. There is a direct mapping of emotional state of the robot to its relevant behaviour, enabling more autonomy to the interaction process. Human-centred evaluation of the system developed has been conducted to verify if the emotional states generated by the robot in interaction scenarios are realistic. It is also important to investigate how the robot switches its motives during an interaction with humans. In a typical scenario, the robot starts greeting an interaction partner once he/she is visible by the robot. In this case, the goal or the motive of the robot is to get a greeting back from the interlocutor. The moment the robot gets a satisfying event, the valence goes high and there is a possible switch in its motive. Given this scenario, the robot ends up being in ``interact'' motive as the motive is satisfied but there is a gradual change in emotional states as the values for arousal and valence change over time based on the stimuli perceived. The robot's emotional state at a specific time of interaction can be observed in the fig.~\ref{exp_fingui}. At this point, the degree value calculated out of arousal and valence is 127.52\degree\; which triggers an emotional state of annoyance. Additionally, an ``engagement'' scenario has been created in which the robot observes if an interlocutor is paying attention to the responses or queries generated by the robot. The events that slowly trigger a transition from this motive to the other relevant motive are the actions of a human that doesn't imply engagement, for example, looking away, looking down, showing little or no physical activity etc. This motive-driven interaction process ensures much more autonomy in the robot's behaviour as compared to a reactive process in which emotional state is pre-defined. \begin{figure}[ht] \centering \includegraphics[width = 0.48 \textwidth]{fingui_exp_final.png} \caption{User interface showing existing emotional state of ROBIN} \label{exp_fingui} \end{figure} A total of 16 participants, university students and employees, are invited to evaluate the system. We briefly explained them the circumplex model that evaluates emotions. They are asked to interact with the robot standing in a room and observe carefully how the robot changes its internal state on its own, being in various motives. Each interaction partner was exposed to the scenarios explained earlier. The entire interaction goes on between the participant and the robot, with the system interface's screencast switched on. A snapshot of the interface can be seen in fig.~\ref{exp_fingui}. The interlocutors could see the screencast of our interface right after the interactions. The screencasting helps participants and other observers to judge if the emotional state is realistic, given the scenarios. Each of the subject is provided with a questionnaire comprising of 5 questions. The questions include: (i) is the change in robot's emotional state meaningful or realistic, (ii) is the switch between motives appropriate? (iii) was the robot's speech, gesture and facial expressions synchronized properly? (iv) can the implemented appraisal system, in reality, comply with the circumplex model? Each of these questions had 3 options to choose: realistic, unrealistic and unclear. Additionally, there was an open-ended question, asking about the overall user experience with the interaction scenarios. \begin{figure}[ht] \centering \includegraphics[width = 0.48 \textwidth]{evaluation-results.png} \caption{Questionnaire-based evaluation: percentage of users vs. various interaction aspects} \label{eval_res} \end{figure} User experience with the system implemented has been collected from the questionnaire. Fig.~\ref{eval_res} depicts a summary of the data on user-experience collected after the experiments. It can be observed that $67\%$ of the participants considered the change in the robot's behaviour during interaction to be realistic. However, the synchronization of speech, gesture and facial expressions was found out to be unrealistic by a big chunk of users ($33\%$). System latency often leads to this problem. In addition, it was unclear for $31\%$ of the interaction partners if the system complies with the Circumplex Model of psychology. The participants often expected the robot to change its motive much faster. In contrast, the proposed approach applies gradual increase or decrease in the calculation of valence. This often resulted in unexpected delay in the switch between the motives of the robot, as far as human's perspective is concerned. Overall, most of the participants expressed their satisfaction over the technical system generating emotional states and displaying relevant behaviour under some conditions. \section{CONCLUSIONS \& FUTURE WORK } In order to make a robot emotionally intelligent, emotion appraisal mechanism is vital. Manually informing the robot about an emotional state is an obstacle as far as intelligent human-robot interaction is concerned. This work ensures that the robot itself creates a mental model of the interaction partner and derives an emotional state on-the-fly. Experimental results showed that the robot, in most cases, managed to generate a suitable emotional state on its own based on the appraisal mechanism proposed in this work. The hyper-parameters used during the calculation of an emotional state can be fine tuned with additional experiments. Existing gestures, postures and facial expressions of the robot can be enriched to ensure better display of emotions on the robot. \addtolength{\textheight}{-12cm} \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The subdiffusion is usually defined as a process where the mean square displacement of a particle $\left\langle \Delta x^2\right\rangle$ is a power function of time \begin{equation}\label{eq01} \left\langle\Delta x^2(t)\right\rangle=\frac{2D_\alpha}{\Gamma(1+\alpha)}t^\alpha , \end{equation} where the subdiffusion parameter $\alpha$ is less than one $(0<\alpha<1)$, $D_\alpha$ is the subdiffusion coefficient measured in the units $m^2/s^\alpha$ \cite{mk}. The case of $\alpha=1$ corresponds to the normal diffusion. The subdiffusion is related to an infinitely long average time that a random walker waits to make a finite jump. Then, its average displacement square, which is observed in a finite time interval, is dramatically suppressed. The subdiffusion occurs in systems with complex internal structure such as gels or porous media. To describe the subdiffusion the non-linear differential equation of natural order derived on the base of Tsallis formalism \cite{tb,dhw} or the normal diffusion euqation with diffusion coefficient which is assumed to be a power function of time $D(t)=D_\alpha t^{\alpha-1}$ \cite{lm} were used. The Green's functions obtained for these equations fulfill the relation (\ref{eq01}), but the physical meaning of the assumptions providing to the equations is not always clear. For example, it is difficult to explain the decreasing of diffusion coefficient in time in a homogeneous system. The subdiffusion linear equation with fractional time derivative, derived on the base of Continuous Time Random Walk formalism \cite{mk,compte96}, has not got such disadvantages. The subdiffusion in a membrane system was recently studied experimentally and theoretically. The motivation of the study is that the understanding of subdiffusion in a membrane system can help to model transport processes in systems so different as living cells and membrane microfilters (see for example \cite{ra}). The system with a membrane can also be used to measure subdiffusion parameters by comparing theoretical and empirical concentration profiles of substances of interest \cite{kdm}. To model a transport process in such a system the parabolic subdiffusion equation was applied. However, the parabolic normal diffusion and parabolic subdiffusion equations give the solutions which posses the `unphysical' property. Namely, for spatially unrestricted system the Green's function $G(x,t;x_0)$ (which is a probability density of finding a particle in the position $x$ after time $t$ under condition that at the initial moment the particle was located in $x_0$) have non-zero values for any $x$ and $t>0$. This fact can be interpreted as an infinite speed propagation of some particles. To avoid this `unphysical' property Cattaneo proposed the hyperbolic normal diffusion equation based on the assumption that the diffusion flux is delayed in time $\tau$ with respect to the concentration gradient \cite{cattaneo}. The Green's function of the equation is equal to zero for finite $x-x_0$, so the propagation velocity of the particles is finite. In phenomenological way the hyperbolic subdiffusion equation can be derived by involving the fractional time derivative into a flux or continuity equation. In the paper \cite{compte} there was noted that the hyperbolic subdiffusion equation can be derived in three different manners and the equations obtained are not equivalent to each other. The delaying effect of the flux with respect to the concentration gradient seems to be stronger in a membrane system than in a homogeneous one, since the flux can be involved into boundary conditions at the membrane. So, the delaying effect can appear not only in the equation but also in the boundary conditions. As far as we know, the hyperbolic subdiffusion equation has not been applied to describe the subdiffusion in a membrane system yet. In our paper we compare the solutions of the parabolic and hyperbolic subdiffusion equation in a homogeneous system and in a system with a thin membrane. The problem of choosing a transport model in a membrane system is more complicated since one of the boundary conditions at the membrane is not set unambiguously. Two boundary conditions which are not equivalent to each other were used. First of them demands the constant ratio of the concentrations at the membrane surfaces \cite{koszt1998,dkwm}, the second one assumes that the flux is proportional to the concentration difference between membrane surfaces \cite{koszt2001a,koszt2001b}. The qualitatively difference between them is manifested in the long time limit because the concentrations calculated for second boundary condition goes to a continuous function at the membrane, unlike than for the first one. In our paper we find the solutions of the hyperbolic equations for a system with a thin membrane for two boundary conditions mentioned above and compare them with the ones obtained from parabolic equation. We consider the system where the thin membrane separates a homogeneous solution from a pure solvent (we add that such a system was often used in experimental studies \cite{kdm,dkwm,dsdoww,dwor2006}). In our study we assume that the system is one-dimensional where the diffusion or subdiffusion parameter as well as the parameter of membrane permeability do not depend on time and concentration, the first one is also independent of space variable. The paper is organized as follows. In \sref{nde} we present the phenomenological derivation of the hyperbolic equation for normal diffusion. We show plots of the Green's functions obtained for the long times for the homogeneous system without a membrane. In \sref{subdifeq} we present the hyperbolic equation and the Green's functions for the subdiffusive system. The boundary conditions at a thin membrane are derived in \sref{bcond}. Solutions of the hyperbolic equation for the system where the homogeneous solution is separated by the thin membrane from pure solvent are presented in \sref{sol}. To illustrate our considerations the function obtained in sections \ref{nde}, \ref{subdifeq} and \ref{sol} are shown in several plots. Analyzing the plots we discuss the properties of the solutions in \sref{sol}. \section{Normal diffusion equation\label{nde}} \subsection{Parabolic equation} It is well known that the normal diffusion equation \begin{equation}\label{eq0a} \frac{\partial C(x,t)}{\partial t}=D\frac{\partial^2 C(x,t)}{\partial x^2} \end{equation} with normal diffusion coefficient $D$ (measured in the units $m^2/s$), can be derived phenomenologically by combining the first Fick's law \begin{equation}\label{eq0b} J(x,t)=-D\frac{\partial C(x,t)}{\partial x} , \end{equation} and the continuity equation \begin{equation}\label{eq3} \frac{\partial C(x,t)}{\partial t}=-\frac{\partial J(x,t)}{\partial x} . \end{equation} The Green's function is defined as a solution of the equation for the initial condition \begin{equation}\label{eq5a} G(x,t;x_0)=\delta(x-x_0) , \end{equation} and boundary conditions appropriate for considered system. When the system is not spatially restricted, there is \begin{equation}\label{eq0c} G(-\infty,t;x_0)=G(\infty,t;x_0)=0 , \end{equation} and the Green's function reads \begin{equation}\label{eq0d} G(x,t;x_0)=\frac{1}{2\sqrt{\pi Dt}}\exp\left(-\frac{(x-x_0)^2}{4Dt}\right) . \end{equation} The function (\ref{eq0d}) is different from zero for any $x$ and $t>0$. Utilizing the probability interpretation of the Green's function one concludes that some particles are transported with the infinite speed propagation. \subsection{Hyperbolic equation} To ensure the finite velocity of the particle propagation one assumes that the flux is delayed with respect to the concentration gradient \begin{equation}\label{eq1} J(x,t+\tau)=-D\frac{\partial C(x,t)}{\partial x} , \end{equation} where $\tau$ is the delay time. Assuming that the parameter $\tau$ is sufficiently small, the left hand side of equation \eref{eq1} can be approximated by the first two terms of Taylor series with respect to $\tau$ \begin{equation}\label{eq2} J(x,t)+\tau\frac{\partial J(x,t)}{\partial t}=-D\frac{\partial C(x,t)}{\partial x} . \end{equation} Applying the operator $\partial/\partial x$ to equation (\ref{eq2}) and taking into considerations the continuity equation (\ref{eq3}) one gets the hyperbolic diffusion equation \begin{equation}\label{eq4} \tau\frac{\partial^2 C(x,t)}{\partial t^2}+\frac{\partial C(x,t)}{\partial t}=D\frac{\partial^2 C(x,t)}{\partial x^2} . \end{equation} We add that equation (\ref{eq4}) can be derived from differential-difference equations with continuous time and discrete space variable \cite{pottier}. The process can be interpreted as a process with `minimal' memory which extends to one time step more than in `ordinary' diffusion process described by parabolic diffusion equation. \Eref{eq4} ensures the finite propagation velocity of the particles $v=\sqrt{D/\tau}$. In the limit $\tau\rightarrow 0$ we get parabolic diffusion equation with infinite $v$. To solve equation (\ref{eq4}) we must take two initial conditions. Let us assume that one of them is \begin{equation}\label{eq5} \left.\frac{\partial C(x,t)}{\partial t}\right|_{t=0}=0 , \end{equation} what means that at an initial moment the concentration does not aim at its change and is effectively changed after time $\tau$ since the particle flux is not generated before this time. The second boundary condition reads as $C(x,0)=f(x)$. \subsection{Green's function} We obtain the Green's function for equation (\ref{eq4}) solving it by means of the Laplace $L[f(t)]=\hat{f}(s)=\int_0^\infty f(t)\exp(-st)dt$ and Fourier $F[g(x)]=\hat{g}(k)=\int_{-\infty}^\infty g(x)\exp(ikx)dx$ transforms method for the initial conditions (\ref{eq5a}) (with $x_0=0$) and (\ref{eq5}). After simple calculations we get the Green's function in terms of Lapalce and Fourier transforms \begin{equation}\label{eq6} \hat{G}(k,s;0)=\frac{1+\tau s}{s+\tau s^2+Dk^2} . \end{equation} The inverse Fourier transform of equation (\ref{eq6}) reads \begin{equation}\label{eq7} \hat{G}(x,s;0)=\frac{\sqrt{1+\tau s}}{2\sqrt{Ds}}\exp\left(-\frac{|x|\sqrt{s}}{\sqrt{D}}\sqrt{1+\tau s}\right) . \end{equation} The hyperbolic equation was derived on the assumption that we omit the terms which iclude the parameter $\tau^k$, $k>1$, in the Taylor series of the flux (see equation (\ref{eq2})). We find similar approximation of equation (\ref{eq7}), namely \begin{equation}\label{eq8} \hat{G}(x,s;0)=\frac{1}{2\sqrt{Ds}}\left(1+\frac{\tau\sqrt{s}}{2}-\frac{|x|s}{2\sqrt{D}}\right) \exp\left(-\frac{|x|\sqrt{s}}{\sqrt{D}}\right) . \end{equation} The inverse Laplace transform of equation (\ref{eq8}) is \begin{equation}\label{eq9} \fl G(x,t;0)=\frac{1}{2\sqrt{\pi Dt}}\left(1+\frac{|x|\tau}{4t\sqrt{D}}\right)\exp{\left(-\frac{x^2}{4Dt}\right)} -\frac{|x|\tau}{4D}f_{1/2,1/2}\left(t;\frac{|x|}{\sqrt{D}}\right) , \end{equation} where the function $f$ is defined as \cite{koszt2004} \begin{displaymath} f_{\nu,\beta}(t;a)\equiv L^{-1}\left[s^\nu\exp\left(-as^\beta\right)\right] , \end{displaymath} for $a,\beta>0$. This function can be expressed by the Fox function $H$ and reads \begin{eqnarray}\label{eq9a} f_{\nu,\beta}(t;a)&=&\frac{1}{\beta a^{(1+\nu)/\beta}}H^{1 0}_{1 1}\left(\left.\frac{a^{1/\beta}}{t}\right| \begin{array}{cc} 1 & 1 \\ (1+\nu)/\beta & 1/\beta \end{array} \right) \nonumber\\ &=&\frac{1}{t^{1+\nu}}\sum_{k=0}^{\infty} \frac{1}{k!\Gamma(-k\beta-\nu)}\left(-\frac{a}{t^\beta}\right)^k . \end{eqnarray} \begin{figure}[h] \centering \includegraphics{Fig1.EPS} \caption{The plots of the normal diffusion Green's functions for different values of parameter $\tau$ given in the legend, here $t=500$, $D=10^{-3}$.\label{fig:Fig1}} \end{figure} The plots of function (\ref{eq9}) are presented in \fref{fig:Fig1} for different values of the parameter $\tau$. As we can see, only relatively large values of $\tau$ make the noticeable difference between the Green's functions obtained for the parabolic equation (represented by the solutions for $\tau=0$) and the hyperbolic one. \section{Subdiffusion equation\label{subdifeq}} \subsection{Parabolic equation} The hyperbolic subdiffusion equation can be derived by analogy with the derivation of the parabolic one. There are few ways to find the parabolic subdiffusion equation in phenomenological way. In the following we consider two of them which are natural generalization of the derivation of the parabolic normal diffusion equation. In the first one it is assumed that the subdiffusive flux reads as \begin{equation}\label{eq10f} J(x,t)=-D_\alpha\frac{\partial_{\rm RL}^{1-\alpha}}{\partial t^{1-\alpha}}\frac{\partial C(x,t)}{\partial x}, \end{equation} where $\partial_{\rm RL}^\alpha/\partial t^\alpha$ denotes the Riemann-Liouville fractional time derivative defined as \cite{os,pod} (here $\alpha>0$) \begin{equation}\label{eq10a} \frac{d_{\rm RL}^{-\alpha}f(t)}{dt^{-\alpha}}=\frac{1}{\Gamma(\alpha)}\int^{t}_{0}(t-u)^{\alpha-1}f(u)du , \end{equation} and \begin{equation}\label{eq10b} \frac{d_{\rm RL}^{\alpha}f(t)}{dt^{\alpha}}=\frac{d^n}{dt^n}\frac{d_{\rm RL}^{\alpha-n}f(t)}{dt^{\alpha-n}} , \end{equation} where $n$ is the lowest natural number that fulfills $n\geq\alpha$. The Laplace transform of the Riemann-Liouville derivative is \begin{equation}\label{eq10c} L\left[\frac{d^\alpha_{\rm RL} f(t)}{dt^\alpha}\right]=s^\alpha\hat{f}(s)-\left. \sum_{k=0}^{n-1}s^k\frac{d^{\alpha-k-1}_{\rm RL}f(t)}{dt^{\alpha-k-1}}\right|_{t=0} , \end{equation} where $n-1\leq\alpha<n$. Combining equation (\ref{eq10f}) with equation (\ref{eq3}) one gets the parabolic subdiffusion equation \cite{mk,compte96} \begin{equation}\label{eq10g} \frac{\partial C(x,t)}{\partial t}=D_\alpha\frac{\partial_{\rm RL}^{1-\alpha}}{\partial t^{1-\alpha}}\frac{\partial^2 C(x,t)}{\partial x^2} . \end{equation} In general, to solve the differential equation with the fractional Riemann-Liouville derivative by means of the Laplace transform method, one should fix the initial condition for time derivatives of the fractional negative order (see equation (\ref{eq10c})), what is beyond of physical interpretation. However, this remark does not concern the subdiffusion equation (\ref{eq10g}) since for a limited function there is (see Appendix) \begin{equation}\label{eq10s} \left.\frac{\partial_{\rm RL}^{\alpha-1}C(x,t)}{\partial t^{\alpha-1}}\right|_{t=0}=0 , \end{equation} when $0<\alpha<1$. The Laplace and Fourier transforms of (\ref{eq10g}) is \begin{equation}\label{eq10h} s\hat{C}(k,s)-F[C(x,0)]=-D_\alpha s^{1-\alpha}k^2\hat{C}(k,s) . \end{equation} For the Green's function with the initial condition (\ref{eq5a}) we get $F[C(x,0)]=1$, what leads to the form of equation (\ref{eq10h}) obtained from the Continuous Time Random Walk formalism \cite{mk}. Another scenario provides to the subdiffusion equation consists in changing the time derivative of natural order to the fractional Caputo one in the continuity equation (\ref{eq3}) according to the formula $\partial/\partial t\rightarrow\theta\partial^\alpha_{\rm C}/\partial t^\alpha$ , where $\theta$ is a parameter which is involved to achieve an appropriate physical units. The Caputo fractional derivative is defined by the relation \cite{pod} \begin{equation}\label{eq10d} \frac{d_{\rm C}^{\alpha}f(t)}{dt^{\alpha}}=\frac{1}{\Gamma(n-\alpha)}\int^{t}_{0}(t-u)^{\alpha-1}\frac{d^n}{dt^n}f(u)du , \end{equation} and its Laplace transform reads as \begin{equation}\label{eq10e} L\left[\frac{d_{\rm C}^\alpha f(t)}{dt^\alpha}\right]=s^\alpha\hat{f}(s)-\left.\sum_{k=0}^{n-1}s^{\alpha-k-1}\frac{d^k f(t)}{dt^k}\right|_{t=0} , \end{equation} where $n-1\leq\alpha<n$. Thus, we get \begin{equation}\label{eq10k} \theta\frac{\partial^\alpha_{\rm C} C(x,t)}{\partial t^\alpha}=-\frac{\partial J(x,t)}{\partial x} . \end{equation} In the following we take \begin{equation}\label{eq10l} \theta=\frac{D}{D_\alpha} . \end{equation} Combining (\ref{eq0b}), (\ref{eq10k}) and (\ref{eq10l}) one gets the subdiffusion equation \begin{equation}\label{eq10j} \frac{\partial_{\rm C}^\alpha C(x,t)}{\partial t^\alpha}=D_\alpha\frac{\partial^2 C(x,t)}{\partial x^2} . \end{equation} \Eref{eq10j} is equivalent to equation (\ref{eq10g}) since their Laplace and Fourier transforms are expressed by (\ref{eq10h}). \subsection{Hyperbolic equation} The hyperbolic subdiffusion equation can be obtained by introducing the time derivative of fractional order to equations (\ref{eq3}) or (\ref{eq2}). As was noticed in the paper \cite{compte}, where the Riemman-Liouville fractional derivative only was taken into considerations, it can be done in three different manners. Unlike in \cite{compte}, we involve Caputo fractional derivative into the continuity equation (\ref{eq3}). We assume that the flux is given as follow \begin{equation}\label{eq10} J(x,t+\tau)=-D_\alpha\frac{\partial^{1-\alpha}_{\rm RL}}{\partial t^{1-\alpha}}\frac{\partial C(x,t)}{\partial x} . \end{equation} Similarly to the previous case, let us approximate the left hand of equation (\ref{eq10}) for $\tau\ll t$ by the first two terms of Taylor series with respect to $\tau$ \begin{equation}\label{eq11} J(x,t)+\tau\frac{\partial J(x,t)}{\partial t}=-D_\alpha\frac{\partial^{1-\alpha}_{\rm RL}}{\partial t^{1-\alpha}}\frac{\partial C(x,t)}{\partial x} . \end{equation} From equation (\ref{eq11}) and equation (\ref{eq3}) we get the hyperbolic subdiffusion equation \begin{equation}\label{eq12} \tau\frac{\partial^2 C(x,t)}{\partial t^2}+\frac{\partial C(x,t)}{\partial t}=D_\alpha\frac{\partial^{1-\alpha}_{\rm RL}}{\partial t^{1-\alpha}}\frac{\partial^2 C(x,t)}{\partial x^2} . \end{equation} Let us note that from equations (\ref{eq2}), (\ref{eq10k}) and (\ref{eq10l}) we get the hyperbolic subdiffusion equation with Caputo fractional derivatives \begin{equation}\label{eq10m} \tau\frac{\partial^{1+\alpha}_{\rm C} C(x,t)}{\partial t^{1+\alpha}}+\frac{\partial^\alpha_{\rm C} C(x,t)}{\partial t^\alpha}=D_\alpha\frac{\partial^2 C(x,t)}{\partial x^2} . \end{equation} \Eref{eq10m} is fully equivalent to equation (\ref{eq12}) since the Laplace and Fourier transforms of the equations are the same. \subsection{Green's function} As previous, we take the initial conditions (\ref{eq5a}) (for $x_0=0$) and (\ref{eq5}) to solve equation (\ref{eq12}). After calculations we get \begin{equation}\label{eq13} \hat{G}(k,s;0)=\frac{1+\tau s}{s+\tau s^2+D_\alpha s^{1-\alpha}k^2} , \end{equation} the inverse Fourier transform of equation (\ref{eq13}) is \begin{equation}\label{eq14} \hat{G}(x,s;0)=\frac{\sqrt{1+\tau s}}{2\sqrt{D_\alpha} s^{1-\alpha/2}}\exp\left(-\frac{s^{\alpha/2}|x|\sqrt{s}}{\sqrt{D_\alpha}}\sqrt{1+\tau s}\right) . \end{equation} The hyperbolic equation was derived under the assumption that we take into account linear terms in the Taylor series of the flux (see equation (\ref{eq2})). Let us perform similar approximation for equation (\ref{eq7}), what gives \begin{equation}\label{eq15} \hat{G}(x,s;0)=\frac{1}{2\sqrt{D_\alpha}s^{1-\alpha/2}}\left(1+\frac{\tau s^{\alpha/2}}{2} -\frac{|x|s^{\alpha}}{2\sqrt{D_\alpha}}\right)\exp\left(-\frac{|x|s^{\alpha/2}}{\sqrt{D_\alpha}}\right) . \end{equation} The inverse Laplace transform of equation (\ref{eq15}) is \begin{eqnarray}\label{eq16} \fl G(x,t;0)=\frac{1}{2\sqrt{ D_\alpha}}\left[f_{\alpha/2-1,\alpha/2}\left(t;\frac{|x|}{\sqrt{D_\alpha}}\right) +\frac{\tau}{2}f_{\alpha/2,\alpha/2}\left(t;\frac{|x|}{\sqrt{D_\alpha}}\right)\right. \nonumber\\ \left.-\frac{|x|\tau}{2\sqrt{D_\alpha}}f_{\alpha,\alpha/2}\left(t;\frac{|x|}{\sqrt{D_\alpha}}\right)\right]. \end{eqnarray} \begin{figure}[h] \centering \includegraphics{Fig2.EPS} \caption{Hyperbolic subdiffusion. The plots of the Green's functions for different values of $\tau$, here $t=500$, $D_\alpha=10^{-3}$, $\alpha=0.5$.\label{fig:Fig2}} \end{figure} \begin{figure}[h] \centering \includegraphics{Fig3.EPS} \caption{The description as in \fref{fig:Fig2} but for $\alpha=0.8$.\label{fig:Fig3}} \end{figure} \begin{figure}[h] \centering \includegraphics{Fig4.EPS} \caption{The Green's functions for $t=500, 1000, 1500, 2000$, here $D_\alpha=10^{-3}$, $\alpha=0.8$, the dashed lines correspond to $\tau=100$, continuous ones represent the Green's functions with $\tau=0$.\label{fig:Fig4}} \end{figure} The plots of the Green's functions (\ref{eq16}) are presented in figures \ref{fig:Fig2}-\ref{fig:Fig4}. Contrary to the normal diffusion case, the effect of delaying is hardly observed in the considered cases. \section{Boundary conditions at thin membrane\label{bcond}} We denote the concentration and flux in the region $x<x_m$ as $C_1$ and $J_1$, and in the region $x>x_m$ as $C_2$ and $J_2$, respectively, where $x_m$ is the membrane position. Since the equation is of the second order with respect to the space variable, we need two boundary conditions in each of the region. Two boundary conditions demand finiteness of the solutions at $x\rightarrow -\infty$ and $x\rightarrow \infty$, two other ones are fixed at the membrane. The first of them is rather obvious and it assumes the continuity of the flux at the membrane \begin{equation}\label{eq18} J_1(x_m^-,t)=J_2(x_m^+,t)\equiv J(x_m,t) . \end{equation} However, the problem of fixing the second boundary condition at the thin membrane is not unambiguously solved. The missing boundary condition at the membrane has been chosen in two ways. The first one demands the constant ratio of the concentrations at two opposite sides of the membrane \cite{kdm,koszt1998,dkwm} \begin{equation}\label{5} C_2(x^+_m,t)/C_1(x^-_m,t)=\gamma=const. \end{equation} In the second one the flux flowing through the membrane is proportional to the difference of the concentrations at the opposite sides of the membrane \cite{kdm,koszt2001a,koszt2001b} \begin{equation}\label{5a} J(x_m,t)=\lambda[C_2(x^-_m,t)-C_1(x^+_m,t)] , \end{equation} where $\lambda$ is the membrane permeability coefficient. Below we consider the possibility of application of the boundary conditions (\ref{5}) or (\ref{5a}) for the case of the system described by the hyperbolic normal diffusion or hyperbolic subdiffusion equation. \subsection{Constant concentration ratio at the membrane} In the Smoluchowski's papers (see for example \cite{smol}) it was derived the boundary condition at the fully reflecting wall. Let the wall is placed at $x_m$ and the system occupies the interval $(-\infty,x_m)$. Smoluchowski's original approach utilized the assumptions that the amount of the substance in the system does not change in time \begin{equation}\label{eq5c} \frac{\partial}{\partial t}\int^{x_m}_{-\infty}C(x,t)dx=0 , \end{equation} and the flux vanishes at $-\infty$. Integrating the continuity equation (\ref{eq3}) over the interval $(-\infty, x_m)$ and using the above assumptions one gets \begin{equation}\label{eq5d} J(x_m,t)=0 . \end{equation} Since equation (\ref{eq3}) works in hyperbolic subdiffusion equation case, we choose the boundary condition (\ref{eq5d}) at the fully reflecting wall for the system described by the hyperbolic equation. Chandrasekhar used the method of images to derive the Green's function for this system \cite{chan}. The Green's function can be interpreted as a concentration of large number of particles $N$ (divided by $N$); the particles are located at point $x_0$ at the initial moment $t=0$. So, the Green's function can be treated as an instantaneous particle source (IPS) normalized to $1$. Within the method one replaces the wall by additional IPS in such a manner that the concentration behaves exactly as in the system with the wall. Vanishing of the flux at the reflecting wall is achieved when one replaces the wall by the IPS located symmetrically to the initial point $x_0$ with respect to the wall. Then the instantaneous particle sources create fluxes of particles flowing in opposite directions, which reduce each other at the point $x_m$. Thus, one finds (for $x<x_m$ and $x_0<x_m$) \begin{equation}\label{6} G(x,t;x_0)=G_0(x,t;x_0)+G_0(x,t;2x_m-x_0) , \end{equation} where $G_0$ denotes the Green's function for a homogeneous system (without the wall). Let us note that the Green's function equation (\ref{6}) leads to equation (\ref{eq5d}) in any system where the flux fulfills the following relation \begin{equation}\label{jpgs} J \sim \frac{\partial C}{\partial x} . \end{equation} In the papers \cite{koszt1998} the method of images was generalized to the system with a partially permeable wall. Since the Green's function (\ref{6}) works in the system with fully reflecting wall, where a transport is described by hyperbolic diffusion or subdiffusion equation, similar generalization can be performed in such a system with thin membrane. In the system with thin membrane the particles can pass trough the membrane in both directions many times. Let us assume that the membrane is symmetric and the probabilities of passing through it do not depend on the direction of particle's motion. To take into account selective properties of the membrane, we `weaken' the additional IPS located at $2x_m-x_0$ by factor $\sigma$, which gives \begin{equation}\label{7} G(x,t;x_0)=G_0(x,t;x_0)+\sigma G_0(x,t;2x_m-x_0) . \end{equation} Assuming that the flux is continuous at the membrane, we get for $x>x_m$ and $x_0<x_m$ \begin{equation}\label{8} G(x,t;x_0)=(1-\sigma)G_0(x,t;x_0). \end{equation} The functions (\ref{7}) and (\ref{8}) fulfill the boundary condition (\ref{5}) with $\lambda=(1-\sigma)/(1+\sigma)$. To interpret the parameter $\sigma$ let us note that the probability of finding a particle (starting from $x_0$, where $x_0<x_m$) in the region $x>x_m$ is equal to $P_\sigma=(1-\sigma)\int^{\infty}_{x_m}G_0(x,t;x_0)dx$ for the membrane system, whereas the probability of finding the particle in this region for system without the membrane is equal to $P_0=\int^{\infty}_{x_m}G_0(x,t;x_0)dx$. Comparing the above equations we obtain $\sigma=1-P_\sigma/P_0$, so the parameter $\sigma$ can be interpreted as a probability of finding the particle in the region $x<x_m$ under condition that in the system with removed membrane the particle will be in the region $x>x_m$. In other words, $\sigma$ is a conditional probability of stopping the particle by the membrane in unit time under condition, that in the similar system with no membrane this particle pass the position $x_m$. Thus, $\sigma$ is the parameter controlling the reflection of particles by the membrane, the parameter $1-\sigma$ is the parameter of membrane permeability. The boundary condition (\ref{5}) has simple physical interpretation: {\it if $N$ diffusing particles are going to pass through the wall in unit time, then $\sigma N$ of them will be stopped by the wall whereas $(1-\sigma)N$ pass through, where $\sigma =(1-\lambda)/(1+\lambda)$}. \subsection{Radiation boundary condition} For the parabolic normal diffusion equation the radiation boundary condition (\ref{5a}) was derived from the model with discrete space variable \cite{koszt2001a} as well as for the considerations performed in a phase space where the diffusion is described by the Klein-Kramers equation \cite{koszt2001b}. \Eref{5a} can be interpreted as the natural continuation of the Fick equation applied to the membrane. According to equation (\ref{eq1}), we can generalize equation (\ref{5a}) as follows \begin{equation}\label{eq27a} J(x_m,t+\tau)=\lambda[C_1(x_m^-,t)-C_2(x_m^+,t)] . \end{equation} In the following we will see that the boundary conditions (\ref{5}) and (\ref{eq27a}) are not equivalent to each other. \section{Solutions of hyperbolic subdiffusion equation for a membrane system\label{sol}} Let us assume that the thin membrane is located at $x_m=0$. We choose the initial condition as \begin{equation}\label{eq21} C(x,0)=\left\{ \begin{array}{cc} C_{0}, & x<0 \\ 0, & x>0 . \end{array} \right. \end{equation} The boundary conditions demand finiteness of the solutions at infinity \begin{equation}\label{eq17} C_1(-\infty,t)=C_0,\; C_2(\infty,t)=0 . \end{equation} The first boundary condition at the membrane (\ref{eq18}) ensures that the flux is continuous at the one, the second boundary condition at the membrane we take in general form \begin{equation}\label{eq19} b_1C_1(0^-,t)+b_2C_2(0^+,t)+b_3J(0,t+\tau)=0 . \end{equation} We note that the Laplace transform of the flux reads \begin{equation}\label{eq20} \hat{J}(x,s)=-D_\alpha\frac{s^{1-\alpha}}{1+\tau s}\frac{d\hat{C}(x,s)}{dx} . \end{equation} The Laplace transforms of solutions for boundary conditions (\ref{eq18}), (\ref{eq17}), (\ref{eq19}) and initial condition (\ref{eq21}) are as follows \begin{equation}\label{eq22} \fl\hat{C}_1(x,s)=\frac{C_0}{s}\left[1-\frac{b_1}{b_1-b_2-b_3\sqrt{D_\alpha}s^{1-\alpha/2}/\sqrt{1+\tau s}} \exp\left(x\sqrt{\frac{(1+\tau s)s^\alpha}{D_\alpha}}\right)\right] , \end{equation} \begin{equation}\label{eq23} \fl \hat{C}_2(x,s)=\frac{C_0}{s}\frac{b_1}{b_1-b_2-b_3\sqrt{D_\alpha}s^{1-\alpha/2}/\sqrt{1+\tau s}} \exp\left(-x\sqrt{\frac{(1+\tau s)s^\alpha}{D_\alpha}}\right) . \end{equation} Below we find the solutions for two boundary conditions described in \sref{bcond}. \subsection{Constant ratio of the solutions at the membrane} Putting $b_1>0$, $b_2<0$ and $b_3=0$ in (\ref{eq19}) we get equation (\ref{5}) with $\gamma=-b_2/b_1$. The solutions are as follows \begin{equation}\label{eq25} C_1(x,t)=C_0\left[1-\sigma f_{-1,\alpha/2}\left(t;\frac{-x}{\sqrt{D_\alpha}}\right) -\sigma\frac{x\tau}{2\sqrt{D_\alpha}}f_{\alpha/2,\alpha/2}\left(t;\frac{-x}{\sqrt{D_\alpha}}\right)\right], \end{equation} \begin{equation}\label{eq26} C_2(x,t)=C_0\sigma\left[f_{-1,\alpha/2}\left(t;\frac{x}{\sqrt{D_\alpha}}\right) -\frac{x\tau}{2\sqrt{D_\alpha}}f_{\alpha/2,\alpha/2}\left(t;\frac{x}{\sqrt{D_\alpha}}\right)\right], \end{equation} where $\sigma=1/(1+\gamma)$. The plots of functions (\ref{eq25}) and (\ref{eq26}) are presented in \fref{fig:Fig5}. \begin{figure}[h] \centering \includegraphics{Fig5.EPS} \caption{The solutions calculated for the boundary condition (\ref{5}) with $\gamma=1.5$, $\alpha=0.9$, $D_\alpha=5\times 10^{-4}$ for $t=500, 1000, 1500, 2000$. Vertical lines represent the membrane, dashed lines correspond to $\tau=100$, continuous ones correspond to $\tau=0$.\label{fig:Fig5}} \end{figure} As we can see, the differences between the solutions obtained for the parabolic subdiffusion equation are very close to the solution of hyperbolic equation (even for the largest time $\tau=100$). \subsection{Radiation boundary condition} Here $b_1=-b_2>0$, $b_3<0$. The boundary condition takes the form of equation (\ref{eq27a}) where $\lambda=-b_1/b_3$. To obtain the inverse transforms of equations (\ref{eq22}) and (\ref{eq23}) we assume that $\tau s\ll 1$ (what corresponds to $t\gg 1/\tau$) and we extend the transforms into the power series with respect to the parameter $s$. Achieving only the linear terms with respect to $\tau$ we get \begin{eqnarray}\label{eq28} \fl C_1(x,t)=C_0-\frac{C_0}{2}\sum_{k=0}^{\infty}\left(-\frac{\sqrt{D_\alpha}}{2\lambda}\right)^k \left[f_{k(1-\alpha/2)-1,\alpha/2}\left(t;\frac{-x}{\sqrt{D_\alpha}}\right)\right.\nonumber\\ \left.-\frac{k\tau}{2}f_{k(1-\alpha/2),\alpha/2} \left(t;\frac{-x}{\sqrt{D_\alpha}}\right) +\frac{x\tau}{2\sqrt{D_\alpha}}f_{k(1-\alpha/2)+\alpha/2,\alpha/2}\left(t;\frac{-x}{\sqrt{D_\alpha}}\right)\right], \end{eqnarray} \begin{eqnarray}\label{eq29} \fl C_2(x,t)=\frac{C_0}{2}\sum_{k=0}^{\infty}\left(-\frac{\sqrt{D_\alpha}}{2\lambda}\right)^k \left[f_{k(1-\alpha/2)-1,\alpha/2}\left(t;\frac{x}{\sqrt{D_\alpha}}\right)+\frac{k\tau}{2}f_{k(1-\alpha/2),\alpha/2} \left(t;\frac{x}{\sqrt{D_\alpha}}\right)\right.\nonumber\\ -\left.\frac{x\tau}{2\sqrt{D_\alpha}}f_{k(1-\alpha/2)+\alpha/2,\alpha/2}\left(t;\frac{x}{\sqrt{D_\alpha}}\right)\right]. \end{eqnarray} \begin{figure}[h] \centering \includegraphics{Fig6.EPS} \caption{The solutions calculated for boundary conditions (\ref{5}) with $\lambda=10^{-3}$, the values of the parameters are the same as in \fref{fig:Fig5}.\label{fig:Fig6}} \end{figure} \begin{figure}[h] \centering \includegraphics{Fig7.EPS} \caption{The solutions calculated for $\lambda=0.1$, the additional description is as in \fref{fig:Fig6}.\label{fig:Fig7}} \end{figure} The plots of functions (\ref{eq28}) and (\ref{eq29}) are presented in figures \ref{fig:Fig6} and \ref{fig:Fig7}. \section{Final remarks} We present here the solutions of the parabolic and hyperbolic subdiffusion equations for the homogeneous system and for the membrane one. The solutions were found under assumption that we take into account the terms linear with respect to the parameter $\tau$. We applied two different boundary conditions at the membrane. Our considerations are illustrated by few plots presenting the solutions for both of the boundary conditions. The plots were prepared for the parameters which values are of the order of the ones already found for real systems on the basis of experimental results \cite{kdm}. The detailed remarks extracted from the plots are not fully conclusive, but it suggested few regularities, which - in our opinion - are general. They are as follows. \begin{enumerate} \item For the boundary condition (\ref{5}) \begin{itemize} \item The solutions at the membrane do not change in time and read as $C_1(0^-,t)=(\gamma C_0)/(1+\gamma)$, $C_2(0^+,t)=C_0/(1+\gamma)$. This property seems to be `unphysical', but let us note that the solution obtained for the system without membrane (for which $b_1=-b_2$ and $b_3=0$) with initial condition (\ref{eq21}) are constant for $x=0$ and reads as $C(0,t)=C_0/2$. \item The delay effect does not occur at the membrane, consequently it is weak at the membrane neighborhood. \end{itemize} \item For radiation boundary condition (\ref{eq27a}) \begin{itemize} \item The concentration difference between the surfaces decreases in time (see \fref{fig:Fig6}). From equations (\ref{eq22}) and (\ref{eq23}) it is easy to see that $C_1(0,t)\rightarrow C_2(0,t)$ when $t\rightarrow \infty$, since the long time limit corresponds to the limit of small $s$. \item For $\lambda\sim 10^{-1}$ the membrane loses its selectivity. \end{itemize} \item The main qualitative difference between the above boundary conditions is noticeable in the long time limit as boundary condition (\ref{5a}) leads to the solutions of hyperbolic subdiffusion equation which are going to the continuous function at the barrier, unlike than the solutions obtained for (\ref{5}). \item In all cases the delayed effect is connected with the subdiffusion parameter $\alpha$ (this property is clearly seen in \fref{fig:Fig1} - \fref{fig:Fig3} for the Green's functions). When $\alpha$ increases, the delaying effect is stronger. \item For large times the delaying effect is negligibly small. Let us note that for large time the term $\tau/t$ vanishes (it corresponds to the limit $\tau s\rightarrow 0$ in equations (\ref{eq22}) and (\ref{eq23})). \end{enumerate} Here the question arises: why the subdiffusion hyperbolic equation has not been applied to describe the experimental results in the subdiffusive membrane system, despite of proper `physical quality' of the equation? Analyzing the plots \ref{fig:Fig2} - \ref{fig:Fig7} we conclude that in considered cases there is no reason to apply the hyperbolic subdiffusion equation instead of the parabolic one. The difference between the solutions is so small that both of them would certainly be laid within the error bars of the experimental concentration profiles. The order of values of the subdiffusion coefficient $D_\alpha$ taken into calculations agrees with the ones obtained experimentally for sugars in agarose gels \cite{kdm} if as unit of time $1\;sec$ is chosen and $1\;mm$ is the unit of space variable. In these units the value $\tau=100$ is certainly too large, nevertheless these differences are rather hard to observe, for smaller values of $\tau$ these differences are smaller. There is a problem with choosing the boundary condition at the membrane. Seemingly there is no problem with choosing the condition since the real system is limited by external walls and the concentration goes to equilibrium functions which is continuous at the membrane. This property posses the radiation boundary condition only. However, experimental study performed on two membrane system show that the concentration profiles have the scaling property, which is limited to the theoretical solutions obtained for the boundary conditions (\ref{5}) only \cite{dwor2006,koszt2008}. Namely, changing the variables according to the relation $(x,t)\rightarrow (p^\alpha x,pt)$, $p>0$ the experimental profiles does not display noticeable changes. So, if the particles flowing through the membrane do not `feel' the presence of external walls of the system, the boundary conditions (\ref{5}) can be used. Although our conclusion is rather odd in respect to the membrane system, there are systems where the solutions of the hyperbolic and parabolic equations considerably differ from each other. Such a situation occurs for the boundary conditions where the concentration of the particles oscillates with high frequency, as for example in the problem of impedance spectroscopy \cite{lewkoszt}. \ack This paper was partially supported by Polish Ministry of Education and Science under Grant No. 1 P03B 136 30.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \noindent The exchanges between countries become closer with the progress of globalization. As countries began to communicate more politically, economically and academically, language understanding became a new challenge. Acronyms often appear in the scientific documents of different countries. Compared to English, acronyms are more challenging to understand in other languages. Acronyms will become a barrier for researchers to read scientific literature and affect exchanges and cooperation between countries. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{4.png} \caption{Differences and challenges between English and other (such as French) phrases in acronym disambiguation. Red means wrong, green means right. Acronyms in English are often first letter acronyms, but not in other languages.} \label{fig} \end{figure} Acronym disambiguation refers to when acronyms are used in a large number of scientific documents. For these acronyms, we need to find the correct one in the current context from the dictionary. For example, in ``The traditional Chinese sentences are transferred into SC'', ``SC'' means ``simplified Chinese'' rather than ``System Combination''. It is difficult for some people who are not familiar with a language to understand related acronyms. So we need to distinguish abbreviations, which is a challenging task. In the datasets, 30,237 data in the four fields of English (science), English (legal), French and Spanish were given. Any data contains a sentence, and there will appear a word that is the first letter abbreviated. The task hopes to find the most suitable form of an extension for the first letter abbreviation. In the past, researchers have tried to solve AD problems by means of character extraction \cite{li-etal-2018-guess}, word embedding \cite{charbonnier-wartena-2018-using}, and deep learning \cite{jin-etal-2019-deep}. Over the last few years, the BERT \cite{devlin-etal-2019-bert} model has emerged, which adopts a method of pre-training in a large language library. Many studies have shown that these pre-training models (PTMs) have gained a wealth of generic characteristics. Recently, They \cite{Pan2021BERTbasedAD,LeveragingDomain} have achieved remarkable effects using the BERT model in AD tasks. However, these methods do not work well in other languages. So we used the following methods to further enhance the model's out-of-data test performance to help better researchers understand and communicate multilingual multi-domain scientific documents. \begin{itemize} \item A simple ADBCMM approach was proposed to use other language data as counterfacts datasets in AD tasks, solving the model bias. \item We tried to use the Multiple-Choice Model framework to make the model more focused on word-to-word comparisons to help the model better understand the first letter abbreviation. \item Our results achieved SOTA effects in both the French and Spanish of the AD dataset, showing outstanding performance, surpassing all other baselines methods. \end{itemize} \section{Related Work} \noindent In this section, we will introduce AD datasets and how to solve AD tasks in English scenarios in the past, while introducing the difficulties of AD tasks in other languages. \subsection{AD dataset} \begin{table}[h] \centering \renewcommand\arraystretch{2.2} \begin{tabular}{c|cccc} \noalign{\hrule height 1pt} \textbf{Data} & \textbf{En(Lagel)} & \textbf{En(Sci)} & \textbf{French} & \textbf{Spanish}\\ \noalign{\hrule height 0.5pt} \textbf{Train} & 2949 & 7532 & 7851 & 6267 \\ \textbf{Dev} & 385 & 894 & 909 & 818 \\ \textbf{Test} & 383 & 574 & 813 & 862 \\ \noalign{\hrule height 0.75pt} \textbf{Total} & 3717 & 9000 & 9573 & 7947 \\ \noalign{\hrule height 1pt} \end{tabular} \caption{Specific number of AD datasets, including AD tasks for 4 different fields. The total number of data sets is not more than 10,000.} \label{table1} \vspace{-0.2cm} \end{table} \noindent In this AD task, the abbreviation appears in scientific documents in English and other languages. AD datasets provide datasets in French and Spanish in addition to English. Each data gives a dictionary, and each language split has its test set with acronyms not appearing in their training set. \subsection{Previous work} In the AD of SDU@AAAI-21, the teams presented their methodologies and submitted a total of 10 papers. Those papers included some excellent projects. Pan \cite{Pan2021BERTbasedAD} trained a Binary Classification Model incorporating BERT and several training strategies. His program includes dynamic adverse sample selection, task adaptive pretraining, adversarial training \cite{Goodfellow2015ExplainingAH} and pseudo labelling in his paper. This model achieved its first achievement. Zhong \cite{LeveragingDomain} took into account the field unknowledge and specific knowledge often encountered in AD tasks. He proposed a Hierarchical Dual-path BERT method to capture general and professional field language, while using RoBERTa and SciBERT to perceive and predict text. He eventually reached a 93.73\% F1 value in the SciAD datasets. \subsection{Difficulty in multilingual} In the AD of SDU@AAAI-22, the organizers released AD datasets covering French and Spanish, which have the following difficulties compared to the English environment: \begin{itemize} \item In Figure 1, we can find that the extension of other languages does not necessarily contain an acronym of the first letter, and it isn't easy to match directly through the rules. \item Other languages lack PLMs trained in scientific language. \item In Table 1, the number of datasets in French and Spanish is small. Training models are prone to bias and over-adaptation. \end{itemize} \section{Methods} In this section, we will describe the framework for the overall model, as well as a range of methods for AD datasets for other languages, including ADBCMM, In-Trust-loss \cite{huang-etal-2021-named}, Child-Tuning \cite{xu-etal-2021-raise} and R-Drop\cite{liang2021rdrop}. \subsection{The model framework} We use the Multiple-Choice model framework, which is different from the Binary Classification Model used by Pan \cite{Pan2021BERTbasedAD}. The Multiple-Choice model \cite{wolf-etal-2020-transformers} refers to adding a classifier to the end output of the BERT model. Each sentence has only a single output value to represent the probability of this option. In Figure 2, when we use the Multiple-Choice model, each batch will enter all the possible options in the same set during the training. If the word in the dictionary is insufficient, we use ``Padding'' for filling, eventually at the output end for softmax classification and calculation of losses. Thus, we can more accurately derive the probability that each option should be by comparing methods. Compared with Binary Classification Model, Multiple-Choice model capturing more semantic characteristics and make the model more comprehensively trained and predicted on differences, rather than the error interference model caused by the dynamic construction of negative samples. \begin{figure*}[htb] \centering \includegraphics[width=1\linewidth]{9.png} \caption{Multiple-Choice Model} \label{fig} \end{figure*} \subsection{ADBCMM} PLM has achieved excellent results in many NLP tasks, but the potential bias in training data can harm out-of-data testing performance. Counterfactually augmented datasets is a recent solution \cite{kaushik2021explaining}. But if man-built counterfactual samples, it would be expensive and time-consuming. We found many word-like but meaning-different samples by analyzing erroneous samples on dev datasets. We think these samples errors are mainly due to model bias, over-training leads to over-adaptation seriously, and data set performance is poor. That's why we used different language markup information to use other language samples as new counterfactual samples after being modified. In Figure 3, the training process is like a pyramid. We first train using data in multiple languages, and then we do secondary training in a single language based on pre-training. Why continue training with single-language materials after multilingual mixed training instead of testing directly after multilingual Counterfacts datasets training? Because in our experiment, with the addition of more language samples, the models may become overwhelming. Even though French, English and Spanish belong to the Indo-European language family, they all have unique language properties, syntax and vocabulary. This would be a noise interference for different languages. Models may ignore semantic characteristics that are unique to a particular language and prefer to learn more common ones. In addition, to address the noise problem of multilingual mixing caused by ADBCMM. We replaced the original CE loss with In-Trust-Loss. This incomplete trust loss function avoids model over-adaptation noise (other languages data) samples while trusting label information and model output. Combined with our ADBCMM method, it has achieved practical results in multilingual hybrid training scenarios. Our ADBCMM approach can also be further extended to translation, Ner, conversation generation and other tasks. The ADBCMM approach helps address biases caused by insufficient data in small-language environments. \begin{figure}[htb] \centering \includegraphics[width=1\linewidth]{10.png} \caption{Training Process} \label{fig} \end{figure} \subsection{Child-Tuning} Because AD data sets are smaller and can easily be learned, resulting in the model's poor centralized generalization capacity during testing. We used the Child-Tuning method proposed to address this discrepancy. The Child-Tuning strategy only updates the corresponding Child Network when the parameters are updated backwards, without adjusting all the parameters. This approach like the reverse Dropout \cite{JMLR:v15:srivastava14a}, it can bring performance improvements to our models. \subsection{R-Drop} In the R-Drop work, the authors used the model to open Dropout during the training and then made two inputs, so the results of the two inputs would not be the same because the model opened Dropout. In addition to calculating the loss of label information, the Kullback-Leibler divergence was also calculated between the same two inputs but different outputs. This R-Drop method can play the role of normalizing and increasing robustness. In our experiment, R-Drop improved greater performance. \begin{table*}[t] \centering \renewcommand\arraystretch{1.5} \setlength{\tabcolsep}{4mm} \begin{tabular}{c|ccc|ccc} \noalign{\hrule height 1pt} & & French & & & Spanish & \\ Model/Method & Precision & Recall & Macro F1 & Precision & Recall & Macro F1 \\ \noalign{\hrule height 1pt} BETO & N/A & N/A & N/A & 0.8063 & 0.7510 & 0.7777 \\ Flaubert-base-cased & 0.7796 & 0.6786 & 0.7256 & N/A & N/A & N/A\\ mDeberta-v3-base & 0.7244 & 0.6001 & 0.6564 & 0.7176 & 0.6491 & 0.6816\\ \noalign{\hrule height 0.5pt} + ADBCMM & 0.8087 & 0.7213 & 0.7625 & 0.8558 & 0.8236 & 0.8394 \\ + Child-Tuning & 0.7438 & 0.6232 & 0.6782 & 0.7512 & 0.6834 & 0.7157 \\ + R-Drop & 0.7467 & 0.6337 & 0.6856 & 0.7492 & 0.7019 & 0.7248 \\ \textbf{ALLs} & \textbf{0.8423} & \textbf{0.7712} & \textbf{0.8052} & \textbf{0.8859} & \textbf{0.8352} & \textbf{0.8598} \\ \noalign{\hrule height 1pt} \textbf{Finally in Test} & \textbf{0.8942} & \textbf{0.7934} & \textbf{0.8408} & \textbf{0.9107} & \textbf{0.8514} & \textbf{0.8801} \\ \noalign{\hrule height 1pt} \end{tabular} \caption{Experimental results in French and Spanish AD datasets. BETO is a Spanish pre-training model, tested only on Spanish data in AD; Flaubert-base-cased is a French pre-training model, tested only on French data in AD; mDeberta is a multi-language pre-training model, we test in both French and Spanish. Additionally, methods including ``ADBCMM'', ``Child-Tuning'', ``R-Drop'' and ``Alls'' are fine-tuned on mDeberta models, ``Alls'' refers to using all of the above methods. In addition to ``Finally in Test'', we test the results of the Dev series. ``Finally in Test'' also uses model fusion to improve our performance.} \label{sci} \end{table*} \section{Experimental Setting} This section will subsequently present our Baseline, experimental models, experimental settings, control of variables experiment. \subsection{Baseline} For both French and Spanish languages, we used Flaubert-base-cased \cite{le2020flaubert} models and BETO \cite{CaneteCFP2020} cased models respectively. These models are Bidirectional Encoder Representations from Transformers \cite{devlin-etal-2019-bert}, and the size is both bases. These models have a lot of MLM training in the related large single-language repository and have SOTA results in the related languages. These pre-trained models can better capture the semantic information of words. But there is no additional training, so the two models still need to fine-tune AD data centralization to solve AD tasks. We will add a classification layer behind these models, and then the models become Multiple-Choice Models. We trained the models in a single language. Their results will be used as our Baseline, and the results of other models will be compared with them. \subsection{Model} To better adapt to the ADBCMM method, we used the DeBERTa model \cite{he2021debertav3} for pre-training in the multilingual repository CC100 \cite{conneau-etal-2020-unsupervised} . The authors of DeBERTa replaced the MLM objective with the RTD (Replaced Token Detection) intent introduced by ELECTRA for pre-training. Specifically, we used the mdeberta-v3-base model in the experiment, with a total of 280M and containing 250,000 tokens. MDeberta supports 100 languages in 100 countries, including English, French and Spanish. Of course, to ensure that the ADBCMM method rather than the mDeberta model brought us practical performance enhancements, we also used mDeberta only in French or Spanish as a contrast experiment. \subsection{Parameters Setup} We used three pre-training models, including Flaubert, BETO and mDeberta, for a total of 15 training sessions. We use argmax to choose the maximum of all values as the final result for the word to be selected. In all the experiments, we set 16 epochs and decided to use the 1e-5 learning rate (we used warmup simultaneously). We put gradient decrease 1e-5 and batch size 1 (each batch contains 14 different options). We select AdamW Optimizer. We only use the first 300 tokens for each sample. On a 10900K server with 128G memory, we used a 24G NVIDIA 3090 GPU to train our model. \subsection{Assessment of indicators} In AD tasks, Macro F1 was used as an assessment indicator by calculating the accuracy and recall rate of the final result. $$Precision = \frac{TP}{TP+FP}$$ $$Recall = \frac{TP}{TP+FN}$$ $$F1 = \frac{2PrecisionRecall}{Precision+Recall}$$ $$Macro F1 = \frac{\sum_{i=1}^{n} F1_i}{n}$$ $n$ means that the higher the total number of categories, accuracy, recall rate, and MacroF1. The higher the F1 method, the better the performance.\footnote{Below is the specific meaning of the formula. TP: The prediction is correct and the sample is correct. FP: The prediction is wrong and the sample is correct. FN: The predicting is correct and the sample is wrong.} \begin{table*}[t] \centering \renewcommand\arraystretch{1.2} \setlength{\tabcolsep}{4mm} \begin{tabular}{c|ccc|ccc} \noalign{\hrule height 1pt} & & French & & & Spanish & \\ Ranked & Precision & Recall & Macro F1 & Precision & Recall & Macro F1 \\ \noalign{\hrule height 1pt} \textbf{Rank1(Ours)} & \textbf{0.89} & \textbf{0.79} & \textbf{0.84} & \textbf{0.91} & \textbf{0.85} & \textbf{0.88} \\ Rank2 & 0.85 & 0.73 & 0.78 & 0.88 & 0.79 & 0.83 \\ Rank3 & 0.81 & 0.72 & 0.76 & 0.86 & 0.80 & 0.83\\ Rank4 & 0.76 & 0.70 & 0.73 & 0.83 & 0.80 & 0.81\\ Rank5 & 0.73 & 0.64 & 0.68 & 0.86 & 0.77 & 0.81 \\ \noalign{\hrule height 1pt} \end{tabular} \caption{SDU@AAAI ranks AD tasks in French and Spanish} \label{sci} \end{table*} \section{Results} In Table 2, we can find that under the same conditions, mDeberta performs less well in French than in Flaubert-base-cased, and less well in Spanish than in BETO. We speculate that because mDeberta uses a large number of data in different languages during the pre-training phase. Still, after spinning into other languages, due to the further side focus, it may not necessarily accurately record the semantic characteristics of a single language so that the actual performance will be slightly worse compared to BETO and Flaubert. They have been pre-trained only in a single language. Both Child-Tuning and R-Drop showed excellent performance in English and Spanish, bringing a 3-5\% F1 boost to our model. But compared to the ADBCMM method, they were still slightly underperforming. Our ADBCMM method brought more than 10\% performance boost directly to our mDeberta model. This is indeed incredible. To ensure the repetitiveness of the experiment, we repeated three experiments. The mDeberta models using the ADBCMM method were compared to their mDeberta model F1 performance over 10\% in these three experiments. We think that ADBCMM can significantly boost our models because of the reliable Counterfacts datasets. First, they can match upstream and downstream training data; second, counterfacts datasets can reduce the model's bias, learning from more text data to more relevant information with AD tasks; third, even if the datasets are collected from different languages or fields, but they are scientific documents, so the general language training mDeberta model can learn the syntax characteristics of scientific documents in more scientific documents and further improve performance. Finally, we followed ADBCMM-based methods and achieved SOTA scores in both SDU@AAAI's French and Spanish. In AD tasks, our methods of Precision, Recall and Macro F1 are SOTA. Remarkably, our approach leads us to the second F1 score of 5\% - 6\%. \section{Conclusion} In this article, we mainly talk about how to use ADBCMM in AD tasks at SDU@AAAI-22 and compare it with other Models or Methods to ultimately SOTA. We used a straightforward method to build counterfacts datasets in ADBCMM. We directly use other language datasets for training and secondary Fine-Tune in their language, which gives our models a remarkable effect. After combining the Multiple-Choice Model, Child-Tuning, R-Drop and other methods, our approach leads ahead of all different systems. Apparently, in multilingual data aggregation, simply using other languages as counterfacts datasets can improve performance. At the same time, our work provides practical help for researchers to understand scientific documentation better.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} There remain numerous questions surrounding the formation of dust in the universe. Significant challenges are still faced in the determination of dust formation rates, mechanisms and environments. Motivated by seemingly inexplicably large masses of dust observed at high redshifts \citep{Omont2001,Bertoldi2003,Watson2015,Laporte2017}, there is a widespread desire to understand the nature of the primary sources of dust in the universe. Core-collapse supernovae (CCSNe) are known to produce dust in their ageing ejecta. Theoretical models predict that CCSNe are capable of producing $>0.1$M$_{\odot}$ of ejecta-condensed dust \citep{Todini2001,Nozawa2003,Gall2011,Sarangi2015}. To date, however, dust masses in the majority of these objects have been inferred via fits to their near-infrared (NIR) and mid-infrared (MIR) spectral energy distributions (SEDs), which trace only warm dust. Warm dust masses up to $\sim10^{-3}$~M$_{\odot}$ have been detected at late times ($>1$~yr) in several CCSNe \citep{Sugerman2006,Meikle2007,Andrews2010,Fabbri2011,Gall2011,Gomez2013,Gall2014}. However, a few objects have also been observed in the far-IR, allowing their full SEDs to be fitted and therefore tracing the presence of cold dust as well as warm and hot dust. Using this technique, dust masses $\gtrsim0.1$\,M$_{\odot}$ have been estimated to have formed in SN~1987A, Cassiopeia~A and the Crab Nebula \citep{Gomez2012, Indebetouw2014,Matsuura2015,deLooze2017,Owen2015}. More recently, it has been suggested that a significant mass of dust (0.08~--~0.9~M$_{\odot}$) has also formed in the Galactic supernova remnant G54.1+0.3 \citep{Temim2017,Rho2017} as well as very large masses of dust ($>1$\,M$_{\odot}$) in a number of other Galactic supernova remnants (Chawner et al. in prep.). An average net dust production rate of 0.1~--~1.0~M$_{\odot}$ per CCSN is required to account for the dust masses observed in the early universe \citep{Morgan2003,Dwek2007}. The dust budget problem would therefore be resolved if these few objects were, in fact, representative of the wider CCSN population and the dust was able to survive the passage of the reverse shock \citep{Bianchi2007,Bocchio2016}. A larger sample of CCSN dust mass estimates is therefore required. Following the end of the \textit{Herschel} mission in 2013, there will be a long wait for instruments that are capable of detecting cold dust emission at far-IR wavelengths and so an alternative approach is needed. The {\sc damocles} Monte Carlo line radiative transfer code predicts dust masses in the ejecta of CCSNe by modelling the red-blue asymmetry frequently observed in their optical and NIR emission lines \citep[hereafter B16]{Bevan2016}. This asymmetry is due to the condensation of dust in the ejecta causing redshifted radiation from the receding regions of the supernova to experience greater extinction than blueshifted radiation emitted from the approaching regions \citep{Lucy1989}. In addition to providing an alternative method for tracing both warm and cold dust in the ejecta, line profile modelling has the added advantage of tracing only newly-condensed dust within the ejecta. Pre-existing circumstellar dust may contribute to the observed flux in the IR but the red and blue components of optical or NIR lines emitted from within the ejecta will be similarly attenuated by the surrounding circumstellar dust, i.e. any dust-induced red-blue asymmetry must be solely a result of internal, ejecta-condensed dust. The approach also allows other properties of the dust to be determined. Of particular interest is the dust grain size distribution. Regardless of the masses of dust that form in the ejecta of CCSNe, the grains will eventually be subject to a reverse shock that will pass back through the ejecta, potentially destroying these newly-formed grains and significantly diminishing the dust mass that has formed. The size of dust grains that condense within the supernova ejecta determine their likelihood of survival. An understanding of dust grain sizes in CCSNe is therefore critical to determining the relative contribution of CCSNe to dust production in the universe. B16 applied the {\sc damocles} Monte Carlo code to the H$\alpha$ and [O{\sc i}]\,6300,6363\,\AA\ lines of SN~1987A between 714\,d and 3500\,d post-explosion. A steady increase in the ejecta dust mass over this period was inferred with a predicted current dust mass of 0.8~M$_{\odot}$, consistent with dust mass estimates derived from SED fitting and modelling \citep{Matsuura2011,Indebetouw2014,Matsuura2015,Wesson2015}. The {\sc damocles} code was also applied to the late-time optical line profiles of SN~1980K, SN~1993J and Cas~A by \citet{Bevan2017}. Dust masses of 0.12~-–~0.3\,M$_{\odot}$ at 30~yr, 0.08~–-~0.18\,M$_{\odot}$ at 16~yr and $\sim$1.1\,M$_{\odot}$ at $\sim$330~yr were predicted respectively \citep{Bevan2017}. Clearly, further examples are needed in order to establish whether this apparent trend towards larger ($\gtrsim0.1$~M$_{\odot}$) dust masses is an accurate representation of dust formation in CCSNe more generally. In working towards the overall goal of understanding the masses and properties of dust in the ejecta of CCSNe, I have explored the implementation of a Bayesian methodology for line profile fitting. The fundamental power of Bayesian statistics in providing a framework to understand the probability of a model when the the data is known has been increasingly exploited in astronomy over the last twenty years \citep[e.g.][]{Strolger2004, Venn2004,Ilbert2006, Feroz2009, Arzoumanian2016}. In particular, Markov Chain Monte Carlo (MCMC) methods have provided efficient, robust and rigorous procedures with which to explore highly multi-dimensional parameter spaces and quantify posterior probability distributions. Increasingly available computing power has allowed these methods to be employed in a wide variety of fields with impressive results that yield significantly more insight than can be gained from a single best-fitting set of parameters. \citet{Sharma2017} presents a comprehensive review of MCMC methods for Bayesian data analysis in astronomy. I have applied an affine invariant ensemble sampler \citep{Goodman2010} to the {\sc damocles} code in order to map the multi-dimensional posterior probability distribution of a range of models and parameter spaces. I initially employed the sampler to model four simulated, or `theoretical', line profiles (generated by {\sc damocles}) that were deemed representative of observed line profiles of CCSNe at late times but for which the `true solution' was known (models A1 - A4). I also revisited the models by B16 of the H$\alpha$ line and the [O~{\sc i}]\,6300,6363\,\AA\ doublet of SN~1987A at 714\,d. The new approach was applied to a smooth, 5-dimensional model based on their work (model B). A new model (model C) was also explored which treated both emission features simultaneously and used the newly-applied Bayesian methodology to characterise a significantly more complex 10-dimensional parameter space. The ultimate goal is to assess the validity of this approach with regard to its application to both archival and future datasets, with a view to significantly expanding the current range of dust mass estimates for CCSNe years after outburst. In Section \ref{the_problem}, the formulation of the problem is presented along with a discussion of how Monte Carlo and observational uncertainties are handled and a brief description of the affine invariant ensemble sampler. The adopted priors, variable parameters and posterior distributions for all models are presented in Section \ref{scn_results}. I discuss the implications of these results in Section \ref{scn_discussion} and compare them to results obtained from previous manual line profile fitting using {\sc damocles}, as well as results obtained from SED fitting. The constraints placed on other parameters such as dust grain size and dust density distribution are also discussed. I emphasise the potential for future application to other objects before summarising and concluding in Section \ref{conclusions}. {\renewcommand{\arraystretch}{1.5}% \begin{table*} \centering \caption{Values of the parameters adopted to generate four representative simulated line profiles (A1 - A4). In these models, amorphous carbon grains were used with the optical constants of \citet{Zubko1996}. The H$\alpha$ line profile was modelled with an intrinsic smooth radial power-law emissivity distribution ($i \sim r^{-2\beta}$) applied to a homologously expanding shell geometry at day 1000.} \begin{tabular*}{\linewidth}{L{1cm} C{3.4cm} C{2.2cm} C{2.2cm} C{1.9cm} C{2.2cm} c} \hline & &$v_{\rm max}$ & $v_{\rm min}$ & $\beta$ & $\log a$ & $\log M_{\rm dust}$ \\ & &10$^3$\,km\,s$^{-1}$ & 10$^3$\,km\,s$^{-1}$ && $\log \mu$m & $\log\,$M$_{\odot}$ \\ \hline A1 &``typical" &4.0 & 1.2 & 2.50 & -1.0 & -4.6 \\ A2 & ``double peaked" &4.0 &2.8 &1.0 &-1.0 & -4.6 \\ A3 &``large grains"&4.0 &1.2 &2.0 & 0.18 & -3.9 \\ A4 &``strongly blueshifted" &4.0 &1.2 &2.5 &-1.6 & -3.6 \\ \hline \end{tabular*} \label{tb_synthetic_profiles} \end{table*} \section{Formulation of the Bayesian approach} \label{the_problem} \subsection{\sc damocles} {\sc damocles} is a Monte Carlo radiative transfer code that models the effects of dust, composed of any combination of species and grain size distributions, on optical and NIR emission lines emitted from the expanding ejecta of a late-time ($>1$\,yr) supernova. For full details of the code and its testing, please see B16. By default, both the emissivity distribution and the dust distribution follow smooth radial power-law distributions although any arbitrary distribution may be specified by providing the appropriate grid. {\sc damocles} will also treat a variety of clumping structures as specified by a clumped dust mass fraction, volume filling factor, clump size and clump power-law distribution. The emissivity distribution may also initially be clumped. The code has a large number of variable parameters ranging from 5 dimensions in the simplest, smooth models to $>20$ in the most complex cases. \subsection{The Bayesian approach} The aim is to map the posterior probability distribution based on the observations and our prior understanding of the physical situation. The \textit{posterior} is defined by Bayes' Theorem as \begin{equation} \label{bayes_thm} P(\boldsymbol \theta \,|\,\boldsymbol D) = P(\boldsymbol \theta ) \, \frac{P(\boldsymbol D\,|\,\boldsymbol \theta)}{P( \boldsymbol D)} \end{equation} \noindent where $\boldsymbol D$ represents the data that we wish to analyse (in our case, the observed or simulated line profile) and $ \boldsymbol \theta$ represents the parameters of our model. $P(\boldsymbol \theta )$ therefore represents our prior understanding of the probability of the model parameters (\textit{the prior}), $P(\boldsymbol D\,|\,\boldsymbol \theta)$ is the probability of obtaining the data for a given set of model parameters (\textit{the likelihood}) and $P(\textbf{D})$ is the probability of the data for all models (\textit{the evidence}). Since $P(\boldsymbol D)$ is independent of $\boldsymbol \theta$, we will only be interested in the scaled posterior as defined by \begin{equation} \label{bayes_thm2} P(\boldsymbol \theta\,|\,\boldsymbol D) \propto P(\boldsymbol \theta ) \, P(\boldsymbol D\,|\,\boldsymbol \theta) \end{equation} The posterior distribution will allow us to understand relationships between the parameters and to visualise which are the most likely regions of parameter space. The aim is not to identify the single `best-fitting' model but to map the variation of likelihood across the entire space. The prior is the probability before looking at any data. It can be driven by theoretical models, previous observations or physical intuition. The likelihood is, practically, a mechanism for forward modelling i.e. simulating the data given a model and its parameters. It is proportional to $\exp({-\chi^2/2})$, where $\chi^2$ is the standard metric typically used to compare data and models in frequentist techniques. In order to characterise the target posterior distribution, we may draw samples from across the parameter space. A single sample in parameter space is translated to a point in the target posterior distribution via Equation \ref{bayes_thm2}. A likelihood function that describes the relationship between the model and the data must therefore be defined, and a prior probability distribution for each parameter must also be specified based on our current knowledge (e.g. physical constraints). Once defined, the ensemble sampler can be employed with this likelihood function in order to map the complete posterior distribution. \subsection{Affine Invariant MCMC Ensemble Sampler} There are numerous MCMC algorithms that work out how to sample points in parameter space efficiently in order to converge on a stable solution as quickly as possible. In this work, I used the Python package `emcee' \citep{emcee}. This package uses an affine invariant ensemble sampler as described by \citet{Goodman2010}, to which publication the reader is referred for full details of the algorithm. I present a summary below. The ensemble sampler acts on a collection of points (n-dimensional position vectors in parameter space) termed `walkers'. An initial position for each walker is sampled according to a distribution specified by the user. The likelihood $P(\boldsymbol D\,|\,\boldsymbol \theta)$ of the model corresponding to the current set of parameters is calculated. A new point in parameter space is sampled based on the current positions of the other walkers and the likelihood of this new point is also calculated. The ratio of these likelihoods determines whether the position of the walker is updated or not (i.e. the new point is either accepted or rejected). As such, the walkers `walk' around the entire parameter space exploring the posterior distribution in such a manner that the value of the posterior distribution in a given region of parameter space is characterised by the density of walkers in that region. Faster convergence is attained when the walkers are initialised near regions of high likelihood but they will explore the entire space regardless of their initial positions. The extent of the space to be explored is determined by the bounds of the prior distributions (if applicable). \begin{figure} \includegraphics[clip = true, trim = 20 10 80 20,width = \linewidth]{figures/shell_geometry_A1.pdf} \includegraphics[clip = true, trim = 20 10 80 20,width = \linewidth]{figures/shell_geometry_A2.pdf} \caption{Schematics illustrating the shell geometries with smooth, radial dust density power-laws used for models A1 and A2. \textit{Above:} Model A1 with $v_{\rm max} = 4000$\,km\,s$^{-1}$, $R_{\rm in}/R_{\rm out} = 0.3$ and $\rho \propto r^{-2.5}$. \textit{Below:} Model A2 with $v_{\rm max} = 4000$\,km\,s$^{-1}$, $R_{\rm in}/R_{\rm out} = 0.7$ and $\rho \propto r^{-1.0}$. The grids are divided into 50 in each axis.} \label{fig_shells} \end{figure} It is, of course, possible simply to grid parameter space, evaluate the likelihood and prior at each point and multiply them to get the posterior. However, whilst this would be exact, it would also be incredibly intensive and likely impossible for $>4$ dimensions. MCMC methods approximate the posterior by exploring the parameter space intelligently and are therefore a popular alternative. {\sc damocles} has between 5 and 20 variable parameters, strong degeneracies between certain of these parameters but no multimodality. MCMC methods are extremely well-suited to this regime and are therefore an ideal choice. The choice to use this particular MCMC methodology was made for a number of reasons. Affine transformations are those which preserve the relative positions of points, lines and planes, for example reflection, rotation and scaling are all affine transformations. This algorithm is designed to be affine invariant such that the parameter space can be `stretched' in order to sample points from a more isotropic distribution. This ensures that it requires very little tuning in order to obtain good performance and is in contrast to a number of other MCMC algorithms such as Metropolis-Hastings \citep{Allison2014}. This property makes the algorithm particularly useful for models with parameters that range over significantly different scales as here. In addition to this, the ease of use and implementation (via the emcee package) and its speed and efficiency for problems with dimensionality of this order led to the choice of this algorithm over other available options. \begin{figure*} \subfloat{\includegraphics[clip = true, trim = 20 200 50 220, width = 0.5\linewidth]{figures/A1_profile_v2.pdf}} \subfloat{\includegraphics[clip = true, trim = 20 200 50 220, width = 0.5\linewidth]{{figures/A2_profile_v2.pdf}}} \\ \subfloat{\includegraphics[clip = true, trim = 20 200 50 220, width = 0.5\linewidth]{figures/A3_profile_v2.pdf}} \subfloat{\includegraphics[clip = true, trim = 20 200 50 220, width = 0.5\linewidth]{figures/A4_profile_v2.pdf}} \caption{\textit{Blue solid lines:} Four simulated line profiles generated using {\sc damocles} representing different types of dust-affected line profiles corresponding to the parameters listed in Table \ref{tb_synthetic_profiles}. \textit{Yellow dashed lines:} The corresponding intrinsic lines profiles with no dust present and scaled to the same peak flux.} \label{fig_sim_line_profiles} \end{figure*} \subsection{Formulating the likelihood function} In order to quantify the likelihood of a particular set of model parameters given the observational or simulated data, we must define a function that relates the model to the data. We here base the likelihood function on the typical $\chi^2$ comparison which is defined as \begin{equation} \label{eqn_likelihood} \chi^2 = \sum_{\rm i=1}^{n} \frac{\left( f_{\rm mod,i}-f_{\rm obs,i}\right) ^2}{\sigma_{\rm i}^2} \end{equation} \noindent where $f_{\rm mod,i}$ and $f_{\rm obs,i}$ are the model flux and observed flux in frequency bin $i$ respectively. $\sigma_i$ represents the overall uncertainty in frequency bin $i$ and $n$ is the number of frequency bins in the observed line profile. There are two primary contributions to the uncertainty $\sigma_i$ in each bin: there is an inherent uncertainty on the observational data and there is also uncertainty arising from the statistical nature of the Monte Carlo radiative transfer simulation. {\renewcommand{\arraystretch}{1.5}% \begin{table*} \caption{The adopted prior distributions for the variable parameters in each model: the simulated line profile models A1 - A4, the smooth, 5-dimensional model of the SN~1987A H$\alpha$ line at 714\,d (model B), and the clumped, simultaneous H$\alpha$ and [O~{\sc i}]\,6300,\,6363\,\AA\ model of SN~1987A at 714\,d (model C). $U(a,b)$ indicates that the variable is distributed uniformly between $a$ and $b$. $v_{\rm max}$ is the maximum velocity, $v_{\rm min}$ is the minimum velocity, $\beta_{\rm smooth}$ is the steepness of the smooth power-law density distribution, $\beta_{\rm clump}$ is the steepness of the power-law clump number density distribution, $M_{\rm d}$ is the dust mass, $f$ is the clump volume filling factor, $a$ is the dust grain radius, and $F_{\rm 6300}/F_{\rm 6363}$ is the flux ratio of the 6300\AA\ and 6363\AA\ components of the [O~{\sc i}]\,6300,\,6363\,\AA\ doublet.} \label{tb_priors} \centering \begin{tabular*}{\linewidth}{l p{2.0cm} C{2.9cm} C{2.9cm} C{2.9cm} C{1.7cm}} \hline \multicolumn{2}{l}{Parameter} & Simulated \newline line profiles \newline (A) & SN 1987A \newline H$\alpha$ smooth \newline (B) & SN 1987A \newline H$\alpha$ and [O{\sc i}] clumped \newline simultaneous (C) & Units \\ \hline \multicolumn{2}{l}{Dust: } &&&& \\ & $v_{\rm max}$ & $U(3.0,8.0)$ & $U(2.0,5.0)$&$U(2.0,6.0)$&10$^3$\,km\,s$^{-1}$\\ & $v_{\rm min}$ & $U(0.5,3.0)$ & $U(0.1,1.1)$&$U(0.1,1.5)$& 10$^3$\,km\,s$^{-1}$\\ & $\beta_{\rm smooth}$ & $U(1.0,3.0)$ & $U(0.2,2.0)$&-&\\ & $\beta_{\rm clump}$ & -&- &$U(0.0,3.5)$ & \\ & $\log\,M_{\rm d}$ & $U(-6.0,-2.0)$ & $U(-7.0,-2.8)$&$U(-6,-3.5)$ &$\log\,M_{\odot}$\\ & $f$ & - &- &$U(0.1,0.7)$& \\ & $\log\,a$ & $U(-2.0,0.7)$ & $U(-3.0,0.7)$ &$U(-2.0,0.7)$ &$\log\,\mu$m\\ \hline \multicolumn{2}{l}{H$\alpha$:} &&&& \\ & $v_{\rm max}$ & coupled to dust &coupled to dust&coupled to dust&10$^3$\,km\,s$^{-1}$ \\ & $v_{\rm min}$ & coupled to dust &coupled to dust&$U(0.1,1.5)$&10$^3$\,km\,s$^{-1}$ \\ & $\beta$ & coupled to dust &coupled to dust &$U(0.0,2.0)$& \\ \hline \multicolumn{2}{l}{[O{\sc i}]:} \\ & $v_{\rm max}$ & - &- &coupled to dust &10$^3$\,km\,s$^{-1}$\\ & $v_{\rm min}$ & - &- &coupled to dust &10$^3$\,km\,s$^{-1}$\\ & $\beta$ & - &- &$U(1.5,3.5)$ &\\ & ${F_{\rm 6300}}/{F_{\rm 6364}}$&- &- &$U(2.0,3.3)$ &\\ \hline \multicolumn{2}{l}{Total number of variable parameters} & 5 & 5 & 10 & \\ \hline \end{tabular*} \end{table*} } \quad The observational uncertainty in each frequency bin ($\sigma_{\rm obs,i}$) is usually determined when data are reduced and is often included in addition to fluxes in flux-calibrated spectral data files. However, in a number of cases, particularly in cases of older, archival data, accurate uncertainties are not available. In these cases, a region of flat continuum may be selected and the observational uncertainty estimated from the variance of fluxes in that region. A number of different `flat' regions of the spectrum should be sampled and the mean variance calculated. This value may be used as an approximation to $\sigma_{\rm obs}^2$ which is assumed to be constant over the whole line profile. Whilst this is an approximation, over the small wavelength ranges of interest for a single line profile, it is generally reasonable to assume that there is little variation in the uncertainty although care should be taken if there was significant contamination to the profile by, for example, sky lines. Where accurate errors are available, or a full set of raw observations is available such that accurate uncertainties can be calculated, these should be adopted. The observational error should ideally include accurately calculated uncertainties from as many sources of observational uncertainty as possible (instrumental noise, calibration errors etc.) but particular care should be paid when handling continuum subtraction. This can be a significant factor that influences the results of line profile fitting and ideally should be included as a free parameter in any modelling. This is discussed further in Section \ref{sscn_87A_results}. Each modelled line profile is also inherently uncertain due to the stochastic nature of Monte Carlo simulations. The Monte Carlo uncertainty can be quantified analytically. {\sc damocles} propagates weighted energy packets through a dusty medium. Once it has escaped, the weighted packet is added to the appropriate frequency bin. Each frequency bin therefore receives weighted packets at a rate that is determined by the properties of the model. Statistically, this is described by a compound Poisson distribution (i.e. identically, independently distributed weights arrive at a rate described by a Poisson distribution). In the limit of a large number of packets (as here), the compound Poisson distribution can be approximated by a normal distribution with associated Monte Carlo uncertainty in each frequency bin $\sigma_{\rm mod,i}$ described by \begin{equation} \sigma_{\rm mod,i} = f_{\rm obs} \frac{\sqrt{\sum_{\rm j=1}^{n_{\rm i}} w_{\rm ij}^2}} { \sum_{\rm i,j} w_{\rm ij}} \end{equation} \noindent where $f_{\rm obs}$ is the total integrated flux of the observed line profile, $n_i$ is the number of packets in bin $i$ and $w_{ij}$ is the weight of the $j^{th}$ packet to arrive in bin $i$. The model flux in the $i^{th}$ frequency bin is given by $f_{\rm mod,i} = f_{\rm obs} \sum_{\rm j=1}^{n_{\rm i}} w_{\rm ij}/ \sum_{\rm i,j} w_{\rm ij}$. The fluxes are therefore scaled such that the total integrated flux of the model profile is equal to that of the observed profile. Since both the observational and Monte Carlo uncertainties can be assumed to follow normal distributions, the total error in the likelihood function (see Equation \ref{eqn_likelihood}) therefore also follows a normal distribution and is described by \begin{equation} \sigma_i^2 = \sigma_{\rm obs,i}^2 + \sigma_{\rm mod,i}^2 \end{equation} \noindent thus fully defining the likelihood function. \subsection{Computational implementation} The Python package `emcee' \citep{emcee} was coupled to the Fortran 95 {\sc damocles} code using the F2PY Fortran-to-Python interface generator \citep{f2py}. Samples in parameter space are generated in Python, passed to {\sc damocles} where the full Monte Carlo radiative transfer calculation is performed, before the model line profile is passed back to Python. The likelihood and prior are calculated and the algorithm progresses accordingly. {\sc damocles} is parallelised using OpenMP \citep{OpenMP} and models were run on an 88-core machine with Intel Xeon CPU E5-4669 2.20GHz processors using half its capacity. The most complex, 10-dimensional model took approximately 2 weeks to converge ($\sim$20,000 steps). \begin{figure*} \includegraphics[clip = true, trim = 0 0 0 0, width = \linewidth]{figures/run1_corner_units.pdf} \caption{The full posterior probability distribution for the `typical' simulated line profile (model A1). The known, `true' values used to generate the line profile are marked by the blue cross-hairs and the best-fitting parameter set from the MCMC run is marked with a magenta circle. The adopted priors for this model are presented in Table \ref{tb_synthetic_profiles}. The contours of the 2D distributions represent $0.5\sigma$, $1.0 \sigma$, $1.5 \sigma$ and $2.0 \sigma$, and the dashed, black vertical lines represent (left to right) the 16$^{\rm th}$, 50$^{\rm th}$ and 84$^{\rm th}$ quantiles of the 1D marginalised probability distributions. } \label{fig_run1} \end{figure*} \begin{figure*} \includegraphics[clip = true, trim = 0 0 0 0, width = \linewidth]{figures/run2_corner_units.pdf} \caption{The full posterior probability distribution for the `double peaked' simulated line profile (model A2). The known, `true' values used to generate the line profile are marked by the blue cross-hairs and the best-fitting parameter set from the MCMC run is marked with a magenta circle. The adopted priors for this model are presented in Table \ref{tb_synthetic_profiles}. The contours of the 2D distributions represent $0.5\sigma$, $1.0 \sigma$, $1.5 \sigma$ and $2.0 \sigma$, and the dashed, black vertical lines represent (left to right) the 16$^{\rm th}$, 50$^{\rm th}$ and 84$^{\rm th}$ quantiles of the 1D marginalised probability distributions. } \label{fig_run2} \end{figure*} \begin{figure*} \includegraphics[clip = true, trim = 0 0 0 0, width = \linewidth]{figures/run3_corner_units.pdf} \caption{The full posterior probability distribution for the `large grains' simulated line profile (model A3). The known, `true' values used to generate the line profile are marked by the blue cross-hairs and the best-fitting parameter set from the MCMC run is marked with a magenta circle. The adopted priors for this model are presented in Table \ref{tb_synthetic_profiles}. The contours of the 2D distributions represent $0.5\sigma$, $1.0 \sigma$, $1.5 \sigma$ and $2.0 \sigma$, and the dashed, black vertical lines represent (left to right) the 16$^{\rm th}$, 50$^{\rm th}$ and 84$^{\rm th}$ quantiles of the 1D marginalised probability distributions. } \label{fig_run3} \end{figure*} \begin{figure*} \includegraphics[clip = true, trim = 0 0 0 0, width = \linewidth]{figures/run4_corner_units.pdf} \caption{The full posterior probability distribution for the `strongly blueshifted' simulated line profile (model A4). The known, `true' values used to generate the line profile are marked by the blue cross-hairs and the best-fitting parameter set from the MCMC run is marked with a magenta circle. The adopted priors for this model are presented in Table \ref{tb_synthetic_profiles}. The contours of the 2D distributions represent $0.5\sigma$, $1.0 \sigma$, $1.5 \sigma$ and $2.0 \sigma$, and the dashed, black vertical lines represent (left to right) the 16$^{\rm th}$, 50$^{\rm th}$ and 84$^{\rm th}$ quantiles of the 1D marginalised probability distributions. } \label{fig_run4} \end{figure*} \section{Results} \label{scn_results} \subsection{MCMC models of simulated line profiles (A)} \label{sscn_results_theoretical} I initially considered a number of simulated line profiles for which the true parameters were known. Four simulated line profiles were produced that are similar to the types of asymmetric dust-affected optical and NIR line profiles observed in the spectra of late-time CCSNe with regard to the extent of their asymmetries, their shape and notable features. The parameters used to generate these line profiles are described in Table \ref{tb_synthetic_profiles} with graphical representations of the geometrical structures presented in Figure \ref{fig_shells} and the profiles presented in Figure \ref{fig_sim_line_profiles}. All four profiles exhibit a blueshifted peak flux due to increased absorption by dust of redshifted radiation and an extended red scattering wing caused by repeated dust scattering events. Three of the profiles also display a `shoulder' or second peak at the position of the minimum radial velocity on the red side. This has been previously noted by B16 and occurs in scenarios with steeper dust and gas density distributions as a result of significant absorption in the central regions of the profile. A monochromatic line at 6563\,\AA\ (H$\alpha$) was modelled in each case assuming a post-explosion date of 1000\,d and a symmetric shell ejecta in homologous expansion ($v \propto r$) with maximum velocity at $R_{\rm out}$ of $v_{\rm max} = 4000$\,km~s$^{-1}$. The models adopted an intrinsic smooth power-law emissivity distribution which was coupled to the square of the density distribution of the dust ($\rho_{\rm d}$) such that for $\rho_{\rm d} \propto r^{-\beta}$ the emissivity distribution followed $i \propto r^{-2\beta}$, as appropriate for recombination lines assuming a constant dust-to-gas mass ratio. Two schematics that illustrate the structure of the shell geometries and the smooth, radial power-law dust density distributions are presented in Figure \ref{fig_shells} using models A1 and A2 as examples. 100\% amorphous carbon grains were used and the optical constants presented by \citet{Zubko1996} were adopted. The physical extent of the ejecta in each model was determined within the code based on the post-explosion time and the specified maximum velocity ($R_{\rm out} = v_{\rm max}t$). The four simulated line profiles that were selected for investigation are presented in Figure \ref{fig_sim_line_profiles} with the parameters used to generate them detailed in Table \ref{tb_synthetic_profiles}. The ensemble sampler was applied to each of these four simulated profiles. Five variable parameters were investigated in each case, namely: the maximum velocity ($v_{\rm max}$), the minimum velocity ($v_{\rm min}$), the index of the power-law dust density distribution ($\beta$), the grain radius ($a$) and the total dust mass ($M_{\rm d}$). These parameters were selected on the basis of previous models (B16, \citet{Bevan2017}) which suggested that they are the parameters to which a simulated line profile is most sensitive in a spherically symmetric scenario. Whilst dust optical depth and albedo could be substituted for dust mass and grain radius, the latter parameters were chosen in order to allow for more straightforward comparison to other works which present results in these terms. Prior distributions were adopted on all parameters and are described in detail in Table \ref{tb_priors}. Uniform priors were adopted for the maximum and minimum velocities and for the index of the density distribution. Uniform priors were appropriate for these parameters since I sought to assume minimal prior knowledge and the range of feasible values that these parameters could take was easily encompassed within an order of magnitude. This was not the case for the dust mass and the grain radius however, both of which could take values within a range covering more than three orders of magnitude. As a result, these parameters were investigated in log space and uniform priors were adopted for $\log M_{\rm d}$ and $\log a$. The range of the prior for each parameter was either physically motivated (e.g. the minimum velocity of the expanding ejecta cannot be negative) or was based on realistic values given the observed line profile (e.g. there is no flux detected redwards of 8000~km~s$^{-1}$). The adopted priors were the same for each of the four simulated lines. In each case, 100 walkers were used and the code was run to convergence, which was determined based on the autocorrelation time. In general, several thousand steps were required to approach convergence. In all cases, the runs were allowed to continue for several autocorrelation times past this point. An acceptance fraction in the range $[0.2,0.5]$ was required in all cases and in most cases the acceptance fraction was $\sim 0.3$. Figures \ref{fig_run1} to \ref{fig_run4} illustrate the results of these models. For each simulated line profile, a two-dimensional contour plot of the posterior probability distribution for each pairing of the variable parameters is presented. The contours on these plots represent $0.5\sigma$, $1.0 \sigma$, $1.5 \sigma$ and $2.0 \sigma$. Additionally, one-dimensional histograms of the probability density distribution for a single parameter (marginalised over the other parameters) are also presented with the 16$^{\rm th}$, 50$^{\rm th}$ and 84$^{\rm th}$ quantiles indicated, encompassing the central two-thirds of the data. The known, `true' values that were used to generate the simulated line profiles are marked on these plots. For the sake of comparison, the single best-fitting model was tracked throughout the sampling process and this is also marked on the plots. As expected, the best-fitting model line profiles were virtually identical to the simulated line profiles input into the simulation at the start and so are not presented here. In most instances, the parameters can be tightly constrained. However, there are certain parameters which exhibit a broad posterior probability distribution indicating that the line profile is largely insensitive to variations in this parameter. Dependencies and correlations between the parameters can be observed for some of the parameters, for example the maximum velocity and the density profile (see Fig. \ref{fig_run3}) or the grain radius and the dust mass (see Fig. \ref{fig_run1}). For the majority of cases, the true values lie very close to or inside the most likely ($1\sigma$) regions of the contour plots. Where there are exceptions to this, these can be understood as an insensitivity to a specific parameter on which another parameter is dependent. In particular, where the dust grain radius cannot be determined from the line profile, the dust mass is likely also to be ill-constrained. I discuss the reasons for, and implications of, these results in more detail in Section \ref{scn_discussion}. \subsection{MCMC model of SN 1987A} \label{sscn_87A_results} SN~1987A is an extremely well-studied, nearby CCSN that remains critical to our understanding of the formation and evolution of dust in CCSNe. Spectra of SN~1987A have been taken regularly since its outburst on 23 February 1987 and it is an ideal candidate for line profile modelling with asymmetric optical line profiles exhibited from $\sim$650\,d \citep{Lucy1989}. \begin{figure} \includegraphics[clip = true, trim = 60 190 70 205, width = \linewidth]{figures/SN1987A_HaOI_d714_spectrum.pdf} \caption{A region of the optical spectrum of SN 1987A at 714\,d post-explosion encompassing the H$\alpha$ line and the [O~{\sc i}]\,6300,\,6363\,\AA\ doublet obtained with the CTIO-1.5m telescope in 1989. The spectrum is centered on zero-velocity at $\lambda = 6563$\,\AA. } \label{fig_87A_spectrum} \end{figure} \begin{figure*} \includegraphics[clip = true, trim = 0 0 0 0, width = \linewidth]{figures/87A_Halpha_smooth.pdf} \caption{The full posterior probabilty distribution for the smooth, 5-dimensional model of the SN~1987A H$\alpha$ line at 714\,d as described in Section \ref{sscn_87A_results} (model B). The adopted priors for this model are presented in Table \ref{tb_synthetic_profiles}. The estimated best-fit values from the manual fitting of B16 are marked by the orange cross-hairs. The contours of the 2D distributions represent $0.5\sigma$, $1.0 \sigma$, $1.5 \sigma$ and $2.0 \sigma$ and the dashed, black vertical lines represent the 16$^{\rm th}$, 50$^{\rm th}$ and 84$^{\rm th}$ quantiles for the 1D marginalised probability distributions. } \label{fig_87a_smooth} \end{figure*} I applied the ensemble sampler to the H$\alpha$ and [O~{\sc i}]\,6300,\,6363\,\AA\ lines of SN~1987A at 714\,d post-outburst. A region of the optical spectrum that was obtained with the CTIO-1.5m telescope on 6$^{\rm th}$ February 1989 and includes these lines is presented in Figure \ref{fig_87A_spectrum} \citep{Phillips1990}. The spectrum is available on the CTIO archives. This epoch was selected to revisit due to the high signal-to-noise ratio of the spectrum and the distinct separation of the two features, which is not as clear at later epochs. Additionally, the relative lack of contamination of these broad lines by narrow nebular emission makes this epoch particularly attractive. Both the H$\alpha$ and [O~{\sc i}]\,6300,\,6363\,\AA\ lines have been previously investigated by B16 using {\sc damocles}, who used a systematic, manual approach to determine a best-fitting set of parameters for both clumped and smooth dust density distributions. I sought to compare the best-fitting parameters that they inferred with the results generated by the automated ensemble sampler. Additionally, I was interested to understand whether a more sophisticated model that investigates a significantly higher-dimensional variable parameter space could be explored by employing the ensemble sampler. A grid-based or manual approach would not be feasible for higher-dimensional models and this was a primary consideration in the implementation of a MCMC procedure. I have therefore investigated two models for SN 1987A at 714\,d post-outburst. \begin{figure*} \includegraphics[clip = true, trim = 0 0 0 0, width = \linewidth]{figures/87A_multiline.pdf} \caption{The full posterior probabilty distribution for the clumped, 10-dimensional model of the H$\alpha$ line and [O~{\sc i}]\,6300,\,6363\,\AA\ doublet of SN~1987A at 714\,d as described in Section \ref{sscn_87A_results} (model C). The adopted priors for this model are presented in Table \ref{tb_synthetic_profiles}. The contours of the 2D distributions represent $0.5\sigma$, $1.0 \sigma$, $1.5 \sigma$ and $2.0 \sigma$ and the dashed, black vertical lines represent the 16$^{\rm th}$, 50$^{\rm th}$ and 84$^{\rm th}$ quantiles for the 1D marginalised probability distributions. The best-fitting parameter set from the MCMC run is marked with a magenta circle.} \label{fig_87a_10D} \end{figure*} The first is a 5-dimensional, smooth model that allows for direct comparison with previous results. The H$\alpha$ line is modelled with a spherically-symmetric, smooth shell distribution. The power-law emissivity distribution is coupled to the dust density distribution as described in Section \ref{sscn_results_theoretical} and the same dust properties were used (i.e. 100\% amorphous carbon dust with optical constants from \citet{Zubko1996}). This scenario is the same as that adopted by B16 for their smooth model of the H$\alpha$ line at 714\,d. A five-dimensional parameter space is explored. Uniform priors were adopted for the maximum velocity ($v_{\rm max}$), the minimum velocity ($v_{\rm min}$) and the index of the power-law dust density distribution ($\beta$), and log-uniform priors were adopted for the grain radius ($a$) and the total dust mass ($M_{\rm d}$). Full details of the priors can be found in Table \ref{tb_priors}. The range of the priors was kept as wide as possible in an effort to identify any additional maxima in the posterior distribution and therefore to obtain all possible solutions. In all cases, the best-fitting parameter set identified by B16 lies within the prior range adopted here. The modelled profiles were convolved to the resolution of the spectrum (16\,\AA) before the likelihood was calculate and the region between 440~km~s$^{-1}$ and 1400~km~s$^{-1}$ was excluded from this calculation since it is contaminated by the unresolved, narrow, nebular [N~{\sc ii}]\,6583\AA\ emission. The high signal-to-noise ratio of the spectrum resulted in a negligible observational error as determined by assessing the variance in a flat region of the spectrum. The height of the continuum is a potentially important factor in determining a number of the model properties. However, a preliminary investigation that included the continuum height as a free parameter revealed an insensitivity to the continuum height and so it was fixed at $2.1 \times 10^{-14}$~erg~cm$^{-2}$~s$^{-1}$~$\AA^{-1}$. The results of this model are presented in Figure \ref{fig_87a_smooth} with the best-fitting parameters as identified by B16 marked on the probability distributions for comparison. In all cases, the previous results lie within $1\sigma$ of the marginalised 1D probability distribution and within the 1.5$\sigma$ contour of the 2D joint-probability distributions. This suggests good agreement between the two approaches but, as can be seen, significantly more information is yielded from the full investigation. For example, the results indicate that the steepness of the density distribution does not significantly affect the likelihood. They also highlight the relative insensitivity of the dust mass to all parameters except the grain radius. The predictably strong correlation between grain size and dust mass, as noted by B16, is clear. However, whilst the grain radius has a fairly well-constrained minimum at around 0.05\,$\mu$m, it is not tightly constrained at larger grain sizes and, as such, constraining the dust mass is difficult without further information. The second model for SN~1987A treats both the H$\alpha$ and [O~{\sc i}]\,6300,\,6363\,\AA\ lines simultaneously for the first time. A spherically-symmetric shell-based geometry is once again adopted but, in this more complex scenario, the dust is located entirely in clumps that are stochastically distributed throughout the shell according to a power-law distribution. The clumps all have equal volume equivalent to a single, cubical grid cell in the simulation of width $R_{\rm out}/25$, roughly consistent with what might be expected from Rayleigh-Taylor instabilities in the ejecta. The total volume of the ejecta occupied by dust clumps is described by the filling factor which is varied between 0.1 and 0.7. All species (dust, H$\alpha$ and [O~{\sc i}]) extend to the same maximum velocity, but the minimum velocities for H$\alpha$ and [O~{\sc i}]\,6300,\,6363\,\AA\ are separate, variable parameters. The minimum dust velocity is coupled to that of [O~{\sc i}] since it seems likely that most dust formation is occurring in regions of high metallicity where the constituent ingredients of dust grains are available to condense. All three species follow separate power-law density distributions (with the emissivity following the square of the density as described in Section \ref{sscn_results_theoretical}). Finally, the flux ratio between the 6300\AA\ and 6363\AA\ components of the [O~{\sc i}] doublet is also left as a free parameter. Intrinsically, the flux ratio is fixed. However, whilst it is assumed that the gas is optically thin, it is possible that there remain some gas optical depth effects that could influence this ratio and it is therefore included for clarity. This yields a total of 10 variable parameters which are summarised, along with the adopted priors for each parameter, in Table \ref{tb_priors}. The ranges of the adopted priors were motivated by the previous results of B16 and the results from the previous smooth 5D model. For certain parameters (the dust mass and grain radius), the range was restricted slightly relative to the 5D simulation, without significant loss of information, in order to speed up the calculation. Physical factors also dictated the adopted ranges, e.g. the flux ratio $F_{\rm 6300}/F_{\rm 6363}$, which was capped at 3.3 since the theoretical value for an optically thin medium is 3.1 \citep{Storey2000}, and the filling factor $f$, which must clearly be in the range $[0,1]$. The likelihood was calculated as per Equation \ref{eqn_likelihood} but, for these purposes, the [O~{\sc i}]\,6300,\,6363\,\AA\ was scaled to the same peak flux as the H$\alpha$ line in order to ensure that both features were weighted equally. This aside, the adopted procedure for this model was identical to that of model B. The results of this 10D simulation are presented in Figure \ref{fig_87a_10D}. A significant quantity of information is contained in this figure but it is of particular interest to note that the majority of parameters have been constrained and follow a distribution with a single peak. The probability distribution peaks at an extreme of the range in the cases of the flux ratio $F_{\rm 6300}/F_{\rm 6363}$, the filling factor $f$ and the clump number density distribution which is specified by $\beta_{\rm clump}$. The line profile is not highly sensitive to the density distribution of any species but the minimum and maximum velocities can be restricted to a relatively narrow range, regardless of the values of the other parameters. Of most interest however, is the strongly-peaked marginalised 1D probability distribution for the grain radius suggesting a large grain radius of the order of $\sim0.2\,\mu$m. This has allowed the dust mass to be similarly constrained with the marginalised probability distribution yielding a $1\sigma$ range spanning only one order of magnitude. The best-fitting parameter set is marked in Figure \ref{fig_87a_10D} for comparison and the corresponding line profile is presented in Figure \ref{fig_10D_bestfit} for the purposes of illustrating the goodness-of-fit. I discuss these results further in the context of dust formation in SN~1987A and other CCSNe in Section \ref{scn_discussion}. \section{Discussion} \label{scn_discussion} \subsection{Theoretical models} \label{sscn_theoretical_discussion} Of primary interest in investigating this approach to modelling asymmetric line profiles is whether the Bayesian methodology offers additional insight or rigour in comparison to manual or grid-based frequentist fitting. \begin{figure} \includegraphics[clip = true, trim = 40 0 60 20, width = \linewidth]{figures/10D_bestfit.pdf} \caption{\textit{Blue:} The CTIO spectrum of SN~1987A at 714\,d encompassing the H$\alpha$ line and [O~{\sc i}]\,6300,\,6363\,\AA\ doublet. \textit{Red}: The best-fitting model from the 10-dimensional MCMC run (model C). Vertical, dashed, black lines indicate (left to right) zero-velocity for $\lambda = 6300$\,\AA, 6363\,\AA\ and 6563\,\AA. } \label{fig_10D_bestfit} \end{figure} \begin{figure} \includegraphics[clip = true, trim = 20 0 40 20, width = \linewidth]{figures/grain_size_vs_Qabs.pdf} \caption{The relationship between the quantity $a/Q_{\rm abs}$ and $a$ at fixed $\lambda = 6563$\,\AA\ as calculated for amorphous carbon using Mie theory, where $a$ is the grain radius and $Q_{\rm abs}$ is the absorption efficiency. $a/Q_{\rm abs}$ is proportional to the dust mass for fixed optical depth, dust density and ejecta size. This relationship is therefore an approximation to the more complex dependency between $a$ and $M_{\rm d}$ seen in the posterior distributions in Figures \ref{fig_run1} to \ref{fig_run4}.} \label{fig_a_vs_q} \end{figure} The approach was initially tested against four simulated example line profiles that had been generated using {\sc damocles} and that exhibited different shapes and features (models A1~--~A4, see Figure \ref{fig_sim_line_profiles}). The posterior probability distribution for each of these models is presented in Figures \ref{fig_run1} to \ref{fig_run4} with the true parameters used to generate the profiles and the best-fitting parameter set from the chain indicated on the plots. A good test that the Bayesian calculation is being performed correctly is that the best-fitting model from the MCMC chain is, in nearly all cases, in broad agreement with the true values. The resulting posterior successfully characterises the likelihood of a given parameter over a range of values and exhibits a single-peaked marginalised 1D probability distribution for most parameters, as well as revealing dependencies and correlations between parameters. The known, true values generally lie within the $1\sigma$ contour as would be expected. Below, I discuss a number of interesting results from the simulated line profile modelling. \subsubsection{Maximum velocity, $v_{\rm max}$} The maximum velocity of an emitting species is generally inferred from a line profile as the velocity at which the flux on the blue side of the profile goes to zero. In the case of models A2 and A4, this approach would yield a reasonable estimate of the maximum velocity in agreement with the posterior distribution, which tightly constrains the value of the maximum velocity in both cases. However, models A1 and A3 (see Figure \ref{fig_run1} and \ref{fig_run3}) illustrate that it is not necessarily straightforward to determine the maximum velocity by eye. The transition between the blue wing of the emission line and the continuum (at zero flux in this theoretical scenario) is smooth for this steep emissivity distribution which makes it difficult to determine the exact velocity at which the transition occurs. This issue is exacerbated for real observations where noise further obscures the inflection point. The full Bayesian calculation illuminates the relative likelihoods of different maximum velocities, considered both in isolation from the other parameters (via the 1D marginalised posterior) and jointly with the other parameters. It highlights, for example, the postive correlation between the maximum velocity and the emissivity distribution ($i \propto r^{-2\beta}$). Similarly, for model A1 (Figure \ref{fig_run1}), the relationship between grain radius and the maximum velocity is clear. This can be interpreted as due to the fact that, for amorphous carbon grains, the single-scattering albedo is a monotonically increasing function of grain radius for fixed $\lambda = 6563$\AA. This results in a red scattering wing extending to higher velocities than those observed on the blue side. This feature of the line profile could be approximated, for small grains, by adopting larger maximum velocities. This relationship is clearly revealed by the Bayesian calculation whilst the results still prefer the correct grain radius of $\sim 0.1\mu$m and maximum velocity of approximately $\sim3250 - 3750$\,km~s$^{-1}$. Determining the maximum velocity accurately is particularly important since it determines the size of the ejecta and therefore has a significant effect on the overall dust optical depth to which radiation is exposed for a given dust mass. The co-dependence of the maximum velocity with several of the other parameters is handled rigorously by the ensemble sampler and can be easily quantified and communicated via the posterior distribution. \subsubsection{Minimum velocity, $v_{\rm min}$} The range of viable values for the minimum velocity can be very narrowly constrained in all optically thin cases (i.e. A1~--~A3). It is only when the dust becomes significantly optically thick that the minimum velocity becomes harder to determine. In practice, it is normally the case that the blue-shifted peak flux is coincident with the minimum velocity of the emitting ion. In this case, an asymmetry is observed as a result of absorption in the central regions causing an intrinsically flat-topped, boxy profile (as is produced by an expanding shell) to peak sharply at the blue `corner' of the flat top. A secondary peak coincident with minimum velocity on the red side is also a possibility \citep[B16,][]{Bevan2017}. These peaks are important for determining the minimum velocity. Where dust optical depths are high enough that the peak flux shifts beyond the minimum velocity (see simulated profile A4, lower right panel of Figure \ref{fig_sim_line_profiles}), there is less information available in the profile to constrain the minimum velocity. The results for A4 (Figure \ref{fig_run4}) suggest that the minimum velocity cannot exceed $\sim$2000\,km\,s$^{-1}$, presumably because the profile would become too wide, but yield similar likelihoods for all $v_{\rm min}<2000$\,km\,s$^{-1}$ with a broad peak centered around $\sim 1600$~km~s$^{-1}$. \subsubsection{Density distribution index, $\beta$} The steepness of the density distribution (and hence also the emissivity distribution) is not tightly bound for any of the simulated lines. However, this is most noticeable in run A2, where there is only a little variation in the likelihood of $\beta$ across the full range explored. The width of an intrinsic flat-top profile at its peak is determined by the minimum velocity but the shape of the wings is determined (for homologous expansion) by the steepness of the power-law emissivity distribution. Where this is steeper, the profile appears more concave in its wings. In A3, only a small fraction of the profile is in the wings, with the majority of the width of the profile arising from the intrinsically flat-topped region. As a result, there is limited information in the profile to allow $\beta$ to be determined. However, the other parameters can still be reasonably estimated by marginalising over $\beta$ since they are not strongly dependent on it. This is a good illustration of the fact that, under certain conditions, observed line profiles will not contain sufficient information to determine some of parameters of interest. Even in this case, however, it may be possible to constrain the other parameters by marginalising over these less sensitive parameters. The full posterior distribution clarifies the sensitivity of the line profile to the variable parameters. \subsubsection{Grain radius, $a$, and dust mass, $M_{\rm d}$} The strong correlation between grain radius and dust mass is recovered by the Bayesian approach. The absorption and scattering efficiencies of dust grains of any species depend strongly on the grain radius, as does the cross-sectional area available for interaction. As a result, there is a strong relationship between the opacity and the grain radius, and hence also the required dust mass and the grain radius. By making a number of assumptions, the relationship between dust mass and grain radius can be determined analytically for a simplified version of the scenarios modelled in A1~--~A4. bf We can then compare the 2D marginalised likelihood distributions for dust mass and grain radius to this analytic relationship as a test of the Bayesian approach. We consider the dust number density to be independent of radius ( which is not the case for models A1~--~A4). The dust optical depth at a given wavelength is then $\tau_{\rm d} = Q_{\rm ext} \pi a^2 n_{\rm d} R$, where $Q_{\rm ext}$ is the extinction efficiency, $n_{\rm d}$ is the dust number density and $R$ is the distance to be traversed by the photon. We also have the total dust mass described by $M_{\rm d} = Vn_{\rm d} \frac{4\pi a^3}{3}\rho_{\rm g}$, where $V$ is the volume of the ejecta, $\rho_{\rm g}$ is the mass density of a dust grain and other parameters are as previously defined. We require a specific dust optical depth in order to reproduce a line profile. If we additionally assume that the physical extent of the ejecta is fixed and that the dust is entirely absorbing (such that $Q_{\rm ext} = Q_{\rm abs}$, the absorption efficiency), then we can conclude that $M_{\rm d} \propto a/Q_{\rm abs}$. At $\lambda = 6563$\AA, H$\alpha$, we can determine the exact correlation between grain radius $a$ and $a/Q_{\rm abs}$ using Mie theory. The resulting relationship for amorphous carbon grains (presented in Figure \ref{fig_a_vs_q}) is echoed in the joint 2D posterior distribution of $\log a$ and $\log M_{\rm d}$ in all theoretical runs as expected (A1\,--\,A4; see Figures \ref{fig_run1} to \ref{fig_run4}). However, the 2D likelihood distributions marginalise over the other parameters and therefore deviate from the analytic relationship derived above. Deviations from this relationship are due to, for example, the polychromatic nature of the transported packets, a dust number density that is non-constant with radius, a significant scattering component to the extinction etc. Silicate grains would be expected to follow a different, more complicated relationship. The dependency between grain radius and dust mass has significant implications for determining the ejecta-condensed dust mass via line profile fitting and has been discussed in detail by B16 and \citet{Bevan2017}. By quantifying the posterior distribution, the exact relationship between these two parameters can be understood by marginalising over the other parameters of interest. If further information can be obtained that would allow the grain radius to be constrained, then the joint probability distribution described by the posterior dictates the required dust mass for a given model. This grain radius could be estimated from dust emission features or from other techniques such as SED fitting. The approach used here could also be expanded to include multiple optical or NIR emission lines at a given epoch in order to exploit the wavelength dependence of dust extinction and hence constrain the dust grain radius simultaneously with the other parameters (see Section \ref{sscn_87A_discussion}). One further implication of the dependency of dust mass on grain radius is that, unless the grain radius can be constrained reasonably tightly, the 1D marginalised dust mass probability distribution will tend towards a specific peak. This is because there is a wide range of dust grain radii that all yield similar values of $a/Q_{\rm abs}$. Since the marginalised probability distribution is integrated over the whole prior range of all other parameters, a narrow band of dust masses will naturally be preferred. Care should be taken to ensure that the grain radius has been accurately constrained before inferring the dust mass from the posterior. It is worth noting that, whilst emphasis is often placed on determining the mass of dust that has formed in the ejecta of CCSNe, and this is therefore naively the most interesting parameter, determining the dust grain radius is critically important in its own right. In order to determine how much dust CCSNe can eject into the ISM, we must not only understand how much is formed in the ejecta but also how much is destroyed by shocks, and in particular by the reverse shock that will inevitably pass back through the newly-formed dust \citep{Temim2015,Bocchio2016,Dwek2016}. The rate of destruction of dust grains by sputtering in shocks is independent of the size of the grain and, as such, the initial size of the dust grain is critical to understanding whether or not it will survive into the ISM or will eventually be destroyed \citep{Barlow1978}. \subsection{Application to SN 1987A at 714\,d} \label{sscn_87A_discussion} The need to isolate the grain radius motivated the production of two different models of SN~1987A, one significantly more complex that the other. The initial smooth model in five dimensions (model B) couples the H$\alpha$ emissivity distribution with the dust density distribution and is analogous to the models of H$\alpha$ produced by B16. Their results are indicated on the posterior distribution which is presented in Figure \ref{fig_87a_smooth}. They are generally in good agreement with the results produced by the ensemble sampler. However, additional insight is gained into the range of viable values for the maximum and minimum velocities, with the most likely regions of parameter space leaning towards a slightly lower maximum velocity at 3000\,km\,s$^{-1}$ (compared to the B16 estimate of $\sim$3250\,km\,s$^{-1}$ ) and a slightly higher minimum velocity at $\sim$900\,km\,s$^{-1}$ (compared to the B16 estimate of 813\,km\,s$^{-1}$). The steepness of the emissivity distribution is not tightly established but does not affect the ability of the sampler to constrain the other parameters. Of most interest, however, is the estimation of the dust grain radius and the dust mass. As previously discussed, there is a strong correlation between these parameters. Since there is a wide range of small dust grain radii that result in similar dust mass estimates, there is a peak in the marginalised dust mass probability distribution that suggests a dust mass of $\sim$10$^{-6}$\,M$_{\odot}$. We can infer that this is likely the case if only small dust grains are present in the ejecta. However, the dust grain radius is not tightly constrained by the smooth fitting, with a wide range of values $>$0.15\,$\mu$m yielding similar probabilities. Model C is significantly more detailed and includes additional variable parameters resulting in a 10-dimensional parameter space. All of the dust is located in clumps. This is a more realistic dust distribution than a smooth radial power-law; dust has been observed to be located in clumpy or filamentary structures in a variety of different CCSNe and remnants \citep{Barlow2010,Gomez2012,Temim2012}. In addition to a higher-dimensional parameter space, the more complex model also treats both the H$\alpha$ line and the [O~{\sc i}]\,6300,\,6363\,\AA\ doublet simultaneously. By providing the sampler with more data, and in particular two lines separated in wavelength space, the grain radius can be reasonably constrained. The median grain radius is $\sim 0.2\,\mu$m, with a maximum at $1\sigma$ of $\sim 1\,\mu$m. This yields a dust mass that is constrained to within one order of magnitude, with a median dust mass of $\sim 4.5 \times 10^{-5}$\,M$_{\odot}$ and a maximum dust mass at $1\sigma$ of $\sim 1.5 \times 10^{-4}$\,M$_{\odot}$. These estimates are very similar to the separate H$\alpha$ ($5.5 \times 10^{-5}$\,M$_{\odot}$) and [O~{\sc i}]\,6300,\,6363\,\AA\ ($2.0 \times 10^{-4}$\,M$_{\odot}$) estimates by B16 for a grain radius of $0.6\mu$m. However, they are somewhat lower than the dust mass estimates inferred from radiative transfer models of the SED of SN~1987A presented by \citet{Wesson2015} for this epoch, who deduce a dust mass of $1.0 \times 10^{-3}$\,M$_{\odot}$ at 615\,d post-outburst. This discrepancy may be a result of their adoption of an MRN dust grain radius distribution ($n(a) \propto a^{-3.5}$ for $0.005<a<0.25$, \citet{Mathis1977}) or the assumption here of a single grain size. This more complex model, which is clearly still a simplification of a highly complicated reality, yields considerable insight into the relative likelihoods of the velocity distributions of the different species and the mass and grain radius of the dust in the ejecta. Additionally, however, we also gain insight into other properties of the geometry of the nebula at this epoch. The results suggest that the clumps are all likely concentrated towards the central regions (high $\beta_{\rm clump}$) and occupy only a small fraction of the total volume of the ejecta (low $f$). Similarly, they indicate that the [O~{\sc i}] is also concentrated towards the central regions (median $\beta_{\rm [OI]}$ of 2.59) with the hydrogen more diffusely distributed (median $\beta_{{\rm H}\alpha}$ of 1.18). This suggests a geometry that would be consistent with observations of SN~1987A obtained by \citet{Abellan2017} using the Atacama Large Millimeter Array (ALMA). These spatially-resolved observations of IR lines of CO and SiO reveal that both species are concentrated in the inner ejecta and occupy a clumpy distribution, suggesting that the heavier elements, and in particular oxygen, are likely located in these central regions. The results would also be consistent with the structures and geometries predicted by hydrodynamic explosion models of CCSNe \citep{Hammer2010,Wongwathanarat2015}. These models predict that, at very early times, only a few seconds after the explosion, the heavier elements are mostly located within the central regions of the ejecta with small clumps of fast-moving material escaping at higher velocities. A more expansive, more diffuse hydrogen envelope is also present. Once homologous expansion has set in, the geometry will remain self-similar for many hundreds of years, assuming that there is no encounter with significantly dense circumstellar material, and so it may not be unreasonable to compare these results. \subsubsection*{} Supernovae and supernova remnants are highly complex objects. I have not included in my models different dust species, nor dust grain size distributions, and I have also restricted my investigations to geometries that are, with the exception of a stochastically generated dust clump distribution in one case, spherically symmetric. These are important factors that should be explored in future work. Similarly, I have explored only a few particular models. The results of these analyses do not make comment on the validity of the model itself, rather on the relative likelihoods of the parameters given that particular model. Care should be taken in the future to assess the applicability of a given model and whether dust formation represents the most likely explanation for the observed properties of a given line profile. This had already been established from previous work in this case (B16). The application of a Bayesian procedure may prove useful in this regard also since it lends itself well to quantified model comparison. However, the above results illustrate the overall power of this methodology to constrain parameters, identify the parameters to which the line profile is sensitive and characterise dependencies between the parameters. Most importantly, I am able to investigate and analyse highly complex models for which manual parameter estimation would be extremely difficult. \section{Conclusions} \label{conclusions} I have applied an affine invariant ensemble sampler to the Monte Carlo radiative transfer code {\sc damocles} in order to explore the variable parameter space in a rigorous fashion and apply a Bayesian methodology to the inference of conclusions from the data. I have utilised the algorithm presented by \citet{Goodman2010} and implemented by \cite{emcee} in order to create a Fortran-Python hybrid code that is capable of fitting dust-affected optical and NIR line profiles from CCSNe at late stages in their evolution in order to construct a posterior probability distribution. The code was applied to four different simulated line profiles that were generated by {\sc damocles} in order to represent different sorts of dust-affected line profiles that are observed in the spectra of late-time dust-forming CCSNe. A smoothly distributed, spherically symmetric geometry was adopted and five variable parameters investigated. The posterior distributions are in good agreement with the known, true parameters and suggest that the methodology is accurate and effective for parameter estimation. The theoretical runs highlight a number of dependencies between specific parameters. The power of the Bayesian inferential approach in revealing and quantifying these dependencies is beneficial for future research using this methodology, but also illustrates the need for care when using line profile fitting (or indeed any other method) to estimate model parameters from observations. I also revisited the H$\alpha$ line and [O~{\sc i}]\,6300,\,6363\,\AA\ doublet of SN~1987A at 714\,d. A simple model with five variable parameters analogous to the smooth model of H$\alpha$ investigated by B16 was initially adopted. I also investigated a significantly more complex model in 10-dimensional parameter space that treated both H$\alpha$ and [O~{\sc i}]\,6300,\,6363\,\AA\ simultaneously. The dust mass and dust grain radius predictions are in agreement with the previous manual approach but their relative likelihood is now quantified, as is their dependence on other parameters. The affine invariant ensemble sampler has proved to be an efficient and effective method to investigate and analyse highly complex models for which manual parameter estimation would be extremely difficult. The Bayesian methodology allows for considerably more insight to be gained and communicated than the previous manual approach and there is significant potential for using this approach to determine accurate ejecta dust masses for a large number of CCSNe. \section*{Acknowledgements} AB would like to thank Dr Boris Leistedt for his patient guidance and teaching on astrostatistics and Bayesian methodologies, as well as Dr Roger Wesson, Dr Ilse de Looze and Professor Mike Barlow for many discussions and their help in readying this paper for publication. Many thanks also to the anonymous referee for their helpful suggestions. This work was supported by European Research Council (ERC) Advanced Grant SNDUST 694520 and is based on publicly available observations from the archives of the CTIO.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} This document is a template for \LaTeXe. If you are reading a paper or PDF version of this document, please download the electronic file \texttt{ifacconf.tex}. You will also need the class file \texttt{ifacconf.cls}. Both files are available on the IFAC web site. Please stick to the format defined by the \texttt{ifacconf} class, and do not change the margins or the general layout of the paper. It is especially important that you do not put any running header/footer or page number in the submitted paper.\footnote{ This is the default for the provided class file.} Use \emph{italics} for emphasis; do not underline. Page limits may vary from conference to conference. Please observe the page limits of the event for which your paper is intended. \section{Procedure for Paper Submission} Next we see a few subsections. \subsection{Review Stage} For submission guidelines, follow instructions on paper submission system as well as the event website. Note that conferences impose strict page limits, so it will be better for you to prepare your initial submission in the camera ready layout so that you will have a good estimate for the paper length. Additionally, the effort required for final submission will be minimal. \subsection{Equations} Some words might be appropriate describing equation~(\ref{eq:sample}), if we had but time and space enough. \begin{equation} \label{eq:sample} {{\partial F}\over {\partial t}} = D{{\partial^2 F}\over {\partial x^2}}. \end{equation} See \cite{Abl:56}, \cite{AbTaRu:54}, \cite{Keo:58} and \cite{Pow:85}. \subsubsection{Example.} This equation goes far beyond the celebrated theorem ascribed to the great Pythagoras by his followers. \begin{thm} The square of the length of the hypotenuse of a right triangle equals the sum of the squares of the lengths of the other two sides. \end{thm} \begin{pf} The square of the length of the hypotenuse of a right triangle equals the sum of the squares of the lengths of the other two sides. \end{pf} Of course LaTeX manages equations through built-in macros. You may wish to use the \texttt{amstex} package for enhanced math capabilities. \subsection{Figures} To insert figures, use the \texttt{graphicx} package. Although other graphics packages can also be used, \texttt{graphicx} is simpler to use. See Fig.~\ref{fig:bifurcation} for an example. \begin{figure} \begin{center} \includegraphics[width=8.4cm]{bifurcation} \caption{Bifurcation: Plot of local maxima of $x$ with damping $a$ decreasing} \label{fig:bifurcation} \end{center} \end{figure} Figures must be centered, and have a caption at the bottom. \subsection{Tables} Tables must be centered and have a caption above them, numbered with Arabic numerals. See table~\ref{tb:margins} for an example. \begin{table}[hb] \begin{center} \caption{Margin settings}\label{tb:margins} \begin{tabular}{cccc} Page & Top & Bottom & Left/Right \\\hline First & 3.5 & 2.5 & 1.5 \\ Rest & 2.5 & 2.5 & 1.5 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Final Stage} Authors are expected to mind the margins diligently. Papers need to be stamped with event data and paginated for inclusion in the proceedings. If your manuscript bleeds into margins, you will be required to resubmit and delay the proceedings preparation in the process. \subsubsection{Page margins.} See table~\ref{tb:margins} for the page margins specification. All dimensions are in \emph{centimeters}. \subsection{PDF Creation} All fonts must be embedded/subsetted in the PDF file. Use one of the following tools to produce a good quality PDF file: \subsubsection{PDFLaTeX} is a special version of LaTeX by Han The Thanh which produces PDF output directly using Type-1 fonts instead of the standard \texttt{dvi} file. It accepts figures in JPEG, PNG, and PDF formats, but not PostScript. Encapsulated PostScript figures can be converted to PDF with the \texttt{epstopdf} tool or with Adobe Acrobat Distiller. \subsubsection{Generating PDF from PostScript} is the classical way of producing PDF files from LaTeX. The steps are: \begin{enumerate} \item Produce a \texttt{dvi} file by running \texttt{latex} twice. \item Produce a PostScript (\texttt{ps}) file with \texttt{dvips}. \item Produce a PDF file with \texttt{ps2pdf} or Adobe Acrobat Distiller. \end{enumerate} \subsection{Copyright Form} IFAC will put in place an electronic copyright transfer system in due course. Please \emph{do not} send copyright forms by mail or fax. More information on this will be made available on IFAC website. \section{Units} Use SI as primary units. Other units may be used as secondary units (in parentheses). This applies to papers in data storage. For example, write ``$15\,\mathrm{Gb}/\mathrm{cm}^2$ ($100\,\mathrm{Gb}/\mathrm{in}^2$)''. An exception is when English units are used as identifiers in trade, such as ``3.5 in disk drive''. Avoid combining SI and other units, such as current in amperes and magnetic field in oersteds. This often leads to confusion because equations do not balance dimensionally. If you must use mixed units, clearly state the units for each quantity in an equation. The SI unit for magnetic field strength $\mathbf{H}$ is $\mathrm{A}/\mathrm{m}$. However, if you wish to use units of $\mathrm{T}$, either refer to magnetic flux density $\mathbf{B}$ or magnetic field strength symbolized as $\mu_0\,\mathbf{H}$. Use the center dot to separate compound units, e.g., ``$\mathrm{A} \cdot \mathrm{m}^2$''. \section{Helpful Hints} \subsection{Figures and Tables} Figure axis labels are often a source of confusion. Use words rather than symbols. As an example, write the quantity ``Magnetization'', or ``Magnetization M'', not just ``M''. Put units in parentheses. Do not label axes only with units. For example, write ``Magnetization ($\mathrm{A}/\mathrm{m}$)'' or ``Magnetization ($\mathrm{A} \mathrm{m}^{-1}$)'', not just ``$\mathrm{A}/\mathrm{m}$''. Do not label axes with a ratio of quantities and units. For example, write ``Temperature ($\mathrm{K}$)'', not ``$\mbox{Temperature}/\mathrm{K}$''. Multipliers can be especially confusing. Write ``Magnetization ($\mathrm{kA}/\mathrm{m}$)'' or ``Magnetization ($10^3 \mathrm{A}/\mathrm{m}$)''. Do not write ``Magnetization $(\mathrm{A}/\mathrm{m}) \times 1000$'' because the reader would not know whether the axis label means $16000\,\mathrm{A}/\mathrm{m}$ or $0.016\,\mathrm{A}/\mathrm{m}$. \subsection{References} Use Harvard style references (see at the end of this document). With \LaTeX, you can process an external bibliography database using \texttt{bibtex},\footnote{In this case you will also need the \texttt{ifacconf.bst} file, which is part of the \texttt{ifaconf} package.} or insert it directly into the reference section. Footnotes should be avoided as far as possible. Please note that the references at the end of this document are in the preferred referencing style. Papers that have not been published should be cited as ``unpublished''. Capitalize only the first word in a paper title, except for proper nouns and element symbols. \subsection{Abbreviations and Acronyms} Define abbreviations and acronyms the first time they are used in the text, even after they have already been defined in the abstract. Abbreviations such as IFAC, SI, ac, and dc do not have to be defined. Abbreviations that incorporate periods should not have spaces: write ``C.N.R.S.'', not ``C. N. R. S.'' Do not use abbreviations in the title unless they are unavoidable (for example, ``IFAC'' in the title of this article). \subsection{Equations} Number equations consecutively with equation numbers in parentheses flush with the right margin, as in (\ref{eq:sample}). To make your equations more compact, you may use the solidus ($/$), the $\exp$ function, or appropriate exponents. Use parentheses to avoid ambiguities in denominators. Punctuate equations when they are part of a sentence, as in \begin{equation} \label{eq:sample2} \begin{array}{ll} \int_0^{r_2} & F (r, \varphi ) dr d\varphi = [\sigma r_2 / (2 \mu_0 )] \\ & \cdot \int_0^{\inf} exp(-\lambda |z_j - z_i |) \lambda^{-1} J_1 (\lambda r_2 ) J_0 (\lambda r_i ) d\lambda \end{array} \end{equation} Be sure that the symbols in your equation have been defined before the equation appears or immediately following. Italicize symbols ($T$ might refer to temperature, but T is the unit tesla). Refer to ``(\ref{eq:sample})'', not ``Eq. (\ref{eq:sample})'' or ``equation (\ref{eq:sample})'', except at the beginning of a sentence: ``Equation (\ref{eq:sample}) is \ldots''. \subsection{Other Recommendations} Use one space after periods and colons. Hyphenate complex modifiers: ``zero-field-cooled magnetization''. Avoid dangling participles, such as, ``Using (1), the potential was calculated'' (it is not clear who or what used (1)). Write instead: ``The potential was calculated by using (1)'', or ``Using (1), we calculated the potential''. A parenthetical statement at the end of a sentence is punctuated outside of the closing parenthesis (like this). (A parenthetical sentence is punctuated within the parentheses.) Avoid contractions; for example, write ``do not'' instead of ``don' t''. The serial comma is preferred: ``A, B, and C'' instead of ``A, B and C''. \section{Conclusion} A conclusion section is not required. Although a conclusion may review the main points of the paper, do not replicate the abstract as the conclusion. A conclusion might elaborate on the importance of the work or suggest applications and extensions. \begin{ack} Place acknowledgments here. \end{ack} \section{Introduction} Innovative solutions in many industries require lighter, more durable, and often, consequently, flexible materials \citep{Saadat2002IndustrialApplications}. Applying standard solutions from rigid object manipulation to objects made from novel flexible materials lead to large vibrations. Existing feedback solutions require accurate sensing of the vibrations using an additional sensors and complex analytical or data-driven models. On the other hand, existing feedforward solutions increase the task execution time \citep{Singer1990ishaping}. Therefore, the industry can substantially benefit from new effective, yet simple solutions for flexible object handling. In this paper we address the general problem of manipulating a flexible beam with a rigid robot arm \citep{Kapsalas2018ARXbeam}. We focus on solutions that do not use exteroceptive sensors for sensing vibrations of the beam -- such as external force-torque sensors at the end-effector or position tracking system -- only a joint torque estimator, available in the manipulator software, is used. Recently \cite{mamedov2022OBH} showed that using simple pendulum approximation of the beam and trajectory optimization, they can handle flexible objects better than existing methods. However, some residual vibration were still present. Assuming that the beam handling is repetitive, this paper extends the work by \cite{mamedov2022OBH} and investigates whether vibrations can be further reduced by \ac{ILC}. A typical ILC algorithm uses the output error of the current task execution to update the input of the next run \citep{Bristow2006}. Robotic manipulators have been a common application for such learning techniques since its first mention \citep{Arimoto1984} to more recent advances \citep{Koc2019}. Generating a feasible input for robot manipulators with \ac{ILC} requires the algorithm to cope with nonlinear dynamics and hard joint constraints. \cite{Wang2018} used a filter-based \ac{ILC} with linearized model that demands a robust $\mathcal{H}_{\infty}$ design to account for such approximation. \cite{Steinhauser2017} obtained feasible trajectories with an optimization-based ILC formulation where the the nonlinear dynamics and joint constraints were directly accounted for. In this paper we adopt a similar optimization-based strategy. Specifically, the problem at hand requires designing a \ac{PTP} trajectory for the manipulator which does not result in residual vibrations of the beam. Several \ac{ILC} techniques for optimizing \ac{PTP} trajectories are available, e.g. \cite{Freeman2011}, \cite{Son2013}, however they do not consider residual vibrations after motion. In contrast, \cite{VanDeWijdeven2008} proposed a vibration suppression \ac{ILC} that is, however, based on a predefined trajectory. Nonetheless, their method accounts for residual vibrations by formulating the problem with a separate control and prediction horizon similar to the proposed \ac{ILC}. This paper proposes a vibration suppression \ac{ILC} for flexible object handling with a robot manipulator. The approach exploits the generic formulation from \cite{Volckaert2013} of an explicit learning and control steps, shown to be equivalent to a norm-optimal \ac{ILC}. The learning step consists of two estimation problems: the first, to learn a simple yet effective parametric model that approximates the flexible beam and considers the nonlinear kinematics of the robot manipulator; the second, to learn an equivalent output disturbance to account for the residual dynamics. Finally, in the control step, we formulate a vibration suppression \ac{OCP} for \ac{PTP} motions that exploits the learned dynamics and accounts for input and joints limits. Namely, we make the following contributions: \begin{itemize} \item a measurement model for the external torque induced by a flexible object at the end-effector that accounts for the estimation error of the external torque provided by the manipulator software; \item a generalization of \ac{OCP} formulation from \citep{mamedov2022OBH} that leverages the learned residual dynamics and exploit a time-optimal-like formulation to induce zero residual vibration \item experimental validation of the \ac{ILC} scheme. \end{itemize} This paper is organized as follows: Section \ref{sct:modeling} addresses the modeling of the robot arm, beam and external torque sensing. Section \ref{sct:ilc} discusses the proposed \ac{ILC} algorithm. Section \ref{sct:experiments} presents experimental results, followed by a discussion. Section \ref{sct:disc_conc} concludes the paper \section{Modeling} \label{sct:modeling} The vibration suppression \ac{OCP} \ac{PTP} motion controller requires a system model. Flexible objects are infinite dimensional systems; they are accurately modeled by partial differential equations (PDE) that are computationally demanding to solve and are seldom used in control and trajectory optimization. In robotics, for computationally tractable modeling of flexible objects, researchers make simplifying assumptions to convert PDE to ordinary differential equations \citep{Sakawa1985, Zhou2002NonlinearIsh}. The model parameters in the above-mentioned methods are obtained from CAD models because otherwise, in practice, it is difficult to estimate them. Data-driven methods approach modeling beam dynamics differently; they infer the model structure from data \citep{Kapsalas2018ARXbeam}. For modeling the beam we adapt the simple lumped modeling approach from \cite{mamedov2022OBH} and briefly describe it in this section for completeness. \subsection{Manipulator dynamics} For a robot arm with $n_\mathrm{dof}$ degrees of freedom ($\mathrm{dof}$), let $\boldsymbol q \in \mathbb{R}^{n_\mathrm{dof}}$ be the vector of joint positions and assume that: \begin{assumption}\label{as:arm_double_int} \label{as:arm_model} The robot joint controller can accurately track the given joint reference trajectories. \end{assumption} Then, a double integrator model suffices to accurately describes the manipulators dynamics: \begin{align} \label{eq:kin_model} \ddot{\boldsymbol q} = \boldsymbol u, \end{align} where $\ddot{\boldsymbol q} \in \mathbb{R}^{n_\mathrm{dof}}$ is the vector of joint accelerations, and $\boldsymbol u \in \mathbb{R}^{n_\mathrm{dof}}$ is the vector of inputs (reference joint accelerations). \subsection{Beam dynamics on the end-effector} \begin{figure} \centering \includegraphics[width=\linewidth]{images/lumping.pdf} \caption{Approximation of a beam attached to the end-effector of a robot arm with a simple pendulum of length $l$ and a lumped mass $m$ connected to the end-effector through a passive revolute joint with stiffness $k$ and damping $c$.} \label{fig:modeling_assump} \end{figure} For modeling the beam dynamics manipulated by a robot arm, we make another critical and simplifying assumption: \begin{assumption} \label{as:beam_model} The beam can be approximated by a simple pendulum of mass $m$ and lenght $l$ connected to the end-effector of a robot arm through passive revolute joint with stiffness $k$ and damping $c$, as shown in Fig. \ref{fig:modeling_assump}. \end{assumption} By making such assumption, we consider only the first natural frequency of a beam and only the lateral vibrations. To derive the pendulum dynamics using the Lagrange formulation \cite[Ch. 7]{sciavicco2001book}, let $\bm p_{m}^0 \in \mathbb{R}^{3}$ denote the position of the pendulum mass $m$ in the robot's base frame \begin{align} \label{eq:mass_pos} \bm p_{m}^0 (\bm q, \theta) = \bm p_{b}^{0}(\bm q) + l \bm R_{b}^{0} (\bm q) \bm R_z(\theta) {\bm i}, % \end{align} where $[\bm p_{b}^{0}\ \bm R_{b}^{0}] = \mathrm{fk}(\bm q)$ are the position and the orientation of the origin of frame $\{b\}$, connected to the end-effector, in base frame $\{0\}$, respectively and are obtained from the forward kinematics of the manipulator, $\bm R_z(\theta) \in \mathrm{SO(3)}$ is a rotation matrix around $Z_b$ axis, $\theta$ is the angular position of the pendulum and ${\bm i} = [1\ 0\ 0]^\top$ is a unit vector. From now on, we drop superscript $({}^0)$ and explicit dependence of variables on joint positions $\bm q$ and velocities $\dot{\bm q}$ for convenience. Using \eqref{eq:mass_pos} and its time derivative, it is possible to formulate the Lagrangian. Finally, applying the Lagrange equations leads the final expression for the pendulum dynamics \begin{equation}\label{eq:pend_dynamics} \begin{split} \ddot \theta =& ~f_p(\bm q,\, \dot{\bm q},\, \theta,\, \dot \theta ) \\ = & -\frac{1}{m\,l^2} ~\big(c \dot \theta+ k \theta\big) + \frac{1}{l}{\bm i}^\top \frac{d \bm R_z(\theta)}{d \theta}^\top \bm R_b^\top (\bm g - \ddot{\bm p}_{b}) \\ & -{\bm i}^\top \frac{d \bm R_z(\theta)}{d \theta}^\top \bm R_b^\top \bm S(\dot{\bm \omega}_b) \bm R_b \bm R_z(\theta) {\bm i} \\ & +{\bm i}^\top\frac{d \bm R_z(\theta)}{d \theta}^\top \bm R_{b}^\top \bm S(\bm \omega_b)^\top \bm S(\bm \omega_b) \bm R_{b} \bm R_z(\theta){\bm i}, \end{split} \end{equation} where $\bm \omega_b \in \mathbb{R}^3$ and $\dot{\bm \omega}_b \in \mathbb{R}^3$ are the angular velocity and acceleration of frame $\{b\}$ with respect to $\{0\}$ expressed in $\{0\}$ respectively, $\bm S(\dot{\bm \omega}_b) := \dot{\bm R}_{b} \bm R_{b}^\top \in \mathbb{R}^{3\times 3}$ is a skew-symmetric matrix and $\bm g = [0\ 0\ -9.81]^\top\ \mathrm{m}/\mathrm{s}^2$ is the gravity acceleration vector. \subsection{External torque sensing} \label{sct:modeling_tau_ext} Any control strategy that attempts to improve the manipulation of the flexible beam requires measurements or estimates of the beam motion in response to the control actions. Hence, in this subsection we develop an output model which complements the setup dynamics model from \cite{mamedov2022OBH}. In absence of exteroceptive sensors, the beam dynamics can be inferred from the torque that its motion generates at frame $\{b\}$ along the $Z_b$ axis (see Fig. \ref{fig:modeling_assump}). Following the pendulum approximation of the beam (\ref{eq:pend_dynamics}), this reaction torque in frame $\{b\}$ along the $Z_b$ axis is written as: \begin{align} \label{eq:beam_torque} \tau := \tau_{b, z}^b = - c \dot \theta - k \theta \end{align} The software available in the robot manipulator drive system provides a filtered version of the external joint torque estimates $\bm \tau_{\mathrm{ext}}$ \citep{mamedov2020practical,petrea2021interactionForce} that is based on the dynamic model of the robot and torque measurement either at the joint or motor side. Therefore, a filtered version $\hat{\tau}_{b, z}$ in (\ref{eq:beam_torque}) is retrieved by using $\hat{\bm \tau}_{\mathrm{ext}}$ and the robots kinematics to compute the external wrench $\hat{\bm F}_b$ at the $\{b\}$: \begin{align} \label{eq:torque_ext_to_wrench} \boldsymbol J_{b}^b(\boldsymbol q)^\top \boldsymbol \hat{\bm \tau}_{\mathrm{ext}} = \hat{\bm F}_b^b = [ \hat F_{b,x}^b\ \hat F_{b,y}^b\ \hat F_{b,z}^b\ \hat \tau_{b,x}^b\ \hat\tau_{b,y}^b\ \hat\tau_{b,z}^b]^\top \end{align} where $\boldsymbol J_{b}^b$ is the manipulator Jacobian in the $\{b\}$ frame. As our variable of interest $\tau$ (\ref{eq:beam_torque}) can only be retrieved from its filtered version $\hat \tau :=\hat\tau^b_{b, z}$, we make the following output modeling assumption: \begin{assumption} \label{as:meas_model} The available output measurement is the external torque estimate in the frame $\{b\}$ along the $Z_b$ axis, filtered with a first-order low-pass filter: \begin{equation} \label{eq:output_model_filter} \begin{split} \dot{\hat\tau} &= f_\tau(\hat \tau, \tau, \tau_{\mathrm{e}})= - a \hat \tau + a (\tau + \tau_{\mathrm{e}}) \\ y &= \hat \tau \end{split} \end{equation} where $a$ is the inverse of the time constant of the filter and $\tau_e$ is a torque error given by assuming the following: \end{assumption} \begin{assumption} \label{as:est_dyn} The external torque estimator might not be correctly initialized but it converges exponentially. \end{assumption} \begin{align} \label{eq:output_model_error} \dot \tau_{\mathrm{e}} = -b \,\tau_{\mathrm{e}} \qquad \text{with} ~~ \tau_{\mathrm{e}}(0) = \tau_{\mathrm{e}, 0} \end{align} where $\tau_{e,0}$ is the unknown initial estimator error. \subsection{Setup dynamics} The setup model describes the dynamics later used by the learning algorithm to accomplish the task at hand. For this purpose the model is enhanced with a disturbance $d$ that affect the reaction torque \eqref{eq:beam_torque} as \begin{align} \label{eq:beam_torque_d} \tau := - c \dot \theta - k \theta + d, \end{align} in order to capture the residual dynamics. Also, the dependency of the dynamics on a parameter $\bm p$ is made explicit, resulting in a model of the form $\dot{\bm x} = f(\bm x, \bm u, \bm p, d)$ and output map $y=h(\bm x, \bm u, \bm p, d)$. The setup model combines the manipulator dynamics \eqref{eq:kin_model}, the beam dynamics \eqref{eq:pend_dynamics}, the reaction torque filtering dynamics \eqref{eq:output_model_filter}-\eqref{eq:beam_torque_d}. \begin{subequations} \label{eq:setup_model} \begin{align} \dot{\bm x} & = [\dot{\bm q}^{\top} ~~ \dot\theta ~~ \bm u^{\top} ~~ f_p(\cdot) ~~ f_\tau(\cdot) ~~ -b\, \tau_{\mathrm{e}}]^{\top} \label{eq:setup_model_dyn_f}\\ y &= ~ \hat \tau \label{eq:setup_model_dyn_h} \end{align} \end{subequations} where $\bm x = \big[\bm q^T ~~ \theta ~~ \dot{\bm q}^T ~~ \dot \theta ~~ \hat \tau ~~\tau_e \big]^T ~ \in \mathbb{R}^{n_x}$ is the state of the system with dimension $n_x = 2\,(n_{\mathrm{dof}}+1)+2$, $\bm p = [k~~ c~~ m~~ l~~ a~~ b~~ \tau_{\mathrm{e}, 0}]^\top$ is the vector of the parameters of the system and $\bm u$ is the control input as shown in \eqref{eq:kin_model}. In the rest of this paper, we use discretized the setup dynamics $\bm x_{k+1} = \bm F(\bm x_{k}, \bm u_{k}, \bm p, \bm d)$ -- obtained from \eqref{eq:setup_model_dyn_f} using a $4$th-order Runge-Kutta integrator -- and the output map $y_{k} = H(\bm x_{k}, \bm u_{k}, \bm p, \bm d) := h(\cdot)$ obtained from \eqref{eq:setup_model_dyn_h} . \section{Iterative Learning Control} \label{sct:ilc} This section introduces the overall structure of the proposed \ac{ILC} algorithm for vibration free handling of a flexible object and subsequently details the two separate steps of the approach. We use the following notation: $(\cdot)^i$ denotes a particular iteration $i \in \mathbb{Z}_+ $ of the \ac{ILC}; $(\cdot)_k$ denotes a particular time sample $k \in \mathbb{Z}$ and $\bar{(\cdot)}$ indicates that the variable is pre-computed and/or given. \subsection{Algorithm/outline of the approach} \begin{algorithm} \caption{Vibration free flexible object handling ILC} \begin{algorithmic}[1] \State\small\textbf{Require:} $\bm p^0,\, d^0$ \Comment {prior parameters and disturbance} \State $\bm u^{1}\gets$ \small\verb|ocp(|$\bm p^0, \, d^0$\verb|)| \State $i \gets 1$ \While{$i \leq i_{max}$} \State $\tilde{y}^i\gets$ \small\verb|system_response_measurement(|$\bm u^i$\verb|)| \LineComment[1\dimexpr\algorithmicindent]{Learning step} \State $p\bm ^i\gets$ \small\verb|parameter_estimation(|$\tilde{y}^i, \bm u^i,\bm p^{i\smin1}$\verb|)|\label{ALG_param_estimation} \State $d^i\gets$ \small\verb|disturbance_estimation(|$\tilde{y}^i, \bm u^i,\bm p^{i},d^{i\smin1}$\verb|)|\label{ALG_disturbance_estimation} \LineComment[1\dimexpr\algorithmicindent]{Control Step} \State $\bm u^{i+1}\gets$ \small\verb|ocp(|$\bm p^i, \, d^i, \, \bm u^i$\verb|)| \label{ALG_ocp} \State $i \gets i+1$ \EndWhile \end{algorithmic} \label{ALG_ilc} \end{algorithm} Algorithm \ref{ALG_ilc} shows the general outline of proposed \ac{ILC}. It start with generating the first control input $\bm u^1$ is based on the given priors $\bm p_0$ and $d_0$. Next, the algorithm proceeds by iterating between: collecting the system response measurements $\tilde y^i$,$\bm u^i$, learning the parametric and residual dynamics $\bm p^i$, $d_i$ and computing the next control action $\bm u^{i+1}$ for vibration free handling. \subsection{Learning step} Traditionally, \ac{ILC} learns from the tracking error to update the next input. In the proposed approach, the learning is performed by explicitly correcting the model and learning the residual dynamics given the current experiment data $\bm u^i$, $\tilde{y}^i$. In the first learning step, the model parameters $\bm p^i$ are obtained by solving the following nonlinear least-square estimation problem: \begin{subequations} \label{eq:estimation_problem_p} \begin{align} \underset{\bm x, \theta_0,\, \bm p}{\text{min}} ~~ & \sum_{k=0}^{N-1}\big[\lVert\tilde{y}^i_k - y^i_k\rVert^2_2+ \underbrace{\lVert \bm p \rVert^2_{\bm V_1}}_{\mathrm{r}_{p,1}} + \underbrace{\lVert \bm p -\bm p^{i\smin1} \rVert^2_{\bm V_2}}_{\mathrm{r}_{p,2}}\big]\label{eq:est_objective}\\ \text{s.t} \quad & \bm x_{k+1}= \bm F\big(\bm x_k, \bm u^i_k, \bm p\big), \quad k = 0,\dots,N-1, \label{eq:est_state_dyn}\\ & y_k = H\big(\bm x_k, \bm u^i_k, \bm p\big), \qquad k = 0,\dots,N-1,\label{eq:est_output}\\ & f_{\mathrm{p,eq}}(\bm \bar{\bm q}_0, \theta_0, \bm p ) = 0, \label{eq:est_eq_state}\\ & \bm x_0 = [\bar{\bm q}_0^\top\ \theta_{0}\ \bm 0^\top\ \hat \tau_0\ \tau_{e,0}]^\top,\label{eq:est_init} \\ &\bm p \in \mathcal{P}\label{eq:est_p_constr} \end{align} \end{subequations} where $\mathcal{P}$ is a feasible sets for the parameters, $f_{\mathrm{p,eq}}(\bm q, \theta, \bm p):= f_p(\bm q, \bm 0, \theta, 0, \bm p)$ and $\theta_0$ is the equilibrium position of the pendulum. The main objective is to minimize the prediction error of the parametric model, i.e. (\ref{eq:est_state_dyn}) and (\ref{eq:est_output}) refer to the setup model (\ref{eq:setup_model}) where the disturbance $d$ is ignored. In the objective (\ref{eq:est_objective}) two regularization terms are added: $\mathrm{r}_{p,1}$, known as Tikhonov or Ridge regression \citep[Chp. 6.3.2]{Boyd2004convex}, improves the conditioning of the problem but introduces a bias; $\mathrm{r}_{p,2}$ regularizes the change in iteration domain to decrease the learning rate and hence to improve robustness against non-repetitive components, such as noise. The second step in the learning procedure consists of capturing -- as an equivalent disturbance $d^i$ -- the residual dynamics that cannot be described by the parametric model. This is achieved by the following estimation problem where the model parameters are now set to the estimate $\bm p^i$ from the previous step: \begin{subequations} \label{eq:estimation_problem_d} \begin{align} \underset{\bm x, \, \bm d}{\text{min}} ~~ & \sum_{k=0}^{N-1}\big[\lVert\tilde{y}^i_k - y^i_k\rVert^2_2 + \underbrace{\lVert d_k \rVert^2_{w_1}}_{\mathrm{r}_{d,2}} +\\ &\quad ~+ \underbrace{\lVert d_k- d^{i\smin 1}_k \rVert^2_{w_2}}_{\mathrm{r}_{d,2}}\big] + \sum_{k=0}^{N-2}\underbrace{ \big\lVert d_{k+1} - d_k \rVert^2_{w_3}}_{\mathrm{r}_{d,3}} \label{eq:est_d_objective}\\ \text{s.t} \quad & \bm x_{k+1}= \bm F\big(\bm x_k, \bm u^i_k, \bm p^i, d_k\big), ~~ k = 0,\dots,N\smin1, \label{eq:est_d_state_dyn}\\ & y_k = H\big(\bm x_k, \bm u^i_k, \bm p^i, d_k\big), \qquad k = 0,\dots,N\smin1, \label{eq:est_d_output} \\ & \bm x_0 = [\bar{\bm q}_0^\top\ \theta^i_{0}\ \bm 0^\top\ \hat\tau^i_{0}\ \tau^i_{e,0}]^\top. \label{eq:est_d_init} \end{align} \end{subequations} Similar to (\ref{eq:estimation_problem_p}), regularization terms are added to the main objective that minimizes the prediction error. $\mathrm{r}_{d,1}$ penalizes the magnitude of the disturbance i.e. prevents $d_k$ from becoming too large. $\mathrm{r}_{d,2}$ increases robustness and regulates the learning rate. An additional regularization term $\mathrm{r}_{d,3}$ is added in(\ref{eq:est_d_objective}) to penalize the rate of change in time domain of the disturbance. This term imitates a low-pass filtering effect on the disturbance estimate and increases robustness w.r.t. measurements and process noise \citep[Ch. 6.3.2]{Boyd2004convex}. \subsection{Control step} \label{sec:ilc_control} The vibration free flexible object handling task consists of a \ac{PTP} motion between two resting pose of the flexible beam connected to the end-effector. Such task is defined by the initial rest pose of the setup, determined by $\bar{\bm q}_{0}$ and $\bar{\theta}_{0}$; and final rest pose $\bar{\bm p}_{b, f}$, $\bar{\bm R}_{b, f}$, determined by $\bar{\bm q}_{f}$, with the corresponding equilibrium of the pendulum $\bar{\theta}_{f}$. We compute the feedforward joint acceleration $\bm u_{i+1}$ by solving the \ac{OCP} that follows while using the current learned model information $\bm p_i$, $d_i$: \begin{subequations} \label{eq:ocp} \begin{align} \underset{\bm x, \bm u}{\text{min}} ~~ & \phi_{c}(\bm x, \bm u, \bm u^i) +\phi_{p}(\theta, \dot \theta , \tau) \label{eq:ocp_objective}\\ \text{s.t} \quad & \bm x_{k+1}= F\big(\bm x_k, \bm u^i_k, \bm p^i, d^i_k\big), ~~ k = 0,\dots,N_p\smin1, \label{eq:ocp_state_dyn}\\ & \tau_{k} = - k \, \theta_k -c\, \dot\theta_k + d^i_k \label{eq:ocp_tau}\\ & \bm x_0 = [\bar{\bm q}_0^\top\ \bar{\theta}_{0}\ \bm 0^\top]^\top, \bm u_0 = \bm0,\label{eq:ocp_init} \\ & \bm p_{b}\left(\bm q_{N_c}\right) = \bar{\bm p}_{b, f},\ \dot{\bm q}_{N_c} = \bm 0,\\ &\bm u_{k} = \bm 0, \qquad\qquad\qquad k=N_c\smin1,\dots, N_p\smin1, \label{eq:ocp_u_tf_zero}\\ &\bm e_O\left(\bm R_{b}\left(\bm q_{N_c}\right), \bar{\bm R}_{b, f}\right) = \bm 0_{3\times 1}\\ &\bm x \in \mathcal{X},\ \bm u \in \mathcal{U}, \ \dot{\bm u} \in \mathcal{J}. \end{align} \end{subequations} where $\tau_k = \tau_{b,z,k}+d_k$ is the prediction of the pendulum reaction torque with the equivalent disturbance $d_k$ of the residual dynamics; $\bm e_O(\cdot) \in \mathbb{R}^3$ is a function for computing the orientation error between two frames \cite[Ch.3]{sciavicco2001book}; $\mathcal{X}$, $\mathcal{U}$, and $\mathcal{J}$ are feasible sets for states, controls, and rate of change of controls. In the problem formulation (\ref{eq:ocp}), we consider a control horizon of $N$ samples in which the motion is executed and for which a control horizon cost term in (\ref{eq:ocp_objective}) is designed to enforce a desirable motion of the robot manipulator: \begin{equation} \label{eq:ocp_objective_c} \begin{split} \phi_{c}(\cdot) = \sum_{k=0}^{N_c}\lVert \bm x_k - \bm x_0 \rVert^2_Q &+ \sum_{k=0}^{N_c-1}\lVert\bm u_k \rVert^2_{R_1}+\\ &+\sum_{k=0}^{N_c-2}\lVert \bm u_{k+1} - \bm u_{k} \rVert^2_{R_2}, \end{split} \end{equation} where $\bm{Q} \in \mathbb{R}^{n_x \times n_x}, \bm R_1 \in \mathbb{R}^{n_{\mathrm{dof}} \times n_{\mathrm{dof}}}$, $\bm R_2 \in \mathbb{R}^{n_{\mathrm{dof}} \times n_{\mathrm{dof}}}$ are the weights for penalizing deviation of states from the initial state (to avoid excessive movements of the robot), inputs, prior input and jerks, respectively. Additionally, we consider an extended prediction horizon from $N_c$ to $N_p$, in which the controls are set to zero (\ref{eq:ocp_u_tf_zero}), that is used to penalize the residual vibration occurring after the motion by means of the prediction horizon cost in (\ref{eq:ocp_objective}): \begin{equation} \label{eq:ocp_objective_p} \begin{split} \phi_{p}(\cdot) = \sum_{k=N_c}^{N_p-1} \gamma^{k} \Big[\underbrace{\rho_1 \lVert \theta_{k} - \bar{\theta}_f\rVert_{1}}_{\mathrm{o}_{1}} &+ \underbrace{\rho_2 \lVert \dot{\theta}_{k}\rVert_{1}}_{\mathrm{o}_{2}} + \\ &+ \underbrace{\rho_3\lVert \tau_{k} - \bar{\tau}_{f}\rVert_{1}}_{\mathrm{o}_{3}}\Big] \end{split} \end{equation} The residual vibration are observed through $\theta_k$, $\dot\theta_k$ and $\tau_k$ and hence all three are considered in (\refeq{eq:ocp_objective_p}), each with their own weight $\rho_1$, $\rho_2$ and $\rho_3$ respectively. The objective terms $\mathrm{o}_{1}$ and $\mathrm{o}_{2}$ penalize the prediction of the residual vibration by the parametric model. The aim is to keep $\theta_k$ close to the equilibrium position $\bar{\theta}_f$ and $\dot \theta_k$ equal to zero during the time horizon following the robot motion. Additionally, the term $\mathrm{o}_{1}$ penalizes the residual vibration as predicted by the torque $\tau_k$ (\ref{eq:ocp_tau}) which includes the residual dynamics given by $d^i_k$. To achieve vibration suppression $\tau_k$ should be equal to the equilibrium reaction torque given by $\bar{\tau}_f = -k\,\theta_f+\frac{1}{N_p-N}\sum^{N_p-1}_{k=N} d_k$. Finally, note that all the terms in (\ref{eq:ocp_objective_p}) employ the sparsity promoting $l_1$-norm and are weighted by an exponentially increasing weight $\gamma>1$. This is done with the purpose of promoting zero residual vibration as early as possible after finishing the robot motion, that is as close as possible after reaching time instance $N_c$. A similar strategy is adopted in \citep{Verschueren2018} for a time-optimal model predictive control formulation. \subsection{Numerical implementation} In this work, we use CasADi \citep{Andersson2019casadi} to formulate the optimization problems ($\ref{eq:estimation_problem_p}$), ($\ref{eq:estimation_problem_d}$) and ($\ref{eq:ocp}$) as nonlinear programs (NLP) following the multiple-shooting method. The NLPs are solved using the nonlinear optimization solver IPOPT \citep{wachter2006ipopt} which implements an interior-point method. Moreover, we retrieve the computations of velocities and accelerations of the end-effector -- i.e, first- and second-order kinematics required in ($\ref{eq:estimation_problem_p}$), ($\ref{eq:estimation_problem_d}$) and ($\ref{eq:ocp}$)-- from the forward pass of the recursive Newton-Euler algorithm, which exploits the sparsity of the kinematic model unlike algorithmic differentiation. Such efficient functions for kinematics (and their derivatives) are generated by using Pinocchio \citep{carpentier2019pinocchio}. The code used in this work is publicly available on a GitHub repository\footnote{\url{https://github.com/danieleR3/beam_handling_ilc}}. \section{Experiments} \label{sct:experiments} In this section, we describe the experimental setup, the task and the ILC settings. Then, we present the experimental validation of the proposed approach and compare it with an existent solution. \begin{figure}[t] \centering \includegraphics[]{images/plot_learning.pdf} \caption{Top: error norm along the ILC iterations between the predicted, from \eqref{eq:ocp}, and the measured output. Bottom: comparison of the measured and prediction output for \textsc{ilc} and \textsc{ilc-p} at the $10$-th iteration.} \label{fig:T1_plot_learning} \end{figure} \subsection{Setup description} The setup used to validate our approach consists of a 7-$\mathrm{dof}$ Franka Emika Panda manipulator and a flexible beam with dimensions $60\times6\times0.1 \ \mathrm{cm}$ rigidly attached to the arm's end-effector. The beam is made of stainless steel 316L with $\rho = 6.3\ \mathrm{g}/\mathrm{cm^3}$, $EI=1.267\ \mathrm{N}\cdot \mathrm{m}^2$. The actual inputs to the setup are reference joint velocities $\dot{\boldsymbol q}_r(t)$ retrieved by integrating the joint accelerations $\bm u_i$. The given outputs from the setup are joint positions $\boldsymbol q(t)$, velocities $\dot{\boldsymbol q}(t)$ and estimated filtered external torques $\hat{\boldsymbol \tau}_{\text{ext}}(t)$ at 1$\mathrm{kHz}$ as detailed in section \ref{sct:modeling_tau_ext}. \subsection{Task definitions and ILC settings} To demonstrate the functioning and the effectiveness of the proposed \ac{ILC} we consider the following beam handling task: starting from $\bm q_{0} = [ -\frac{\pi}{2}, -\frac{\pi}{6}, 0, -\frac{2\pi}{3}, 0, \frac{\pi}{2}, \frac{\pi}{4}]^\top$ move the end-effector by $[0.20\ 0\ -0.20]^\top \ \mathrm{m}$ relative to $\{0\}$ within $0.48\ \mathrm{s}$. The \ac{ILC} algorithm is initialized with $\bm p^0$ obtained analytically from the beam material properties, as detailed in \citep{Sakawa1985}, and $d^0 =0$. The estimation problems \eqref{eq:estimation_problem_p} and \eqref{eq:estimation_problem_d} consider a horizon of $N=240$ samples with integration interval of $6\cdot10^{-3}\ \mathrm{s}$. Likewise, the control and prediction horizons in \eqref{eq:ocp} consist respectively of $N_c=48$, $N_p=144$ samples with integration interval of $10^{-2}\ \mathrm{s}$. The proposed ILC approach is compared with the $\textsc{baseline}$ approach, described in \citep{mamedov2022OBH}, that represent a special case of the \ac{OCP} (\ref{eq:ocp}) where only the parametric model is considered. The model parameters used in $\textsc{baseline}$ were determined by means of a data-driven method that rely on several ad hoc experiments. To quantify the performance of the experiments we define as metric the normalized integral of the absolute value of the zero mean residual vibrations (vibrations that persist after the end of the motion) \begin{align} V = \frac{1}{N_r}\sum_{k=N}^{N + N_r} \left|\hat \tau - \bar{\hat{\tau}}\right|, \end{align} where $\bar{\hat{\tau}}$ is the average value of $\hat \tau$ and $N_r$ are samples of a sufficiently long time horizon such that it contains several of its periods in case of significant vibrations. In this paper, we consider a time window of $5\ \mathrm{s}$ in addition to the task motion time. \begin{figure}[!t] \centering \includegraphics[]{images/plot_performance.pdf} \caption{Top: comparison of the vibration performance metric along the ILC iteration. Bottom: comparison of the residual vibrations induced in the measurements $\hat \tau$ for the first and last iteration of \textsc{ilc} and \textsc{ilc-p} and for the \textsc{baseline}.} \label{fig:T1_plot_performance} \end{figure} \subsection{Validation} The proposed \ac{ILC} algorithm combines a parametric model and a disturbance that represents the residual dynamics. To understand its functioning, we perform Algorithm \ref{ALG_ilc} (\textsc{ilc}) and compare it to the case where the parameter estimation does not include the residual dynamics (\textsc{ilc-p}), i.e., $d^i = 0$. Figure \ref{fig:T1_plot_learning} shows that by combining the parametric and the disturbance models, the \textsc{ilc} more accurately predicts the output with respect to \textsc{ilc-p}, especially the residual vibrations. This result motivates the need to learn the residual dynamics and leverage it via the extended prediction horizon cost \eqref{eq:ocp_objective_p}. Figure \ref{fig:T1_plot_performance} compares the performance of both \textsc{ilc}, \textsc{ilc-p} and the \textsc{baseline}. The top figures shows the evolution of the residual vibrations as a function of the ILC iterations. \textsc{ilc} achieves nearly zero residual after a short time interval, especially compared to the first experiment that exploits the analytical model. Despite of that, \textsc{ilc-p} still achieve a considerable reduction of the vibration w.r.t the initial experiment and obtains a comparable vibration suppression to \textsc{baseline}. Note that \textsc{ilc} and \textsc{ilc-p} learn the model parameter by exploiting the execution of the task, while \textsc{baseline} requires ad-hoc experiments prior to the task. A visual demonstration of the experiments can be found at \url{https://youtu.be/c8vi91NDlkg}. \section{Conclusion} \label{sct:disc_conc} This paper proposes an ILC algorithm for vibration free flexible object handling with a robot manipulator. Assuming that the beam handling is repetitive, this paper extends the work by \cite{mamedov2022OBH}. We present a measurement model for the external torque induced by the flexible object that accounts for the estimation error introduced by the manipulator software. The model enables learning of a parametric model and residual dynamics without relying on any exteroceptive sensors. Unlike other ILC approaches, the proposed algorithm introduces a PTP optimal control strategy that accounts for residual vibration, nonlinear kinematics and physical limits of the manipulator. The approach is experimentally validated and shows a threefold improvement compared with the available state-of-the-art method. This result is mainly due to estimating and exploiting the residual dynamics. This work can provide a solution for learning \ac{PTP} motion primitives useful for executing more challenging and industrially relevant handling tasks.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} Star clusters are a powerful tool in the investigation of Galaxy structure and dynamics, star formation and evolution processes, and as observational constraints to N-body codes. This applies especially to the long-lived and populous globular clusters (GCs) that, because of their relatively compact nature, can be observed in most regions of the Galaxy, from near the center to the remote halo outskirts. In general terms, the structure of most star clusters can be described by a rather dense core and a sparse halo, but with a broad range in the concentration level. In this context, the standard picture of a GC assumes a isothermal central region and a tidally truncated outer region (e.g. \citealt{Binney1998}). Old GCs, in particular, can be virtually considered as dynamically relaxed systems (e.g. \citealt{NoGe06}). During their lives clusters are continually affected by internal processes such as mass loss by stellar evolution, mass segregation and low-mass star evaporation, and external ones such as tidal stress and dynamical friction e.g. from the Galactic bulge, disk and giant molecular clouds (e.g. \citealt{Khalisi07}; \citealt{Lamers05}; \citealt{GnOs97}). Over a Hubble time, these processes tend to decrease cluster mass, which may accelerate the core collapse phase for some clusters (\citealt{DjMey94}, and references therein). Consequently, these processes, combined with the presence of a central black hole (in some cases) and physical conditions associated to the initial collapse, can affect the spatial distribution of light (or mass) both in the central region and at large radii (e.g. \citealt{GLO99}; \citealt{NoGe06}). It is clear from the above that crucial information related to the early stages of Galaxy formation, and to the cluster dynamical evolution, may be imprinted in the present-day internal structure and large-scale spatial distribution of GCs (e.g. \citealt{MvdB05}; \citealt{GCProp}). To some extent, this reasoning can be extended to the open clusters (OCs), especially the young, which are important to determine the spiral arm and disk structures and the rotation curve of the Galaxy (e.g. \citealt{Friel95}; \citealt{DiskProp}). Consequently, the derivation of reliable structural parameters of star clusters, GCs in particular, is fundamental to better define their parameter space. This, in turn, may result in a deeper understanding of the formation and evolution processes of the star clusters themselves and the Galaxy. Three different approaches have been used to derive structural parameters of star clusters. The more traditional one is based on the surface-brightness profile (SBP), which considers the spatial distribution of the brightness of the component stars, usually measured in circular rings around the cluster center. The compilation of Harris (1996, and the 2003 update\footnote{\em http://physun.physics.mcmaster.ca/Globular.html}) presents a basically uniform set of parameters for 150 Galactic GCs. Among their structural parameters, the core (\mbox{$\rm R_c$}), half-light (\mbox{$\rm R_{hL}$}) and tidal (\mbox{$\rm R_t$}) radii, as well as the concentration parameter $c=\log(\mbox{$\rm R_t$}/\mbox{$\rm R_c$})$, were based mostly on the SBP database of \citet{TKD95}. SBPs do not necessarily require cluster distances to be known, since the physically relevant information contained in them is essentially related to the relative brightness of the member stars. In principle, it is easy to measure integrated light. However, SBPs are more efficient near the cluster center than in the outer parts, where noise and background starlight may be a major contributor. Another potential source of noise is the random presence of bright stars, either from the field or cluster members, especially outside the central region in the less-populous GCs or most of the OCs. Structural parameters derived from such SBPs would certainly be affected. One way to minimise this effect is the use of wide rings throughout the whole radius range, but this would cause spatial resolution degradation on the profiles, especially near the center. The obvious alternative to SBPs is to use star counts to build radial density profiles (RDPs), in which only the projected number-density of stars is taken into account, regardless of the individual star brightness. This technique is particularly appropriate for the outer parts, provided a statistically significant, and reasonably uniform, comparison field is available to tackle the background contamination. On the other hand, contrary to SBPs, RDPs are less efficient in central regions of populous clusters where the density of stars (crowding) may become exceedingly large. In such cases it may not be possible to resolve individual stars with the available technology. Finally, a more physically significant profile can be built by mapping the cluster's stellar mass distribution, which essentially determines the gravitational potential and drives most of the dynamical evolution. However, mass density profiles (MDPs) not only are affected by the same technical problems as the RDPs but, in addition, the cluster distance, age and a reliable mass-luminosity relation are necessary to build them. In principle, the three kinds of profiles are expected to yield different values for the structural parameters under similar photometric conditions, since each profile is sensitive to different cluster parameters, especially the age and dynamical state. Qualitatively, the following effects, basically related to dynamical state, can be expected. Large-scale mass segregation drives preferentially low-mass stars towards large radii (while evaporation pushes part of these stars beyond the tidal radius, into the field), and high-mass stars towards the central parts of clusters. If the stellar mass distribution of an evolved cluster can be described by a spatially variable mass function (MF) flatter at the cluster center than in the halo, the resulting RDP (and MDP) radii should be larger than SBP ones. The differences should be more significant for the core than the half and tidal radii, since the core would contain, on average, stars more massive than the halo and especially near the tidal radius. Besides, the presence of bright stars preferentially in the central parts of young clusters (\citealt{DetAnalOCs} and references therein) should as well lead to smaller SBP core and half-light radii than the respective RDP ones. Another relevant issue is related to depth-limited photometry. When applied to the observation of objects at different distances, depth-limited photometry samples stars with different brightness (or mass), especially at the faint (or low-mass) end. Thus, it would be interesting to quantify the changes produced in the derived parameters when RDPs, MDPS and SBPs are built with depth-limited photometry, as well as to check how the structural parameters derived from one type of profile relate to the equivalent radii measured in the other profiles. In the present work we face the above issues by deriving structural parameters of star clusters built under controlled conditions, in which the radial distribution of stars follows a pre-established analytical profile, and field stars are absent. Effects introduced by mass segregation (simulated by a spatially variable mass function), age and structure are also considered. This work focuses on profiles built in the near-infrared range. The main goal of the present work is to examine relations among structural parameters measured in the different radial profiles, built under ideal conditions, especially noise-free photometry and as small as possible statistical uncertainties (using a large number of stars). In this sense, the results should be taken as upper-limits. \begin{table*} \caption[]{Model star cluster specifications} \label{tab1} \renewcommand{\tabcolsep}{2.65mm} \renewcommand{\arraystretch}{1.2} \begin{tabular}{ccccccccrcccc} \hline\hline Model&$R_t/R_c$&c&$\chi_0$&$\chi_t$&Age&\mbox{$\rm [Fe/H]$}&$m_i$&$m_s$&$\langle m\rangle$ &$\rm M_J(TO)$&$\rm M_J(bright)$&$\rm M_J(faint)$\\ & & & & &(Myr)&&(\mbox{$\rm M_\odot$})&(\mbox{$\rm M_\odot$})&(\mbox{$\rm M_\odot$})&(mag)&(mag)&(mag)\\ (1) &(2) &(3) &(4) &(5) &(6) &(7) &(8) &(9) &(10)&(11) &(12) &(13)\\ \hline GC-A &5&0.7&0.00&1.35&$10^4$&$-1.5$&0.15&1.02&0.43&$+2.86$&$-2.14$&$+9.12$\\ GC-B&20&1.3&0.00&1.35&$10^4$&$-1.5$&0.15&1.02&0.43&$+2.86$&$-2.14$&$+9.12$\\ GC-C&20&1.3&0.00&0.00&$10^4$&$-1.5$&0.15&1.02&0.46&$+2.86$&$-2.14$&$+9.12$\\ GC-D&40&1.6&0.00&1.35&$10^4$&$-1.5$&0.15&1.02&0.43&$+2.86$&$-2.14$&$+9.12$\\ OC-A&15&1.2&0.30&1.35&$10^3$&$~~0.0$&0.15&2.31&0.59&$+0.32$&$-2.68$&$+9.18$\\ OC-B&15&1.2&0.30&1.35&$100$&$~~0.0$&0.15&5.42&0.92&$-1.82$&$-4.82$&$+9.18$\\ OC-C&15&1.2&0.30&1.35&$10$&$~~0.0$&0.15&18.72&1.76&$-4.82$&$-8.82$&$+9.18$\\ \hline \end{tabular} \begin{list}{Table Notes.} \item Col.~3: concentration parameter $c=\log(\mbox{$\rm R_t$}/\mbox{$\rm R_c$})$. Cols.~4 and 5: mass function slopes at the cluster center and tidal radius. Cols.~8-10: lower, upper and average star mass. Col.~11: absolute J magnitude at the turnoff (TO). Cols.~12 and 13: absolute J magnitude at the bright and faint ends. \end{list} \end{table*} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig1.eps}} \caption{Model star cluster specifications. Panel (a): a random selection of $n$ in the range $0\leq n\leq1$ produces King-like RDPs in the range $0\leq R\leq\mbox{$\rm R_t$}$ (see Eq.~\ref{eq2}). Panel (b): Radially-variable mass function slopes $\left(\frac{dN}{dm}\propto m^{-(1+\chi)}\right)$ used in the models. Panel (c): Padova isochrones used to simulate the mass-luminosity relation of the star cluster models. The 10\,Gyr, $\mbox{$\rm [Fe/H]$}=-1.5$ metallicity isochrone is adopted in the globular cluster models. Panel (d): distribution of concentration parameters of the GCs in H03 with peaks at $c\approx1.6,~1.3,~{\rm and}~0.7$. Panel (e): model fraction of stars brighter than $M_J=M_{J_{TO}}+\Delta_{TO}$. In all cases, the fraction of stars brighter than the TO ($M_{J_{TO}}$) is below the $1\%$ level.} \label{fig1} \end{figure} This work is structured as follows. In Sect.~\ref{ModelSCs} we present the star cluster models and build radial profiles with depth-limited photometry. In Sect.~\ref{Struc} we derive structural parameters from each profile, discuss their dependence on depth, and compare similar radii derived from the different types of profiles. In Sect.~\ref{N6397} we compare relations derived from model parameters with those of the nearby GC NGC\,6397. Concluding remarks are given in Sect.~\ref{Conclu}. \section{The model star clusters} \label{ModelSCs} For practical reasons, the model star clusters are simulated by first establishing the number-density radial distribution. The approach we follow is to build star clusters of different ages and concentration parameters, with the spatial distribution of stars truncated at the tidal radius (\mbox{$\rm R_t$}). Stars are distributed with distances to the cluster center in the range $0\leq R\leq\mbox{$\rm R_t$}$, with the $R$ coordinate having a number-frequency given by a function similar to a \citet{King62} three-parameter surface-brightness profile. The mass and brightness of each star are subsequently computed according to a pre-defined mass function and mass-luminosity relation consistent with the model age. The last step is required for the derivation of the MDP and SBPs. We point out that different, more sophisticated analytical models have also been used to fit the SBPs of Galactic and extra-Galactic GCs, other than \citet{King62} profile. The most commonly used are the single-mass, modified isothermal sphere of \citet{King66} that is the basis of the Galactic GC parameters given by \citet{TKD95} and H03, the modified isothermal sphere of \citet{Wilson75}, that assumes a pre-defined stellar distribution function (which results in more extended envelopes than \citealt{King66}), and the power-law with a core of \citet{EFF87} that has been fit to massive young clusters especially in the Magellanic Clouds (e.g. Mackey \& Gilmore 2003a,b,c). Each function is characterised by different parameters that are somehow related to the cluster structure. However, the purpose here is not to establish a ``best'' fitting function of the structure of star clusters in general. Instead, we want to quantify changes in the structural parameters, derived from RDPs, MDPs and SBPs of star clusters with the stellar distribution assumed to follow an analytical function, under different photometric conditions. We expect that changes in a given parameter should have a small dependence, if any at all, on the adopted functional form. The adopted King-like radial distribution function is expressed as \begin{equation} \label{eq1} \frac{dN}{2\pi\,R\,dR}=\sigma_0\left[\frac{1}{\sqrt{1+(R/R_c)^2}} - \frac{1}{\sqrt{1+(R_t/R_c)^2}}\right]^2, \end{equation} where $\sigma_0$ is the projected number-density of stars at the cluster center, and \mbox{$\rm R_c$}\ and \mbox{$\rm R_t$}\ are the core and tidal radii, respectively. Since structural differences are basically controlled by the ratio $\mbox{$\rm R_t$}/\mbox{$\rm R_c$}$, we set $\mbox{$\rm R_c$}=1$ in all models. Such a King-like RDP (for $\sigma_0=1.0$) is obtained by numerically inverting the relation (see App.~\ref{Transf}) \begin{equation} \label{eq2} n(R) = \frac{x^2-4u(\sqrt{1+x^2}-1)+u^2\ln(1+x^2)}{u^2\ln{u^2}-(u-1)(3u-1)}, \end{equation} where $x\equiv R/R_c$ and $u^2\equiv 1+(R_t/R_c)^2$. Thus, a random selection of numbers in the range $0\leq n\leq1$ produces a King-like radial distribution of stars with the radial coordinate in the range $0\leq R/R_t\leq1$. The $R/R_t$ curves as a function of $n$ for the models considered in this work are shown in Fig.~\ref{fig1} (Panel a). Once a given star has been assigned a radial coordinate, its mass is computed with a probability proportional to the mass function \begin{equation} \label{eq3} \frac{dN}{dm}\propto m^{-(1+\chi)}, \end{equation} where the slope varies with $R$ according to $\chi=\chi(R)=\chi_t + (\chi_t-\chi_0)(R/R_t-1)$, where $\chi_0$ and $\chi_t$ are the mass function slopes at the cluster center and tidal radius, respectively (Table~\ref{tab1} and Fig.~\ref{fig1}). Thus, the presence of large-scale mass segregation in a star cluster can be characterised by a slope $\chi_0$ flatter than $\chi_t$. Mass values distributed according to Eq.~\ref{eq3} are obtained by randomly selecting numbers in the range $0\leq n\leq1$ and using them in the relation of mass with $n~\rm{and~} \chi$ (App.~\ref{Transf}) \begin{equation} \label{eq4} m=\left\{ \begin{array}{lc} m_i\,(m_s/m_i)^n, & \rm{for~\chi=0.0,}\\ m_s/[(1-n)(m_s/m_i)^\chi+n]^{1/\chi}, & \rm{otherwise}, \end{array} \right . \end{equation} where $m_i$ and $m_s$ are the lower and upper mass values considered in the models (Table~\ref{tab1}). In what follows we adopt the 2MASS\footnote{\em http://www.ipac.caltech.edu/2mass/releases/allsky/} photometric system to build SBPs. Finally, the 2MASS \mbox{$\rm J$}, \mbox{$\rm H$}\ and \mbox{$\rm K_s$}\ magnitudes for each star are obtained according to the mass-luminosity relation taken from the corresponding model (Table~\ref{tab1}) Padova isochrone (\citealt{Girardi02}). For illustrative purposes the model isochrones are displayed in Fig.~\ref{fig1} (panel c). The set of models considered here is intended to be objectively representative of the star cluster parameter space. For globular clusters we use the standard age of 10\,Gyr and the spatially uniform metallicity $\mbox{$\rm [Fe/H]$}=-1.5$, which is typical of the metal-poor Galactic GCs (e.g. \citealt{GCProp}). However, we note that abundance variations have been suggested to occur within GCs (e.g. \citealt{Gratton04}). Basically, small to moderate metallicity gradients would produce slight changes in the colour and magnitude of the stars in different parts of the cluster, which has no effect on the (star-count derived) RDPs and MDPs. The effect on the SBPs may be small as well, provided that the magnitude bin used to build the SBPs is wide enough to accommodate such magnitude changes. As for the core/tidal structure we consider the ratios $R_t/R_c=40,~20,~15,~{\rm and}~5$, or equivalently the concentration parameters $c=\log{(R_t/R_c)\approx1.6,~1.3,~1.2,~{\rm and}~0.7}$, which roughly correspond to the peaks in the distribution of $c$ values presented by the regular (non-post core collapse) GCs given in H03 (Fig.~\ref{fig1}, panel d). Models GC-A, B and D take into account mass segregation by means of a flat ($\chi_0=0.00$) mass function at the center and a Salpeter (1955) IMF ($\chi_t=1.35$) at the tidal radius. GC-C model is similar to GC-B, except that it considers a uniform, heavily depleted MF ($\chi_0=0.00$) throughout the cluster. OCs are represented by solar-metallicity models with the ages 10\,Myr (to allow for the presence of bright stars in young OCs), 100\,Myr (somewhat evolved OCs) and 1\,Gyr (intermediate-age OCs), $R_t/R_c=15$ ($c\approx1.2$) and a spatially variable MF (Table~\ref{tab1}). The values of $c$ and the core/halo MF slopes are representative of OCs (\citealt{DetAnalOCs}). Another effect not considered here is differential absorption. In principle, low to moderate differential absorption should have a minimum effect on the radial profiles, because of the same reasons as those given above for the metallicity gradient. High values, on the other hand, would affect RDPs as well, because of a radially-dependent loss of stars due to depth-limited photometry. However, inclusion of this effect is beyond the scope of the present work. As expected, the fraction of stars brighter than the turnoff (TO) in the resulting star cluster models is significantly smaller than 1\% (Fig.~\ref{fig1}, panel e). Thus, we had to use a total number of stars of $1\times10^9$ in all models, so that the radial profiles resulted statistically significant (small $1\sigma$ Poisson error bars) especially at the shallowest magnitude depth. \subsection{Depth-varying radial profiles} \label{DeptVP} The radial profiles were built considering all stars brighter than a given magnitude threshold, with the TO as reference. At the bright end, statistically significant GC profiles were obtained for $\mbox{$\rm \Delta_{TO}$}\equiv M_{J,th}-M_{J,TO}=-5$, where $M_{J,th}$ and $M_{J,TO}$ are the threshold and TO absolute magnitudes in the 2MASS \mbox{$\rm J$}\ band. At the faint end, GC-models have $\mbox{$\rm \Delta_{TO}$}=6.3$. OC models have $\mbox{$\rm \Delta_{TO}$}=-3~{\rm and~} -4$ at the bright end, and $\mbox{$\rm \Delta_{TO}$}=8.9,~11.0,~{\rm and}~14.0$, at the faint end. Starting at the bright magnitude end, RDPs, MDPs and SBPs were built considering stars with the \mbox{$\rm J$}\ magnitude brighter than a given faint threshold, with the magnitude depth increasing in steps of $\mbox{$\rm \Delta_{TO}$}=1$, up to the respective faint magnitude end. Figure~\ref{fig2} displays a selection of profiles corresponding to both extremes in magnitude depths, for the GC-D and OC-C models. These profiles are representative of the whole set of models, especially in terms of the small uncertainties associated with each radial coordinate. Reflecting the large differences in the number of stars at different photometric depths, the central values of the number and mass densities, and surface-brightness, vary significantly from the shallowest to the deepest profiles. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig2.eps}} \caption{A selection of RDPs (top panels), MDPs (middle) and 2MASS \mbox{$\rm J$}\ magnitude SBPs (bottom) that illustrate structural changes under different magnitude depths. Arbitrary units (au) are used both for the radial coordinate and projected area.} \label{fig2} \end{figure} \begin{table*} \caption[]{Model star cluster structural parameters for different photometric depths} \label{tab2} \renewcommand{\tabcolsep}{1.3mm} \renewcommand{\arraystretch}{1.2} \begin{tabular}{cccccccccccc} \hline\hline &\multicolumn{3}{c}{RDP}&&\multicolumn{3}{c}{MDP}&&\multicolumn{3}{c}{SBP (\mbox{$\rm J$}\ band)}\\ \cline{2-4}\cline{6-8}\cline{10-12} $\Delta_{TO}$&\mbox{$\rm R_c$}&\mbox{$\rm R_{hSC}$}&\mbox{$\rm R_t$}&&\mbox{$\rm R_c$}&\mbox{$\rm R_{hM}$}&\mbox{$\rm R_t$}&&\mbox{$\rm R_c$}&\mbox{$\rm R_{hL}$}&\mbox{$\rm R_t$}\\ (mag)&(au)&(au)&(au)&&(au)&(au)&(au)&&(au)&(au)&(au)\\ (1) &(2) &(3) &(4) &&(5) &(6) &(7) &&(8) &(9) &(10) \\ \hline &\multicolumn{11}{c}{Model: GC-A; Input RDP parameters: $\mbox{$\rm R_c$}=1.0$, $\mbox{$\rm R_t$}=5.0$}\\ \cline{2-12} $-5.0$&$0.78\pm(\dag)$&$1.02\pm(\dag)$&$4.61\pm0.01$&&$0.78\pm(\dag)$&$1.02\pm(\dag)$&$4.61\pm0.01$&&$0.75\pm0.01$&$1.01\pm(\dag)$&$4.80\pm0.01$\\ $~0.0$&$0.76\pm(\dag)$&$1.02\pm(\dag)$&$4.77\pm0.01$&&$0.76\pm(\dag)$&$1.02\pm(\dag)$&$4.77\pm0.01$&&$0.75\pm0.01$&$1.01\pm(\dag)$&$4.79\pm0.01$\\ $+6.3$&$1.00\pm(\dag)$&$1.19\pm(\dag)$&$5.00\pm0.01$&&$0.92\pm(\dag)$&$1.14\pm(\dag)$&$4.91\pm(\dag)$&&$0.75\pm0.01$&$1.03\pm(\dag)$&$4.80\pm0.01$\\ \hline &\multicolumn{11}{c}{Model: GC-B; Input RDP parameters: $\mbox{$\rm R_c$}=1.0$, $\mbox{$\rm R_t$}=20.0$}\\ \cline{2-12} $-5.0$&$0.87\pm0.01$&$2.03\pm(\dag)$&$17.31\pm0.08$&&$0.87\pm0.01$&$2.03\pm(\dag)$&$17.31\pm0.08$&&$0.86\pm0.01$&$2.04\pm0.01$&$17.82\pm0.05$\\ $~0.0$&$0.83\pm0.01$&$2.03\pm(\dag)$&$18.72\pm0.08$&&$0.83\pm0.01$&$2.03\pm(\dag)$&$18.72\pm0.08$&&$0.86\pm0.01$&$2.03\pm(\dag)$&$17.80\pm0.03$\\ $+6.3$&$1.00\pm(\dag)$&$2.39\pm(\dag)$&$20.00\pm(\dag)$&&$0.95\pm(\dag)$&$2.27\pm(\dag)$&$19.28\pm0.03$&&$0.86\pm0.01$&$2.05\pm(\dag)$&$17.80\pm0.02$\\ \hline &\multicolumn{11}{c}{Model: GC-C; Input RDP parameters: $\mbox{$\rm R_c$}=1.0$, $\mbox{$\rm R_t$}=20.0$}\\ \cline{2-12} $-5.0$&$1.00\pm(\dag)$&$2.38\pm0.01$&$20.02\pm0.03$&&$1.00\pm(\dag)$&$2.38\pm0.01$&$20.02\pm0.04$&&$1.00\pm(\dag)$&$2.38\pm0.01$&$19.94\pm0.06$\\ $~0.0$&$1.00\pm(\dag)$&$2.39\pm(\dag)$&$20.00\pm0.01$&&$1.00\pm(\dag)$&$2.39\pm(\dag)$&$20.00\pm0.01$&&$1.00\pm(\dag)$&$2.39\pm(\dag)$&$19.95\pm0.03$\\ $+6.3$&$1.00\pm(\dag)$&$2.39\pm(\dag)$&$20.00\pm(\dag)$&&$1.00\pm(\dag)$&$2.39\pm(\dag)$&$20.00\pm(\dag)$&&$1.00\pm(\dag)$&$2.39\pm(\dag)$&$19.97\pm0.03$\\ \hline &\multicolumn{11}{c}{Model: GC-D; Input RDP parameters: $\mbox{$\rm R_c$}=1.0$, $\mbox{$\rm R_t$}=40.0$}\\ \cline{2-12} $-5.0$&$0.90\pm0.01$&$2.81\pm0.02$&$33.96\pm0.19$&&$0.90\pm0.01$&$2.81\pm0.02$&$33.96\pm0.19$&&$0.91\pm0.01$&$2.82\pm(\dag)$&$34.18\pm0.05$\\ $~0.0$&$0.86\pm0.01$&$2.82\pm(\dag)$&$37.15\pm0.17$&&$0.86\pm0.01$&$2.82\pm(\dag)$&$37.15\pm0.17$&&$0.91\pm0.01$&$2.82\pm(\dag)$&$34.00\pm0.05$\\ $+6.3$&$1.00\pm(\dag)$&$3.30\pm(\dag)$&$39.99\pm0.01$&&$0.96\pm(\dag)$&$3.14\pm(\dag)$&$38.51\pm0.07$&&$0.91\pm0.01$&$2.82\pm(\dag)$&$34.20\pm0.04$\\ \hline &\multicolumn{11}{c}{Model: OC-A; Input RDP parameters: $\mbox{$\rm R_c$}=1.0$, $\mbox{$\rm R_t$}=15.0$}\\ \cline{2-12} $-3.0$&$0.82\pm0.01$&$1.70\pm(\dag)$&$12.85\pm0.07$&&$0.82\pm0.01$&$1.70\pm(\dag)$&$12.85\pm0.07$&&$0.81\pm0.01$&$1.72\pm(\dag)$&$13.18\pm0.02$\\ $~0.0$&$0.78\pm0.01$&$1.72\pm(\dag)$&$13.78\pm0.06$&&$0.78\pm0.01$&$1.72\pm(\dag)$&$13.78\pm0.06$&&$0.82\pm0.01$&$1.72\pm(\dag)$&$13.19\pm0.02$\\ $+8.9$&$1.00\pm(\dag)$&$2.08\pm(\dag)$&$15.00\pm0.01$&&$0.91\pm(\dag)$&$1.93\pm(\dag)$&$14.43\pm0.03$&&$0.81\pm0.01$&$1.73\pm(\dag)$&$13.20\pm0.01$\\ \hline &\multicolumn{11}{c}{Model: OC-B; Input RDP parameters: $\mbox{$\rm R_c$}=1.0$, $\mbox{$\rm R_t$}=15.0$}\\ \cline{2-12} $-3.0$&$0.72\pm0.01$&$1.61\pm(\dag)$&$13.30\pm0.08$&&$0.72\pm0.01$&$1.61\pm(\dag)$&$13.30\pm0.08$&&$0.76\pm0.01$&$1.61\pm(\dag)$&$12.75\pm0.03$\\ $~0.0$&$0.70\pm0.02$&$1.61\pm(\dag)$&$13.67\pm0.08$&&$0.70\pm0.02$&$1.61\pm(\dag)$&$13.67\pm0.08$&&$0.77\pm0.01$&$1.61\pm(\dag)$&$12.74\pm0.03$\\ $+11.0$&$1.00\pm(\dag)$&$2.08\pm(\dag)$&$15.00\pm(\dag)$&&$0.84\pm0.01$&$1.84\pm(\dag)$&$14.33\pm0.04$&&$0.74\pm0.02$&$1.63\pm(\dag)$&$12.75\pm0.02$\\ \hline &\multicolumn{11}{c}{Model: OC-C; Input RDP parameters: $\mbox{$\rm R_c$}=1.0$, $\mbox{$\rm R_t$}=15.0$}\\ \cline{2-12} $-4.0$&$0.62\pm0.02$&$1.49\pm(\dag)$&$13.06\pm0.10$&&$0.62\pm0.02$&$1.49\pm(\dag)$&$13.05\pm0.10$&&$0.71\pm0.01$&$1.49\pm(\dag)$&$11.99\pm0.03$\\ $~0.0$&$0.62\pm0.02$&$1.49\pm(\dag)$&$13.10\pm0.10$&&$0.62\pm0.02$&$1.49\pm(\dag)$&$13.09\pm0.10$&&$0.71\pm0.01$&$1.49\pm(\dag)$&$11.99\pm0.03$\\ $+14.0$&$1.00\pm(\dag)$&$2.08\pm(\dag)$&$15.00\pm(\dag)$&&$0.70\pm0.02$&$1.70\pm(\dag)$&$14.35\pm0.05$&&$0.64\pm0.03$&$1.49\pm(\dag)$&$12.00\pm0.02$\\ \hline \end{tabular} \begin{list}{Table Notes.} \item ($\dag$): uncertainty smaller than 0.01 arbitrary units (au). The half-type radii are half-star counts (\mbox{$\rm R_{hSC}$}), half-mass (\mbox{$\rm R_{hM}$}) and half-light (\mbox{$\rm R_{hL}$}). \end{list} \end{table*} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig3.eps}} \caption{Structural parameters of the GC models. Top panels: Ratio of the tidal radius measured in profiles with a photometric depth \mbox{$\rm \Delta_{TO}$}\ with respect to that derived from the deepest one, for the RDPs (left panels), MDPs (vertical-middle) and SBPs (right). Horizontal-middle panels: half-type radii. Bottom: core radii. TO values are indicated by the dotted line. Except for GC-C (uniform mass function), the remaining models present changes in radii in the RDPs and MDPs. SBP radii are essentially uniform.} \label{fig3} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig4.eps}} \caption{Same as Fig.~\ref{fig3} for the OC models. For comparison purposes, the y-scale is the same as in Fig.~\ref{fig3}. Similarly to the GC models, radii changes are conspicuous in the RDPs and MDPs. } \label{fig4} \end{figure} \section{Structural parameters {\em vs.} photometry depth} \label{Struc} The depth-varying model SBPs are fit with the empirical three-parameter function introduced by \cite{King62} to describe the surface-brightness distribution of GCs, which is characterised by the presence of the core and tidal radii. For RDPs and MDPs we use the King-like analytical profile that describes the projected number-density of stars as a function of \mbox{$\rm R_c$}\ and \mbox{$\rm R_t$}, $\sigma(R)=\frac{dN}{2\pi\,R\,dR}$, as given by eq.~\ref{eq1}. We also compute the distances from the center which contains half of the cluster's total light, stars and mass. The half-star count (\mbox{$\rm R_{hSC}$}), light (\mbox{$\rm R_{hL}$}) and mass (\mbox{$\rm R_{hM}$}) radii are derived by directly integrating the corresponding profiles. A selection of the resulting structural parameters as a function of \mbox{$\rm \Delta_{TO}$}\ is given in Table~\ref{tab2}. For simplicity we only present the values obtained from the bright and faint magnitude ranges, as well as for $M_J\leq M_{J,TO}$. The whole set of parameters are contained in Figs.~\ref{fig3} - \ref{fig6}. At first glance, RDP and MDP radii present a significant decrease for shallower photometry, with respect to the intrinsic values. SBP radii, on the other hand, are more uniform. The most noticeable feature is that, except for GC-C (uniform mass function), RDP and MDP radii tend to become increasingly larger than SBP ones with increasing photometric depth. \subsection{Dependence on photometric depth} \label{DependDepth} In Fig.~\ref{fig3} we compare the radii measured in GC profiles built with a given photometric depth (e.g. $\mbox{$\rm R_c$}(\Delta_{TO})$) with the intrinsic ones, i.e. those derived from the deepest profiles ($R_{c,deep}$). RDP parameters are more affected than the MDP ones, while the SBP ones are essentially uniform, thus insensitive to photometric depth. Among the radii, RDP and MDP core are the most affected (underestimated), followed by the half and tidal radii. In the most concentrated model (GC-A, $c\approx0.7$), measurements or \mbox{$\rm R_c$}\ in the RDP may be underestimated by a factor $\approx25\%$ in profiles shallower than near the TO, with respect to $R_{c,deep}$, and $\approx20\%$ in MDPs. The effect is smaller in \mbox{$\rm R_{hSC}$}\ and \mbox{$\rm R_{hM}$}, which may be underestimated by $\approx15\%$ in the same profiles. The underestimation in the tidal radii is smaller than $\approx10\%$. As expected, RDP, MDP and SBP radii do not change when the mass function is uniform (GC-C model). \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig5.eps}} \caption{GC model profiles. Ratio between the same type of radii as measured in RDPs and MDPs (left panels) and RDPs and SBPs (right panels). From top to bottom: tidal, half and core radii. TO values are indicated by the dotted line.} \label{fig5} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig6.eps}} \caption{Same as Fig.~\ref{fig5} for the OC models. For comparison purposes, the y-scale is the same as in Fig.~\ref{fig5}.} \label{fig6} \end{figure} Similar radii ratios in the OC models are examined in Fig.~\ref{fig4}. Qualitatively, the same conclusions drawn from the GC models apply to the OC ones. However, the underestimation factor of RDP radii increases for younger ages, to the point that \mbox{$\rm R_c$}\ drops to $\approx60\%$ of the deepest value for all profiles shallower than $\approx3$\, mag below the TO in the OC-C model ($10$\,Myr), and to $\approx70\%$ for OC-B ($100$\,Myr). The respective half-star count radii are affected by similar, although smaller, underestimation factors. MDP radii are less affected by cluster age than RDP ones. Similarly to the GC models (Fig.~\ref{fig3}), the three types of SBP radii are essentially insensitive to photometric depth, within uncertainties. We note that the presence of bright stars in the central region of young clusters (OC-C) appears to introduce a small dependence of the core radius on photometric depth (bottom-right panel). \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig7.eps}} \caption{Top panels: relation of the half-type radii with the concentration parameter, for the RDPs (left panel), MDPs (middle) and SBPs (right). For each model, $R_h$ values increase for deeper profiles. Dashed line in panels (a) and (b): $R_h\sim c^2$. In panel (c): $\mbox{$\rm R_{hL}$}\sim c$. Bottom panels: concentration parameter as a function of photometric depth.} \label{fig7} \end{figure} \subsection{Comparison of similar radii among different profiles} \label{CompDifProf} Differences on the same type of radii among the profiles, introduced essentially by a spatially variable MF, are discussed in Fig.~\ref{fig5} for the GC models. Regardless of the model assumptions, RDP and MDP radii are essentially the same, except for the profiles corresponding to deep photometry, for which the RDP radii become slightly larger than the MDP ones. This occurs basically because of the larger fraction of low-mass stars at the outer parts of the clusters. Since all stars have equal weight in the building of the RDPs, the accumulation of low-mass stars at large radii ends up broadening the RDPs with respect to the MDPs. On the other hand, RDP core and half-star count radii tend to be larger than the SBP ones for profiles including stars fainter than near the TO. RDP \mbox{$\rm R_t$}\ may be 10 -- 20\% larger than SBP ones for all depths. As discussed above, the uniformly-depleted MF of GC-C model produces profiles whose radii are independent of photometry depth. The RDP to SBP core and tidal radii ratios decrease with concentration parameter. The RDP to SBP half-type radii ratios do not depend on $c$. The same analysis applied to the OC models is discussed in Fig.~\ref{fig6}. The presence of massive stars in young clusters enhances the RDP to MDP radii ratios, especially the core and to some extent, the half-type radii. This occurs for profiles that contain stars brighter than $\approx4$\,mag below the TO. For the youngest model (OC-C), the core radius measured in the RDP may be $\approx40\%$ larger than the MDP one. This effect is enhanced when RDP radii are compared to SBP ones, again decreasing in intensity from the core to tidal radii. For OC-C, RDP core, half and tidal radii are $\approx55\%$, $\approx40\%$, and $\approx25\%$ larger than the equivalent SBP ones. Comparing with the GC models (Fig.~\ref{fig5}), the presence of a larger fraction of more massive (brighter) stars towards the center in young clusters tend to enhance radii ratios of RDP with respect to MDP, and especially, RDP to SBP. \subsection{Further relations} \label{FurtRel} The models discussed in previous sections can be used as well to examine the dependence of the half-type radii with the concentration parameter, and to test how $c$ varies with photometric depth. These issues are presented in Fig.~\ref{fig7}. As already suggested by Figs.~\ref{fig3} and \ref{fig4}, the relation of the half radius with $c$, in a given model, changes significantly with photometric depth in RDPs (panel a) and MDPs (panel b). In SBPs, on the other hand, it is almost insensitive to depth (panel c). From eq.~\ref{eq1}, the half-star count radius is tightly related to the concentration parameter according to $\mbox{$\rm R_{hSC}$}=(0.69\pm0.01)+(1.01\pm0.01)\,c^2$. This curve fits well the values measured in the deepest RDP of all GC and OC models alike (panel a). Such a relation fails for the shallower profiles. A similar, but poorer, relation applies to the values derived from the deepest MDPs (panel b), $\mbox{$\rm R_{hM}$}=(0.63\pm0.09)+(0.99\pm0.05)\,c^2$. It fails especially for the young (OC) models. The GC SBPs, on the other hand, can be poorly fit with the linear function $\mbox{$\rm R_{hL}$}=(-0.9\pm0.1)+(2.4\pm0.1)\,c$ (panel c). Concentration parameters measured in RDPs and MDPs (panels d and e) change with photometric depth. Around the TO they reach the maximum value, which corresponds to a star cluster $\approx15\%$ more concentrated than the pre-established value (Table~\ref{tab1}). At the shallowest profiles $c$ presents a value intermediate between the maximum and the pre-established one, which is retrieved at the deepest profiles with the inclusion of the numerous low-mass stars. The exception again is the uniform MF model GC-C, whose $c$ values do not change with $\Delta_{TO}$. $c$ values measured in SBPs are essentially insensitive to photometric depth (panel f). \section{NGC\,6397: a test case} \label{N6397} We compare the results derived for the model star clusters with similar parameters measured in the $\mbox{$\rm M_V$}=-6.63$, nearby GC ($\mbox{$\rm d_\odot$}=2.3$\,kpc) NGC\,6397. Being populous is important to produce statistically significant radial profiles, while the proximity allows a few magnitudes fainter than the giant branch to be reached with depth-limited photometry. NGC\,6397 is a post-core collapse GC with evidence of large-scale mass segregation, as indicated by a mass function flatter at the center than outwards (\citealt{Andreuzzi04} and references therein). Additional relevant data (from H03) for the metal-poor ($\mbox{$\rm [Fe/H]$}=-1.95$) GC NGC\,6397 are the Galactocentric distance $\mbox{$\rm R_{GC}$}=6$\,kpc, half-light and tidal radii (measured in the V band) $\mbox{$\rm R_{hL}$}=2.33\arcmin$ and $\mbox{$\rm R_t$}=15.81\arcmin$, and Galactic coordinates $\ell=338.17^\circ$, $b=-11.96^\circ$. Thus, bulge star contamination is not heavy, and cluster sequences can be unambiguously detected, which is important for the extraction of radial profiles with small errors (see below). Using SBPs built with 2MASS images and a fit with \citet{King62} profile, \citet{Cohen07} derived the core radius in the \mbox{$\rm J$}\ band $\mbox{$\rm R_c$}(J)=61.5\arcsec\pm9.3\arcsec$. However, based on Hubble Space Telescope data and using a power-law plus core as fit function, \citet{NoGe06} derived $\mbox{$\rm R_c$}=3.7\arcsec$ in the equivalent V band, thus roughly resolving the post-core collapse nucleus. The post-core collapse state of NGC\,6397 does not affect the present analysis, since the goal here is the determination of changes produced in cluster radii derived under the assumption of a King-like profile (Sect.~\ref{Struc}) applied to RDP, MDP and SBPs built with different magnitude depths. We base the analysis of NGC\,6397 on \mbox{$\rm J$}, \mbox{$\rm H$}\ and \mbox{$\rm K_s$}\ 2MASS photometry extracted using VizieR\footnote{\em vizier.u-strasbg.fr/viz-bin/VizieR?-source=II/246} in a circular field of radius $\mbox{$\rm R_{ext}$}=70\arcmin$ centered on the coordinates provided in H03. This extraction radius is large enough to encompass the whole cluster, allowing as well for a significant comparison field. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig8.eps}} \caption{Structural analysis of NGC\,6397. Panel (a): decontaminated CMD of a central ($R<5\arcmin$) region. The reference magnitude $\mbox{$\rm J$}=15$ is indicated by the dashed-line. Shaded region: colour-magnitude filter. Background-subtracted RDPs for stars brighter than $\mbox{$\rm J$}<15+\Delta_{J15}$, with $\Delta_{J15}=1,~0,~-1,~-2$ are shown in panels (b) to (d), respectively. The respective King-like fits (solid-line) together with the fit uncertainty (shaded region) are shown.} \label{fig8} \end{figure} For a better definition of the cluster sequences we apply the statistical decontamination algorithm described in \cite{BB07}, which takes into account the relative number-densities of candidate cluster and field stars in small cubic CMD cells with axes corresponding to the magnitude \mbox{$\rm J$}\ and the colours \mbox{$\rm (J-H)$}\ and \mbox{$\rm (J-K_s)$}. Basically, the algorithm {\em (i)} divides the full range of magnitude and colours of the CMD into a 3D grid, {\em (ii)} computes the expected number-density of field stars in each cell based on the number of comparison field stars with magnitude and colours compatible with those in the cell, and {\em (iii)} subtracts the expected number of field stars from each cell. Typical cell dimensions are $\Delta\mbox{$\rm J$}=0.5$, and $\Delta\mbox{$\rm (J-H)$}=\Delta\mbox{$\rm (J-K_s)$}=0.25$, which are large enough to allow sufficient star-count statistics in individual cells and small enough to preserve the morphology of the CMD evolutionary sequences. The comparison field is the region located between $50\leq R(\arcmin)\leq70$, which is beyond the tidal radius. Field-decontaminated CMDs allow for a better definition of colour-magnitude filters, useful to remove stars (and artifacts) with colours compatible with those of the field which, in turn, improves the cluster/background contrast in RDPs and SBPs. They are wide enough to accommodate cluster MS and evolved star colour distributions and dynamical evolution-related effects, such as enhanced fractions of binaries and other multiple systems (e.g. \citealt{BB07}; \citealt{N188}). \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig9.eps}} \caption{Left panels: RDP and SBP structural radii of NGC\,6397 as a function of $\Delta_{J15}$, normalised to the values measured in the deepest profile. Right panels: RDP to SBP radii ratios (similar to Fig.~\ref{fig5}).} \label{fig9} \end{figure} Figure~\ref{fig8} (panel a) displays the decontaminated CMD of a central region of NGC\,6397, with $R<5\arcmin$, somewhat larger than the half-light radius (Table~\ref{tab3}). We take $\mbox{$\rm J$}=15$ as reference to extract the depth-variable profiles. RDPs and SBPs are built with colour-magnitude filtered photometry, with the faint end varying in steps of $\Delta_{J15}=0.5$, with the deepest (i.e. at the available 2MASS depth) profile beginning at $\mbox{$\rm J$}=16$ and the brightest one ending near the giant clump at $\mbox{$\rm J$}=13$. The extracted profiles are fitted with the King-like function discussed in (Sect.~\ref{ModelSCs}). A selection of depth-limited RDPs, together with the respective fits and uncertainties, is shown in Fig.~\ref{fig8}, and the corresponding RDP and SBP (\mbox{$\rm J$}\ band) radii are given in Table~\ref{tab3}. Within uncertainties, the present value of the core radius (for the deepest profile), $\mbox{$\rm R_c$}(\mbox{$\rm J$})=1.4\arcmin\pm0.3\arcmin$, agrees with that derived by \citet{Cohen07}, using the same fit function. The near-infrared half-light radius, on the other hand, is larger than the optical one (H03), $\mbox{$\rm R_{hL}$}(\mbox{$\rm J$})\approx1.5\mbox{$\rm R_{hL}$}(V)$. Effects of the varying magnitude depth on the radii of NGC\,6397 are examined in Fig.~\ref{fig9}. Qualitatively, the resulting curves agree, within uncertainties, with the behaviour predicted by the GC models (Figs.~\ref{fig3} and \ref{fig5}). Compared to the values measured in the deepest RDP, the tidal (panel a), half-star counts (b) and core (c) radii decrease for shallower profiles, especially for $\Delta_{J15}\geq-0.5$, remaining almost uniform for $\Delta_{J15}<-0.5$. In particular, the core radius measured in shallow RDPs (containing essentially giants) drops to $\approx45\%$ of its deepest value (which includes stars at the top of the MS). Consistently with the GC models containing a spatially variable MF (Sect.~\ref{Struc}), the varying depth affects the tidal, half and core radii, with increasing intensity. SBP radii, on the other hand, remain essentially uniform with variable depth, consistent with the GC models (Sect.~\ref{Struc}). The same conclusions apply to the RDP to SBP radii ratio (right panels). \begin{table} \caption[]{Radii of NGC\,6397 from RDPs and 2MASS SBPs} \label{tab3} \renewcommand{\tabcolsep}{0.9mm} \renewcommand{\arraystretch}{1.25} \begin{tabular}{cccccccc} \hline\hline &\multicolumn{3}{c}{RDP}&&\multicolumn{3}{c}{SBP (\mbox{$\rm J$}\ band)}\\ \cline{2-4}\cline{6-8} $\Delta_{J15}$&\mbox{$\rm R_c$}&\mbox{$\rm R_{hSC}$}&\mbox{$\rm R_t$}&&\mbox{$\rm R_c$}&\mbox{$\rm R_{hL}$}&\mbox{$\rm R_t$} \\ (mag)&(\arcmin)&(\arcmin)&(\arcmin)&&(\arcmin)&(\arcmin)&(\arcmin)\\ (1)&(2)&(3)&(4)&&(5)&(6)&(7)\\ \hline $-2.0$&$1.3\pm0.1$&$3.8\pm0.1$&$33\pm5$&&$1.2\pm0.3$&$3.4\pm0.1$&$28\pm5$ \\ $-1.5$&$1.3\pm0.1$&$4.0\pm0.2$&$39\pm8$&&$1.2\pm0.3$&$3.4\pm0.1$&$30\pm5$ \\ $-1.0$&$1.3\pm0.1$&$4.0\pm0.2$&$42\pm8$&&$1.2\pm0.3$&$3.4\pm0.1$&$26\pm5$ \\ $-0.5$&$1.4\pm0.1$&$3.9\pm0.2$&$44\pm7$&&$1.2\pm0.3$&$3.4\pm0.1$&$27\pm8$ \\ $~0.0$&$1.7\pm0.1$&$4.0\pm0.1$&$41\pm4$&&$1.2\pm0.3$&$3.4\pm0.1$&$27\pm6$ \\ $+0.5$&$2.3\pm0.1$&$4.4\pm0.1$&$40\pm4$&&$1.4\pm0.4$&$3.4\pm0.1$&$28\pm8$ \\ $+1.0$&$2.9\pm0.1$&$4.9\pm0.1$&$48\pm3$&&$1.4\pm0.3$&$3.5\pm0.1$&$32\pm2$ \\ \hline \end{tabular} \begin{list}{Table Notes.} \item Core and tidal radii were derived from fits of \citet{King62} functions (Sect.~\ref{Struc}) to the respective profiles. The half-star counts and half-light radii were measured directly on the profiles. \end{list} \end{table} \section{Concluding remarks} \label{Conclu} In this work we simulated star clusters of different ages, structure and mass functions, assuming that the spatial distribution of stars follows an analytical function, similar to \citet{King62} profile. The mass and near-infrared luminosities of each star were assigned according to a mass function with a slope that may depend on distance to cluster center. They form the set of models from which we built number-density, mass-density and surface-brightness profiles, allowing for a variable photometric depth. The structural parameters core, half-light, half-mass and half-star count, and tidal radii, together with the concentration parameter, were measured in the resulting radial profiles. Next we examined relations among similar parameters measured in different profiles, and determined how each parameter depends on photometric depth. We point out that the results should be taken as upper-limits, especially for open clusters, since we have considered noise-free photometry and a large number of stars, which produced small statistical uncertainties. With respect to the adopted form of the radial distribution of stars, we note that \citet{King62} isothermal sphere, single-mass profile has been superseded by more realistic models like those of \citet{King66}, \citet{Wilson75} and \citet{EFF87}, which have been fit mostly to the SBPs of Galactic and extra-Galactic GCs (Sect.~\ref{ModelSCs}). The analytical functions associated with these models are characterised by different scale radii (among other parameters) that are roughly related to \citet{King62} radii. Thus, it is natural to extend the scaling with photometric depth undergone by \citet{King62} radii to the equivalent ones in the other models. The main results can be summarised as follows. \begin{itemize} \item {\em (i)} Structural parameters derived from surface-brightness profiles are essentially insensitive to photometric depth, except perhaps the cluster radius in very young clusters. \item {\em (ii)} Uniform mass functions also result in structural parameters insensitive to photometric depth. \item {\em (iii)} Number-density and mass-density profiles built with shallow photometry result in underestimated radii, with respect to the values obtained with deep photometry. Tidal, half-star count and half-mass, together with the core radii are affected with increasing intensity. \item {\em (iv)} Because of the presence of bright stars, radii underestimation increases for young ages. \item {\em (v)} For clusters older than $\sim1$\,Gyr, number-density and mass-density radii present essentially the same values; for younger ages, RDP radii become increasingly larger than MDP ones, especially at the deepest profiles. \item {\em (vi)} Irrespective of age, profiles deeper than the turnoff have RDP radii systematically larger than SBP ones, especially the core. \item {\em (vii)} The concentration parameter also changes with photometric depth, reaching a maximum around the turnoff. \end{itemize} Most of the above model predictions were qualitatively confirmed with radii measured in ground-based RDPs and SBPs of the nearby GC NGC\,6397. In principle, working with SBPs has the advantage of producing more uniform structural parameters, since they are almost insensitive to photometric depth. However, as discussed in Sect.~\ref{intro}, SBPs usually present high levels of noise at large radii. Noise that is also present in SBPs of clusters projected against dense fields and/or the less populous ones. A natural extension of this work would be to examine radial profiles built with photometry that includes observational uncertainties, differential absorption, metallicity gradients, binaries, and star cluster models with a number of stars compatible with those of open clusters. As a consequence of the wide range of distances to the Galactic (and especially extra-Galactic) star clusters, interstellar absorption, and intrinsic instrumental limitations, the available photometric data for most clusters do not sample the low-mass stars. All sky surveys like 2MASS, usually are restricted to the giant branch, or the upper main sequence, for clusters more distant than a few kpc. In such cases, the structural parameters have to be derived from radial profiles built with photometry that does not reach low-mass stars. The present work provides a quantitative way to estimate the intrinsic (i.e. in the case of photometry including the lower main sequence) values of structural radii of star clusters observed with depth-limited photometry. \begin{acknowledgements} We thank the anonymous referee for helpful suggestions. We acknowledge partial support from the Brazilian institution CNPq . \end{acknowledgements}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The mechanism driving and maintaining the spiral arm structure in disc galaxies is not well understood. The problem stems from the observation that material within a galaxy orbits the centre with a frequency that decreases with radius. Any observed spiral patten will therefore wind up in such a disc (the ``winding problem"). The prevailing theory that attempts to explain the spirality is that the pattern is wave-like rather than material, with stars and interstellar medium (ISM) gas flowing in and out of the arms \citep{1964ApJ...140..646L,1973PASAu...2..174K}. While these original quasi-stationary density wave theories had the problem of spiral decay, later work on global spiral mode theories allowed for more steady spiral structure \citep{1989ApJ...338...78B,1996ssgd.book.....B}. The pattern speed of such spiral arms (the speed the arm is seen to rotate irrespective of the rotation curve) rotates with some fixed frequency; $\Omega_p={\rm constant}$. While compelling in theory, these density waves have proven difficult to definitely affirm as the explanation of spirals in all external galaxies or be reproducible in numerical simulations. In observations of external galaxies certain tracers should appear offset to others in and around the spiral arms (e.g. emission in CO and H$\alpha$). This is due to the shocking of the gas as it approaches the bottom of the stellar potential well, be it from up-stream or down-stream \citep{1968IAUS...29..453F,1969ApJ...158..123R}. Offsets between different galactic components have been observed in some external galaxies, but not all \citep{2009ApJ...697.1870E,2013ApJ...763...94L,2013ApJ...779...42S}. Numerical simulations have observed such offsets between stars and gas only in instances when the spiral pattern is driven by an underlying potential rather than a live stellar disc, in so-called ``dynamic spirals" \citep{2008MNRAS.385.1893D,2011ApJ...735....1W,2012MNRAS.426..167G,2015PASJ...67L...4B}. Spiral density wave (SDW) theory also predicts pattern speeds that are constant throughout the radius of the disc. Once again this is seen in some galaxies, but more recently galaxies are seen to have radially decreasing pattern speeds \citep{2008ApJ...688..224M,2012ApJ...752...52S}. Dynamic spiral arms are however material in nature with pattern speeds that are the same as the rotation frequency of the material in the disc (e.g. \citealt{2013ApJ...763...46B,2013A&A...553A..77G,2015MNRAS.449.3911P}). As they exhibit material-like rotation, these arms will also wind up over the order of a galactic rotation yet are recurrent as well as transient, with new arms forming continually. Whether such a system has three arms, five arms or is near flocculent is down to the mass ratios of the various galactic components (in general, low disc to halo mass ratio systems will form more flocculent-like structures). See \citet{2011MNRAS.410.1637S} and \citet{2014PASA...31...35D} for a more in depth review of the current standing of spiral generation. Grand design, unbarred two-armed spirals, however, present more of an issue. While a large fraction of spirals appear two armed ($\approx 50$\%), the degree of the strength and dominance of this spirality is widely variable \citep{1987ApJ...314....3E,1995ApJ...447...82R}. Long lasting two-armed spirals have been thus far elusive in isolated $N$-body simulations \citep{1994A&A...290..785D,1996ApJ...457..125Z,2011MNRAS.410.1637S}. While two-armed spirals can be generated, the discs tend to alternate between a two and three-armed structure or exhibit two-armed structure for only a short time-frame. There are believed to be two main alternative causes for two-armed spiral generation. The first is the rotation of an inner bar which is the likely cause for spiral arms galaxies such as NGC1300 and NGC1365 (such bars are easily generated in simulations; \citealt{2010ApJ...720L..72S,2012MNRAS.426..167G,2013MNRAS.429.1949A}). These arms tend to be fairly tightly wound and circularize at the Outer Lindblad Resonance; OLR (e.g. \citealt{2008MNRAS.388.1803R}). The nature of the arms in barred galaxies is fairly complex, with observed offsets between bar ends and spiral arms, with the two appearing dynamically decoupled in simulations \citep{2015MNRAS.454.2954B}. Another possible mechanism for two-armed spiral generation in disc galaxies is the tumbling or relaxation of their dark matter haloes \citep{2012ARep...56...16K,2015arXiv150701643H}. The non-axisymmetric distortions of the halo can induce two-armed spirals in embedded rotating discs, though this is an idea still in its infancy with many unknown variables and is more difficult to prove that the other mechanisms. The other main mechanism for generating two-armed spirals is the interaction with a companion galaxy, where the tidal force of a passing companion induces a bridge-tail structure in the host that evolves into a symmetric two-armed spiral \citep{1972ApJ...178..623T,1991A&A...252..571D}. Interactions such as these are believed to be responsible for some of the most well known two-armed spiral galaxies. The poster-child of grand design spiral structure, M51, is clearly interacting with the smaller NGC\,5195 and the system has been reproduced with simulations in numerous previous studies (e.g. \citealt{2000MNRAS.319..377S,2010MNRAS.403..625D}). The structure of our nearest neighbour M31 is difficult to discern due to the large inclination on the night sky, however \citet{2014ApJ...788L..38D} find that a penetrating orbit of a small companion from above the galaxy can induce the somewhat irregular spiral morphology seen in observations. The grand-design spiral M81 is part of a more complex interacting system, with the tidal interactions of at least two other nearby bodies believed to be driving the spiral structure, making reproduction with simulations a difficult endeavour \citep{1993A&A...272..153T,1999IAUS..186...81Y}. The Milky Way itself has several nearby galactic-sized objects, most notably the Large and Small Magellanic Clouds. While the satellites of the Milky Way are not believed to be the sole drivers of the observed spiral structure, they can explain some of its morphological features \citep{2009MNRAS.399L.118C,2011Natur.477..301P}. There are also many less well known galaxies that are believed to have structures driven by companions, be it a minor companion fly-bys such as NGC\,2535 or NGC\,6907, or a more grandiose encounter situation such as NGC\,6872 or Arp\,273. Previous theoretical and numerical work on tidal encounters in galactic systems has primarily focused on $N$-body stellar simulations, beginning with the seminal work of \citet{1972ApJ...178..623T}. Tidal forces tend to scale as $F_{\rm tide} \propto M_p/d^3$, where $M_p$ and $d$ are the mass and pericentric distance to the perturbing companion. This means some degeneracy exists between the $M_p$ and $d$, though methods have been suggested that can break this degeneracy \citep{2009MNRAS.399L.118C}. The velocity vector of the companion plays a more indirect role, with tidal features being strongest when the velocity of the companion is comparable to the rotation speed of the host galaxy near closest approach \citep{2010ApJ...725..353D}. Tidal encounters can be seen to drive many different morphological features, depending on the properties of the interaction. This includes driving \citep{1998ApJ...499..149M,2014ApJ...790L..33L} and hindering bar formation \citep{2008ApJ...687L..13R}, creating ringed and spoked features \citep{2012MNRAS.425.2255F,2015ApJ...805...32W}, and generating grand-design two-armed spirals \citep{1992AJ....103.1089B,1998ASPC..136..260D,2008ApJ...683...94O,2011MNRAS.414.2498S}. Little work has been done on a cosmological perspective on the tidal driving of spiral structures. Simulations of a halo filled with dark satellites by \citet{2008ASPC..396..321D}, seeded by dark matter simulation statistics of \citet{2004MNRAS.355..819G}, find that spirals can be easily generated in a subhalo passage, though these are short-lived. There exists several studies that focus of the driving of spirals in galactic discs in tidal encounters. In the study of \citet{1992AJ....103.1089B} the authors find a lower limit on galaxy to companion mass ratio of 0.01 for driving spiral structure, though their calculations are limited to a static halo and confined to 2D. Similar lower mass limits were used in the simulation surveys of \citet{2011ApJ...743...35C}, and \citet{1991A&A...244...52E}, though in these cases no detailed morphological study was shown for the varying interactions. \citet{1996ApJ...471..115B} include gas in their simulations, but no detailed morphological study, and are fairly low in resolution. \citet{2011MNRAS.414.2498S} also include gas, but limit their study to only three calculations, and find little to no spiral structure induced in a 0.01 companion to galaxy mass ration encounter. The studies of \citet{2008ApJ...683...94O} and \citet{2015ApJ...807...73O} offer the closest analogy to the work presented here, looking into the arm structure and dynamics in several simulations, however, they include no gas in their calculations. In the studies mentioned above we find two questions to be unanswered. The first is how do the gas and stellar morphologies differ in different interactions? Will their morphology be the same in interactions with different masses, orbital inclinations, or will they evolve similar structures regardless of the specifics of the interaction? We specifically aim to look at serval key quantities of the structures in the gas and stellar components; the pattern speed, arm number, pitch angle and radial migration to ascertain whether there is any differences between the two that may in turn be of use to detect such interactions in external galaxies. We also study any offsets between the two media, any such instance is of interest due to the appearance of such features in density wave driven arms and dearth of in the dynamic spiral case. The second key point is finding how small the companion can be and still trigger a spiral in the disc. This has serious observational consequences, and is important for placing limits on companions that could be responsible for unbarred two-armed spirals (such as NGC\,1566, NGC\,2535 and NGC\,2857). This paper is organized as follows. Sec. \ref{sec:numerics} describes the setup and technical details of the calculations. We briefly first discuss the initial isolated disc in Sec. \ref{Res1}, before the introduction of a companion. We then discuss our fiducial simulation in detail in Sec. \ref{Res2}. The results of the parameter study are presented in Sec. \ref{Res3} and we conclude in Sec. \ref{conclusions}. The appendix includes a brief analysis of supplementary models that are permutations of our fiducial model (e.g. different resolutions and gas temperature). \section{Numerical Simulations} \label{sec:numerics} The simulations presented here focus on the case of a small companion interacting with a host disc galaxy. The galaxy is composed of an $N$-body stellar disc, spherical inner bulge and outer halo. ISM gas is also included in the disc, with a stellar to gas mass ratio of 10:1. Initial conditions for the isolated galaxy are generated using the \textsc{nemo} stellar dynamics toolbox \citep{1995ASPC...77..398T}, specifically the \textsc{magalie} initial conditions generator \citep{2001NewA....6...27B}, itself based on the method of \citet{1993ApJS...86..389H}. The profiles describing the various components are an exponential stellar disc, a truncated isothermal dark matter halo and a Hernquist-type bulge. The various masses and scale lengths of each of the components are listed in Table\;\ref{ICtable}. Fig.\,\ref{RC} shows various properties of the initial conditions as a function of galactic radius including the individual rotation curve components (top) the radial velocity dispersion (middle) and Toomre-$Q$ parameter in the stars (bottom), given by the equation; \begin{equation} Q_s = \frac{\kappa \sigma_R}{3.36 G \Sigma_0} \end{equation} where $\kappa$ is the epicycle frequency, $\sigma_R$ the radial velocity dispersion and $\Sigma_0$ the disc surface density. Values of $Q_s<1$ imply the disc is gravitationally unstable \citep{1964ApJ...139.1217T}. The rotation curve is tailored to represent a general disc galaxy, with a velocity amplitude of approximately 200\,km\,s$^{-1}${}, peaking at approximately 220\,km\,s$^{-1}${} at $R=3{\rm kpc}$. The orbital period at two disc radial scale lengths ($2a_{\rm disc}=7{\rm kpc}$) is approximately 190 and 540\,Myr at the disc edge ($R=20$kpc). \begin{figure} \includegraphics[trim = 7mm 10mm 0mm 5mm,width=90mm]{F1_rcqsigv.png} \caption{The rotation curve (top), initial radial velocity dispersion (middle) and initial Toomre-$Q$ parameter in the stars in the simulations presented in this work. Parameters governing each component are given in Table\;\ref{ICtable}.} \label{RC} \end{figure} The initial galaxy is designed to be near-flocculent to make any arm the result of the interaction rather than the disc's own instabilities. This requires a rotation curve that is halo dominated, in accordance with the predicted swing amplification mode (see \citealt{1981seng.proc..111T}, \citealt{2011ApJ...730..109F}, \citealt{2015MNRAS.449.3911P}). This, in addition to the relatively heavy inner bulge, also makes the disc stable to bar formation due to the establishment of a stabilizing Toomre $Q$-barrier in the inner disc. Bars are undesirable in this study as they would make it difficult to discern whether arms are driven by the interaction or by the bar rotation \citep{1987gady.book.....B,1995gaco.book.....C}. This galaxy model is very similar to the Bd model in \citet{2015MNRAS.449.3911P} except with a live halo. A live halo is necessary to correctly model strong interactions, as a static halo would produce an unrealistic anchor for the disc-bulge system, though this does introduce additional dynamical friction. Our fiducial resolution is $2\times 10^4$ bulge, $3\times 10^5$ disc, $3\times 10^5$ gas and $3\times 10^5$ halo particles, though we also perform calculations with 1 million disc particles as a resolution check. Our fiducial resolution is lower than is advised by some literature studies to resolve bar \citep{2009ApJ...697..293D} or spiral \citep{2011ApJ...730..109F} features driven by self-gravity in the stellar disc. However, we stress that are not wishing to follow such structures, and instead wish for a featureless disc prior to the companion interaction. The perturbing companion is normally modelled as a single heavy dark matter particle, though also we run a single computation using a resolved Plummer sphere to represent the companion. \begin{table} \centering \begin{tabular}{@{}l | c l} \hline Param & Value & Desc.\\ \hline \hline $n_{\rm halo}$ & $3\times 10^5$ & Halo particle number \\ $n_{\rm gas}$ & $3\times 10^5$ & Gas particle number\\ $n_{\rm disc}$ & $3\times 10^5$ & Disc particle number\\ $n_{\rm bulge}$ & $2\times 10^4$ & Bulge particle number\\ $n_{\rm pert}$ & 1 or $2\times 10^4$ & Perturber particle number \\ $\epsilon_{\rm soft}$ & 50pc & Grav. softening length\\ \hline $M_{\rm disc}$ & $3.0 \times 10^{10}$ $M_\odot$ & Mass of stellar exponential disc \\ $a_{\rm disc}$ & 3.5\,kpc & Scale length of stellar exponential disc \\ $r_{t,\rm disc}$ & 20\,kpc & Truncation length of disc \\ $M_{\rm halo}$ & $17 \times 10^{10}$ $M_\odot$ & Mass of isothermal halo\\ $a_{\rm halo}$ & 7\,kpc & Scale length of isothermal halo \\ $r_{t,\rm halo}$ & 60\,kpc & Truncation length of halo \\ $M_{\rm bulge}$ & $1.6 \times 10^{10}$ $M_\odot$ & Mass of Hernquist bulge\\ $a_{\rm bulge}$ & 0.7\,kpc & Scale length of Hernquist bulge\\ $M_{\rm gas}$ & 0.1$M_{\rm disc}$& Mass of gas \\ $M_{ p,0}$ &$2 \times 10^{10}$ $M_\odot$& Fiducial mass of perturber \\ \hline \end{tabular} \caption{Fixed values in our simulations, including resolutions and parameters governing the rotation curve of the primary galaxy.} \label{ICtable} \end{table} The orbit is set to be parabolic for our default calculation, with a closest approach of approximately 20kpc and an initial velocity magnitude of 50\,km\,s$^{-1}$. The companion is initially 140kpc away from the host galaxy, to ensure it is well outside of the majority of the halo galactic mass distribution upon inclusion in the system. We focus our efforts on grazing minor interactions, where the companion is at least an order of magnitude less massive than the host, being physically analogous to a small dwarf galaxy or dark matter subhalo. This initial configuration was chosen by performing a series of lower-resolution simulations to find a configuration that produced a strong two-armed perturbation while keeping a grazing closest approach and an initial distance well outside the majority of the halo mass distribution. We investigate the effect of changing the mass, velocity, closest approach and orbital path on the morphology of the host galaxy. Note that due to the effect of dynamical drag as the companion passes through the halo, the orbits may become strongly bound regardless of the seeded orbit. The separate calculations are listed in Table\;\ref{RunParams}, which includes several different companion masses, two resolution tests and four different orbital inclinations. The Extended model has an extended gas disc and larger-scale length (a factor of 2 increase for both compared to the fiducial run) to mimic observations of spiral galaxies with extended gas discs. The mass is the same as the normal runs, so the effective surface density is lower and the path of the companion now directly travels through the gas disc for a nearly half a galactic rotation. Also included are two calculations where two-armed spirals are generated without a companion, including SDW and dynamic, transient and recurrent (DTR) spirals. Simulations were performed using the $N$-body+smoothed particle hydrodynamics (SPH) code \textsc{gasoline} \citep{2004NewA....9..137W}. Gravity is solved using a binary tree, and the system integrated using a kick-drift-kick leapfrog. Self gravity is active for all components, using a fixed gravitational softening of 50pc. The gas is isothermal with a temperature of 10000K, in effect simulating the warm ISM. An additional calculation with 1000K gas (DefaultCld) is included, but to avoid large-scale collapse on the scale of the gravitational softening length we use a surface density half that of the fiducial calculation. Calculations with a 200K gas were initially included, but the disc experienced wide scale collapse rapidly even before the perturber passage. With additional physics such as star/sink formation and supernova feedback the calculation could continue, but for the simulation survey conducted here we prefer to omit these additional physical processes. These will instead be used in a future study with a much smaller number of simulations and higher resolution. The different permutations of the Default calculation (DefaultX3, Resolved, Extended, DefaultCld) are primarily discussed in the appendix and only mentioned briefly in the main text. We found the different resolution tests do not change the driven spiral features. The gas distribution and temperature do have some effect on arm features, reasons for which are discussed in the appendix. \begin{table*} \centering \begin{tabular}{@{}l c c c c c l } \hline Model & $M_p\, {\rm[ 2\times10^{10} M_\odot]}$ & $V_\circ$ [\,km\,s$^{-1}$] & $\theta\, {\rm[deg]}$ & $\psi\, {\rm[deg]}$ & $d {\rm[kpc]}$ & Notes\\ \hline \hline Default & 1 & 50 & 0 &0 & 20 & Our fiducial calculation\\ DefaultX3 & 1 & 50 & 0 &0 & 20 & As above with triple resolution in galaxy\\ Resolved & 1 & 50 & 0 &0 & 20 & As fiducial run but with resolved companion\\ Extended & 1 & 50 & 0 &0 & 20 & As fiducial run but with an extended gas disc\\ DefaultCld & 1 & 50 & 0 &0 & 20 & As fiducial run but with 1000K gas disc and $0.5M_{\rm gas}$\\ \hline Heavy1 & 2 & 50 & 0 &0 &20 & Heavier companion than the fiducial run\\ Light1 & 0.5 & 50 & 0 &0 &20 & Companion 50\% mass of Default\\ Light2 & 0.25 & 50 & 0 &0 &20 & Companion 50\% mass of Light1\\ Light3 & 0.125 & 50 & 0 &0 &20 & Companion 50\% mass of Light2\\ Light4 & 0.0625 & 50 & 0 &0 &20 & Companion 50\% mass of Light3 \\ Light4d1 & 0.0625 & 50 & 0 &0 & 16 & As Light4 but reduced closest approach\\ Light4d2 & 0.0625 & 50 & 0 &0 & 12 & As Light4d1 but reduced closest approach\\ Light4d3 & 0.0625 & 50 & 0 &0 & 8 & As Light4d2 but reduced closest approach \\ \hline Orbit45 & 1 & 50 & +45 &0 &20 & Perturber orbit is $+45^\circ$ out of plane\\ Orbit90 & 1 & 50 & +90 &0 &20& Perturber orbit is $+90^\circ$ out of plane (follows $x=0$)\\ Orbit135 & 1 & 50 & +135 &0 &20 & Perturber orbit is $+135^\circ$ out of plane\\ Above & 1 & 50 & 0 &+90 &20 & Perturber initially above North Galactic Pole\\ \hline Slow1 & 1 & 40 & 0 &0 &20 & Perturber velocity has additional $-10$\,km\,s$^{-1}${} initially\\ Fast1 & 1 & 60 & 0 &0 &20 & Perturber velocity has additional $+10$\,km\,s$^{-1}${} initially\\ \hline SDW &- & - & - & - & - & Gas disc with analytic disc and spiral potentials \\ DTR & - & - & - & - & - & Isolated galaxy with $\times 2 M_d$\\ \hline \hline \end{tabular} \caption{Description of the calculations presented in this work. The latter two calculations have no perturbing companion, and are included to illustrate SDW and DTR two-armed structures. The parameters describing the companion are the closest approach of the orbit, $d$, the initial velocity magnitude, $V_\circ$, and mass of the perturber, $M_p$. The angle $\theta$ measures the rotation of the initial velocity vector about the $y$-axis (i.e., $\theta=180^\circ$ is a retrograde orbit) and $\psi$ is the rotation about the $x$-axis (i.e. when $\psi=90^\circ$ the companion originates from the North Galactic Pole).} \label{RunParams} \end{table*} \section{Results and discussion} \label{sec:results} \subsection{Initial isolated disc} \label{Res1} \begin{figure} \centering \resizebox{1.0\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F2_topiso.png}} \caption{Density projection of the gas (top) and position of star particles (bottom, no bulge stars plotted) of the initial isolated galaxy at three separate times, showing general global stability.} \label{IsoDisc} \end{figure} The evolution of the isolated disc is shown in Fig.\,\ref{IsoDisc} in the gas (upper) and stellar (lower) components, where times are given from the time of initialization. There is some structure in the disc at early times ($t\approx 1$\,Gyr) but this dies away with time. The companion does not reach the stellar disc until after $t\approx 2$\,Gyr, and while there is still some structure in the disc at this stage, the structure is near flocculent and well smoothed out. Fourier mode analysis of the disc in this time frame shows the isolated galaxy has a dominant arm mode of approximately $m=5$, but this is comparable to the noise of the other modes and is highly time-dependant. We allow the initial galaxy to evolve for 1\,Gyr before the inclusion of the perturber into the system. The actual interaction does not occur for another Gyr due to the companion being placed over 100kpc away. Over this time the Toomre Q parameter in the stars rises from an initial value of 1.2 to approximately 1.8 in the mid disc over the snapshots shown in Fig.\,\ref{IsoDisc}. \subsection{Fiducial simulation} \label{Res2} \begin{figure*} \centering \resizebox{0.9\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F3_pertlapse.png}} \caption{Time lapse of the fiducial simulation during and after interaction with the perturber over 700\,Myr. A density render of gas is shown in the top panel, and the positions of star particles in the lower. The galaxy is rotating clockwise and the position of the small companion is indicated by the green dot. The (0,0) location is the centre of mass of the bulge component.} \label{PertLapse} \end{figure*} In Fig.\,\ref{PertLapse} we show a time-lapse of our fiducial simulation. The top panels show the gas surface density and the lower panels the position of the disc star particles spanning a time of 700\,Myr. The companion is indicated by the green point in the first two panels of the stellar distribution, and originates from the top of the page (it then moves out of frame in following panels). The galaxy is initially positioned at (0,0,0) but experiences a drift due to unanchored halo and attraction of the companion. The images in Fig.\,\ref{PertLapse} have been re-centred to the bulge centre of mass for clarity. A clear bridge-tail system is driven soon after the encounter which evolves into a two-armed spiral pattern after approximately half a rotation (150\,Myr). The bridge experiences a bifurcation before the transformation into more regular spiral structures (seen more clearly in the stellar material). The gas in general traces a much finer armed structure, while the stellar distribution is smoother and has more inter-arm material. The arm features persist for two galactic rotations (600\,Myr) before they become distorted from an ideal log spiral structure. After the encounter the perturbing companion stripped the host galaxy of a small amount of gas and stellar material (4\% and 3\% respectively) while the dark matter material is less affected. \begin{figure*} \centering \resizebox{0.80\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F4_modes.png}} \caption{Evolution of the power of various Fourier modes in our fiducial simulation, shown in Fig.\,\ref{PertLapse}. The stellar component is shown on left and the gas on the right. Each column shows a different radial region; the inner disc (2kpc$<R<$6kpc), the mid-disc (6kpc$<R<$10kpc) and the outer disc (10kpc$<R<$15kpc).} \label{ArmModes} \end{figure*} To further quantify the features generated by the tidal interaction, we perform a Fourier analysis of the stellar and gaseous material. This enables the calculation of the dominant arm number, which in turn can be used to infer the pitch angle (the angle an arm makes with a tangent to a circle at the same radius, $\alpha$) and pattern speed (the rotation speed of the observed spiral structure, $\Omega_p$) of the spiral arms. The details of this calculation are described in the appendix of \citet{2015MNRAS.449.3911P}. Fig.\,\ref{ArmModes} shows the power in each Fourier arm mode ($m$) for our fiducial model as a function of time after the passage of the companion. As the response of the disc is different at each radius, we show the mode power at an inner ($2{\rm kpc}<R<6{\rm kpc}$), mid ($6{\rm kpc}<R<10{\rm kpc}$), and outer ($10{\rm kpc}<R<15{\rm kpc}$) disc region. The stellar material is shown in the left-hand column, and the gas in the right. The panels show the time from just after periastron passage of the companion, to when the disc appears to have reverted to its a structure similar to the original morphology, at approximately 1.6Gyr. There is still some remnant power in the $m=2$ and $m=1$ modes after this time, but it is very weak, highly irregular in structure, and confined to the outer disc. There is a clear increase in the power of the $m=2$ mode throughout the disc after the interaction, which appears to dominate at all radii for over 1\,Gyr. This then slowly dies away to powers similar to the other arm modes. The stars have a generally clearer peak in the $m=2$ mode, while the gas has more power allocated to other modes due to its more filamentary nature. There is a small dip in the power of the $m=2$ after the primary peak, which corresponds to when the bridge bifurcates, seen clearly in the stars during $400 {\rm Myr} \le t \le 500 {\rm Myr}$ in Fig.\,\ref{PertLapse}. There is some additional power in the other even modes (4 and 6). There are two possible reasons for this. The first is that there is genuine power in these modes in the disc. In inspection of Fig.\,\ref{PertLapse} there is some bifurcation of two to four armed structure in disc, seen clearly in the 400\,Myr timestamp, which would explain some of the additional power in the $m=4$ mode around the same time. The other reason is the square wave-like nature of the density structure with azimuth at each radii, which will boost the power in other even modes for a $m=2$ signal. With the dominant modes in the disc traced, and clearly belonging to the $m=2$ family, we then fit logarithmic spiral functions to the gas and stellar arms. The resulting pitch angles of the spiral arms in the gas and stars are shown in Fig.\,\ref{Alpha} as a function of time. There is a clear decrease in pitch angle with time, indicating the arms winding up, and follows an exponential-like decay. The black points indicate the fits where the $m=2$ mode is not the dominant mode, and only occurs for the gas where the $m=1$ mode (a large one-armed feature in the outer disc) begins to take over at later times. The shaded region shows approximately where the $m=2$ power decreases to the ambient level of all the other modes, thus making a fit and pitch angle determination problematic. The maximum for both media is near 35$^\circ${} and reaches a minimum of about 6$^\circ${}. These span almost the entire range of values seen in external galaxies (e.g. \citealt{1998MNRAS.299..685S}). This is also the only way of producing two-armed spirals in simulations with comparatively small pitch angles, whereas isolated galaxy simulations with DTR arms tend to favour wide pitch angles of 15$^\circ${}$\le \alpha \le$30$^\circ${} \citep{2013A&A...553A..77G,2015MNRAS.449.3911P}. \begin{figure} \centering \resizebox{1.0\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F5_alpha.png}} \caption{The evolution of the pitch angle for our fiducial simulation for the $m=2$ mode. The blue and black lines show the fit to the stellar and gaseous arms respectively. The black points indicate times where the $m=1$ mode had the greater power. The grey region indicates the time frame where the power of the $m=2$ mode becomes comparable to the other modes (i.e. noise).} \label{Alpha} \end{figure} There is some evidence and interest in whether different galactic components trace different structures. For example, in the M51 PAWS data \citep{2013ApJ...779...42S} offsets can be seen between the star formation regions, molecular gas and old stellar population. Even in our own galaxy there is some evidence that gas and stars trace entirely different arm numbers \citep{2001ApJ...556..181D}. While the dominant arm mode is clearly $m=2$ in these calculations, we can assess any offsets between components. In Fig. \ref{Offs} we show the offset in the spiral features traced by the stars and the gas in our fiducial simulation. No logarithmic spiral assumption is used, with instead the points reflecting the greatest density of material in each radial bin. The vertical axis is simply the azimuthal position of the gaseous arm subtracted from the stellar arm and multiplied by $\pi R$ to give the azimuthal distance offset. There is small yet noticeable offset between the gaseous and stellar arms in the mid/outer disc, with the gas leading the stars upstream in this region. This is significantly larger than resolution limits (gravitational softening is 50pc and gas smoothing lengths being centred approximately around 80pc). The offset is significantly smaller (of the order of 1$^\circ${}-2$^\circ${}) than in simulations of density-wave driven spirals \citep{2015PASJ...67L...4B}, and offsets between different gas components in galaxies where a number of negligible to small-scale offsets ($<10^\circ$) are observed \citep{2013ApJ...763...94L}. As the spiral arms are only pseudo-density waves, shocking of the gas detailed in works such as \citet{1969ApJ...158..123R} should be much weaker and likely only partially causing the offset seen here. The remaining offset is likely due to the tidal nature of the interaction, in which the companion slightly tugs the gas out of stellar potential well upstream of the spiral existing spiral arms. Further investigation on offsets seen in tidal spirals in comparison to what is seen in observations is needed, which we leave to future work. \begin{figure} \centering \resizebox{1.\hsize}{!}{\includegraphics[trim = 0mm 8mm 0mm 0mm]{F6_offset.png}} \caption{The offset between the spiral arms in the gas and stars in the fiducial simulation as a function of radius at three different times after the periastron passage. A minor offset can be seen between the stars and gas at all times in the mid-disc. The large differences in the inner and outer disc are partially due to the near featureless disc centre and the lower surface density in the outer disc.} \label{Offs} \end{figure} The speed that spiral arms rotate at is another point of debate in the community. The standard SDW theory assumes the spirals are rotating with some constant pattern speed, while numerical simulations with live discs (e.g. \citealt{2013ApJ...763...46B,2013A&A...553A..77G,2015MNRAS.449.3911P}) and an increasing amount of observations (e.g. \citealt{2006MNRAS.366L..17M,2008ApJ...688..224M,2012ApJ...752...52S}) show arms that are winding with a rotation speed indistinguishable from the material speed. We measure the pattern speed, $\Omega_p$, of the spiral arms in our fiducial simulation in the stellar and gas components, the results for which are shown in Fig.\,\ref{PatternSpeed}. Pattern speeds are only shown in the range where spiral features can be clearly fit, which does not include the fairly featureless inner disc. Rotation frequencies in the disc are indicated by the black lines, including the $\Omega\pm \kappa/2$ and $\Omega\pm \kappa/4$ resonances (where $\kappa$ is the epicycle frequency and $\Omega$ the rotation frequency of the galactic material). The pattern speed is clearly not constant, as would be expected for winding arms, but it is also not exactly material ($\Omega_p \ne \Omega(R)$). The arms are wave-like, with material flowing in and out of the spiral potential well, but also experience winding due to a non-constant $\Omega_p$. This is highlighted in Fig.\,\ref{TrackGas}, where we show the evolution of the two armed spiral in the gas, and the locations of two individual gas particles (green and magenta points). The paths of the particles are traced by the solid lines as the disc evolves for approximately a full rotation at $R=2a_{\rm disc}$.The particles are selected to be within the spiral arms initially, and can be seen to flow out of the arms, through the inter arm region and then back into another arm (starting and finishing in the black and blue open circles respectively). The pattern speed in Fig.\,\ref{PatternSpeed} clearly traces the $\Omega- \kappa/2$ frequencies, so there is no resulting inner or outer OLR, or corrotation radius. The gas is always moving faster than the spiral arms, and flows into the perturbation from behind. When the companion passes the point of closest approach it is moving with a velocity of 270\,km\,s$^{-1}$ relative to that of the main galaxy. As closest approach is 20kpc, this results in a circular frequency of 13.5$\rm km \,s^{-1}\,kpc^{-1}${}, which is slightly higher than the rotation frequency in the disc at this radius ($\approx 11$$\rm km \,s^{-1}\,kpc^{-1}${}). As such, the frequency of the perturber does not need to be an exact match to any of the frequencies of the disc, be it $\Omega$ or $\Omega- \kappa/2$, to successfully drive a spiral pattern in the disc that persists for Gyr time-scales. The time dependence of the pattern speed will be discussed in further detail in Sec. \ref{Res3}. \begin{figure} \centering \resizebox{0.9\hsize}{!}{\includegraphics[trim = 10mm 0mm 10mm 0mm]{F7_omega.png}} \caption{The pattern speed (in the gas and stars) as a function of radius plotted against the rotation frequencies (in the stars) in the galactic disc. Pattern speeds here are calculated between 400 and 600\,Myrs.} \label{PatternSpeed} \end{figure} It is possible the slight offset seen in Fig.\,\ref{Offs} and the non-material yet non-constant pattern speed in Fig.\,\ref{PatternSpeed} is an indication of the middle-ground nature of these spirals. They are not quite standing density waves (with $\Omega_p={\rm const.}$ and clear gas-star offset) and not quite material arms (with $\Omega_p=\Omega(R)$ and coincident gas-star arms). \begin{figure*} \centering \resizebox{0.8\hsize}{!}{\includegraphics[trim = 0mm 15mm 0mm 5mm]{F8_armtrack.png}} \caption{Time evolution of the positions of gas particles in the host galaxy in our fiducial simulation, coloured by normalized density. The green and magenta circles show two individual gas particles as they move through the disc, and are initially co-incident with the spiral structure. The black and blue open circles are the start and end points of the paths of the two particles. These two particles can be seen to exit and re-enter the spiral arm density waves. The panels are of length 20kpc.} \label{TrackGas} \end{figure*} \subsubsection{Spurring features} \label{SecSpur} \begin{figure} \centering \resizebox{.45\hsize}{!}{\includegraphics[trim = 30mm -5mm 10mm 5mm]{F9_spurPert.png}} \resizebox{.5\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F9_spurPertR.png}} \resizebox{.45\hsize}{!}{\includegraphics[trim = 30mm -5mm 10mm 5mm]{F9_spurWave.png}} \resizebox{.5\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F9_spurWaveR.png}} \resizebox{.45\hsize}{!}{\includegraphics[trim = 30mm -5mm 10mm 5mm]{F9_spurLive.png}} \resizebox{.5\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F9_spurLiveR.png}} \caption{Gas density render and particle locations in our fiducial tidally driven simulation (top), in a simulation with fixed spiral potential (SDW, middle) and a simulation where a relatively heavy live stellar disc has driven a short-lived two-armed structure (DTR, bottom). Spurs can be seen in the inter-arm regions in all cases, but are much weaker in the live-disc simulation.} \label{Spur} \end{figure} We find that once the two-armed mode has been established there are noticeable spur-like features present in the galactic gas disc. These are not seen the stellar component. The presence of spurs in external galaxies is most evident in M51, and has also been seen in simulations in the literature \citep{2003ApJ...596..220C,2006MNRAS.367..873D,2006ApJ...647..997S}. They are relatively easy to produce in simulations with fixed spiral potentials, but somewhat more difficult to reproduce in interactions (seen in \citealt{2010MNRAS.403..625D} and \citealt{2011MNRAS.414.2498S} though not as clear as here), and have remained elusive in simulations of live-stellar discs (e.g. \citealt{2011ApJ...735....1W}). In the top row of Fig.\,\ref{Spur} we show a snapshot of our simulation where clear spurring features can be seen (top, including particle positions and density render). In the middle row we show the results of a calculation where the gas is instead exposed to a static stellar potential (SDW model). The gas is exposed to a log-spiral potential of \citet{2002ApJS..142..261C} with a pitch angle chosen to match that of the companion induced spiral (15$^\circ${}) and a pattern speed of 12$\rm km \,s^{-1}\,kpc^{-1}${}, which is a medium value of the arms in the perturbed disc. The spiral structures are quite similar, with spurring features existing between the arms in the mid disc region. There are obviously some differences, owing to the variable rotation of the spiral arms and the much more dynamic nature of the stellar component in the perturbed galaxy. The reasons for the existence of these spurs is not fully understood. Possible causes include Kelvin-Helmholtz instabilities \citep{2004MNRAS.349..270W}, orbit crowing as gas passes through a spiral shock \citep{2006MNRAS.367..873D} or vorticity generated at deformed spiral shock fronts \citep{2014ApJ...789...68K} and can be dramatically influenced by magnetic fields and intricacies of the axisymmetric rotation curve \citep{2006ApJ...647..997S}. For comparison we also show a live-disc calculation without a perturbing companion that has been initialized so that a two-armed structure is produced in isolation (DTR model, bottom row of Fig.\,\ref{Spur}). The arm structure was simply achieved by doubling the stellar disc mass, which encourages the production of low-mode arm formation (see \citealt{2015ApJ...808L...8D,2015MNRAS.449.3911P}). The arms in this calculation are much more transient than the other models, and will soon shear out into new arm structures (due to their material rotation speed). The spirals formed here have very limited spur features, though some do persist in the upper-left quadrant. The lack of spurs in this dynamic spiral is due to their material-like pattern speed and lack of strong spiral shock. The aforementioned possible causes of these spurs all hinge on the passage of gas through the spiral arms regardless of the mechanism. The gas here is coincident with the stellar spiral potential in the mid to outer disc. The central region however has a minor spur feature, which is likely brought on by the shearing out of the spiral arm it resides in rather than passing through a shock. The existence of these features can be therefore used to discern the origin of spiral structure in observers galaxies, as the two mechanisms shown in Fig.\,\ref{Spur} show spurs, while live disc simulations do not, instead showing more pronounced branches and bifurcations. Future calculations with more realistic ISM physics will help to identify the nature of these spurs, as the warm isothermal calculations presented here are difficult to compare directly to observations. \subsection{Parameter study} \label{Res3} We split our analysis of our parameter sweep into discussion of the variation by mass (Sec. \ref{MassSweep}), then variation by orbital path (Sec. \ref{OrbitSweep}), and finally a brief discussion of the comparative response across all models in terms of strength (Sec. \ref{strength}) migration of material (Sec. \ref{migmat}). \subsubsection{Varying companion mass} \label{MassSweep} One of the purposes of this work is to assess the limiting case of when a companion induces spiral structure, specifically the mass of companion required to form two-armed spiral features. We show the results of six different companion masses; our fiducial calculation, one twice as massive, and four lighter variants. We vary masses by factors of 2 less than our fiducial value ($M_{p,0}=2\times 10^{10}M_\odot$), which equates to approximately 0.7 of the stellar disc mass. The orbit is again in plane to maximize the disc response, increasing the duration of the impulse, and the closest approach to the primary maintained at approximately 20\,kpc. In Fig.\,\ref{MassChange} we show results for different mass perturbers. In the right hand column we show the gas surface density at the time where the $m=2$ mode is the strongest. The left hand column shows the power of each of the arm modes. These are similar to Fig.\,\ref{ArmModes} but only using the stellar material in a range of $4{\rm kpc} \le R \le 12 {\rm kpc}$. The gas tends to follow a very similar trend except with relatively more low-lying power in the $m=4$ mode and greater noise from other modes in general. The central column shows the pattern speed of the two-armed features at two different epochs after the interaction (early:cyan and late:magenta) in the stars (dashed) and the gas (solid) and over a period of two full rotations (black). Note there is no pattern speed shown for the lowest mass companion because the $m=2$ component was too weak to fit a consistent arm feature to for long enough to calculate a pattern speed, and the early pattern speed for the second to last model was impossible to determine at early times. The gas renders show a very clear decrease in the disc response to the companion as the mass is decreased. This is also mirrored in the power spectrum, where the $m=2$ mode is seen to be barely any higher than the ambient noise of the remaining modes for the $0.0625M_{p,0}$ mass companion. The overall behaviour of the $m=2$ mode with time also changes with decreasing mass. For the two heavier companions there is a clear sharp increase as the initial bridges and tails are formed, which then slowly decrease to lower levels over the course of a Gyr. With the lighter companions the response is more muted and near-symmetric, in that the $m=2$ mode increases as gradually as it decays, likely due to the lack of a strong bridge adding significant power to the $m=2$ mode. The pattern speeds show very little change with decreasing companion mass. All masses show a good general agreement with the location of the $\Omega-\kappa/2$ resonance as seen in Fig.\,\ref{PatternSpeed} at later times. The pattern speed is more difficult to fit in the inner disc ($R<5$kpc) where the influence of the random velocities of the bulge is considerable. At the early epoch (magenta lines) the pattern speed is near constant in the outer disc, as documented by \citet{2015ApJ...807...73O}. However, we stress that during this period the interaction is in its early stage and when there is still a clear bridge and tail system in the heavier models. We choose to perform most of our analysis after the bridge is disconnected and when the spiral, avoiding the epochs when the arms are highly asymmetric. The pitch angles for all the models also follow a similar trend between different mass companions, and we do not show them here. All have an initial steep rise and then gradual decline as in our fiducial model (Fig.\,\ref{Alpha}) though the weaker the model the faster the decrease occurs and the lower the maximum pitch angle initially reached. \begin{figure*} \centering \resizebox{0.34\hsize}{!}{\includegraphics[trim = -5mm 0mm 5mm 0mm]{F10b_massmode.png}} \resizebox{0.34\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F10c_massomega.png}} \resizebox{0.295\hsize}{!}{\includegraphics[trim = 10mm 4mm 0mm 6mm]{F10a_massrend.png}} \caption{Fourier arm modes (calculated in $4{\rm kpc}\le R \le 12{\rm kpc}$ range), pattern speeds and gas density render in interactions with various companion masses (decreasing from top). The $m=2$ mode can be clearly seen to drop in power as the companion mass is reduced, reaching barely greater than the ambient noise for the lowest mass case. The pattern speeds show very little variation with mass in either component. Note the spiral arms are too weak to fit a pattern speed in the lowest mass case.} \label{MassChange} \end{figure*} As the $0.0625 M_{p,0}$ ($1.25\times 10^{9}M_\odot$) companion induced negligible response in the host galaxy, we ran further calculations to discern whether this is the limit for inducing structure. We reduced the closest approach distance to the host galaxy, until the disc displayed signs of the interaction (Light4d1). In Fig.\,\ref{MassLow} we show the resulting calculation, where the periastron passage distance is reduced to 12kpc. The top panel shows the density render, and the lower panel the evolution of the Fourier components. By this stage a small SDW is driven in the outer disc, but is fairly weak and accompanied by a growth in the $m=3$ mode initially, and the $m=1$ mode later on. Attempts to increase the amplitude and longevity of the $m=2$ mode by varying the companion properties and orbit resulted in mergers or unperturbed fly-bys. Closer periastron passages resulted in the companion ploughing through the disc, at which point the point-mass approximation breaks down, and the perturber generates a strong $m=1$ mode in the interaction. We therefore find that a companion to stellar disc mass ratio of approximately $f_d \approx 25$ ($\approx 1\times 10^9$ in our calculations) is the limit to significant spiral structure generation, below which it is unlikely spiral features can be induced by a non-merging tidal encounter. Substructures of this size in galactic haloes are seen in large scale structure simulations (e.g. \citealt{1999ApJ...524L..19M}). While the masses of these subhalos does extend into the $1\times 10^9$ regime, this appears at the tail end of the distributions in some studies \citep{2004MNRAS.348..333D,2004MNRAS.355..819G}. \begin{figure} \centering \resizebox{1.0\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F11a_lowmassR.png}} \resizebox{1.0\hsize}{!}{\includegraphics[trim = 8mm 0mm -8mm 0mm]{F11b_lowmassmode.png}} \caption{Response of the model galaxy to the lowest mass companion with a closer periastron passage (model Light4d2). Top panels show the top-down gas response, and the bottom panel shows the evolution of stellar arm response. The Fourier modes are shown in the stars in the range $4{\rm kpc} \le R \le 12 {\rm kpc}$.} \label{MassLow} \end{figure} For completeness we also include a calculations with a companion with twice the fiducial mass (Heavy1), where the companion mass is now heavier than the stellar disc itself. A time-lapse of the gas response is shown in Fig.\,\ref{MassX2}. The response of the disc in this instance is very strong, which results in a less symmetric spiral structure than the lower mass calculations. An initially very strong bridge-tail feature is formed, which transforms into a one-armed structure shortly after periastron passage (about 200\,Myr). This arm interacts with the more regular inner two-armed features to create some more exotic and irregular ring-like features, and even a leading arm structure in the final panel. Much of the low-density gas has also been radially offset in this interaction compared to the lower mass companions, reducing the effective ``size" of the host galaxy by nearly 2kpc shortly after the interaction compared to the lighter calculations (see Sec.\;\ref{migmat}). Higher mass companions were also tested, but the interaction became more and more destructive and formed short lived spiral arms that quickly formed irregular features. One benefit of this strongly interacting case is that the stronger tidal forces appear more efficient at driving spiral features in the inner disc. The top-right panel of Fig.\,\ref{MassX2} shows a two-armed feature that persists to $R \approx 2{\rm kpc}$, whereas the fiducial calculation has arms that dissipate by 3-4kpc. \begin{figure} \centering \resizebox{1.0\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F12_heavy.png}} \caption{Time-lapse of the gas density in the interaction between our heavy companion and the host galaxy. The spiral response appears weaker after the interaction compared to the fiducial model (Figure\,\ref{PertLapse}).} \label{MassX2} \end{figure} While not explicitly shown here, we also experimented with varying the perturber initial velocity magnitude. The resulting arm structures were very similar to effect seen in varying the companion mass. The Slow1 calculation (10\,km\,s$^{-1}${} slower initial velocity) drives the same response as doubling the companion mass, and the Fast1 calculation (10\,km\,s$^{-1}${} faster initial velocity) the same as halving the companion mass. This is due to the effect of increasing/decreasing the duration of the impulse experienced by the disc, creating structures that are similar to increasing/decreasing the companion mass. Even the final separation between the host and companion is similar in the Fast1 and Light1 models, reaching approximately 60kpc when the spiral mode is most well defined. The equivalent is true for the Slow1 and Heavy1 calculations, though the orbits are clearly bound in these cases. \subsubsection{Varying companion orbit} \label{OrbitSweep} We perform a limited number of calculations where the perturbing companion is no longer orbiting in the plane of galactic rotation. These include three orbits where the perturber origin is the same, but velocity vector is rotated by 45$^\circ${}, 90$^\circ${} (passes over the North Galactic Pole), and 135$^\circ${} out of plane; Orbit45, Orbit90 and Orbit135. We also perform a single calculation where the companion originates directly above the North Galactic Pole (the Above model), but all other properties of the orbit are same. The orbital path of the Above and Orbit90 models has no azimuthal component at closest approach, and so has no rotation frequency to compare to the galactic disc. The Orbit45 and Orbit135 models have frequencies of $+10$$\rm km \,s^{-1}\,kpc^{-1}${} and $-10$$\rm km \,s^{-1}\,kpc^{-1}${} respectively, slightly lower than that of the previous models (13.5$\rm km \,s^{-1}\,kpc^{-1}${}) and closer to the disc orbital frequency at the same radius. In Fig.\,\ref{OrbitsRend} we show top-down gas renders of the response of the disc in each of these four calculations 600\,Myr after closest approach. The response of the disc is clearly seen to be reduced the further the companion moves out of plane, with the Orbit45 calculation appearing very similar to $0.5 M_{p,0}$ mass companion from Fig.\,\ref{MassChange}, despite the point of closest approach being unchanged. The retrograde approach (Orbit135) has an extremely diminished effect on the disc, and tests with a completely retrograde in-plane orbit ($\theta=180^\circ$) showed no resulting spiral structure in the disc. Moving the orbital path of the companion of host galaxy's orbital plane therefore gives a similar result as lowering the mass, implying the in-plane and retrograde momentum of the companion is the key quantity. \begin{figure} \centering \resizebox{1.0\hsize}{!}{\includegraphics[trim = 20mm 0mm 20mm 0mm]{F13_orbitrend.png}} \caption{Gas surface density render for the primary galaxy after interaction with the fiducial mass companion with different orbital paths. Notice that despite the same mass and periastron distance, the resulting arm structure is very different between models.} \label{OrbitsRend} \end{figure} Regarding the properties of the spirals driven in these calculations, we show the evolution of the pitch angle with time in Fig.\,\ref{OrbitsAlpha} and the pattern speed as a function of radius in Fig.\,\ref{OrbitsOmega}. The pattern speeds are calculated at a time where the $m=2$ mode is most prominent. The pitch angles all have very similar behaviour, and start with a decreasing maximum amplitude as the orbits move further out of plane. The rate of decay is similar to that of Fig.\,\ref{Alpha}, dropping to about $4^\circ$ in a Gyr. The pattern speeds are also similar for all models, where the main difference is seen in the Orbit135 model, whose pattern speed appears flatter than that of the other models. We only show the pattern speed in this model in the $8 {\rm kpc} \le R \le 17 {\rm kpc}$ range as further within the disc there are negligible arm features to fit to. The near constant pattern speed in this range is similar to that of the $0.125 M_{p,0}$ model in the same range at the earlier epoch, which also has a very weak spiral response (Fig \ref{MassChange}). The stars and the gas again trace very similar pattern speeds for each model. \begin{figure} \centering \resizebox{1.0\hsize}{!}{\includegraphics[trim = 20mm 0mm 10mm 0mm]{F14_orbitalpha.png}} \caption{Evolution of the pitch angle of the $m=2$ arm mode (in stars and gas) induced by companions of our fiducial mass but with different orbital paths.} \label{OrbitsAlpha} \end{figure} \begin{figure} \centering \resizebox{0.9\hsize}{!}{\includegraphics[trim = 15mm 0mm 15mm 10mm]{F15_orbitomega.png}} \caption{Pattern speed of $m=2$ features in the galactic disc in the gas (left) and stars (right) driven by companions on different orbital trajectories. Note that the $m=2$ mode is so weak in the Orbit135 model a pattern speed can only be calculated in the outer disc.} \label{OrbitsOmega} \end{figure} In Fig.\,\ref{Warps} we show edge-on views of the gas disc in the calculation where the companion originates above the disc (bottom right Fig.\,\ref{OrbitsRend}). Panels show different times after periastron passage ($t=0$). A clear warp feature can be seen in the outer edge of the disc ($12{\rm kpc}\le R \le 20{\rm kpc}$), which oscillates about the $x-y$ plane after the passage of the companion and has stabilized after approximately 500\,Myr. Galactic warps are not uncommon in external galaxies (e.g. ESO 510-G13) and our own Milky Way \citep{2006ApJ...643..881L,2006ApJ...641L..33W}. The warp of the Milky Way is seen to extend to about 1kpc at $R\approx 15{\rm kpc}$, which is very similar to what is seen in Fig.\,\ref{Warps} \citep{doi:10.1146/annurev-astro-082708-101823}. Interestingly for a warp of this scale there is very little spiral structure driven in the disc, especially inside $R\le10${\rm kpc}. This suggests that whatever process induces warps in galactic discs need not necessarily be responsible for observed spiral structure. For example, in the context of the Milky Way, if the Magellanic companions are responsible for the Galactic warp then the spiral structure itself may be driven by a different mechanism. \begin{figure} \centering \resizebox{1.0\hsize}{!}{\includegraphics[trim = 20mm 0mm 20mm 0mm]{F16_warp.png}} \caption{Vertical projection of gas density at different times in the simulation ``Above" where the companion originates from the North Galactic Pole. A warp can be seen to be induced at the disc edge, which oscillates about the $x-y$ plane before settling back to equilibrium.} \label{Warps} \end{figure} \subsubsection{Quantifying the strength of different interaction scenarios} \label{strength} \begin{figure*} \centering \resizebox{0.48\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F17a_PowComp_s.png}} \resizebox{0.48\hsize}{!}{\includegraphics[trim = 0mm 0mm 0mm 0mm]{F17b_PowComp_g.png}} \caption{Comparison of the $m=2$ response in the calculations presented here. The main mass sweep is shown by the black points, the different orbital paths in magenta, different orbital paths in green and low mass companions with different closest approaches in blue. The large circles show the average power of the $m=2$ mode relative to that of the fiducial model. The red points show the maximum for the varying mass models, and the grey region shows the power of the $m=2$ mode before the interaction (the approximate noise level).} \label{PowComp} \end{figure*} Fig.\,\ref{PowComp} shows the arm response for the majority of the calculations presented here. The $y$-axis shows the power of the $m=2$ mode relative to the power of arms in our fiducial calculation, averaged over 1Gyr after the interaction. The black points show different mass companions, with the red points showing the maximum power for these runs, rather than the average. The blue points show the runs with the lightest mass companion for varying closest approach distances. The green points show different orbital paths, and the magenta points show calculations with the $\pm10$\,km\,s$^{-1}${} velocity boost. The size of the points is proportional to the duration for which the $m=2$ mode dominates the full spectrum. The stellar response is shown in the left, and gas in the right, with masses and mode powers shown in log-space. It is immediately clear that there is a near-linear drop in response with decreasing companion mass at fixed closest approach (20kpc), with the lightest companion inducing a response barely stronger than the noise level and being relatively short-lived (as seen in the bottom-left of Fig.\,\ref{MassChange}). Decreasing the closest approach distance then increases the response, though the 16kpc approach (Light4d1) shows little additional power. The Light4d2 (12kpc distance) model shows a stronger response similar to a companion of $\times4$ the mass, and the Light4d3 model stronger still. This latter model however is highly destructive to the host galaxy, carving a great swath through the gas disc and resulting in a merger scenario. The out-of-plane companions can be seen to drive a decreasing response in the disc, though being 45$^\circ${} out of plane seems to make only a minor difference. Moving 90$^\circ${} out-of-plane is similar to reducing an in-plane companion mass by 1/2, and 135$^\circ${} similar to reducing by 1/8. The strength of the interaction can be characterized by the dimensionless parameter \citep{1991A&A...244...52E}: \begin{equation} S = \left(\frac{R_{\rm enc}}{d} \right)^3 \frac{\Delta T}{T}\frac{M_p}{M_{\rm tot}(R<R_{\rm enc})} \label{Seq} \end{equation} where $M_p$ is the companion mass and $d$ is the distance of closest approach. $R_{\rm enc}$ is a characteristic distance of the galaxy (taken here to be 20kpc, the truncation distance of the stellar and gaseous disc) and $M_{\rm tot}(R<R_{\rm enc})$ is the mass of all the host galaxy components within this radius. $\Delta T$ is the time for the perturber to move 1 radian at closest approach, and $T$ is the time for stars at $R_{\rm enc}$ to move 1 radian in orbit around the galactic centre. This $S$ parameter provides information on the tidal strength of the interaction and is the force experienced by material in the outer edge of the disc over a duration $\Delta T$ as a fraction of the circular momentum in the galactic orbit at this point. This offers a method of characterizing the strength of the interaction while taking into account the velocity information. We show the values of $S$ for our in-plane interactions in Fig.\,\ref{Spow}, using the same colours for points as Fig.\,\ref{PowComp} for reference. As with the $m=2$ mode analysis there is a clear trend with our models to have a decreasing tidal force with decreasing companion mass. The value of $S$ for our fiducial calculation is similar to those used in the literature for interacting galaxies, however our minimum spiral case is substantially lower than that seen in the literature ($S\approx0.01$ for the Light4d2 calculation). For example, \citet{2015ApJ...807...73O} find a tidal strength of $S\lesssim 0.065$ is required to form at least a tidal tail, however their models do not explore below this value to find the no-spiral case. \citet{1991A&A...244...52E} look at much weaker interactions, and find spirals can be induced in interactions with strengths of $S\approx0.02$, though they are mostly concerned with ocular/bar shaped structures. \begin{figure} \centering \resizebox{1.0\hsize}{!}{\includegraphics[trim = 0mm 10mm 0mm 0mm]{F18_Spow}} \caption{The dimensionless strength parameter, $S$ (Eq.\,\ref{Seq}), for each of our in-plane interactions with varying orbital properties. Colours are the same as those in Fig.\,\ref{PowComp}. The dotted line traces the $S=0$ limit.} \label{Spow} \end{figure} \subsubsection{Migration of material} \label{migmat} \begin{figure*} \centering \resizebox{1.0\hsize}{!}{\includegraphics[trim = 20mm 20mm 10mm 0mm]{F19_encmass.png}} \caption{The evolution of the radius which encloses half ($R_{1/2}$, red lines) or three-quarters ($R_{3/4}$, green lines) of the total mass of stars (dashed lines) and gas in each of our main models (solid lines). The 0\,Myr time corresponds to the time of closest approach of the companion. Radii are measured as offsets from the average value prior to the companion passage, with values given in the lower right corner. All radii are shown to the same scale, with the inserts showing a zoom in of the same data for simulations where the response is small.} \label{SDall} \end{figure*} In Fig.\,\ref{SDall} we show the radial migration of gas and stars in our calculations. The $y$-axis shows the radius that encompasses either half ($R_{1/2}$, red) or three quarters ($R_{3/4}$, green) of the total gas or stellar mass of the galaxy as a function of time. The values for the radius are shown as offsets to the average value before the interaction, shown in the bottom right corner of each panel. All models are shown to the same scale, and for values whose response is very weak the same data is shown in the zoomed in insert. The Above model is not shown as it shows very similar features to the Orbit90 and Orbit135 calculations. Gas is shown by the circles and solid lines, and stars by the stars and dashed lines. In most cases the material is seen to be migrating inwards, shown in the radii entering the negative region. The infall of gas does not continue for a long period, and in the Default and Heavy1 models levels out after 200\,Myr. The spiral arms are continuously winding after this period however (Figs\,\ref{PertLapse} and \ref{Alpha}) so the motion of material inwards is not a result of changes in the arm structure, but rather the strong tidal force of the companion's passage and lasting 200-400\,Myr. \begin{figure*} \centering \resizebox{1.0\hsize}{!}{\includegraphics[trim = 20mm 20mm 10mm 10mm]{F20_halfmassEx}} \caption{As Fig. \ref{SDall} but showing the variations on the fiducial calculation. Left: the higher resolution run, centre: the calculation with a resolved companion and right: the calculation with the extended gas disc. The insert in the latter shows a zoom out of the main plot that encompasses the full extent of $R_{3/4}$.} \label{SDex} \end{figure*} In the strongest interactions (Heavy1 and Slow1) there is also a motion of gas away from the galaxy, shown by the increase of $R_{3/4}$ with a simultaneous infall in the inner disc. In this instance the gas is being stripped from the host to a significant degree. The weakest interactions show no significant radial migration of gas, with $R_{1/2}$ and $R_{3/4}$ moving by only 50pc. In these instances the power of the different arm modes in Fig.\,\ref{MassChange} is a better indicator of the disc response. This is highlighted by the Light3 and Light4 models, as while they have a near-identical behaviour in $R_{1/2}$ and $R_{3/4}$, there is a much clearer $m=2$ component in Fig.\,\ref{MassChange} for Light3. The gas migration for the Light1, Fast1 and Orbit45 models show a similar trend, highlighting that the resulting tidal forces are similar in each case. Similarly the Heavy1 and Slow1 models show a similar trend, through the slower companion has a stronger effect on the migration, with $R_{3/4}$ showing an increase for a longer time than the Heavy1 calculation. The gas and stellar material displays a very similar behaviour for most models, especially in the lower strength interactions, where $R_{1/2}$ and $R_{3/4}$ are near indistinguishable. In some cases the gas appears stronger effected by companion (Default, Orbit45, Slow1, Light1) and experiences a greater net motion inwards. The main difference is seen in the Heavy1 interaction, where the stars have an increase in $R_{3/4}$, whereas the gas experiences a rise then a drop back inwards. In the Slow1 interaction, which appears slightly stronger than the Heavy1, the gas and stars both maintain an increase in $R_{3/4}$. This implies the stars are easier to be dragged out of the disc compared to the gas, whereas the gas requires a stronger interaction to be pulled out but conversely will more readily fall inwards than the stars (seen $R_{1/2}$ in Slow1 and Heavy1). The lack of a strong stripping of gas in these calculations may seem at odds with the paradigm that interaction events should strip spirals of their gas \citep{1999MNRAS.308..947A,2004IAUS..217..440C,2004cgpc.symp..305V}. However, these mechanisms are usually strong interactions or include some dense inter-cluster medium to efficiently strip the outer gas disc. In the models in Fig.\,\ref{SDall} the encounter is grazing, and is only marginally effective at capturing gas in the stronger interactions. Indeed in the tidal interactions that are efficient at gas stripping are usually strong enough that the system results in a merger \citep{2004IAUS..217..440C} which is not the case here. The right hand panel of Fig.\,\ref{SDex} shows the migration of material in a calculation with an extended gas disc. This calculation shows a very different evolution for the gas disc, with the outer material being clearly stripped away from the galaxy (the sharp increase in $R_{3/4}$) and the inner disc falling into the galactic centre. This is due to the material being ram-pressure stripped by the companion which now ploughs through the extended gas disc, whereas in the original calculations the companion grazed the disc-edge. Fig. \ref{SDex} also shows the evolution for $R_{1/2}$ and $R_{3/4}$ for the simulations with a higher resolution and a resolved companion. Behaviour is very similar to the fiducial calculation shown in Fig. \ref{SDall} in both cases. \section{Conclusions} \label{conclusions} We have performed a set of $N$-body and hydrodynamical simulations of galaxy interactions to better understand the relation between resulting stellar and gas structures, and determine the limiting mass case for spiral generation. We find that the spiral structures are very similar in the gas and stellar components. The arm numbers, pitch angles and pattern speeds are all very similar, with subtle differences such as a more noisy gas power spectrum. We find a small but noticeable offset between the gas and stellar spiral arms, whereas in spiral potentials there is a significant offset and no offset seen in isolated $N$-body spiral arms \citep{2015PASJ...67L...4B}. The pattern speeds of the spiral arms trace the $\Omega-\kappa/2$ curves of material, thus behaving as slowly winding density waves. As gas passes through the spiral potential it exhibits spurring features, which are also not seen in isolated $N$-body simulations. The existence of spurs, gas-spiral offsets, and the radial dependence of the pattern speed therefore presents possible tests of the nature spiral arms in nature, which could be paramount for explaining the existence of grand-design two-armed spirals. We find it possible to quantify the strength of the interactions by either the relative power of the Fourier modes, or a dimensionless strengths parameter \citep{1991A&A...244...52E} that also includes velocity information. Moving the interaction out-of-plane is similar to significantly reducing the mass of the companion, and angling out of plane by 90$^\circ${} and 135$^\circ${} produces a similar response to reducing the companion mass by 1/2 and 1/8 respectively. We find a strength parameter of the order of $S\approx0.01$ can induce some spiral features, noticeably lower than found in previous studies. For our calculations we find a minimum mass limit of approximately $1\times 10^{9}M_\odot$ (equivalent to 4\% of the stellar disc mass), with a closest approach of 12kpc and whereby spiral structure can barely be generated without a merger scenario. This is within the range of dark matter subhaloes, (e.g. \citealt{2004MNRAS.348..333D,2015arXiv150605537Z}) suggesting small dark matter structures can drive at least a portion of spirals seen in nature. The next step is to use the work presented here as a foundation to investigate the ability of interactions to reproduce unbarred two-armed spirals in nature. To better match real external galactic gas structure it will be prudent to include the effects of star formation, feedback and ISM heating/cooling. As the Milky Way presents an ideal nearby test-case of a spiral galaxy, it is highly desirable to understand its spiral features. The minimum mass we find above is well below the mass of the LMC ($6-20\times 10^{9}M_{\odot}$; \citealt{1997ApJ...488L.129K}). We aim to expand upon our previous studies of the morphology of the Milky Way by assessing whether tidally induced spiral structure can induce the correct observational analogous seen from Earth, as opposed to steady SDW \citep{2014arXiv1406.4150P} or DTR spiral structures \citep{2015MNRAS.449.3911P}. \section*{Acknowledgements} We would like to thank the referee for their detailed and informative report which greatly improved this paper. ARP is currently supported by the MEXT grant for the Tenure Track System of EJT. Numerical computations were [in part] carried out on Cray XC30 at Center for Computational Astrophysics, National Astronomical Observatory of Japan and the GPC supercomputer at the SciNet HPC Consortium \citep{2010JPhCS.256a2026L}. SciNet is funded by: the Canada Foundation for Innovation under the auspices of Compute Canada; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. Figures showing SPH density were rendered using the freely available \textsc{yt} toolkit \citep{2011ApJS..192....9T}. The authors would like to thank researchers at NAOJ, McMaster University, IMPU and ELSI for useful discussions. \bibliographystyle{mnras}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Active matter has been attracting much interest from a broad range of research communities~\citep{ramaswamy2010mechanics,cates2011active, marchetti2013hydrodynamics,needleman2017active}. At the micron scale, active matter consists of a large number of active agents that are able to convert energy to achieve directed or persistent motions, which include those of living microorganisms, synthetic micro-robots, biopolymers such as actin filaments, etc. The motions of these active agents are attributed to a wide range of mechanisms~\citep{lauga2, marchetti2013hydrodynamics, alapan2019microrobotics}, e.g. one of the most common strategies adopted by natural and artificial micro-swimmers lies in the beating and wiggling of slender structures such as cilia and filaments, which are hair-like slender microscale structures that play an important role in various biological processes~\citep{fawcett1961cilia}, such as swimming, pumping, mixing, cytoplasmic streaming, etc. The biological organelles deliver these functionalities by performing rhythmic, wave-like motions. To achieve persistent motions, cyclic or oscillatory motions are needed, yet, the mechanism underlying the emergence of such oscillations remains unclear. Two major hypotheses, geometric feedback~\citep{brokaw1971bend, brokaw2009thinking, riedel2007synchrony, sartori2016dynamic, hines1983three, hilfinger2009nonlinear} and ``flutter'' or buckling instability~\citep{bayly2016steady,de2017spontaneous, ling2018instability,hu2018finite,fatehiboroujeni2018nonlinear}, have been raised based on theory and/or simulations: the first hypothesis assumes that a time-dependent dynein activity (switching on/off or modulation) is necessary to trigger the oscillations; the second one suggests that a steady point force or force distributions acting along the axial direction of a flexible filament can trigger its oscillatory motion through a ``flutter'' or buckling instability. These forces are in fact called the ``follower force'' in the mechanics literature~\citep{pfluger1950stabilitatsprobleme, ziegler1952stabilitatskriterien, herrmann1964stability}. Because the follower force was initially invented theoretically and assumed to be always tangential to the slender structure regardless of its time-dependent deformation, it was demonstrated only mathematically and has been considered impractical~\citep{koiter1996unrealistic}. However, it was recently realised experimentally on a metre-scale rod~\citep{bigoni2018flutter}. To drive the oscillations of artificial cilia and filaments of micron scale, different methods that exploit magnetic~\citep{dreyfus2005microscopic,singh2005synthesis, evans2007magnetically, livanovivcs2012magnetic, hanasoge2017asymmetric,huang2019adaptive} , electrostatic~\citep{den2008artificial}, piezoelectric~\citep{oh2009bio}, optical~\citep{van2009printed} and hydrogel-based actuations~\citep{sidorenko2007reversible,masuda2013self} have been developed. Nonetheless, these practises relied on a time-dependent power source, except for the self-oscillation of polymer brushes triggered by the Belousov-Zhabotinsky reaction~\citep{masuda2013self}. This reaction-based beating shares with other biological processes, such as mammalian otoacoustic emissions~\citep{gold1948hearing,kemp1979evidence} and glycolysis~\citep{sel1968self} the same feature: self-oscillation, that is generating and sustaining a periodic motion based on a power source without a corresponding periodicity~\citep{jenkins2013self}. In our recent work~\citep{zhu2019propulsion}, we proposed a chemical-reaction-free and follower-force-free strategy to engineer the self-oscillations of artificial structures by employing a time-independent, uniform electric field. We reported an elasto-electro-hydrodynamic (EEH) instability based on the Quincke rotation (QR) instability, and utilised it to drive various motions of an object composed of a dielectric spherical particle with an attached elastic filament. In this work, we will present in detail the setup and the mathematical description of the new EEH problem. First, we numerically solve the system coupling the electrohydrodynamics of the particle in a dielectric fluid and the elastohydrodynamics of the filament in a viscous fluid. We identify the emergence of the EEH instability that produces the self-oscillation of the composite object. The oscillations in turn cause the object to translate. Then, we perform a linear stability analysis (LSA) incorporating an elastohydrodynamic model to predict the onset of self-oscillatory instability. Finally, we propose a minimal model that can reproduce the similar EEH instability. We describe the setup and governing equations of the EEH problem in \S~\ref{sec:setup_math}, and demonstrate the numerical results in \S~\ref{sec:results}. The elastohydrodynamic model and LSA are shown in \S~\ref{sec:linear}, followed by \S~\ref{sec:minimal} illustrating the minimal model. Finally, we conclude our observations and provide some discussions in \S~\ref{sec:conclusions}. \section{Problem setup and mathematical formulations}\label{sec:setup_math} We consider a weakly conducting dielectric spherical particle of radius $A$, which has attached an inextensible elastic filament of contour length $L$. The filament is cylindrical with a constant cross-section of radius $a$, and its slenderness is $\epsilon_{\mathrm{sl}} = a/L \ll 1$. We fix $\epsilon_{\mathrm{sl}} = 0.01$ in this work. The composite object is subject to a time-independent uniform electric field $\bE = E \mathbf{e}_z$ (see figure~\ref{fig:sketch}), where $\mathbf{e}_z$ is the $z$-direction basis vector of the laboratory coordinates system $\mathbf{e}_{xyz}$. The centreline of the filament is described by $\br \lp s, t\rp$, where $s$ indicates the arclength. The base J ($s=0$) of the filament is clamped at the particle surface, namely, the tangent vector $\partial \br/\partial s |_{s=0}=-\mathbf{e}_\mathrm{p}$ at the base always passes through the particle centre P, regardless of its deformation and the orientation vector $\mathbf{e}_\mathrm{p}$ of the particle. The size ratio between the particle and filament is $\alpha = A/L$. We consider only the bending deformation of the filament with a bending stiffness of $D = \pi a^4 Y/4$, where $Y$ denotes Young's modulus. \begin{figure} \begin{center} \includegraphics[scale=0.8]{fig1.pdf} \end{center} \caption{Schematic of the setup: a dielectric spherical particle of radius $A$ attached with a flexible filament of contour length $L$ is exposed to a steady uniform electrical field $\bE = E\mathbf{e}_z$. The composite object's motion, orientation $\mathbf{e}_\mathrm{p}$ and induced dipole $\bP$ are all constrained to the $yz$-plane, and $\mathbf{e}_\mathrm{p}$ is described by the angle $\theta$ with respect to $\mathbf{e}_z$.} \label{fig:sketch} \end{figure} The composite object is immersed in a dielectric solvent fluid with dynamic viscosity $\mu$. The electrical conductivity and absolute permittivity of the solvent are $\ss$ and $\es$, respectively, and those of the particle are $\sp$ and $\ep$; $R=\sp/\ss$ and $S=\ep/\es$ indicate the ratios. The terms $\taus = \es/\ss$ and $\taupart = \ep/\sp$ denote the charge relaxation time of the solvent and particle, respectively. These electrical properties are important to the induced QR electrohydrodynamic instability that is critical to the dynamics in this paper. Their values are based on experiments~\citep{brosseau2017electrohydrodynamic}, where $R=2.3 \times 10^{-7}$ and $S=0.84$ are fixed in this work. Though the filament will also be polarised like the particle, the induced electric torque on the filament will be much weaker than that on the particle (see \S~\ref{sec:conclusions} for a detailed discussion). We thus do not consider the electrohydrodynamics of the filament in this work. \subsection{Assumptions} The numerical simulations are carried out by invoking several assumptions. Motivated by biomimetic applications at the micron scale, we neglect the inertia of the fluid and particle. The fluid motion is therefore governed by the Stokes equations, and the particle satisfies instantaneous force- and torque-free conditions. The movement of the composite object is constrained to be planar, such that the particle centre P and filament position $\br(s, t)$ are in the $yz$-plane. We adopt the local resistive-force theory~\citep{batchelor1970slender} to calculate the hydrodynamic forces on the filament. We further ignore the hydrodynamic interactions between the particle and the filament. In the elastohydrodynamic model developed for LSA, we also assume that the filament undergoes weak deformation. \subsection{Electrohydrodynamics of the particle} When a dielectric particle in a dielectric solvent is exposed to an electric field, the interface of the particle will be polarised. The total induced dipole $\bPtt$ consists of an instantaneous part $\bPinf$ and a retarding part $\bP$, \textit{viz.} $\bPtt = \bPinf + \bP$. Both vectors are defined by three components, $\mPinf_i$ and $\mP_i$ ($i=1...3$) in the reference frame $\mathbf{e}_{123}$ that rotates with the particle (see figure~\ref{fig:euler-sketch}). For a homogeneous spherical particle, its Maxwell-Wagner polarisation time $\taumw$, and low- and high-frequency susceptibilities, $\chi^0$ and $\chi^{\infty}$, respectively, are isotropic, hence the $i$-th component of the instantaneous dipole $\bPinf$ is \begin{align}\label{eq:mPinf} \mPinf_i = \chi^{\infty} E_i. \end{align} In the rotating reference frame of the particle, the retarding dipole $\bP$ is governed by~\citep{tsebers1980internal, cebers2000electrohydrodynamic} \begin{align} \frac{\partial \mP_i}{\partial t} = -\frac{1}{\taumw}\left[ \mP_i - \lp \chi^{0} - \chi^{\infty} \rp E_i\right], \end{align} where \begin{align}\label{eq:kappa} \kappa=\frac{R+2}{S+2} \end{align} and $\taumw = \taus / \kappa$. It is well known that when the charge relaxation time $\taupart$ of the particle is larger than that of the solvent $\taus$, \textit{i.e.}, $R/S<1$, $\bP$ is oriented opposite to the electric field. This directional misalignment is the necessary condition for the electro-rotation of the particle, the so-called Quincke rotation~\citep{quincke1896ueber}, which occurs when, in addition, the strength $E$ of the electric field is above a critical value $E^{\mathrm{cri}}$ derived theoretically as~\citep{jones1984quincke,brosseau2017electrohydrodynamic} (see appendix~\ref{sec:append}) \begin{align}\label{eq:Ecri} E^{\mathrm{cri}} = \sqrt{\frac{2 \ss \mu (R+2)^2}{3 \es^2(S-R)}}. \end{align} We do not consider the hydrodynamic interactions between the spherical particle and filament, hence the dynamics of the particle can be obtained by using its translational and rotational mobility factors. Assuming that the particle rotates at angular velocity $\boldsymbol{\Omega}$ about its centre P, which translates at velocity $\mathbf{U}$, the force and torque balances on the particle give \begin{subequations} \begin{align} \mathbf{F}^{\mathrm{f}\rightarrow\mathrm{p}} - \beta_{\mathrm{drag}} \mathbf{U} & = \mathbf{0}, \\ \boldsymbol{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}} + \boldsymbol{\Gamma}^{\mathrm{elec}} - \gamma_{\mathrm{drag}} \boldsymbol{\Omega} & = \mathbf{0}, \label{eq:torque-free} \end{align} \end{subequations} where $\mathbf{F}^{\mathrm{f}\rightarrow\mathrm{p}}$ denotes the force exerted by the filament on the particle, $\boldsymbol{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}$ the torque with respect to the particle centre P, and $\beta_{\mathrm{drag}} = 6\pi \mu A$ and $\gamma_{\mathrm{drag}} = 8\pi \mu A^3$ are the translational and rotational drag coefficients of a sphere in the creeping flow, respectively. Also, $\boldsymbol{\Gamma}^{\mathrm{elec}}$ is the electric torque on the particle with respect to its centre P, that is \begin{align} \boldsymbol{\Gamma}^{\mathrm{elec}} = \bPtt \times \bE = \bPinf \times \bE + \bP \times \bE = \bP \times \bE, \end{align} where $\bPinf \times \bE \equiv \mathbf{0}$ for an isotropic sphere because $\mPinf_i$ linearly scales with $E_i$ in each direction by the same factor $\chi^{\infty}$ (see equation~(\ref{eq:mPinf})). It is worth noting that $\bPinf \times \bE \neq \mathbf{0}$ for ellipsoidal particles, where the factor $\chi^{\infty}$ is direction dependent \citep{cebers2000electrohydrodynamic,brosseau2017electrohydrodynamic}. The translation of the particle is driven by the elastic force exerted by the filament, which is balanced by the viscous drag, while the rotational motion of the particle is determined by the balance between the elastic, electric and hydrodynamic torques. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth] {fig2.pdf} \end{center} \caption{The reference frame $\mathbf{e}_{123}$ that rotates and translates with the particle, the orientation $\mathbf{e}_\mathrm{p}$ of the composite object coincides with $\mathbf{e}_3$. The proper Euler angles $[\theta, \phi, \psi]$ are adopted to describe the orientation $\mathbf{e}_\mathrm{p}$, where $\mathbf{N}$ denotes the nodal line direction and $\mathbf{Q} = \mathbf{e}_3 \times \mathbf{N}$. We note that a graphical error occurred in a similar figure in our related work~\citep{zhu2019propulsion}, where $\psi$ ranges from $\mathbf{N}$ but erroneously to $\mathbf{e}_y$. } \label{fig:euler-sketch} \end{figure} The orientation of the particle $\mathbf{e}_\mathrm{p}$ is defined as the direction from the filament base J towards the particle centre P, where $\mathbf{e}_3$ of the particle-based reference frame coincides with $\mathbf{e}_\mathrm{p}$. We have found it convenient to use the proper Euler angles $[\theta, \phi, \psi]$, see figure~\ref{fig:euler-sketch}. Here, $\bP$ is decomposed into $\bP = \mP_3 \mathbf{e}_3 + \mP_N \mathbf{N} + \mP_Q \mathbf{Q}$, where $\mathbf{N}$ indicates the nodal line direction and $\mathbf{Q} = \mathbf{e}_3 \times \mathbf{N}$. This decomposition applies to other vectorial variables such as $\bE$. We constrain $\bP$ onto the $yz$-plane, hence $\phi=\psi\equiv 0$ and $\theta$ is the only angle indicating the orientation $\mathbf{e}_\mathrm{p}$; additionally, $\mP_N = 0$ and $\mathbf{e}_x = \mathbf{N}$. For the sake of completeness, we first derive the governing equations for a general situation without these constraints. Using the torque-free condition equation~(\ref{eq:torque-free}), we obtain the governing equations for $[\theta, \phi, \psi]$, \begin{subequations}\label{eq:euler-dim} \begin{align} \frac{\partial \theta}{\partial t} & = \frac{1}{\gamma_{\mathrm{d}}} \lp\Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_N + E_3 \mP_Q - E_Q \mP_3 \rp, \\ \frac{\partial \phi}{\partial t} & = \frac{1}{\gamma_{\mathrm{drag}}} \sin{\theta} \lp -E_3 \mP_N + \Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_Q \rp, \\ \frac{\partial \psi}{\partial t} & = \frac{1}{\gamma_{\mathrm{drag}}} \sin{\theta}\lp E \mP_N - \Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_Q \cos{\theta}\rp, \end{align} \end{subequations} where $E_3 = E\cos{\theta}$ and $E_Q = E\sin{\theta}$. The governing equations for $[\mP_N, \mP_Q, \mP_3]$ are~\citep{cebers2000electrohydrodynamic} \begin{subequations}\label{eq:dipole-dim} \begin{align} \frac{\partial \mP_N}{\partial t} + \frac{\partial \psi}{\partial t} \mP_Q & = -\frac{1}{\taumw} \mP_N, \label{eq:PN} \\ \frac{\partial \mP_Q}{\partial t} - \frac{\partial \psi}{\partial t} \mP_N & = -\frac{1}{\taumw} \left[ \mP_Q - \lp \chi^0 - \chi^{\infty} \rp E_Q \right], \label{eq:PQ}\\ \frac{\partial \mP_3}{\partial t} & = - \frac{1}{\taumw} \left[ \mP_3 - \lp \chi^0 - \chi^{\infty} \rp E_3 \right]. \label{eq:P3} \end{align} \end{subequations} We choose the charge relaxation time of the solvent $\taus$ as the characteristic time, $L/\taus$ the characteristic velocity, and $E^{\mathrm{cri}}$ and $D/(L E^{\mathrm{cri}})$ the characteristic strength of the electrical field and polarisation dipole, respectively. Using $\;\bar{}\;$ to indicate the dimensionless variables hereafter, the dimensionless equations for the Euler angles $[\theta, \phi, \psi]$ are \begin{subequations}\label{eq:non-euler} \begin{align} \frac{\partial \theta}{\partial \bar{t}} & = \frac{1}{\bareta} \lp\bar{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_N + \bar{E}_3 \bar{\mathcal{P}}_Q - \bar{E}_Q \bar{\mathcal{P}}_3 \rp, \label{eq:non-theta} \\ \frac{\partial \phi}{\partial \bar{t}} & = \frac{1}{\bareta \sin{\theta}} \lp -\bar{E}_3 \bar{\mathcal{P}}_N + \bar{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_Q \rp, \label{eq:non-phi} \\ \frac{\partial \psi}{\partial \bar{t}} & = \frac{1}{\bareta \sin{\theta}}\lp \bar{E} \bar{\mathcal{P}}_N - \bar{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_Q \cos{\theta}\rp, \label{eq:non-alpha} \end{align} \end{subequations} as derived in \citet{cebers2000electrohydrodynamic} in the absence of the elastic torque $\bar{\boldsymbol{\Gamma}}^{\mathrm{f}\rightarrow\mathrm{p}}$. Here, \begin{align}\label{eq:bareta} \bareta = \alpha^3 \barmu, \end{align} with \begin{align}\label{eq:barmu} \barmu = \frac{8\pi \mu L^4}{D \taus} \end{align} defined as the elasto-electro-viscous (EEV) parameter. The dimensionless governing equations for $[\bar{\mathcal{P}}_N, \bar{\mathcal{P}}_Q, \bar{\mathcal{P}}_3]$ following from equation~(\ref{eq:dipole-dim}) are \begin{subequations} \begin{align} \frac{\partial \bar{\mathcal{P}}_N }{\partial \bar{t}} & = -\kappa \bar{\mathcal{P}}_N -\frac{\partial \psi}{\partial \bar{t}}\bar{\mathcal{P}}_Q, \label{eq:non-PN} \\ \frac{\partial \bar{\mathcal{P}}_Q}{\partial \bar{t}} & = -\kappa \lp\bar{\mathcal{P}}_Q + \kappa \bareta \bar{E}_Q \rp + \frac{\partial \psi}{\partial t} \bar{\mathcal{P}}_N, \label{eq:non-PQ} \\ \frac{\partial \bar{\mathcal{P}}_3}{\partial \bar{t}} & = -\kappa \lp \bar{\mathcal{P}}_3 + \kappa \bareta \bar{E}_3 \rp, \label{eq:non-P3} \end{align} \end{subequations} where $\kappa = (R+2)/(S+2)$ as defined in equation~(\ref{eq:kappa}). We slightly perturb the instantaneous polarisation $\bar{\bP}^{\infty}$ and use it as the initial value $\bar{\bP}_{\mathrm{ini}}$ of $\bar{\bP}$, where \begin{subequations}\label{eq:bp_perturb} \begin{align} \bar{\bP}^{\infty} & = \frac{\bareta \kappa^2 \bar{E}}{\kappa-(R-1)/(S-1)}\lp \mathbf{e}_Q \sin\theta + \mathbf{e}_3 \cos\theta \rp, \\ \bar{\bP}_{\mathrm{ini}} & = \bar{\bP}^{\infty} + \epsilon_{\mP}|\bar{\bP}^{\infty}|, \end{align} \end{subequations} with $\epsilon_{\mP}=\mathcal{O}(10^{-4}) - \mathcal{O}(10^{-3})$. The dimensionless force- and torque-free conditions are \begin{subequations}\label{eq:force_torque_non} \begin{align} \bar{\bF}^{\mathrm{f}\rightarrow\mathrm{p}} - 3\alpha \barmu \bar{\bU} /4 & = \mathbf{0}, \label{eq:force-free-non} \\ \bar{\boldsymbol{\Gamma}}^{\mathrm{f}\rightarrow\mathrm{p}} + \bar{\bP} \times \bar{\bE} - \bareta \bar{\boldsymbol{\Omega}} & = \mathbf{0}.\label{eq:torque-free-non} \end{align} \end{subequations} Since we constrain the motion of the composite object and the induced dipole $\bar{\bP}$ to the $yz$-plane, we solve equations~(\ref{eq:non-theta}), (\ref{eq:non-PQ}) and (\ref{eq:non-P3}) for $\theta$, $\bar{\mathcal{P}}_Q$ and $\bar{\mathcal{P}}_3$, where the last term $\frac{\partial \psi}{\partial t} \bar{\mathcal{P}}_N$ in equation~(\ref{eq:non-PQ}) vanishes. \subsection{Elastohydrodynamics of the filament} We describe here the elastohydrodynamic equations for the filament. By employing the slender body theory (SBT) considering the leading-order local drag~\citep{batchelor1970slender}, the relation between the velocity $\br_t$ of the filament centreline and the force per unit length exerted by the fluid onto the filament $\bf(s,t)$ is \begin{align}\label{eq:local} 8 \pi \mu \lp \br_t - \bUinf \rp = c \lp \bI + \br_s \br_s \rp \cdot \bf, \end{align} where $\bUinf$ is the underlying flow velocity (background or imposed flow velocity) at $\br(s,t)$ and $\bUinf = \mathbf{0}$ in this work; the subscripts $t$ and $s$ denote the partial derivatives with respect to $t$ and $s$, respectively and \begin{align}\label{eq:c} c = 1+2 \log{\epsilon_{\mathrm{sl}}}<0. \end{align} The filament is assumed to be described by the Euler–Bernoulli constitutive law, and because the elastic force balances the hydrodynamic force anywhere on the centreline, we obtain \begin{align} \label{eq:bf} \bf(s) = -\lp T(s) \br_s \rp_s + D \br_{ssss}, \end{align} where $T(s,t)$ denotes the line tension, which acts as a Lagrangian multiplier to guarantee the inextensibility of the filament, \textit{i.e.}, $\br_s \cdot \br_s \equiv 1$. By substituting equation~(\ref{eq:bf}) into equation~(\ref{eq:local}), and choosing $L$ and $D/L^2$ as the characteristic length and force, respectively, we obtain the dimensionless equations for $\bar{\br}(\bar{s},\bar{t})$, \begin{align}\label{eq:fila_pos} \barmu \bar{\br}_{\bar{t}} = -2c \bar{T}_{\bar{s}} \bar{\br}_{\bar{s}} - c \bar{T} \bar{\br}_{\bar{s}\nons} + c \bar{\br}_{\bar{s}\nons\bar{s}\nons} + c \lp \bar{\br}_{\bar{s}}\cdot \bar{\br}_{\bar{s}\nons\bar{s}\nons} \rp\bar{\br}_{\bar{s}}. \end{align} The dimensionless equation for $\bar{T} \lp \bar{s} \rp$ reads, \begin{align}\label{eq:fila_T} 2c \bar{T}_{\bar{s}\nons} - c \bar{T}\bar{\br}_{\bar{s}\nons}\cdot \bar{\br}_{\bar{s}\nons} = -7c\bar{\br}_{\bar{s}\nons}\cdot \bar{\br}_{\bar{s}\nons\bar{s}\nons} - 6c \bar{\br}_{\bar{s}\nons\bar{s}} \cdot \bar{\br}_{\bar{s}\nons\bar{s}} - \barmu \beta_{\mathrm{p}} \lp 1 - \bar{\br}_{\bar{s}}\cdot \bar{\br}_{\bar{s}} \rp, \end{align} where the last term on the right-hand side $- \barmu \beta_{\mathrm{p}} \lp 1 - \bar{\br}_{\bar{s}}\cdot \bar{\br}_{\bar{s}} \rp$ is an extra (numerical) penalisation term introduced~\citep{tornberg2004simulating,li2013sedimentation} to preserve the local inextensibility constraint $\bar{\br}_{\bar{s}} \cdot \bar{\br}_{\bar{s}} \equiv 1$; $\beta_{\mathrm{p}}=100$ is adopted in our simulations. The boundary conditions (BCs) for $\bar{\br}(\bar{s},\bar{t})$ and $\bar{T}(\bar{s},\bar{t})$ at the free end $\bar{s}=1$ are \begin{subequations} \label{eq:BC_s1} \begin{align} \bar{\br}_{\bar{s}\nons} & = \bar{\br}_{\bar{s}\nons\bar{s}} = \mathbf{0}, \label{eq:BC_s1_r}\\ \bar{T} & = 0. \end{align} \end{subequations} The BCs at the clamped end $\bar{s}=0$ couple the elastohydrodynamics and electrohydrodynamics, as will be described next. \subsection{Elasto-electro-hydrodynamic coupling}\label{sec:eeh_coup} The electrohydrodynamics of the dielectric particle in a dielectric solvent and the elastohydrodynamics of the flexible filament in a viscous fluid are coupled via, first the BCs of $\bar{\br}(\bar{s},\bar{t})$ and $\bar{T}(\bar{s},\bar{t})$ at the filament base $\bar{s}=0$, and second the elastic force $\bar{\bF}^{\mathrm{f}\rightarrow\mathrm{p}}(\bar{t})$ and torque $\bar{\boldsymbol{\Gamma}}^{\mathrm{f}\rightarrow\mathrm{p}}(\bar{t})$ exerted by the filament on the particle (equation (\ref{eq:force_torque_non}). The BCs at the filament base $\bar{s}=0$ are \begin{subequations}\label{eq:BC_s0} \begin{align} \bar{\br} & = \bar{\bx}_{\mathrm{p}} + \alpha \bar{\br}_{\bar{s}}, \label{eq:nonbr_s0} \\ \bar{\br}_{\bar{s}} & = -\mathbf{e}_\mathrm{p}, \label{eq:nonbrs_s0} \end{align} \end{subequations} where $\bar{\bx}_{\mathrm{p}} (\bar{t})$ denotes the dimensionless position of the particle centre P. Equations~(\ref{eq:nonbr_s0}) and ~(\ref{eq:nonbrs_s0}) imply, respectively, that the filament base $\bar{s}=0$ is exactly on the particle surface, and the filament tangent vector at $\bar{s}=0$ always passes through the particle centre. Moreover, $\bar{\bx}_{\mathrm{p}}(\bar{t})$ and $\mathbf{e}_\mathrm{p}(\bar{t})$ are connected to the particle kinematics through \begin{subequations}\label{eq:par-kin} \begin{align} \frac{\d \bar{\bx}_{\mathrm{p}}}{\d \bar{t}} & = \bar{\bU}, \\ \frac{\d \mathbf{e}_\mathrm{p}}{\d \bar{t}} & = \bar{\boldsymbol{\Omega}} \times \mathbf{e}_\mathrm{p}, \end{align} \end{subequations} where $\bar{\bU}(\bar{t})$ is linked to equation~(\ref{eq:force-free-non}) and $\bar{\boldsymbol{\Omega}}(\bar{t})$ to equation~(\ref{eq:non-euler}). The coupling is completed by the computation of $\bar{\bF}^{\mathrm{f}\rightarrow\mathrm{p}}$ and $\bar{\boldsymbol{\Gamma}}^{\mathrm{f}\rightarrow\mathrm{p}}$, \begin{subequations}\label{eq:force-torque-non} \begin{align} \bar{\bF}^{\mathrm{f}\rightarrow\mathrm{p}} & = \left[-\bar{\br}_{\bar{s}\nons\bar{s}} + \bar{T}\bar{\br}_{\bar{s}}\right]|_{\bar{s}=0}, \label{eq:non_elastic_force}\\ \bar{\boldsymbol{\Gamma}}^{\mathrm{f}\rightarrow\mathrm{p}} & = \left[\bar{\br}_{\bar{s}} \times \lp \bar{\br}_{\bar{s}\nons} - \alpha \bar{\br}_{\bar{s}\nons\bar{s}} \rp\right]|_{\bar{s}=0}.\label{eq:non_elastic_torque} \end{align} \end{subequations} For completeness, we write the BC for the tension $\bar{T}$ at the filament base $\bar{s}=0$ \begin{align} 2 c \bar{T}_{\bar{s}} + 6c \bar{\br}_{\bar{s}\nons} \cdot \bar{\br}_{\bar{s}\nons\bar{s}} = - \barmu \bar{\br}_{\bar{s}} \cdot \bar{\br}_{\bar{t}}. \end{align} \section{Numerical results}\label{sec:results} In the original QR phenomenon (without a filament), the particle rotates when the dimensionless electric field is above $1$, namely, $\bar{E} \geq 1$. We hereby investigate the influence of the bending stiffness of the filament by varying $\barmu$, where we fix the electric field $\bar{E} = 1.5$ at which an individual particle undergoes steady QR. We fix the size ratio $\alpha=0.3$ in this section. \begin{figure} \begin{center} \hspace{0cm}\includegraphics[width=1\textwidth] {fig3.pdf} \end{center} \caption{$\barmu$-dependent time evolution of the rotational velocity $\bar{\Omega} \lp \bar{t} \rp$ for (a) $\barmu=600$, (b) $\barmu=635$ and (c) $\barmu=2000$ when $\bar{E} = 1.5$. Their corresponding equilibrium configurations are stationary, undulating and steady spinning, respectively. Note that $\bar{\Omega}_{\bar{t}=0}$ is not necessarily zero because the induced dipole $\bar{\bP}$ is slightly perturbed at $\bar{t}=0$, see equation~(\ref{eq:bp_perturb}); moreover, (a) and (b) have strikingly different scales for $\bar{\Omega}$. } \label{fig:omg_v_time_1} \end{figure} \begin{figure} \begin{center} \hspace{0cm}\includegraphics[width=1\textwidth] {fig4.pdf} \end{center} \caption{ (a) Highlighted cyan domain of figure~\ref{fig:omg_v_time_1}b indicating the initial rapidly growing period of $\bar{\Omega}(\nt)$, for $\barmu=635$ and $\bar{E} = 1.5$. The red curve denotes the local peak $\nonOmegalpk$, and the inset of (a) shows the linear dependence of $\log{\nonOmegalpk}$ on $\bar{t}$. (b) Highlighted green domain of figure~\ref{fig:omg_v_time_1}b corresponding to the time-periodic response of $\bar{\Omega}(\nt)$, where consecutive time instants $\bar{t}_i$ ($i=1,...,6$) within a period are marked. (c) Particle-filament configurations at $\bar{t}_i$. (d) Trajectory of the particle centre within $\bar{t} \in [0, 1940]$.} \label{fig:zoom_in_mu635} \end{figure} We observe that the composite object exhibits three $\barmu$-dependent scenarios, indicated by the time evolution of the rotational velocity $\bar{\Omega}$ shown in figure~\ref{fig:omg_v_time_1}. When $\barmu=600$ (figure~\ref{fig:omg_v_time_1}a), $\bar{\Omega}$ decays dramatically and eventually becomes zero, indicating that the object relaxes to a stationary state. Increasing $\barmu$ to $635$ (figure~\ref{fig:omg_v_time_1}b), the time evolution of $\bar{\Omega}$ features two phases: in the initial phase (cyan domain), it grows rapidly due to self-oscillation; in the second phase (green domain), it reaches a time-periodic state with a constant amplitude of approximately $0.1$. The third type of response is illustrated by $\barmu=2000$, where $\bar{\Omega}$ eventually approaches a steady value around $-0.6$. We further scrutinise the $\barmu=635$ case. The close-up views of the initially rapidly growing phase (cyan domain) and saturated time-periodic phase (green domain) are shown in figure~\ref{fig:zoom_in_mu635}a and b, respectively. The red curve connecting the local peaks $\nonOmegalpk$ of $\bar{\Omega}$ implies an exponential growth of $\bar{\Omega}$ in time. This trend is confirmed by the linear relationship between $\log{\nonOmegalpk}$ and $\bar{t}$ shown in the inset of figure~\ref{fig:zoom_in_mu635}a. The time-periodic phase enlarged in figure~\ref{fig:zoom_in_mu635}b reveals its sinusoidal-like variation characterised by fore-aft temporal symmetry. Six times within one period of this phase are marked, with their corresponding positions and orientations of the particle, and the profiles of the filament depicted in figure~\ref{fig:zoom_in_mu635}c. The oscillating particle drives the filament to wiggle, because the filament is clamped onto the particle. The wiggling filament provides thrust to the whole object, as a natural resemblance to a biological appendage. Consequently, the object achieves locomotion, following a wave-like trajectory (figure~\ref{fig:zoom_in_mu635}d). The wavy path is tightly packed near $\bar{t}=0$, implying the slow motion of the object undergoing small-amplitude oscillation in the initial phase. \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=1\textwidth] {fig5.pdf} \end{center} \caption{Time evolution of the rotational velocity $\bar{\Omega} \lp \bar{t} \rp$ when $\bar{E} = 1.5$ for (a) $\barmu=825$ and (b) $\barmu=1000$.} \label{fig:omg_v_time_2} \end{figure} We observe that when $\barmu$ lies in the self-oscillating regime, the time evolution of $\bar{\Omega}$ varies with $\barmu$. As shown in figure~\ref{fig:omg_v_time_2} for $\barmu=825$ and $1000$, for a larger $\barmu$ it takes fewer time periods for the perturbation to reach its time-periodic state. In addition, that state clearly breaks fore-aft symmetry with increasing $\barmu$. \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=1\textwidth] {fig6.pdf} \end{center} \caption{Amplitude $\nonOmegaamp$ of the rotational velocity as a function $\barmu$ for (a) $\bar{E}=1.2$, (b) $1.5$ and (c) $1.7$. The three $\barmu$-dependent regimes, stationary (dashed lines), undulating (triangles) and steady spinning (diamonds) of the composite object are separated by two thresholds $\barmucrione$ and $\barmucritwo$. (d), (e) and (f) show the linear variation of $\lp \nonOmegaamp \rp^2$ in $\barmu$ in close proximity to $\barmucrione$ for $\bar{E}=1.2$, $1.5$ and $1.7$, respectively.} \label{fig:omg_mu_alpha03_varyE} \end{figure} We next investigate the critical $\barmu$ values that separate the three regimes corresponding to the stationary, undulating and steady spinning states. Figure~\ref{fig:omg_mu_alpha03_varyE} displays the rotational velocity magnitude $\nonOmegaamp$ versus $\barmu$ for $\bar{E}=1.2$ (a), $1.5$ (b) and $1.7$ (c). When $\barmu \leq \barmucrione$, $\nonOmegaamp=0$ represents the fixed-point solution; when $\barmu \geq \barmucrione$, the non-zero $\nonOmegaamp$ representing the constant spinning speed corresponds to the asymmetric fixed-point solution; when $\barmu \in \lp \barmucrione, \barmucritwo\rp$, $\nonOmegaamp$ indicates the magnitude of the oscillating $\bar{\Omega}$ when it reaches a time-periodic state. We plot $\lp \nonOmegaamp \rp^2$ as a function of $\barmu$ in close proximity to $\barmucrione$ in figure~\ref{fig:omg_mu_alpha03_varyE}d-f. The linear dependence of $\lp \nonOmegaamp \rp^2$ on $\barmu$ implies that the instability occurs at $\barmucrione$ through a Hopf bifurcation from where a limit-cycle solution emerges. Moreover, the $\nonOmegaamp(\barmu)$ profile also indicates the supercritical nature of the Hopf bifurcation. On the other hand, a sudden jump of $\nonOmegaamp$ at $\barmucritwo$ signifies a secondary bifurcation where the limit cycle shrinks to a fixed point or vice versa. \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=1\textwidth] {fig7.pdf} \end{center} \caption{(a) Trajectory of the particle centre for $\barmu=635$, when $\bar{E}=1.5$. The dashed arrow indicates how the effective translational velocity $\mU$ is quantified. (b) $\mU$ versus $\barmu \in \lp \barmucrione, \barmucritwo \rp$ when $\bar{E}=1.5$, $\mU$ reaches an optimal value $\mUopt \approx 6\times 10^{-3}$ at $\barmu = \muopt \approx 825$. (c) The optimal EEV number $\muopt$ when the composite object attains the maximum effective translational velocity $\mUopt$; $\muopt$ and $\mUopt$ are plotted versus the field strength $\bar{E}$.} \label{fig:opt_mu_and_spd} \end{figure} Having demonstrated that the composite object is able to achieve propulsion by self-oscillatory undulation, we naturally examine its propulsive performance. Shown in figure~\ref{fig:opt_mu_and_spd}a, when the undulating swimmer reaches its time-periodic state, its trajectory resembles a periodic wave propagating along a straight direction (dashed arrow). We thus define the effective translational velocity $\mU$ of the swimmer as the propagation speed of the wave, that is $\mU = \bar{\mathcal{D}}/\lp \bar{T}_2 - \bar{T}_1 \rp$. This effective velocity $\mU$ exhibits a clear non-monotonic variation in $\barmu$; it reaches its maximum value at an optimal EEV number $\barmu = \muopt \approx 825$ and becomes zero when $\barmu \rightarrow \barmucrione$ and $\barmu \rightarrow \barmucritwo$. Such a non-monotonic trend is expected, since when $\barmu$ is outside the self-oscillating regime $[\barmucrione,\barmucritwo]$, the object either remains stationary or spins steadily, resulting in no net locomotion. It is also worth noting that $\mU$ exhibits wavy variation near $\barmucritwo$. In this regime, the filament is so deflected and the hydrodynamic interactions between the particle and filament can be reasonably strong due to the decreasing distance between them. Since our simulations do not consider the hydrodynamic interactions, hence it is not self-consistent to interrogate the data in detail in this regime. Finally, we show in figure~\ref{fig:opt_mu_and_spd}c the dependence of the optimal swimming condition, $\muopt$ and $\mUopt$, on the electric field strength $\bar{E}$. The optimal EEV number $\muopt$ decreases with $\bar{E}$ monotonically; in contrast, the optimal velocity $\mUopt$ displays a non-monotonic variation in $\bar{E}$, reaching a maximum value of approximately $6\times 10^{-3}$ at $\bar{E} \approx 1.55 - 1.6$. This non-monotonic trend is not surprising. In fact, self-oscillation of the composite object only emerges when $1 < \bar{E} < \bar{\mathcal{E}}^{\mathrm{cri}}$, where $\bar{\mathcal{E}}^{\mathrm{cri}}$ represents the critical electric field above which the particle jointed with a rigid rod ($\barmu \rightarrow 0$) of the same length and slenderness will undergo the QR instability. Hence, when $\bar{E} \geq \bar{\mathcal{E}}^{\mathrm{cri}}$, the composite object will spin steadily but not self-propel regardless of the filament rigidity. On the other hand, when $\bar{E} \leq 1$, the extra anchored filament will further stabilise the original QR particle, hence the composite object will be stationary. We further note that the optimal translational velocity $\approx 6\times 10^{-3}$ is in the range $\lp1, 15\rp\times 10^{-3}$ of the dimensionless speed of a magnetically driven flexible artificial flagellum~\citep{dreyfus2005microscopic}. \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=0.6\textwidth] {fig8.pdf} \end{center} \caption{Amplitude $\nonOmegaamp$ of the rotational velocity versus $\bar{E}$ for three EEV numbers $\barmu=500$, $2000$ and $8000$. $\barEcrione$ (green star) and $\barEcritwo$ (magenta star) indicate where the Hopf and secondary bifurcations occur, respectively. The solid curve corresponds to the original QR rotational velocity, $\bar{\Omega}_{\mathrm{QR}}$ (see equation~(\ref{eq:non_omega_QR})) and the hollow square denotes $\bar{E}=1$, the occurrence of the pitchfork bifurcation.} \label{fig:omg_v_E_varyMu_a0.3} \end{figure} By analogy to the results in figure~\ref{fig:omg_mu_alpha03_varyE}a-c, we show in figure~\ref{fig:omg_v_E_varyMu_a0.3} $\nonOmegaamp$ versus $\bar{E}$ as the bifurcation parameter for three EEV numbers $\barmu=500$, $2000$ and $8000$. A similar bifurcation diagram is identified: increasing $\bar{E}$ from zero, the stationary fixed point solution transits to a limit-cycle solution through a supercritical Hopf bifurcation at $\barEcrione$ (green star); that solution then jumps to a second fixed point solution (steady spinning) via a secondary bifurcation at $\barEcritwo$ (magenta star). The original QR instability emerges at $\bar{E}=1$ (hollow square) through a supercritical pitchfork bifurcation~\citep{turcu1987electric, peters2005experimental,das2013electrohydrodynamic}. The filament manages to transform that bifurcation for an individual particle into a corresponding Hopf bifurcation leading to self-oscillation. It is not surprising that by increasing $\barmu$, the variation of $\nonOmegaamp$ for the composite object tends to recover that of the original QR corresponding to $\barmu \rightarrow \infty$. \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=1\textwidth] {fig9.pdf} \end{center} \caption{(a) Time evolution of the elastic $\bar{\Gamma}_{x}^{\mathrm{f} \rightarrow \mathrm{p}}$ (solid), electric $\bar{\Gamma}_{x}^{\mathrm{elec}}$ (dashed), hydrodynamic $\bar{\Gamma}_{x}^{\mathrm{hydro}}$ (dot-dashed) and total $\bar{\Gamma}_{x}^{\mathrm{total}}$ (straight solid) torque ($x$-component) on the particle with respect to its centre, where $\barmu=635$ and $\bar{E}=1.5$. (b) Close-up view of the time-periodic state. Shaded regions indicate when the elastic $\bar{\Gamma}_{x}^{\mathrm{f} \rightarrow \mathrm{p}}$ and hydrodynamic $\bar{\Gamma}_{x}^{\mathrm{hydro}}$ torques have opposite signs.} \label{fig:torque_omg} \end{figure} It is evident that the elastic torque $\bar{\boldsymbol{\Gamma}}^{\mathrm{f}\rightarrow\mathrm{p}}$ plays an important role in the torque balance. We examine the time evolution of the $x$-component of the torques, namely the elastic $\bar{\Gamma}_{x}^{\mathrm{f} \rightarrow \mathrm{p}}$, hydrodynamic $\bar{\Gamma}_{x}^{\mathrm{hydro}}$ and electric $\bar{\Gamma}_{x}^{\mathrm{hydro}}$ torques in figure~\ref{fig:torque_omg} when $\barmu=635$ and $\bar{E}=1.5$. The sum of the torques $\bar{\Gamma}_{x}^{\mathrm{total}} = \bar{\Gamma}_{x}^{\mathrm{f} \rightarrow \mathrm{p}} + \bar{\Gamma}_{x}^{\mathrm{hydro}} + \bar{\Gamma}_{x}^{\mathrm{elec}}= 0$ implies that the torque balance is well satisfied numerically. Similar to the evolution of the rotational velocity, the torques exhibit exponential growth in the initial phase before approaching a time-periodic state. The torque balance in this state is further scrutinised in figure~\ref{fig:torque_omg}b. Realising the negative relation between $\bar{\Gamma}^{\mathrm{hydro}}_x$ and $\bar{\Omega}$, we notice that $\bar{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_x$ and $\bar{\Omega}$ have the same sign in the two highlighted periods emphasising when the elastic $\bar{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_x$ and $\bar{\Gamma}^{\mathrm{hydro}}_x$ hydrodynamic torque contributions have opposite signs. The in-phase behaviour of $\bar{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_x$ and $\bar{\Omega}$ is a clear signature of negative damping, or positive feedback that triggers the linear instability of self-oscillation~\citep{jenkins2013self}. \section{Linear stability analysis}\label{sec:linear} \subsection{Linearisation about the stationary equilibrium state}\label{sec:linear_qc} We perform LSA about the stationary equilibrium state of the composite particle when the filament is undeformed. In this section, we drop the bars for all of the dimensionless unknown variables (those over dimensionless parameters remain), unless otherwise specified. We linearise the governing equations of the particle orientation $\theta$, and the dipole components $[\mP_Q, \mP_3]$. By incorporating into the LSA a theoretical model of the elasto-viscous response of the filament, we do not linearise the equations for the filament position $\br(s)$ and tension $T(s)$ as conducted in \citet{guglielmini2012buckling}. The state variables $[\theta, \mP_Q, \mP_3]$ are decomposed into a base (equilibrium) state $[\hat{\theta}, \hat{\mP}_Q, \hat{\mP}_3]$ and a perturbation state $[\theta^{\prime}, \mP^{\prime}_Q, \mP^{\prime}_3]$, which satisfy \begin{subequations} \label{eq:linear-decomp} \begin{align} \theta & = \hat{\theta} + \theta^{\prime}, \\ \mP_Q & = \hat{\mP}_Q + \mP^{\prime}_Q, \\ \mP_3 & = \hat{\mP}_3 + \mP^{\prime}_3. \end{align} \end{subequations} The perturbation-state variables $[\theta^{\prime},\mP^{\prime}_Q, \mP^{\prime}_3]$ are assumed to be infinitesimal in LSA. By substituting $\boldsymbol{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}} = \mathbf{0}$ and $\frac{\partial }{\partial t}=0$ into equations~(\ref{eq:non-theta}), (\ref{eq:non-PQ}) and \ref{eq:non-P3}, we obtain the base-state dipoles \begin{subequations}\label{eq:base-dipole} \begin{align} \hat{\mP}_Q &= -\kappa \bareta \bar{E} \sin{\hat{\theta}},\\ \hat{\mP}_3 &= -\kappa \bareta \bar{E} \cos{\hat{\theta}}. \end{align} \end{subequations} By substituting equations~(\ref{eq:linear-decomp}) and (\ref{eq:base-dipole}) into equations~(\ref{eq:non-theta}), (\ref{eq:non-PQ}) and (\ref{eq:non-P3}), and assuming small $\theta^{\prime}$, we derive the governing equations for the perturbation-state variables $[\theta^{\prime},\mP^{\prime}_Q, \mP^{\prime}_3]$, \begin{subequations}\label{eq:pert_linear} \begin{align} \frac{\partial \mP^{\prime}_Q}{\partial t} &= -\kappa \lp \mP^{\prime}_Q + \kappa \bareta \bar{E} \theta^{\prime} \cos{\hat{\theta}} \rp,\\ \frac{\partial \mP^{\prime}_3}{\partial t} &= -\kappa \lp \mP^{\prime}_3 - \kappa \bareta \bar{E} \theta^{\prime} \sin{\hat{\theta}} \rp,\\ \frac{\partial \theta^{\prime}}{\partial t} &= \frac{1}{\bareta}\left[\Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_N + \bar{E} \lp \mP^{\prime}_Q\cos{\hat{\theta}}-\mP^{\prime}_3\sin{\hat{\theta}} + \kappa\bareta \bar{E} \theta^{\prime} \rp \right].\label{eq:pert_theta} \end{align} \end{subequations} Adopting the normal-mode approach, we assume that the perturbations vary exponentially in time with a complex rate $\sigma = \sigma_{r} + \mathrm{i} \sigma_{i}$, so $\left[\mP^{\prime}_Q, \mP^{\prime}_3, \theta^{\prime} \right] = \left[ \Phi, \Pi, \Theta\right] \exp{(\sigma t)}$. Consequently, equation~(\ref{eq:pert_linear}) can be reformulated to \begin{align}\label{eq:linear_qr} \Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_N = \frac{\sigma \left[ \sigma - (\bar{E}^2-1)\kappa \right]}{\sigma+\kappa} \Theta \bareta \exp{(\sigma t)}. \end{align} We note that, for a vanishing elastic torque $\Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_N=0$ (no attached filament), equation~(\ref{eq:linear_qr}) is characterised by two roots $\sigma_1=0$ and $\sigma_2 = \kappa \lp \bar{E}^2-1 \rp$, which describe the original QR instability; the first root represents the stationary state and the second indicates that the dimensionless threshold electrical field (scaled by $E^{\mathrm{cri}}$) required to trigger instability is $\bar{E}=1$. Note that $E^{\mathrm{cri}}$ in equation~(\ref{eq:Ecri}) is originally derived by balancing the electric and hydrodynamic torque~\citep{jones1984quincke} instead of conducting LSA (see appendix~\ref{sec:append} for details). The two predictions exactly agree with each other. \subsection{Elastohydrodynamic model}\label{sec:eh_model} Since the elastohydrodynamic equations are not linearised, we thus derive a theoretical expression for $\Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_N (t)$ for the dispersion relation, equation~(\ref{eq:linear_qr}). \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=1\textwidth] {fig10.pdf} \end{center} \caption{(a) Schematic of the model problem: a composite object of a sphere and a filament undergoes a rotational oscillation. The particle centre P and joint (filament base) J rotate periodically along circular arcs of radius $b$ and $d=A-b$, respectively. V denotes their common pivot point. (b) Zoom-in on the circular arc trajectory of the joint, showing the difference between the torque $\boldsymbol{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}|_{V}$ with respect to the pivot V and $\boldsymbol{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}$ to the particle centre P.} \label{fig:force-torque-sketch} \end{figure} We find $\Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_N (t)$ by solving a separate elastohydrodynamic problem of the composite object undergoing a prescribed rotational oscillation characterised by $\theta^{\prime} = \tilde{\theta} (t)\exp{\lp \mathrm{i} \sigma_{i} t\rp}$, where $\tilde{\theta} (t) = \Theta \exp{\lp \sigma_{r} t \rp}$ indicates the angular oscillation amplitude. We do not consider the object's translation near the onset of instability since any translation is negligible due to the small-amplitude oscillation. To simplify the algebra in the next steps, we set $\hat{\theta} = \pi/2$ without loss of generality as shown in figure~\ref{fig:force-torque-sketch}, where the rest configuration (dashed curves) corresponds to when the particle centre P coincides with the origin O and the undeformed filament is aligned in the $\mathbf{e}_y$ direction. The rotational oscillation is executed about a pivot V that lies away from the origin by a dimensional distance $b$ on the $y$-axis, where $\beta = b/L$; the dimensional distance between V and J is $d = A-b$, so similarly \begin{align}\label{eq:delta} \delta = d/L = \alpha - \beta. \end{align} The particle centre P (resp. filament base J) follows a trajectory of a circular arc that is centred at V and of radius $b$ (resp. $d$); both trajectories are symmetric about the $y$-axis. Note that $\beta$ is an unknown that is to be determined. Near the onset of instability, the amplitude $\tilde{\theta} (t)$ varies much more slowly than the oscillation of $\theta^{\prime}$, \textit{viz.} $\sigma_{r} \ll \sigma_{i}$. This allows us to assume that the amplitude $\tilde{\theta} = \Theta \exp{(\sigma_{r} t)}$ is quasi-steady, namely, $\theta^{\prime}$ at a particular time $t_0$ can be approximated by \begin{align} \theta^{\prime} = \Theta \exp{\lp\sigma_{r} t_0\rp} \exp{\lp \mathrm{i} \sigma_{i} t \rp}, \end{align} as an instantaneous configuration of a periodic signal with a prescribed amplitude $\Theta \exp{\lp\sigma_{r} t_0\rp}$ and frequency $\sigma_{i}$. This setup resembles the theoretical framework developed to address the so-called elastohydrodynamic problem II~\citep{wiggins1998flexive,wiggins1998trapping} of a filament with one of its ends undergoing straight, oscillatory translation. We adapt that framework for our configuration, whereas the filament end oscillates on a circular arc instead of on a straight path, as shown in figure~\ref{fig:force-torque-sketch}b. Because the filament undergoes small-amplitude deformation, $|z_y| \ll 1$ and its tangent vector $\br_s \approxeq \mathbf{e}_y$. We also assume $T(s) \equiv 0$. The position $\br(t, s)$ of the filament centreline is $\br(s) = (\alpha + s) \mathbf{e}_y + z(t, s) \mathbf{e}_z$. The horizontal displacement of the filament base is of order $\mathcal{O}(\tilde{\theta}^2)$ and can be neglected because $|\theta^{\prime}| \leq |\tilde{\theta}| \ll 1$. The base's vertical oscillation is prescribed as \begin{align}\label{eq:z_s0} z(t)|_{s=0} & = \delta \sin{\theta^{\prime}} \approxeq \delta \theta^{\prime} = \delta \tilde{\theta} \exp{(\mathrm{i} \sigma_{i} t)}, \end{align} where $\delta \tilde{\theta}$ represents the oscillation amplitude. Following \citet{wiggins1998flexive} and \citet{wiggins1998trapping}, the deflection of the filament is expressed by \begin{align}\label{eq:z_s} z (s) = \delta \tilde{\theta} \exp{(\mathrm{i} \sigma_{i} t) h(s,\mathcal{L})}, \end{align} where \begin{align}\label{eq:mL} \mathcal{L}^4 = \frac{\barmu \sigma_{i}}{-1-2\log{\epsilon_{\mathrm{sl}}}} \end{align} and $h$ is a sum of four solutions \begin{align} h(s,\mathcal{L}) = c_1 \xi^{\mathrm{i} s} + c_2 \xi^{-s} + c_3 \xi^{-\mathrm{i} s} + c_4 \xi^{s}, \end{align} with \begin{subequations}\label{eq:z0_xi} \begin{align} z_0 & = \exp{(-\mathrm{i} \pi/8)}, \label{eq:z0} \\ \xi & = \exp{(z_0 \mathcal{L})}. \label{eq:xi} \end{align} \end{subequations} The four coefficients $c_i$ need to be determined by the BCs at the filament ends. In contrast to \citet{wiggins1998flexive} and \citet{wiggins1998trapping} treating $z(s)$ as a real variable, we consider a complex $z(s)$. This allows us to obtain the complex torque consistent with the complex nature of the torque balance, equation~(\ref{eq:linear_qr}). The BCs for $h(s)$ at the free end $s=1$ are $h_{ss}=h_{sss}=0$. At the clamped end $s=0$, $h=1$ as a Dirichlet BC corresponding to the prescribed displacement; the other BC is more subtle. Because the filament orientation is orthogonal to the circular arc (see figure~\ref{fig:force-torque-sketch}b), we have \begin{align}\label{eq:bc_zs} z_s & = \sin{\theta^{\prime}} \approxeq \theta^{\prime} =\tilde{\theta} \exp{(\mathrm{i} \sigma_{i} t)}. \end{align} By substituting equation~(\ref{eq:z_s}) into equation~(\ref{eq:bc_zs}), we obtain the BC \begin{align} h_s|_{s=0} & = 1/\delta, \end{align} where $\delta$ is defined in equation~(\ref{eq:delta}). Knowing all the BCs of $h(s)$, we compute the four coefficients \begin{subequations} \begin{align} c_1 & = \frac{\left(1+\mathrm{i}\right) \left[\left((1-\mathrm{i}) \xi ^{1+\mathrm{i}}-\mathrm{i} \xi ^2+1\right) \delta \log \xi -(1+\mathrm{i}) \xi ^{1+\mathrm{i}}-\mathrm{i} \xi ^2-1\right]}{2 \Lambda \delta \log \xi }, \\ c_2 & = \frac{\left(1+\mathrm{i} \right) \xi \left[ \left(-\mathrm{i} \xi ^{1+2 \mathrm{i}}+(1-\mathrm{i}) \xi ^\mathrm{i}+\xi \right) \delta \log \xi-\xi ^{1+2 \mathrm{i}}+(-1+\mathrm{i}) \xi ^\mathrm{i}+\mathrm{i} \xi \right]}{2 \Lambda \delta \log \xi },\\ c_3 & = \frac{\left( 1 + \mathrm{i} \right) \xi ^\mathrm{i} \left[\left(\xi ^{2+\mathrm{i}}-\mathrm{i} \xi ^\mathrm{i}+(1-\mathrm{i}) \xi \right) \delta \log \xi+\xi ^{2+\mathrm{i}}+\mathrm{i} \xi ^\mathrm{i}+(1+\mathrm{i}) \xi \right]}{2 \Lambda \delta \log\xi},\\ c_4 & = \frac{(1+\mathrm{i}) \left(\xi ^{2 \mathrm{i}}+(1-\mathrm{i}) \xi ^{1+\mathrm{i}}-\mathrm{i}\right) \delta \log (\xi )+(1-\mathrm{i}) \xi ^{2 \mathrm{i}}+2 \xi ^{1+\mathrm{i}}+1+\mathrm{i}}{2 \Lambda \delta \log \xi }, \end{align} \end{subequations} where $\Lambda=\xi ^{2 \mathrm{i}}+4 \xi ^{1+\mathrm{i}}+\xi ^{2+2 \mathrm{i}}+\xi ^2+1$. Considering the small-amplitude deformation, the total force $\bF$ exerted by the filament on the clamped end is along the vertical $\mathbf{e}_z$ direction. The torque $\boldsymbol{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}|_{V}$ with respect to the pivot V and $\boldsymbol{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}$ with respect to the particle centre P are along the $\mathbf{e}_x$ direction, so that the corresponding components of the force and torques are \begin{subequations} \label{eq:force-torque-full} \begin{align} F^{\mathrm{f}\rightarrow\mathrm{p}}_z & = \tilde{\theta} \exp{\lp \mathrm{i} \sigma_{i} t \rp} \frac{\log^2{\xi}\left[(1+\mathrm{i}) \Lambda_1 \delta \log\xi - \mathrm{i} \Lambda_2 \right]}{\Lambda}, \label{eq:force-full}\\ \Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{V} & = \tilde{\theta} \exp{\lp \mathrm{i} \sigma_{i} t \rp} \frac{\log \xi \left[(1+\mathrm{i}) \delta^2 \Lambda_1 \log^2\xi - 2\mathrm{i} \delta \Lambda_2 \log\xi +(-1-\mathrm{i}) \Lambda_3 \right]}{\Lambda},\\ \Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_x & = \Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{V} + \lp \alpha - \delta \rp F^{\mathrm{f}\rightarrow\mathrm{p}}_z \nonumber \\ & = \tilde{\theta} \exp{\lp \mathrm{i} \sigma_{i} t \rp} \frac{\log \xi \left[(1+\mathrm{i}) \alpha \delta \Lambda_1 \log^2\xi -\mathrm{i} (\alpha + \delta ) \Lambda_2 \log\xi +(-1-\mathrm{i}) \Lambda_3 \right]}{\Lambda}, \end{align} \end{subequations} where \begin{subequations} \begin{align} \Lambda_1 & = -\xi ^{2 \mathrm{i}}-\mathrm{i} \xi ^{2+2 \mathrm{i}}+\xi ^2+\mathrm{i}, \\ \Lambda_2 & = \left(-1+\xi ^{2 \mathrm{i}}\right) \left(\xi ^2-1\right),\\ \Lambda_3 & = \mathrm{i} \xi ^{2 \mathrm{i}}+\xi ^{2+2 \mathrm{i}}-\mathrm{i} \xi ^2-1. \end{align} \end{subequations} Now, let us examine the denominator, $\Lambda$, of equation~(\ref{eq:force-torque-full}) whose five terms are in the form of $\xi^{q_k}$ ($k=1 ... 5$), where $[q_1, q_2, q_3, q_4, q_5]=[2\mathrm{i}, 1+\mathrm{i}, 2+2\mathrm{i}, 2, 0]$. Using equation~(\ref{eq:z0_xi}), we express $\xi^{q_k}$ as \begin{align} \xi^{q_k} = \left[\exp{(z_0 \mathcal{L})}\right]^{q_k} = \zeta^\mathcal{L}_k, \end{align} where $\zeta_k = \exp{(z_0 q_k)}$ are \begin{align} \zeta_1 & = -0.59 + 2.09\mathrm{i},\nonumber \\ \zeta_2 & = 3.17 + 1.9\mathrm{i}, \nonumber \\ \zeta_3 & = 6.4 + 12.05 \mathrm{i}, \nonumber \\ \zeta_4 & = 4.57 - 4.4\mathrm{i}, \nonumber \\ \zeta_5 & = 1. \end{align} We observe that the third term $\zeta_3^{\mathcal{L}}$ is larger than the rest in magnitude when $\mathcal{L} \geq 1$, dominating the second largest term by one order when $\mathcal{L} \geq 3$. Let us assume $\mathcal{L}\geq3$ a priori, so that we can then approximate $\Lambda$ by $\zeta_3^{\mathcal{L}}$ in equation~(\ref{eq:force-torque-full}). By further extracting the leading-order terms of $\Lambda_1/\Lambda$, $\Lambda_2/\Lambda$ and $\Lambda_3/\Lambda$, we attain a simplified, leading-order expression for the force and torque (denoted by $\;\tilde\;\;$) \begin{subequations}\label{eq:force-torque-simple} \begin{align} \tilde{F}^{\mathrm{f}\rightarrow\mathrm{p}}_z & = \tilde{\theta} \exp{\lp \mathrm{i} \sigma_{i} t \rp} \log^2{\xi}\left[(1-\mathrm{i}) \delta \log\xi-\mathrm{i} \right], \label{eq:force-simple}\\ \tilde{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{\mathrm{V}} & =\tilde{\theta} \exp{\lp \mathrm{i} \sigma_{i} t \rp} \log \xi \left[(1-\mathrm{i}) \delta^2 \log^2\xi -2\mathrm{i} \delta \log\xi -1-\mathrm{i} \right], \\ \tilde{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_x & =\tilde{\theta} \exp{\lp \mathrm{i} \sigma_{i} t \rp} \log \xi \left[(1-\mathrm{i}) \alpha \delta \log^2\xi -\mathrm{i} ( \alpha +\delta ) \log\xi -1-\mathrm{i} \right].\label{eq:torque_simple} \end{align} \end{subequations} The theoretical force $F^{\mathrm{f}\rightarrow\mathrm{p}}_z$, torque $\Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{\mathrm{V}}$ and their leading-order counterparts $\tilde{F}^{\mathrm{f}\rightarrow\mathrm{p}}_z$ and $\tilde{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{\mathrm{V}}$ are validated against the numerical results for six cases spanning a wide range of parameters relevant to our study (see table.~\ref{tab:sixcases}), where case 1 is the reference case and the other five vary a single parameter compared to case 1. Because the numerical force and torque are real quantities, the real parts of $F^{\mathrm{f}\rightarrow\mathrm{p}}_z$ (dashed curve) given by equation~(\ref{eq:force-full}), and its leading-order approximation $\tilde{F}^{\mathrm{f}\rightarrow\mathrm{p}}_z$ (dot-dashed curve) by equation~(\ref{eq:force-simple}), are compared with the numerical data (solid curve) in figure~\ref{fig:force-valid}. A similar comparison between the torques $\Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{\mathrm{V}}$ and $\tilde{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{\mathrm{V}}$ is shown in figure~\ref{fig:torque-valid}. \begin{table} \center \begin{tabular}{ccccc} \hline \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\tilde{\theta}$} & \multicolumn{1}{c|}{$\delta$} & \multicolumn{1}{c|}{$\sigma_{i}$} & \multicolumn{1}{c|}{$\barmu$} \\ \hline \multicolumn{1}{|c|}{Case 1 (reference)} & \multicolumn{1}{c|}{$10^{-3}$} & \multicolumn{1}{c|}{$0.3$} & \multicolumn{1}{c|}{$0.2$} & \multicolumn{1}{c|}{$10^3$} \\ \hline \multicolumn{1}{|c|}{Case 2} & \multicolumn{1}{c|}{$\mathbf{0.1}$} & \multicolumn{1}{c|}{$0.3$} & \multicolumn{1}{c|}{$0.2$} & \multicolumn{1}{c|}{$10^3$} \\ \hline \multicolumn{1}{|c|}{Case 3} & \multicolumn{1}{c|}{$10^{-3}$} & \multicolumn{1}{c|}{$\mathbf{0.8}$} & \multicolumn{1}{c|}{$0.2$} & \multicolumn{1}{c|}{$10^3$} \\ \hline \multicolumn{1}{|c|}{Case 4} & \multicolumn{1}{c|}{$10^{-3}$} & \multicolumn{1}{c|}{$0.3$} & \multicolumn{1}{c|}{$\mathbf{2}$} & \multicolumn{1}{c|}{$10^3$} \\ \hline \multicolumn{1}{|c|}{Case 5} & \multicolumn{1}{c|}{$10^{-3}$} & \multicolumn{1}{c|}{$0.3$} & \multicolumn{1}{c|}{$0.2$} & \multicolumn{1}{c|}{$\mathbf{10^2}$} \\ \hline \multicolumn{1}{|c|}{Case 6} & \multicolumn{1}{c|}{$10^{-3}$} & \multicolumn{1}{c|}{$0.3$} & \multicolumn{1}{c|}{$0.2$} & \multicolumn{1}{c|}{$\mathbf{10^4}$} \\ \hline & & & & \end{tabular} \caption{Parameters for the six cases chosen to validate numerical results against the theoretical force $F^{\mathrm{f}\rightarrow\mathrm{p}}_z$, torque $\Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{\mathrm{V}}$ and their leading-order counterparts. Bold entries indicate the difference with the reference, case $1$.} \label{tab:sixcases} \end{table} We observe that the force $F^{\mathrm{f}\rightarrow\mathrm{p}}_z$ and torque $\Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{\mathrm{V}}$ and their leading-order values agree with the numerical results quantitatively in all the cases except for case $5$, where the leading-order results deviate a little from the full expression and numerical results. This disagreement results from the violation of the assumption $\mathcal{L} \geq 3$ used to derive the leading-order expression, where $\mathcal{L} \approx 1.25$ for case $5$. This also implies that the leading-order predictions become less accurate at small $\barmu$ values. \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=1\textwidth] {fig11.pdf} \end{center} \caption{Comparison between the theoretical force $F^{\mathrm{f}\rightarrow\mathrm{p}}_z$ (dashed curves), its leading-order approximation $\tilde{F}^{\mathrm{f}\rightarrow\mathrm{p}}_z$ (dot-dashed curves) and the numerical results (solid curves).} \label{fig:force-valid} \end{figure} \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=1\textwidth] {fig12.pdf} \end{center} \caption{Comparison between the theoretical torque $\Gamma^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{\mathrm{V}}$ (dashed curves), its leading-order approximation $\tilde{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_x|_{\mathrm{V}}$ (dot-dashed curves) and the numerical results (solid curves).} \label{fig:torque-valid} \end{figure} For the validation purpose, $\delta=\alpha-\beta$ can be prescribed. However, for the model, $\delta$ needs to be determined using the force-free condition on the particle. The particle follows a circular arc on the other side of the pivot V, the $z$-component of the hydrodynamic force on the particle approximated by the Stokes's law is \begin{align}\label{eq:force_par} F_z^{\mathrm{h}\rightarrow \mathrm{p}} = \frac{3\mathrm{i}}{4}\barmu \alpha\beta\sigma_{i} \tilde{\theta} \exp{\lp \mathrm{i} \sigma_{i} t \rp}. \end{align} Substituting equation~(\ref{eq:force-simple}) and (\ref{eq:force_par}) into $\tilde{F}^{\mathrm{f}\rightarrow\mathrm{p}}_z + F_z^{\mathrm{f}\rightarrow \mathrm{p}}=0$, we obtain \begin{align} \beta & = \frac{4\log^2{\xi}\left[ 1 + \alpha \lp \mathrm{i} + 1 \rp \log{\xi} \right] }{3 \alpha \barmu \sigma_{i}+ 4 \lp \mathrm{i} +1 \rp \log^3{\xi}}. \end{align} Using the leading-order torque $\tilde{\Gamma}^{\mathrm{f}\rightarrow\mathrm{p}}_x$ equation~(\ref{eq:torque_simple}), as the left-hand side torque of equation~(\ref{eq:linear_qr}) (note that the nodal line direction $\mathbf{N}=\mathbf{e}_x$ when the orientation $\mathbf{e}_\mathrm{p}$ is restricted to the $yz$-plane), we obtain the governing equation for the transformed growth rate $\hat{\sigma} = \barmu \sigma$, \begin{align}\label{eq:hsig} \alpha^3 \frac{\hat{\sigma}\left[ \hat{\sigma} - \lp \bar{E}^2-1 \rp \kappa \barmu \right] }{\hat{\sigma} + \kappa \barmu} + \log \xi \left[(\mathrm{i}-1) \alpha (\alpha -\beta ) \log^2\xi + \mathrm{i} (2 \alpha -\beta ) \log\xi +1+\mathrm{i} \right]=0, \end{align} where \begin{align}\label{eq:beta} \beta & = \frac{4\log^2{\xi}\left[ 1 + \alpha \lp \mathrm{i} + 1 \rp \log{\xi} \right] }{3 \alpha \hat{\sigma}_i+ 4 \lp \mathrm{i} +1 \rp \log^3{\xi}}, \end{align} where $\log{\xi}$ can be written as \begin{align}\label{eq:log_xi} \log{\xi} = z_0 \mathcal{L} = z_0 \lp \frac{\hat{\sigma}_i}{-1-2\log{\epsilon_{\mathrm{sl}}}} \rp^{1/4}. \end{align} \subsection{Complex growth rates and onset of instability}\label{sec:lsa2} We solve equation~(\ref{eq:hsig}) to obtain the transformed growth rate $\hat{\sigma}$, which facilitates a theoretical prediction of the onset of self-oscillatory instability. The growth rate $\hat{\sigma} =\hat{\sigma}_r + \mathrm{i} \hat{\sigma}_i$ depends on $\alpha$, $\epsilon_{\mathrm{sl}}$, $\kappa$, $\barmu$ and $E$, where we have fixed $\epsilon_{\mathrm{sl}}$ and $\kappa$. By writing $\hat{\sigma}_i = W^4$ and substituting it into equation~(\ref{eq:mL}), we obtain $\mathcal{L} = W/(-1-2\log{\epsilon_{\mathrm{sl}}})^{1/4}$. Here, $\mathcal{L}$ is a positive real number, so is $W$. By substituting equations~(\ref{eq:beta}) and (\ref{eq:log_xi}) into equation~(\ref{eq:hsig}), we derive a system of two-dimensional, nonlinear polynomial equations for $\hat{\sigma}_r$ and $W$ (see appendix~\ref{sec:polynomial}) and obtain its roots by employing the python driver phcpy~\citep{verschelde2013modernizing,otto2019solving} of a general-purpose solver PHCpack~\citep{verschelde1997phcpack} for polynomial systems. Because $\hat{\sigma}=\barmu\sigma$, we obtain the real part $\sigma_r = \hat{\sigma}_r/\barmu$ and imaginary part $\sigma_i = W^4/\barmu$ of the complex growth rate $\sigma$. \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=1\textwidth] {fig13.pdf} \end{center} \caption{The real $\sigma_{r}$ and imaginary $\sigma_{i}$ part of the complex growth rate $\sigma = \sigma_{r} + \mathrm{i} \sigma_{i}$ versus $\barmu$ for two electric fields (a) $\bar{E}=1.15$ and (b) $\bar{E}=1.5$, where the size ratio $\alpha=0.3$. Theoretical (LSA) and numerical predictions are denoted by red curves and blue symbols, respectively. The intersection of $\sigma_{r}(\barmu)$ with $\sigma=0$ gives the critical EEV value $\barmucrione$ (indicated by diamonds and pentagrams for theoretical and numerical results, respectively) corresponding to the onset of instability.} \label{fig:sigma_v_mu} \end{figure} We show $\sigma_r$ and $\sigma_i$ as a function of $\barmu$ in figure~\ref{fig:sigma_v_mu} for two electric fields $\bar{E}=1.15$ (a) and $\bar{E}=1.5$ (b), where $\alpha=0.3$. In both cases, the imaginary part $\sigma_i \lp \barmu \rp > 0$ implying that the perturbation always decays/grows in an oscillatory manner. In contrast, the real part $\sigma_r$ increases with $\barmu$ monotonically from negative to positive values, indicating the critical condition $\sigma_r \lp \barmucrione\rp = 0$ of the self-oscillatory instability. When $\barmu$ is smaller/larger than $\barmucrione$, the perturbation exhibits oscillatory decaying and growth. The LSA prediction of $\lp\sigma_r, \sigma_i\rp$ agrees quantitatively with the numerical counterpart for the $\bar{E}=1.15$ case, and qualitatively for the $\bar{E}=1.5$ case. We adopt a bi-section method to determine $\barmucrione$ as a function of $\lp\bar{E},\alpha\rp$, as shown in figure~\ref{fig:crimu_v_E}. For all $\alpha$ values, $\barmucrione$ decreases monotonically with $\bar{E}$. The theoretical and numerical predictions agree well with each other, especially in the high $\barmu$ regime. The agreement degenerates with decreasing $\barmu$. We infer that $\mathcal{L}$ becomes smaller when $\barmu$ decreases, hence this disagreement is mostly attributed to violating the $\mathcal{L} \geq 3$ assumption of the leading-order force/torque model for the LSA. \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=0.6\textwidth] {fig14.pdf} \end{center} \caption{The LSA (hollow symbols) and numerical (filled symbols) predictions of the critical EEV number $\barmucrione$ (versus $\bar{E}$) at which instability occurs through a Hopf bifurcation, for size ratios $\alpha=0.2$ (circles), $0.3$ (squares), $0.4$ (diamonds) and $0.5$ (triangles). } \label{fig:crimu_v_E} \end{figure} \section{A minimal model to reproduce the EEH instability and self-oscillation}\label{sec:minimal} \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=0.75\textwidth] {fig15.pdf} \end{center} \caption{Schematic of a minimal model to reproduce the EEH instability and self-oscillation: the elastic filament is represented by two rigid rods of the same length $\ell=L/2$ linked flexibly at $\mathrm{J}_1$ by a torsional spring of elastic modulus $K$. Rod $\#1$ is rigidly anchored at J. A steady, uniform electrical field $\bE=E\mathbf{e}_z$ is applied.} \label{fig:sketch-linker} \end{figure} To better unravel the physics underlying the EEH instability, we seek a minimal model reproducing this instability and the corresponding self-oscillation. By analogy to the multi-linker models~\citep{de2017spontaneous,ling2018instability}, we replace the elastic filament by two rigid cylindrical rods numbered $\#1$ and $\#2$ of equal length $\ell =L/2$ and equal radius $a$ of their cross sections, which are linked at $\mathrm{J}_1$ by a torsional spring with an elastic module of $K$ (see figure.~\ref{fig:sketch-linker}). Rod $\#1$ is clamped at the sphere surface J, namely it always passes through the particle centre P, hence the displacement vectors $\overrightarrow{\mathrm{PJ}}$ and $\overrightarrow{\mathrm{P}\mathrm{J}_1}$ are opposite to the particle orientation $\mathbf{e}_\mathrm{p}$. Rod $\#2$ is oriented with respect to $\overrightarrow{\mathrm{P}\mathrm{J}_1}$ by an angle $\theta_1$, which is zero when the composite system is at rest. Similar to the original setup, we assume that the motion of particle and the rods are restricted to the $yz$-plane. Further, no hydrodynamic interactions between the particle and rods, or between the rods are considered. The system consists of six unknowns: the translational velocity components $U_y(t)$ and $U_z(t)$ of the particle, the rotational velocity component $\frac{\d \theta(t)}{\d t} = \Omega(t)$ of the particle and $\frac{\d \theta_1(t)}{\d t} = \Omega_1 (t)$ of rod $\#2$ with respect to rod $\#1$, and the polarisation vector components $\mP_Q(t)$ and $\mP_3(t)$. It is worth noting that compared to the classical QR particle, this minimal configuration only incorporates one extra degree of freedom, $\theta_1$, which indicates the deformation magnitude of the torsional spring. We first derive the hydrodynamic force exerted on rod $\#1$ as \begin{align} \mathbf{F}^{\mathrm{hydro}}_{1} = & \frac{2 \pi \mu \ell }{c} \left[2 \theta_t (2 A+\ell) \cos \theta+U_y \cos2 \theta +3 U_y+U_z \sin 2 \theta \right]\mathbf{e}_y \nonumber \\ + &\frac{2 \pi \mu \ell}{c} \left[2 \theta_t (2 A+\ell) \sin \theta+U_y \sin 2 \theta - U_z \cos{2\theta} +3 U_z\right]\mathbf{e}_z, \end{align} and the torque about the particle centre P \begin{align} \boldsymbol{\Gamma}^{\mathrm{hydro}}_{1}|_{\mathrm{P}} = \frac{4 \pi \mu \ell }{3 c} \left[ 2 \theta_t \left(3 A^2+3 A \ell+\ell^2\right)+3 U_y (2 A+\ell) \cos \theta +3 U_z (2 A+\ell) \sin \theta\right]\mathbf{e}_x. \end{align} Likewise, the hydrodynamic force exerted on rod $\#2$ is \begin{dmath} \mathbf{F}^{\mathrm{hydro}}_{2} = \frac{2 \pi \mu \ell }{c} \left[ \theta_t \lp A + \ell \rp \cos (\theta +2 \theta_1 )+3 \theta_t (A+\ell) \cos \theta + 2 \ell (\theta_t +\theta_{1,t} ) \cos (\theta +\theta_1 ) +U_y \cos 2 (\theta +\theta_1 )+3 U_y+U_z \sin 2 (\theta +\theta_1 )\right]\mathbf{e}_y \\ +\frac{2 \pi \mu \ell}{c} \left[ \theta_t \lp A + \ell \rp \sin (\theta +2 \theta_1 ) + 3 \theta_t (A+\ell) \sin \theta+2 \ell \lp\theta_t + \theta_{1,t}\rp \sin (\theta +\theta_1 )+U_y \sin 2 (\theta +\theta_1 )-U_z\cos 2 (\theta +\theta_1 )+3 U_z\right]\mathbf{e}_z \end{dmath} and the hydrodynamic torque on rod $\#2$ about $\mathrm{J}_1$ is \begin{dmath} \boldsymbol{\Gamma}^{\mathrm{hydro}}_{2}|_{\mathrm{J}_1} = \frac{4 \pi \mu \ell^2}{3 c} \left[3 \theta_t (A+\ell) \cos \theta_1 +2 \ell \lp \theta_t + \theta_{1,t}\rp +3 U_y \cos (\theta +\theta_1 )+3 U_z \sin (\theta +\theta_1 )\right]. \end{dmath} The torque-free condition on rod $\#2$ reads \begin{align}\label{eq:torque_rod2} \mathbf{M}_{2} + \boldsymbol{\Gamma}^{\mathrm{hydro}}_{2}|_{\mathrm{J}_1} = \mathbf{0}, \end{align} where $\mathbf{M}_{2} = -K\theta_1 \mathbf{e}_x$ is the elastic moment exerted on rod $\#2$ by the torsional spring. The torque balance on the whole composite system about the particle centre P is \begin{align}\label{eq:torque_linker} \boldsymbol{\Gamma}^{\mathrm{hydro}}_{1}|_{\mathrm{P}} + \underbrace{\lp \boldsymbol{\Gamma}^{\mathrm{hydro}}_{2}|_{\mathrm{J}_1} + \overrightarrow{\mathrm{P}\mathrm{J}_1}\times \mathbf{F}^{\mathrm{hydro}}_{2}\rp}_{\text{hydrodynamic torque on rod } \#2 \text{ about P}} - \gamma_{\mathrm{drag}} \theta_t\mathbf{e}_x + \underbrace{\lp E_3\mP_Q-E_Q\mP_3 \rp \mathbf{e}_x}_{\text{electric torque on the particle}} = \mathbf{0}. \end{align} We also need to impose the force-free condition on the whole composite object \begin{align}\label{eq:linker_force} \mathbf{F}^{\mathrm{hydro}}_{1} + \mathbf{F}^{\mathrm{hydro}}_{2} - \beta_{\mathrm{drag}} \lp U_y \mathbf{e}_y + U_z\mathbf{e}_z\rp = \mathbf{0}. \end{align} To close the system, we solve the governing equations~(\ref{eq:PQ}) and (\ref{eq:P3}) for $\mP_Q$ and $\mP_3$, where the second term $-\frac{\partial \psi}{\partial t}\mP_N$ in equation~(\ref{eq:PQ}) disappears. We note that equations~(\ref{eq:torque_rod2}) and (\ref{eq:torque_linker}) indeed reflect the subtle interplay between the elastic, electric and hydrodynamic torques, which lead to the EEH instability-induced self-oscillation. \subsection{Nondimensionalization of the minimal model} We use the same characteristic scales as the original particle-filament configuration (see \S~\ref{sec:setup_math}) to nondimensionalise equations~(\ref{eq:torque_rod2}), (\ref{eq:torque_linker}) and (\ref{eq:linker_force}), except that we substitute $D$ by $KL$, resulting in a slightly modified EEV parameter \begin{align} \breve{\mu} = \frac{8\pi \mu L^3}{K\taus}, \end{align} to be distinguished from $\barmu$ defined by equation~(\ref{eq:barmu}) for the original setup. The dimensionless governing equations for $\bar{U}_y(\bar{t})$, $\bar{U}_z(\bar{t})$, $\theta(\bar{t})$ and $\theta_{1}(\bar{t})$ are \begin{subequations}\label{eq:non_model_vel} \begin{dmath} \lp 7\alpha+5/2 \rp \bar{\Omega}\cos\theta + \bar{\Omega} \lp \alpha+1/2 \rp\cos\lp \theta+2\theta_1 \rp + \lp \bar{\Omega} + \bar{\Omega}_1 \rp \cos\lp \theta+\theta_1\rp + \bar{U}_y\left[ \cos 2\theta+\cos2\lp \theta+\theta_1 \rp - 6\alpha c +6 \right] + \bar{U}_z \left[ \sin 2\theta+\sin2\lp \theta+\theta_1 \rp\right] = 0, \end{dmath} \begin{dmath} \lp 7\alpha+5/2 \rp \bar{\Omega} \sin{\theta} + \bar{\Omega} \lp \alpha+1/2 \rp\sin\lp \theta+2\theta_1 \rp + \lp \bar{\Omega} + \bar{\Omega}_1 \rp \sin\lp \theta+\theta_1\rp +\bar{U}_z\left[ -\cos 2\theta-\cos2\lp \theta+\theta_1 \rp - 6\alpha c +6 \right] + \bar{U}_y \left[ \sin{2\theta} + \sin2\lp \theta+\theta_1 \rp \right] =0, \end{dmath} \begin{dmath} \frac{\breve{\mu}}{24}\left[3\bar{\Omega} \lp \alpha+1/2 \rp \cos\theta_1 + \bar{\Omega} + \bar{\Omega}_1 + 3\bar{U}_y \cos \lp \theta+\theta_1 \rp + 3\bar{U}_z \sin\lp \theta+\theta_1 \rp \right] - c\theta_1 = 0, \end{dmath} \begin{dmath} \frac{\breve{\mu}}{24}\left\{\lp21\alpha^2 +15\alpha+13/4 \rp\bar{\Omega} +3\lp \alpha+1/2 \rp \lp \bar{\Omega} + \bar{\Omega}_1 \rp \cos\theta_1 \\ +3\lp 7\alpha+5/2 \rp \lp \bar{U}_y\cos{\theta} + \bar{U}_z \sin{\theta} \rp \\ +3\lp \alpha+1/2 \rp \cos2\theta_1 \left[ \lp \alpha+1/2 \rp \bar{\Omega} + \bar{U}_y\cos\theta +\bar{U}_z \sin\theta \right] + 3\lp \alpha+1/2 \rp \sin2\theta_1\lp \bar{U}_z \cos\theta - \bar{U}_y \sin\theta\rp \right\} \\ + c \theta_1 - c \bareta\bar{\Omega} + c \bar{E} \lp \bar{\mathcal{P}}_Q\cos\theta - \bar{\mathcal{P}}_3 \sin \theta \rp = 0, \end{dmath} \end{subequations} where $c=1+2\log\epsilon_{\mathrm{sl}}$ and $\bareta = \alpha^3\breve{\mu}$ as given by equations~(\ref{eq:c}) and (\ref{eq:bareta}), respectively. The dimensionless equations for $\bar{\mathcal{P}}_Q$ and $\bar{\mathcal{P}}_3$ are \begin{subequations}\label{eq:non_P_model} \begin{align} \frac{\partial \bar{\mathcal{P}}_Q}{\partial \bar{t}} & = -\kappa \lp\bar{\mathcal{P}}_Q + \kappa \bareta \bar{E} \sin\theta \rp, \label{eq:non-PQ-model} \\ \frac{\partial \bar{\mathcal{P}}_3}{\partial \bar{t}} & = -\kappa \lp \bar{\mathcal{P}}_3 + \kappa \bareta \bar{E} \cos\theta\rp, \label{eq:non-P3-model} \end{align} \end{subequations} with their initial values at $\bar{t}=0$ \begin{subequations} \begin{align} \bar{\mathcal{P}}_Q \lp \bar{t} = 0\rp & = \frac{\bareta \kappa^2 \bar{E} \sin\theta}{\kappa-\lp R-1 \rp/\lp S-1\rp}, \\ \bar{\mathcal{P}}_3 \lp \bar{t} = 0\rp & = \frac{\bareta \kappa^2 \bar{E} \cos\theta}{\kappa-\lp R-1 \rp/\lp S-1\rp}. \end{align} \end{subequations} \subsection{Numerical and theoretical (LSA) results of the minimal model} \begin{figure} \begin{center} \hspace{0em}\includegraphics[width=1\textwidth] {fig16.pdf} \end{center} \caption{(a) Numerical results of the rotational velocity magnitudes of the minimal model versus $\breve{\mu}$ by solving equations (\ref{eq:non_model_vel}) and (\ref{eq:non_P_model}), where $\lp\bar{E},\alpha \rp=\lp 1.5, 0.3\rp$; diamonds and circles denote those of the particle and rod $\#2$, respectively. $\mbarmucrione$ and $\mbarmucritwo$ separate the three $\breve{\mu}$-dependent regimes: stationary, wiggling (blue) and steady spinning (red). (b) LSA (red) and numerical (blue) results of the real $\sigma_{r}$ and imaginary $\sigma_{i}$ parts of the complex growth rate $\sigma$ versus $\breve{\mu}$, where $\lp\bar{E},\alpha \rp=\lp 1.5,0.3\rp$. $\mbarmucrizero$ distinguishes whether the perturbations decay monotonically when $\breve{\mu} < \mbarmucrizero$ or in an oscillatory manner when $\mbarmucrizero < \breve{\mu} <\mbarmucrione$.} \label{fig:results-linker} \end{figure} We solve equations (\ref{eq:non_model_vel}) and (\ref{eq:non_P_model}) numerically using the MATLAB solver `ode15s' for ordinary differential equations. Fixing the electric field $\bar{E}=1.5$ and size ratio $\alpha=0.3$, we show in figure~\ref{fig:results-linker}a the $\breve{\mu}$-dependent magnitudes $\bar{\Omega}^{\mathrm{mag}}$ and $\bar{\Omega}^{\mathrm{mag}}_1$ of the rotational velocities of the particle and rod $\#2$, respectively, when the minimal composite object reaches its equilibrium configuration. This simple model reproduces the three characteristic behaviours of the original particle-filament system: stationary ($\breve{\mu}<\mbarmucrione\approx2513$), wiggling ($\mbarmucrione < \breve{\mu} < \mbarmucritwo \approx4300$) and steady spinning ($\breve{\mu}>\mbarmucritwo$). In the spinning state, $\bar{\Omega}^{\mathrm{mag}}_1 = |\mathrm{d} \theta_1 /\mathrm{d} \bar{t}|=0$ reflects a time-independent angle $\theta_1$ between the two rods, which adopt a steady ``deformed'' configuration representing a minimal model of the deformed filament. Conducting an LSA for this minimal model, we find the closed-form expression of the complex growth rate $\sigma=\sigma_{r} + \mathrm{i} \sigma_{i}$ (see appendix~\ref{sec:lsa_mini} for details). The theoretical values of $\sigma_{r,i}$ versus $\breve{\mu}$ in case of $\lp\bar{E},\alpha\rp=\lp1.5,0.3\rp$ are depicted in figure~\ref{fig:results-linker}b, as well as their numerical counterparts in the near $\mbarmucrione$ regime. The theoretical and numerical values of both $\sigma_{r}$ and $\sigma_{i}$ almost lie on top of each other, consequently, their predictions of $\mbarmucrione$ (when $\sigma_{r}=0$) agree. This superior agreement to the particle-filament system (figure~\ref{fig:sigma_v_mu}) is expected, because the minimal model does not require an approximate model (see \S \ref{sec:eh_model}) for the elastic torque as the original case. The LSA also indicates the emergence of another subtle critical EEV number $\mbarmucrizero \approx 345$ (black cross in figure~\ref{fig:results-linker}b): when $\breve{\mu}<\mbarmucrizero$, the real part $\sigma_{r}$ of the growth rate is negative, accompanying a zero imaginary part, thus the perturbations diminish to zero monotonically; when $\mbarmucrizero < \breve{\mu} < \mbarmucrione$, $\sigma_{r}<0$ but $\sigma_{i}>0$, the perturbations also die out but in a an oscillatory fashion. The former case corresponds to the non-negative quantity $\Sigma$ inside the square-root operator in equation~(\ref{eq:sigma_linker}) that naturally yields real solutions for $\sigma$ only. A similar structure of the solutions of $\sigma$ was reported in \citet{de2017spontaneous}. Since the current work mainly addresses the EEH instability-induced self-oscillation, we do not pursue a detailed investigation in this stable, stationary regime. \section{Conclusions and discussions}\label{sec:conclusions} Standard biomimetic practises commonly rely on an oscillating magnetic or electric field to produce the oscillatory motion of slender artificial structures. In contrast, we propose a strategy to achieve self-oscillation of artificial structures based on a time-independent, uniform electric field. By formulating and numerically solving an elasto-electro-hydrodynamic problem, this concept is illustrated by oscillating a composite object consisting of a weakly conducting dielectric spherical particle and an elastic filament immersed in a dielectric solvent. Our strategy is grounded in the QR electrohydrodynamic instability phenomenon indicating that a weakly conducting dielectric particle suspended in a dielectric liquid of higher conductivity can undergo spontaneous rotation under a sufficiently strong DC electric field. For an individual spherical particle, this instability emerges through a supercritical pitchfork bifurcation resulting in steady rotation~\citep{jones1984quincke}. By incorporating an elastic filament, we transform the pitchfork bifurcation into a Hopf bifurcation through which a self-oscillatory instability occurs~\citep{zhu2019propulsion}. This transformation is attributed to the elasto-viscous response of the filament providing an elastic torque to balance the electric and hydrodynamic torques. The elastic torque is in phase with the rotational velocity of the particle at certain time periods (see figure~\ref{fig:torque_omg}b). This in-phase behaviour results in negative damping (or positive feedback), hence leading to the onset of linear instability~\citep{jenkins2013self}. We comment that such a transition from pitchfork to Hopf bifurcation was also identified by \citet{tsebers1980electrohydrodynamic} who observed oscillatory QR of ellipsoidal particles attributed to their anisotropic electric properties. It is also worth mentioning that the QR instability was utilised to study suspensions of artificial swimmers made of QR particles that achieved locomotion by rolling near a rigid solid boundary~\citep{bricard2013emergence}. In addition, the recent work of \citet{das2019active} shows theoretically and numerically that a dielectric particle with particular geometrical asymmetry (e.g. a helix) under a DC electric field is able to convert QR into spontaneous translation in an unbounded domain. We next recall the original experiments conducted by~\citet{quincke1896ueber}, where the particle was hung by a silk thread and hence the particle rotated in the direction along the orientation of the thread. Quincke also noted an oscillatory behaviour as translated by~\citet{jones1984quincke}\\ ``\textit{Quincke, with his spheres tethered to silk threads, had been forced to contend with periodic rotation, first in one direction and then in the other as the silk thread wound and unwound}''.\\ We think that the ``wound and unwound'' motion manifested the self-oscillatory phenomenon, which is attributed to the torsional deformation of the silk thread. We speculate that Quincke probably regarded this observation as an experimental nuisance, thus did not pay attention to it nor did other researchers, except for one little-known preprint~\citep{zaks_onset} that recognised and modelled this torsional oscillation by considering a QR particle hung by a thread with torsional elasticity. In this paper, we consider only the bending stiffness of the grafted filament and the whole composite object is freely suspended in the solvent. By applying an electric field stronger than the critical value corresponding to the onset of original QR instability, the composite object exhibits three distinct behaviours depending on the EEV number $\barmu$ (inversely proportional to the bending stiffness). When $\barmu \leq \barmucrione$, the object remains stationary, corresponding to a fixed-point solution; when $\barmu \geq \barmucritwo$, the particle spins steadily towing a deformed filament, corresponding to an asymmetric fixed-point solution; when $\barmu \in \lp \barmucrione, \barmucritwo \rp$, the particle oscillates and the filament wiggles, leading the object to an undulatory locomotion. More specifically, instability occurs at $\barmucrione$ through a supercritical Hopf bifurcation, where the self-oscillatory motion represents a limit-cycle solution; at $\barmucritwo$, a secondary bifurcation appears, and the oscillatory, limit-cycle solution jumps to the steadily spinning, fixed-point solution. By fixing the EEV number $\barmu$, bifurcation diagrams considering the electric field strength $\bar{E}$ as the control parameter revealed the same three scenarios (see figure~\ref{fig:omg_v_E_varyMu_a0.3}). We have also examined the propulsive performance of the micro object in the self-oscillating regime $\barmu \in (\barmucrione, \barmucritwo)$. The trajectory of the object resembles a wave propagating along a straight path. The translational velocity of the object along this path varies in $\barmu$ non-monotonically (see figure~\ref{fig:opt_mu_and_spd}c). Motivated by the exponential temporal growth of the rotational velocity, we performed a LSA to predict theoretically the onset of the self-oscillatory instability. We have developed an elastohydrodynamic model to account for the elastic force and torque exerted by the filament on the particle, which closely matched the numerical counterparts. Incorporating this model into a standard LSA for the original QR particle, we derived the dispersion relationship of the new EEH problem. We thus calculated the complex growth rate $\sigma=\sigma_r + \mathrm{i}\sigma_i$ and identified the critical EEV number $\barmucrione$. Theoretical predictions of $\sigma$ (figure~\ref{fig:sigma_v_mu}) and $\barmucrione$ (figure~\ref{fig:crimu_v_E}) agree well the numerical results, especially in the large $\barmu$ regime. However, the agreement becomes less satisfactory when $\barmu$ decreases because of the violation of an assumption used in the elastohydrodynamic model. To unravel the EEH instability mechanism, we studied a minimal model system characterised by two rigid rods linked by a torsional spring to mimic the original filament. This substitution reduces the elastic element's number of degrees of freedom to one. Numerical and LSA results demonstrated that the minimal model could exhibit the three elasticity-dependent behaviours: stationary, wiggling and steady spinning. Following the comments of an anonymous referee, we hereby emphasise the difference between our work and other seemingly similar studies~\citep{manghi2006propulsion,qian2008shape,coq}, where a flexible slender structure (filament or rod) rotated in a viscous fluid and produced thrust because one of its ends was clamped to a constantly rotating base or actuated by a constant torque. This rotation results from forced oscillation characterised by a close correlation between the frequency of the power source and that of the resulting periodic motion. This forced-oscillatory periodic motion distinguishes itself from the self-oscillatory motion we observe, where the time-independent electric field as the power source lacks a frequency corresponding to that of the periodic motion. The current work constrained the kinematics and electric polarisation vector of the particle to a plane in order to show a clean physical picture of the new EEH instability we identified. By removing these constraints, we anticipate the appearance of more complex and diverse three-dimensional behaviours featured by bi/multi-stability, hysteresis and even chaos (ellipsoidal particles were observed to exhibit chaotic QR~\citep{tsebers1991chaotic}). We will report the results of the ongoing work in a future paper. It is also worth mentioning the assumption of neglecting electrohydrodynamic effect of the filament. The electric torque exerted on a slender QR structure scales with $a^2 L$~\citep{das2019active}, and that on a sphere scales with $A^3$ (see equation~(\ref{eq:torque_qr_sphere})). By assuming that the filament and particle have similar dielectric properties and realising $\alpha=A/L= O(1)$, the ratio of the former to the latter torque is of the order of $\epsilon_{\mathrm{sl}}^2$. This comparison thus justifies the assumption, which also implies that no special attention needs to be paid in this context for the experimental realisation. In conclusion, incorporating an elastic element to manipulate the electrohydrodynamic instability, we report an elasto-electro-hydrodynamic instability and use it for engineering self-oscillation of artificial structures. We anticipate that this idea of harnessing elastic media to control and diversify the bifurcation and the corresponding instability behaviour can be generalised to other stability phenomena and systems. As a result, different emerging instability behaviours can be utilised for diverse functionalities. This concept might inspire new approaches to design soft, reconfigurable machines that can morph and adapt to the environment. Declaration of Interests. The authors report no conflict of interest.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \vspace{3mm} The bounded mean oscillation space was originally introduced by John and Nirenberg in 1961, which played an important role in the investigation of the solutions of the elliptic partial differential equations. Afterwards, Fefferman and Stein found that the bounded mean oscillation space was the dual space of the Hardy space and demonstrated the Fefferman-Stein decomposition, which became the bond revealing the intrinsic relationship between the bounded mean oscillation space and harmonic analysis. Therefore, the study on the bounded mean oscillation space becomes an essential part of harmonic analysis. For example, if we want to obtain the boundedness of the operator $T:H^1\rightarrow L^1$, due to that the BMO space is the dual space of the $H^1$ space, we can consider the boundedness of the dual operator $T^\ast:L^\infty\rightarrow BMO.$ Another classical application is that many classical operators are bounded from $L^p$ space to $L^p$ space for $1<p<\infty$, however, when $p=\infty$, the boundedness may be not correct. Instead of, we may obtain the boundedness of operator from $L^\infty$ space to BMO space. \vskip 8 pt As the natural generalizations of the Lebesgue space $L^p$, the Orlicz space was first studied by Orlicz. From then on, the theory of the Orlicz spaces has been extensively developed in the area of analysis. Meanwhile, it has wide applications in probability, statistics, potential theory, partial differential equations, for instance \cite{Rao2002Applications}. Recently, there have been intense research activities on regularity theory in Orlicz spaces connected to a Young function, which satisfies some moderate growth conditions for second-order elliptic and parabolic PDEs, see \cite{Jia2007Regularity}. Moreover, the Orlicz-Hardy spaces are also good substitutes of the Orlicz spaces in dealing with many problems in analysis, for example, the boundedness of operators. The study of its dual spaces Orlicz-BMO spaces can be traced to the work of Janson in 1980. He generalized the classical Hardy space and BMO space, and obtained the dual relationship. All theories of these spaces are closely connected with properties of harmonic analysis, and of the Laplacian operator on $\mathbb{R}^n$. In recent ten years, Ding \cite{DingPoincar,Ding2010}, Bi \cite{Bi2011Orlicz} and Yi \cite{6} discussed the properties of operators or composite operators with Orlicz norm acting on differential forms. In this paper, we will introduce two generalized spaces, called $L^{\varphi}$-BMO space and $L^{\varphi}$-Lipchitz space. The traditional BMO space and Lipchitz space can be taken as special cases of our two new spaces, if we let the function $\varphi=t^{p}, 1<p<\infty$. Then, we will establish the $L^\varphi$-BMO norm estimates of the homotopy operator for differential forms. Especially, when the differential forms satisfy the conditions of the Weak Reverse H\"{o}lder class (in \cite{Johnson2013Integral}), we obtain the $L^\varphi$-Lipschitz norm estimates. We did it because it can be used to study the $L^\varphi$-BMO and $L^{\varphi}$-Lipchitz norm estimates of some complicated composition of operators, such as the composition $T \circ H$ of homotopy and projection operators and the composition $T \circ G$ of homotopy and Green's operators. More results on the norm inequalities for differential forms and homotopy operator can be found in \cite{Liu2010Some,Ding2009A,Ding2015Norm,Bi2011Some,Ding2009Lipschitz,1}. The main purpose of this paper is to estimate the $L^\varphi$-BMO norm and $L^\varphi$-Lipschitz norm for the homotopy operator on differential forms. The paper is organised as follows. Section 2 contains, in addition to definitions and other preliminary material, the main lemmas. Theorem \ref{d6} and Theorem \ref{d11} in Section 2 show the estimates for the homotopy operator with the $L^\varphi$-BMO norm and $L^\varphi$-Lipschitz norm by $L^\varphi$ norm. The conditions for differential forms $u$ in the two theorems are different, especially, the similar estimate as Theorem \ref{d6} with the condition in Theorem \ref{d11} has not been proved. Whereafter, the comparison for the $L^\varphi$-BMO norm and $L^\varphi$-Lipschitz norm are given. As applications, we use the results and methods in the previous section to estimate the conjugate $A$-harmonic tensors in Section 4. In this section we also get a weighted estimate for differential forms. \vskip 8 pt \section{The Main Definitions and Lemmas} Before specifying the main results precisely, we introduce some notations. We write $\Omega$ for a bounded convex domain in $\mathbb{R}^n$, $n\geq2$, endowed with the usual Lebesgue measure denoted by $|\Omega|$. $B$ and $\sigma{B}$ are concentric balls with $\hbox{diam}(\sigma{B})=\sigma{\hbox{diam}(B)}$. The set of $l\hbox{-forms}$, denoted by $\Lambda^l=\Lambda^l(\mathbb{R}^n)$, is a $l\hbox{-vector}$, spanned by exterior products $e_I={e_{i_1}}\wedge{e_{i_2}}\wedge \cdots \wedge{e_{i_l}}$, for all ordered $l\hbox{-tuples}$ $I=(i_1,i_2,\cdots,i_l)$, $1\leq{i_1}<{i_2}<\cdots<{i_l}\leq n$. The $l\hbox{-form}$ $u(x)=\Sigma_I{u_I(x)}dx_I$ is called a differential $l\hbox{-form}$, if $u_I$ is differentiable. We use $D^{'}{(\Omega,\Lambda^l)}$ to denote the differential $l\hbox{-form}$ space, and $L^s(\Omega,\Lambda^l)$ consists of all $l$-forms $u(x)$ on $\Omega$ satisfying $\int_\Omega|u_I|^s<\infty$. In particular, we know that a $0\hbox{-form}$ is a function. A differential $l$-form $u\in D'(\Omega,\Lambda^l)$ is called a closed form if $du=0$ in $\Omega$. Similarly, a differential $(l+1)$-form $v\in D'(\Omega,\Lambda^{l+1})$ is called a coclosed form if $d^{\star}v=0$. From the Poincar$\acute{e}$ lemma, $ddu=0$, we know that $du$ is a closed form. The module of a differential form $u$ is given by $|u|^2=\star(u\wedge \star u)\in D'(\Omega,\Lambda^0)$, in other words, it is a function. The homotopy operator $T:C^{\infty}(\Omega,\Lambda^l)\rightarrow C^{\infty}(\Omega,\Lambda^{l-1})$ is a very important operator in differential form theory, given by $$Tu=\int_\Omega\psi(y)K_yudy,$$ where $\psi\in C^\infty_0(\Omega)$ is normalized by $\int_\Omega\psi(y)dy=1$, and $K_y$ is a liner operator defined by $$(K_yu)(x;\xi_1,\cdots,\xi_{l-1})=\int^1_0t^{l-1}u(tx+y-ty;x-y;\xi_1,\cdots,\xi_{l-1})dt.$$ See \cite{8} for more of the function $\psi$ and operator $K_y$. About the homotopy operator $T$, we have the following decomposition, which will be used repeatedly in this paper, $$u=d(Tu)+T(du)$$ for any differential form $u\in L^p(\Omega,\Lambda^l),1\leq p<\infty$. A closed form $u_\Omega$ is defined by $u_\Omega=d(Tu)$, $l=1, \cdots, n$, and when $u$ is a differential $0$-form, $u_\Omega=|\Omega|^{-1}\int_\Omega u(y)dy.$ \vskip 8 pt The Orlicz space $L^\varphi (\Omega, \mu)$ consists of all measurable functions $f$ on $\Omega$ such that $\int_\Omega \varphi\left({|f| \over \lambda} \right) d\mu < \infty$ for some $\lambda=\lambda(f) >0$. $L^\varphi (\Omega, \mu)$ is equipped with the nonlinear Luxemburg functional $$\|f \|_{\varphi (\Omega, \mu)} = {\hbox{\rm{inf }}} \{\lambda >0: \ \int_\Omega \varphi \left({|f| \over \lambda} \right) d\mu \leq 1 \}, $$ where the Radon measure $\mu$ is defined by $d\mu = g(x) dx$ and $g(x) \in A (\alpha, \beta, \gamma; \Omega)$. A convex Orlicz function $\varphi$ is often called a Young function. If $\varphi$ is a Young function, then $\| \cdot \|_{\varphi(\Omega, \mu)}$ defines a norm in $L^\varphi (\Omega, \mu)$, which is called the Orlicz norm or Luxemburg norm. Especially, when $\mu$ is the Lebesgue measure, we let $\| \cdot \|_{\varphi(\Omega, \mu)}=\| \cdot \|_{\varphi,\Omega}$ for convenience. \vskip 8 pt We say the Young function $\varphi$ belongs to the $G(p,q,c)$-class, $1\leq p <q<\infty,c\geq 1$, if $\varphi$ satisfies that: \noindent(1)\ $\frac{1}{c}\leq {\varphi(t^{1/p})}/{g(t)}\leq c$; (2)\ $\frac{1}{c}\leq {\varphi(t^{1/q})}/{h(t)}\leq c$, for every $t>0$, where $g$ is a convex increasing function and $h$ is a concave increasing function on $[0,\infty]$. From \cite{9}, each of $\varphi$, $g$ and $h$ in above definition is doubling in the sense that its values at $t$ and $2t$ are uniformly comparable for all $t>0$, and the consequent fact that $$c_1t^q\leq h^{-1}(\varphi(t))\leq c_2t^q, c_1t^p\leq g^{-1}(\varphi(t))\leq c_2t^p,$$ where $c_1$ and $c_2$ are constants. Especially, if we choose $\varphi (t) = t^p$, the following estimate for conjugate $A$-harmonic tensors in $\Omega\subset\mathbb{R}^n$ can be established by the similar way in \cite{nolder1999hardy}. $$\|u\|_{s,\Omega}\leq C |B|^\beta\|v\|_{t,\Omega},$$ where $\beta=\beta(n,p,q,s,t)$. In Section \ref{yy}, we will give a more general estimate for conjugate $A$-harmonic tensors. \noindent Now, we give the definition of $L^\varphi$-BMO norm. \begin{definition}\label{d2} For $u\in L^{1}_{loc}(\Omega,\Lambda^{l}), l=0,1,\cdots,n$, $\varphi$ is a Young function, we write $u\in L^\varphi$-$BMO(\Omega,\Lambda^l)$, if $$\|u\|_{\varphi*,\Omega}=\sup_{\sigma{B}\subset{\Omega}}|B|^{-1}\|u-u_B\|_{\varphi,B}<\infty$$ for some $\sigma>1$. \end{definition} Similarly, we can define the following $L^\varphi$-Lipschitz norm. \begin{definition}\label{d1} For $u\in L^{1}_{loc}(\Omega,\Lambda^{l}), l=0,1,\cdots,n$, $\varphi$ is a Young function, we write $u\in L^\varphi$-$Lip_{loc, k}(\Omega,\Lambda^l),0<k<1$, if $$\|u\|_{\varphi loc Lip_{k},\Omega}=\sup_{\sigma{B}\subset{\Omega}}|B|^{\frac{-(n+k)}{n}}\|u-u_B\|_{\varphi,B}<\infty$$ for some $\sigma>1$. \end{definition} The following definition for the WRH$(\Lambda^{l},\Omega)$-class appears in \cite{Johnson2013Integral}. \begin{definition}\label{d2} We call $u(x)\in D^{'}(\Omega,\Lambda^{l})$ belongs to the WRH$(\Lambda^{l},\Omega)$-class, $l=0,1,\cdots,n$, if there exists a constant $C>0$ such that $u(x)$ satisfies $$ \|u\|_{s,B}\leq C |B|^\frac{t-s}{st}\|u\|_{t,\rho B} $$ for every $0<s,t< \infty$, where all balls $B\subset \Omega$ with $\rho B \subset \Omega$ and $\rho >1$ is a constant. \end{definition} For the upcoming main results, we also need the following lemmas, given by T. Iwaniec and A. Lutoborski in \cite{8}. \begin{lemma} \label{d3} Let $u\in L^t(\Omega,\Lambda^{l}),l=1,2,\ldots,n,1<t<\infty$ and $T$ be the homotopy operator defined on differential forms. Then, there exists a constant $C$, independent of $u$, such that $$\|Tu\|_{t,\Omega}\leq C|\Omega|\hbox{diam}(\Omega)\|u\|_{t,\Omega}.$$ \end{lemma} \begin{lemma} \label{d31} Let $u\in L^t(\Omega,\Lambda^{l}),l=1,2,\ldots,n,1<t<\infty$. Then, there exists a constant $C$, independent of $u$, such that $$\|u_\Omega\|_{t,\Omega}\leq C|\Omega|\|u\|_{t,\Omega}.$$ \end{lemma} \begin{lemma}\label{a26} Let $u\in D'(\Omega,\Lambda^l)$ be such that $du\in L^t(\Omega,\Lambda^{l+1})$. Then $u-u_\Omega$ is in $L^{\frac{nt}{n-t}}(\Omega,\Lambda^l)$ and $$\left(\int_\Omega|u-u_\Omega|^{\frac{nt}{n-t}}\right)^{\frac{n-t}{nt}}\leq C\left(\int_\Omega|du|^t\right)^{\frac{1}{t}},$$ where $l=1,2,\ldots,n,1<t<n$. \end{lemma} The following lemma appears in \cite{9}. \begin{lemma}\label{d5} Take $\psi$ defined on $[0,+\infty)$ be a strictly increasing convex function, $\psi(0)=0$, and $\Omega\subset \mathbb{R}^{n}$ be a domain. Assume that $u(x)\in D^{'}(\Omega,\Lambda^{l})$ satisfies $\psi(k(|u|+|u_\Omega|))\in L^{1}(\Omega,\mu)$ for any real number $k>0$, and $\mu(x\in\Omega:|\mu-\mu_\Omega|>0)>0$ where $\mu$ be a Radon measure defined by $d\mu(x)=\omega(x)dx$ with a weight $\omega(x)$, then for any $a>0$, we obtain $$ \int_{\Omega}\psi(a|u|)d\mu \leq C\int_{\Omega}\psi(2a|u-u_\Omega|)d\mu, $$ where C is a positive constant. \end{lemma} \section{Comparison Theorems for the $L^\varphi$-BMO Norm, $L^\varphi$-Lipschitz Norm and $L^\varphi$ Norm } In this section, we give two main theorems for the homotopy operator. Theorem \ref{d6} is the $L^\varphi$-Lipschitz norm inequality for the homotopy operator acting on the differential forms which belong to $ WRH(\Lambda^l,\Omega)$-class. Theorem \ref{d11} is the estimate for $L^\varphi$-BMO norm with the exponents $p,q$ in $G(p,q,c)$-class satisfying $q(n-p)< np$. \begin{theorem}\label{d6} Let $\varphi$ be a Young function in the $G(p,q,c)$-class, $1\leq p<q<\infty,c\geq1$, $u$ be a differential form such that $u\in WRH(\Lambda^l,\Omega)$-class, $l=1,2,\ldots,n,$ and $\varphi(|u|)\in L^1_{loc}(\Omega)$. Then, there exists a constant C, independent of u, such that $$\|Tu\|_{\varphi loc\; Lip_k,\Omega}\leq C\|u\|_{\varphi,\Omega},$$ where $\Omega$ is a bounded domain. \end{theorem} \noindent \begin{proof} From the definition of $G(p,q,c)$-class and Jensen's inequality, we obtain \begin{eqnarray} \int_B\varphi\left(|u-u_B|\right)dx&=&h\left(h^{-1}\left(\int_B\varphi\left(|u-u_B|\right)dx\right)\right) \nonumber\\ &\leq&h\left(\int_Bh^{-1}\left(\varphi\left(|u-u_B|\right)\right)dx\right) \nonumber\\ &\leq&h\left(C_1\int_B|u-u_B|^qdx\right) \nonumber\\ &\leq&C_2\varphi\left(\left(C_1\int_B|u-u_B|^qdx\right)^{1/q}\right) \nonumber\\ &\leq&C_3\varphi\left(\left(\int_B|u-u_B|^qdx\right)^{1/q}\right)\label{fai1}. \end{eqnarray} Replacing $u$ by $Tu$ follows \begin{eqnarray}\label{3} \int_B\varphi\left(|Tu-(Tu)_B|\right)dx\leq C_3\varphi\left(\left(\int_B|Tu-(Tu)_B|^qdx\right)^{1/q}\right). \end{eqnarray} \noindent Applying the decomposition theorem of differential form to $Tu$, we have \begin{eqnarray}\label{9} Tu=dT(Tu)+Td(Tu). \end{eqnarray} Noticing $(Tu)_B=dT(Tu)$, combining \ref{9}, Lemma \ref{d3} and Lemma \ref{d31}, we find \begin{eqnarray}\label{1} \left(\int_B|Tu-(Tu)_B|^qdx\right)^{1/q} &=& \left(\int_B|TdTu|^qdx\right)^{1/q}\nonumber\\ &\leq&C_4(n,q)|B|\hbox{diam}(B) \left(\int_B|dTu|^qdx\right)^{1/q}\nonumber\\ &=&C_4(n,q)|B|\hbox{diam}(B) \left(\int_B|u_B|^qdx\right)^{1/q}\nonumber\\ &\leq&C_5(n,q)|B|^2\hbox{diam}(B) \left(\int_B|u|^qdx\right)^{1/q}. \end{eqnarray} Noticing that $u\in WRH(\Lambda^l,\Omega)$-class, so the following inequality holds \begin{eqnarray}\label{2} \left(\int_{ B}|u|^qdx\right)^{1/q}\leq C_6|B|^{(p-q)/pq}\left(\int_{\sigma B}|u|^pdx\right)^{1/p}, \end{eqnarray} where $\sigma>1$ is a constant. Combining (\ref{1}) and (\ref{2}), we have \begin{eqnarray} \left(\int_B|Tu-(Tu)_B|^qdx\right)^{1/q}&\leq& C_7|B|^2(\hbox{diam}(B))|B|^{(p-q)/pq}\left(\int_{\sigma B}|u|^pdx\right)^{1/p}\nonumber. \end{eqnarray} Noticing that $1<p,q<\infty$, so $1+(p-q)/pq>0$, we derive that \begin{equation}\label{guocheng1} \left(\int_B|Tu-(Tu)_B|^qdx\right)^{1/q}\leq C_8|B|^{1+1/n}\left(\int_{\sigma B}|u|^pdx\right)^{1/p}. \end{equation} Since $\varphi$ is an increasing function, using Jensen's inequality and the definition of $G(p,q,c)$-class, we have \begin{eqnarray}\label{4} && \varphi\left(\left(\int_B|Tu-(Tu)_B|^qdx\right)^{1/q}\right)\cr &\leq&\varphi\left(C_8|B|^{1+1/n}\left(\int_{\sigma B}|u|^pdx\right)^{1/p}\right)\nonumber\\ &=&\varphi\left(\left(C^p_8|B|^{p(1+1/n)}\int_{\sigma B}|u|^pdx\right)^{1/p}\right)\nonumber\\ &\leq&C_9g\left(C^p_8|B|^{p(1+1/n)}\int_{\sigma B}|u|^pdx\right)\nonumber\\ &=&C_9g\left(\int_{\sigma B}C^p_8|B|^{p(1+1/n)}|u|^pdx\right)\nonumber\\ &\leq&C_9\int_{\sigma B}g\left(C^p_8|B|^{p(1+1/n)}|u|^p\right)dx\nonumber\\ &\leq&C_{10}\int_{\sigma B}\varphi\left(C_8|B|^{1+1/n}|u|\right)dx\nonumber\\ &\leq&C_{11}\int_{\sigma B}\varphi\left(|B|^{1+1/n}|u|\right)dx. \end{eqnarray} Combining (\ref{3}) and (\ref{4}) yields that $$ \int_B\varphi\left(|Tu-(Tu)_B|\right)dx\leq C_{12}\int_{\sigma B}\varphi\left(|B|^{1+1/n}|u|\right)dx. $$ Noticing that $\varphi$ is doubling, so we obtain $$\int_B\varphi\left(\frac{|Tu-(Tu)_B|}{\lambda}\right)dx\leq C_{12}\int_{\sigma B}\varphi\left(\frac{|B|^{1+1/n}|u|}{\lambda}\right)dx$$ for any $\lambda>0$, and from the Orlicz norm definition, we know \begin{eqnarray}\label{guocheng2} \|Tu-(Tu)_B\|_{\varphi, B}&\leq& C_{12}\|(|B|^{1+1/n}u)\|_{\varphi, \sigma B}\nonumber\\ &\leq& C_{12}|B|^{1+1/n}\|u\|_{\varphi, \sigma B} \end{eqnarray} For all balls $\sigma'B\subset\Omega$ with $\sigma'>\sigma$, we have \begin{eqnarray}\label{guocheng3} \|Tu\|_{\varphi loc\; Lip_k,\Omega}&=&\sup_{\sigma'{B}\subset{\Omega}}|B|^{\frac{-(n+k)}{n}}\|Tu-(Tu)_B\|_{\varphi,B}\nonumber\\ &\leq&\sup_{\sigma'{B}\subset{\Omega}}|B|^{\frac{-(n+k)}{n}}C_{12}|B|^{1+1/n}\|u\|_{\varphi,\sigma B}\nonumber\\ &\leq&\sup_{\sigma'{B}\subset{\Omega}}C_{12}|B|^{1+\frac{1}{n}+\frac{-(n+k)}{n}}\|u\|_{\varphi,\sigma B}. \end{eqnarray} \noindent As $1+\frac{1}{n}+\frac{-(n+k)}{n}>0$, so we have $$\|Tu\|_{\varphi loc\; Lip_k,\Omega}\leq C\|u\|_{\varphi,\Omega}.$$ \end{proof} If we assume the Lebesgue measure $|\{x\in B:|u-u_B|>0\}|>0$, using Lemma \ref{d5} with $\psi(t)=\varphi (t)$, $\omega(x)=1$ over the ball $B$, we have the following corollary. \begin{corollary}\label{d7} Let $\varphi$ be a Young function in the $G(p,q,c)$-class, $1\leq p<q<\infty, c\geq1$, $u$ be a differential form such that $u\in WRH(\Lambda^l,\Omega)$-class, $l=1,2,\ldots,n$, $|\{x\in B:|u-u_B|>0\}|>0$, and $\varphi(|u|)\in L^1_{loc}(\Omega)$. Then, there exists a constant C, independent of u, such that $$ \|Tu\|_{\varphi loc\; Lip_k,\Omega}\leq C\|u\|_{\varphi*,\Omega}, $$ where $\Omega$ is a bounded domain. \end{corollary} \begin{theorem}\label{d11} Let $\varphi$ be a Young function in $G(p,q,c)$-class, $1<p<q<\infty$, $c\geq1,q(n-p)< np$, and $u\in L^p(\Omega,\Lambda^l),l=1,2,\ldots,n,$ be a differential form such that $\varphi(|u|)\in L^1_{loc}(\Omega)$. Then, there exists a constant C, independent of u, such that $$\|Tu\|_{\varphi *,\Omega}\leq C\|u\|_{\varphi,\Omega},$$ where $\Omega$ is a bounded domain. \end{theorem} \noindent \begin{proof} For the case that $1<p<n$, $q(n-p)< np$ means $q<\frac{np}{n-p}$. So using the monotonic property of the $L^p$ space, Lemma \ref{a26} and Lemma \ref{d31}, for any differential form $u\in L^p(\Omega,\Lambda^l)$, we have \begin{eqnarray}\label{2+} \left(\int_B|Tu-(Tu)_B|^qdx\right)^{1/q}&\leq&|B|^{\frac{1}{q}-\frac{1}{p}+\frac{1}{n}}\left(\int_B|Tu-(Tu)_B|^\frac{np}{n-p}dx\right)^{\frac{n-p}{np}}\nonumber\\ &\leq&C_1|B|^{\frac{1}{q}-\frac{1}{p}+\frac{1}{n}}\left(\int_B|dTu|^p dx\right)^{\frac{1}{p}}\nonumber\\ &=&C_1|B|^{\frac{1}{q}-\frac{1}{p}+\frac{1}{n}}\left(\int_B|u_B|^p dx\right)^{\frac{1}{p}}\nonumber\\ &\leq&C_2|B|^{\frac{1}{q}-\frac{1}{p}+\frac{1}{n}+1}\left(\int_B|u|^p dx \right)^{\frac{1}{p}}.\nonumber\\ \end{eqnarray} Next, for the case that $n\leq p<q<\infty$, we can choose $s$ with $1<s<n$ such that $q<\frac{ns}{n-s}$ (remark: it is possible from that $\frac{ns}{n-s}\rightarrow\infty$, as $s\rightarrow n$). Thus, applying Lemma \ref{a26} and Lemma \ref{d31} and noticing the monotonic property of the $L^p$ space with $s<p$, we have \begin{eqnarray}\label{5} \left(\int_B|Tu-(Tu)_B|^\frac{ns}{n-s}dx\right)^{\frac{n-s}{ns}}&\leq& C_{1'}\left(\int_B|dTu|^s dx\right)^{\frac{1}{s}} \cr &=& C_{1'}\left(\int_B|u_B|^s dx\right)^{\frac{1}{s}}\cr &\leq& C_{2'}\left(\int_B|u|^s dx\right)^{\frac{1}{s}} \cr &\leq& C_{2'}|B|^{\frac{1}{s}-\frac{1}{p}}\left(\int_B|u|^p dx\right)^{\frac{1}{p}}. \end{eqnarray} Combining the monotonic property of the $L^p$ space with $q<\frac{ns}{n-s}$ and (\ref{5}) yields \begin{eqnarray}\label{6} \left(\int_B|Tu-(Tu)_B|^qdx\right)^{1/q}&\leq&|B|^{\frac{1}{q}-\frac{1}{s}+\frac{1}{n}}\left(\int_B|Tu-(Tu)_B|^\frac{ns}{n-s}dx\right)^{\frac{n-s}{ns}}\cr &\leq& C_{3'}|B|^{\frac{1}{q}-\frac{1}{s}+\frac{1}{n}}|B|^{\frac{1}{s}-\frac{1}{p}}\left(\int_B|u|^p dx\right)^{\frac{1}{p}}\cr &=& C_{3'}|B|^{\frac{1}{q}-\frac{1}{p}+\frac{1}{n}}\left(\int_B|u|^p dx\right)^{\frac{1}{p}}\cr &\leq&C_{3'}|B|^{\frac{1}{q}-\frac{1}{p}+\frac{1}{n}+1}\left(\int_B|u|^p dx\right)^{\frac{1}{p}}. \end{eqnarray} Since ${\frac{1}{q}-\frac{1}{p}+\frac{1}{n}}>0$, the inequalities (\ref{2+}) and (\ref{6}) indicate that \begin{eqnarray}\label{7} \left(\int_B|Tu-(Tu)_B|^qdx\right)^{1/q}&\leq&C_{3}|B|\left(\int_B|u|^p dx\right)^{\frac{1}{p}} \end{eqnarray} holds for all $1<p<q<\infty$ with $q(n-p)< np$. \noindent Now, beginning with (\ref{7}) and using the similar process from inequality (\ref{guocheng1}) to inequality (\ref{guocheng2}), we get \begin{eqnarray}\label{8} \|Tu-(Tu)_B\|_{\varphi, B}&\leq& C_{4}\||B|u\|_{\varphi, B}\nonumber\\ &\leq& C_{4}|B|\|u\|_{\varphi, B}.\nonumber\\ \end{eqnarray} \noindent According to the definition of the $L^{\varphi}$-BMO norm and (\ref{8}), we obtain \begin{eqnarray} \|Tu\|_{\varphi *,\Omega}&=&\sup_{\sigma{B}\subset{\Omega}}|B|^{-1}\|Tu-(Tu)_B\|_{\varphi,B}\nonumber\\ &\leq&\sup_{\sigma{B}\subset{\Omega}}|B|^{-1}C_{4}|B|^{1}\|u\|_{\varphi, B}\nonumber\\ &=&\sup_{\sigma{B}\subset{\Omega}}C_{4}\|u\|_{\varphi, B}\nonumber\\ &\leq&C\|u\|_{\varphi,\Omega}. \end{eqnarray} \end{proof} \noindent Remark 1: The differential form $u$ in Theorem \ref{d11} does not need satisfy the conditions of $WRH(\Lambda^l,\Omega)$-class in Theorem \ref{d6}. But, we restrain the exponents in $G(p,q,c)$-class. Now we compare the $L^\varphi$-Lipschitz norm and $L^\varphi$-BMO norm of differential forms. \begin{theorem}\label{d8} Let $\varphi$ be a Young function, $u\in D'(\Omega,\Lambda^{l}),l=1,2,\cdots,n,$ be a differential form in $\Omega$ and $\varphi(|u|)\in L^1_{loc}(\Omega,x).$ Then, there exists a constant $C$, independent of $u$, such that $$\|u\|_{\varphi*,\Omega}\leq{C}\|u\|_{\varphi loc\; Lip_k,\Omega},$$ where $k$ is a constant with $0<{k}<1$. \end{theorem} \vspace{3mm} \noindent \begin{proof} From the definition of BMO norm, we have \begin{eqnarray*} \|u\|_{\varphi*,\Omega} &=& \sup_{\sigma{B}\subset\Omega }|B|^{-1}\|u-u_B\|_{\varphi,B} \\ &=& \sup_{\sigma{B}\subset\Omega }|B|^{k/n}|B|^{-(n+k)/n}\|u-u_B\|_{\varphi,B} \\ &\leq & \sup_{\sigma{B}\subset\Omega }|\Omega|^{k/n}|B|^{-(n+k)/n}\|u-u_B\|_{\varphi,B} \\ & \leq & |\Omega|^{k/n}\sup_{\sigma{B}\subset\Omega }|B|^{-(n+k)/n} \|u-u_B\|_{\varphi,B} \\ & \leq &{C}\sup_{\sigma{B}\subset\Omega }|B|^{-(n+k)/n} \|u-u_B\|_{\varphi,B} \\ & \leq & {C}\|u\|_{ \varphi loc\; Lip_k,\Omega}. \end{eqnarray*} \end{proof} Replacing $u$ by $Tu$, and combining Theorem \ref{d6}, we obtain the following corollary. \begin{corollary}\label{d9} Let $\varphi$ be a Young function in the class $G(p,q,c),1\leq p<q<\infty,c\geq1$, $u$ be a differential form such that $u\in WRH(\Lambda^l,\Omega)$-class,$l=1,2,\ldots,n,$ and $\varphi(|u|)\in L^1_{loc}(\Omega)$, where $\Omega$ is a bounded domain. Then, there exists a constant C, independent of u, such that $$\|Tu\|_{\varphi *,\Omega}\leq C\|u\|_{\varphi,\Omega}.$$ \end{corollary} \section{Applications}\label{yy} We call $u$ and $v$ a pair of conjugate $A$-harmonic tensors in $\Omega$, if $u$ and $v$ satisfy the conjugate $A$-harmonic equation \begin{equation}\label{4.1} A(x, d u) =d^{\star} v, \end{equation} where $ A : \Omega \times \Lambda^l$ ($\mathbb{R}^n$) $\to \Lambda^l$($\mathbb{R}^n$) is invertible and satisfies the following conditions: \begin{equation}\label{4.2} |A(x, \xi)| \leq a|\xi|^{q-1} \ \ \ \hbox{and} \ \ \ <A(x, \xi), \xi>\ \ \geq \ |\xi|^q \end{equation} for almost every $x \in \Omega$ and all $\xi \in \Lambda^l$ ($\mathbb{R}^n$). \rm Here, $ a>0$ is a constant and $1<q< \infty$ is a fixed exponent associated with (\ref{4.1}). In recent years, the results for conjugate $A$-harmonic tensors are widely used in quasiregular mappings, and the theory of elasticity. In 1999, the following inequality for conjugate $A$-harmonic tensors in $\Omega$ was given by Nolder in \cite{nolder1999hardy}, $$\|u\|^q_{loc lip_k ,\Omega}\leq C \|v\|^p_{loc lip_k,\Omega},$$ where $0<l,k<1 $ satisfies $q(k-1)=p(l-1)$. Now, we give the $L^\varphi$-BMO norm estimate for conjugate $A$-harmonic tensors in $\Omega$. \begin{theorem}\label{3323} Let $\varphi$ be a Young function in the class $G(p,q,c)$ with $1\leq p<q<\infty,\frac{1}{p}+\frac{1}{q}=1$ and $c\geq1$. $u$ and $v$ are conjugate $A$-harmonic tensors such that $\varphi(|v|)\in L^1_{loc}(\Omega)$. The fixed exponent associated with conjugate $A$-harmonic equation is $q$. Then, there exists a constant C, independent of $u$ and $v$, such that $$\|u\|_{\varphi *,\Omega}\leq C |B|^\beta\|v\|_{\varphi *,\Omega},$$ where $\beta=1+\frac{1}{n}-\frac{p}{nq}$ and $\Omega$ is a bounded domain. \end{theorem} \noindent \begin{proof} From the inequality (\ref{fai1}) in Theorem \ref{d6} and Lemma \ref{d3}, we have \begin{eqnarray}\label{2.11} \int_B\varphi\left(|u-u_B|\right)dx &\leq&C_1\varphi\left(\left(\int_B|u-u_B|^qdx\right)^{1/q}\right)\cr &\leq&C_1\varphi\left(C_2|B|^{1+\frac{1}{n}}\left (\int_B|du|^qdx\right)^{1/q}\right).\nonumber \end{eqnarray} Using the inequality $|du|^q\leq |\ast dv|^p$(which appears in Theorem $3.1$ in \cite{nolder1999hardy}), we obtain \begin{eqnarray}\label{2.12} &&\varphi\left(C_2|B|^{1+\frac{1}{n}}\left (\int_B|du|^qdx\right)^{1/q}\right) \cr &\leq&\varphi\left(C_2|B|^{1+\frac{1}{n}}\left (\int_B|\ast dv|^pdx\right)^{1/q}\right)\cr &=&\varphi\left(C_2|B|^{1+\frac{1}{n}}\| d\ast v\|_{p,B}^{p/q}\right)\cr &\leq&\varphi\left(C_2|B|^{1+\frac{1}{n}}\left(C_3|B|^{\frac{1}{n}}\|\ast v-\ast \theta\|_{p,\rho B}\right)^{p/q}\right)\cr &\leq&\varphi\left(C_4|B|^{1+\frac{1}{n}-\frac{p}{nq}}\left(\int_{\rho B}|\ast v-\ast \theta|^pdx\right)^{1/q}\right)\cr &\leq&C_5g\left(C_4^q|B|^{q+\frac{q}{n}-\frac{p}{n}}\left(\int_{\rho B}|\ast v-\ast \theta|^pdx\right)\right)\cr &\leq&C_5C_4^q|B|^{q+\frac{q}{n}-\frac{p}{n}}\int_{\rho B}g\left(|\ast v-\ast \theta|^p\right)dx, \end{eqnarray} where $\theta$ is any closed form, and the third inequality is from the Caccioppoli inequality for conjugate $A$-harmonic tensors. The properties of $G(p,q,c)$-class yields \begin{eqnarray} \int_{\rho B}g\left(|\ast v-\ast \theta|^p\right)dx &\leq& C_6\int_{\rho B}\varphi\left(| v-\theta|\right)dx. \end{eqnarray} Choose $\theta=v_B$, and similar to the proof of inequalities (\ref{guocheng2}) and (\ref{guocheng3}) in Theorem \ref{d6}, we have $$ \|u\|_{\varphi *,\Omega}\leq C |B|^\beta\|v\|_{\varphi *,\Omega}. $$ \end{proof} \vskip 8 pt \vskip 8 pt Next, we give a weighted estimate for differential forms. The weight we choose is called $A(\alpha,\beta,\gamma,\Omega)$ weight which satisfies $\omega(x)>0$ a.e., and $$\sup_{B\subset \Omega}\left({{1}\over{|B|}}\int_B\omega^\alpha\,dx\right)\left({{1}\over{|B|}}\int_B\omega^{-\beta}\,dx\right)^{\gamma/\beta}<\infty$$ for some positive constants $\alpha,\beta,\gamma$. One may readily see that the well-known $A_p$ weight is a special $A(\alpha,\beta,\gamma,\Omega)$ weight, more properties for $A(\alpha,\beta,\gamma,\Omega)$ weight see \cite{10}. We need the following lemma for Orlicz functions. \begin{lemma}\label{iit} Let $\varphi$ be a Young function such that $\varphi(x)\leq x^p$ for any $x>0$, $u\in L^p(\Omega,\Lambda^{l}),l=1,2,\cdots,n,$ be a differential form in $\Omega$. Then, for any $\omega\in A(\alpha,\beta,\gamma,\Omega)$, we have $$\|u\|_{\varphi ,\omega ,B}\leq{C}\|u\|_{p,\omega,B},$$ where $C$ is a constant independent of $u$. \end{lemma} \noindent \begin{proof} Young function $\varphi\geq 0$ gives \begin{eqnarray*} \int_B\varphi\left(\frac{|u(x)|}{\|u(x)\|_{p,\omega,B}}\right)\omega(x)dx&\leq&\int_B\left(\frac{|u(x)|}{\|u(x)\|_{p,\omega,B}}\right)^p\omega(x)dx\\ &=&\frac{\int_B|u(x)|^p\omega(x)dx}{\|u(x)\|^p_{p,\omega,B}}\\ &=&1. \end{eqnarray*} Then, according to the definition of $L^{\varphi}$-norm, it implies that $$ \inf\left\{\lambda>0:\int_B\varphi\left(\frac{|u(x)|}{\lambda}\right)\omega(x)dx\leq 1\right\}\leq \|u(x)\|_{p,\omega,B}. $$ That is $$\|u\|_{\varphi,\omega,B}\leq\|u\|_{p,\omega,B}.$$ \end{proof} \begin{theorem}\label{T4.3} Let $\varphi$ be a Young function such that $\varphi(x)\leq x^s$, $u\in{L^{p}(\Omega,\Lambda^{l},\mu)}, l=1, 2, \cdots, n$, be a differential form in $\Omega$, Radon measure $\mu$ is defined by $\omega(x)dx=d\mu$, and $\omega(x)\in{A(\alpha,\beta,\gamma,\Omega)}$ for some $\alpha>1$, $\beta=\frac{\alpha{q}}{\alpha{p}-p-\alpha{q}}$, $\gamma=\frac{\alpha{q}}{p}$, and $\alpha{p}-p-\alpha{q}>0,$ where $1\leq s<q<\infty$. Then, there exists a constant $C$, independent of $u$, such that $$\|u\|_{\varphi loc\; Lip_k,\omega,\Omega}\leq{C}\|u\|_{p,\omega,\Omega},$$ where $k$ is a constant with $0<{k}<1.$ \end{theorem} \begin{proof} Applying Lemma \ref{iit}, we have \begin{eqnarray*} \|u(x)-u_B\|_{\varphi,\omega,B}&\leq &\|u(x)-u_B\|_{s,\omega,B}\cr &\leq& |B|^{\frac{1}{s}-\frac{1}{q}}\|u(x)-u_B\|_{q,\omega,B} \cr &= &|B|^{\frac{1}{s}-\frac{1}{q}}\left(\int_B|u(x)-u_B|^q\omega(x)dx\right)^\frac{1}{q}. \end{eqnarray*} Similar to the proof of Theorem 4.2 in \cite{Li2015Lipschitz}, using the H\"{o}lder inequality to get \begin{eqnarray*} &&\left(\int_B|u(x)-u_B|^q\omega(x)dx\right)^\frac{1}{q}\cr &\leq& \left(\int_B|u(x)-u_B|^\frac{\alpha q}{\alpha-1}dx\right)^\frac{\alpha-1}{\alpha q}\left(\int_B\omega(x)^\alpha dx\right)^\frac{1}{\alpha q}\cr &\leq& C_1|B|^{1+\frac{1}{n}}\left(\int_B|u(x)|^\frac{\alpha q}{\alpha-1}dx\right)^\frac{\alpha-1}{\alpha q}\left(\int_B\omega(x)^\alpha dx\right)^\frac{1}{\alpha q}\cr &\leq& C_1|B|^{1+\frac{1}{n}}\left(\int_B|u(x)|^p\omega(x)dx\right)^\frac{1}{p}\cr &&\ \ \ \ \ \ \ \ \ \ \ \ \ \times \left(\int_B\omega(x)^\alpha dx\right)^\frac{1}{\alpha q}\left(\int_B(\omega(x)^{-1})^\frac{\alpha q}{\alpha p-p-\alpha q}dx\right)^\frac{\alpha p-p-\alpha q}{\alpha pq}\cr &\leq& C_2|B|^{1+\frac{1}{n}} \|u(x)\|_{p,B,\Omega}. \end{eqnarray*} The boundedness of the second part in the penultimate inequality above is due to that $\omega(x)\in{A(\alpha,\beta,\gamma,\Omega)}$. Finally, noting that $\frac{1}{n}+\frac{1}{s}-\frac{1}{q}-\frac{k}{n}>0,$ for $0\leq k\leq1,$ and combining the definition of $L^\varphi$-Lipschitz norm, we can complete the proof of Theorem \ref{T4.3}. \end{proof} \noindent Remark 2: Note that the $A(\alpha, \beta, \gamma; \Omega)$-class is an extension of several existing weight classes which contain $A^\lambda_r(\Omega)$-weight, $A_r(\lambda, \Omega)$-weight and $A_r(\Omega)$-weight. Thus, these conclusions obtained in this paper will change into the corresponding versions when we take some weight as a special case. \vspace{3mm} \noindent {\bf Conflict of Interests} The authors declare that there is no conflict of interests regarding the publication of this article. \vspace{3mm} \noindent {\bf Authors' Contributions} All authors put their efforts together on the research and writing of this manuscript. Xuexin Li carried out the proofs of all research results in this manuscript, and wrote its draft. Yuming Xing and Jinling Niu proposed the study, participated in its design and revised its final version. All authors read and approved the final manuscript. \bibliographystyle{amsplain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{% \@startsection {section}% {1}% {\z@}% {0.8cm \@plus1ex \@minus .2ex}% {0.5cm}% {% \normalfont\small\bfseries \centering }% }% \def\@hangfrom@section#1#2#3{\@hangfrom{#1#2}\MakeTextUppercase{#3}}% \def\subsection{% \@startsection {subsection}% {2}% {\z@}% {.8cm \@plus1ex \@minus .2ex}% {.5cm}% {% \normalfont\small\bfseries \centering }% }% \def\subsubsection{% \@startsection {subsubsection}% {3}% {\z@}% {.8cm \@plus1ex \@minus .2ex}% {.5cm}% {% \normalfont\small\itshape \centering }% }% \def\paragraph{% \@startsection {paragraph}% {4}% {\parindent}% {\z@}% {-1em}% {\normalfont\normalsize\itshape}% }% \def\subparagraph{% \@startsection {subparagraph}% {5}% {\parindent}% {3.25ex \@plus1ex \@minus .2ex}% {-1em}% {\normalfont\normalsize\bfseries}% }% \def\section@preprintsty{% \@startsection {section}% {1}% {\z@}% {0.8cm \@plus1ex \@minus .2ex}% {0.5cm}% {% \normalfont\small\bfseries }% }% \def\subsection@preprintsty{% \@startsection {subsection}% {2}% {\z@}% {.8cm \@plus1ex \@minus .2ex}% {.5cm}% {% \normalfont\small\bfseries }% }% \def\subsubsection@preprintsty{% \@startsection {subsubsection}% {3}% {\z@}% {.8cm \@plus1ex \@minus .2ex}% {.5cm}% {% \normalfont\small\itshape }% }% \@ifxundefined\frontmatter@footnote@produce{% \let\frontmatter@footnote@produce\frontmatter@footnote@produce@endnote }{}% \def\@pnumwidth{1.55em} \def\@tocrmarg {2.55em} \def\@dotsep{4.5pt} \setcounter{tocdepth}{3} \def\tableofcontents{% \addtocontents{toc}{\string\tocdepth@munge}% \print@toc{toc}% \addtocontents{toc}{\string\tocdepth@restore}% }% \def\tocdepth@munge{% \let\l@section@saved\l@section \let\l@section\@gobble@tw@ }% \def\@gobble@tw@#1#2{}% \def\tocdepth@restore{% \let\l@section\l@section@saved }% \def\l@part#1#2{\addpenalty{\@secpenalty}% \begingroup \set@tocdim@pagenum{#2}% \parindent \z@ \rightskip\tocleft@pagenum plus 1fil\relax \skip@\parfillskip\parfillskip\z@ \addvspace{2.25em plus\p@}% \large \bf % \leavevmode\ignorespaces#1\unskip\nobreak\hskip\skip@ \hb@xt@\rightskip{\hfil\unhbox\z@}\hskip-\rightskip\hskip\z@skip \par \nobreak % \endgroup }% \def\tocleft@{\z@}% \def\tocdim@min{5\p@}% \def\l@section{% \l@@sections{}{section }% \def\l@f@section{% \addpenalty{\@secpenalty}% \addvspace{1.0em plus\p@}% \bf }% \def\l@subsection{% \l@@sections{section}{subsection }% \def\l@subsubsection{% \l@@sections{subsection}{subsubsection }% \def\l@paragraph#1#2{}% \def\l@subparagraph#1#2{}% \let\toc@pre\toc@pre@auto \let\toc@post\toc@post@auto \def\listoffigures{\print@toc{lof}}% \def\l@figure{\@dottedtocline{1}{1.5em}{2.3em}} \def\listoftables{\print@toc{lot}}% \let\l@table\l@figure \appdef\class@documenthook{% \@ifxundefined\raggedcolumn@sw{\@booleantrue\raggedcolumn@sw}{}% \raggedcolumn@sw{\raggedbottom}{\flushbottom}% }% \def\tableft@skip@float{\z@ plus\hsize}% \def\tabmid@skip@float{\@flushglue}% \def\tabright@skip@float{\z@ plus\hsize}% \def\array@row@pre@float{\hline\hline\noalign{\vskip\doublerulesep}}% \def\array@row@pst@float{\noalign{\vskip\doublerulesep}\hline\hline}% \def\@makefntext#1{% \def\baselinestretch{1}% \reset@font \footnotesize \leftskip1em \parindent1em \noindent\nobreak\hskip-\leftskip \hb@xt@\leftskip{% \Hy@raisedlink{\hyper@anchorstart{footnote@\the\c@footnote}\hyper@anchorend}% \hss\@makefnmark\ }% #1% \par }% \prepdef \section{Introduction} [C\,{\sc ii}] 158\,\ensuremath{\,\mu\mbox{m}}\ is one of the strongest emission lines observed in star-forming galaxies and is the dominant coolant of the neutral interstellar medium \citep{Luhman1998}. Its strength is due to the relatively high abundance of singly-ionized carbon in the interstellar medium (ISM) and the wide range of physical conditions (densities and temperatures) in which the fine-structure transition can be collisionally excited \citep[for a review, see][]{Goldsmith2012}. Due to its strength and long wavelength, the [C\,{\sc ii}] line is relatively easy to observe even at high redshifts with sub-mm telescopes, such as ALMA \citep{Carilli2013}. It can potentially be used to estimate the star formation rate (SFR) and, with additional information from other emission lines, to probe some of the physical conditions of the ISM. Many recent studies attempted to empirically calibrate the relation between [C\,{\sc ii}] and SFR, both on global and kpc scales. They all find approximately linear relationships between [C\,{\sc ii}] and SFR with small scatter \citep[see e.g.,][]{deLooze2011, deLooze2014, Herrera2014}. The physical basis for this relation relies upon [C\,{\sc ii}] being the dominant coolant of the neutral ISM heated by the UV photons from young stars, via the photoelectric process on dust grains \citep{Tielens1985, Wolfire1995}. One issue with the interpretation of the [C\,{\sc ii}]--SFR relation, is that the [C\,{\sc ii}] 158\,\ensuremath{\,\mu\mbox{m}}\ emission line arises from multiple ISM phases. The ionization potential of carbon (11.26\,eV) is less than that of hydrogen (13.6\,eV), such that C$^+$ is also present in the neutral ISM, where the [C\,{\sc ii}] line is typically excited via collisions with atomic hydrogen. At the other extreme, the ionization potential of C$^+$ is sufficiently high (24.4\,eV) that the ion is also found in the ionized ISM, where collisions with electrons dominate the line excitation. In addition, the C$^+$ species can also be found in molecular gas before the atom combines to C\,{\sc i}\ and CO \citep{Wolfire2010}. In each phase, the [C\,{\sc ii}] line has a different sensitivity to the gas density (i.e.~different critical densities), and different relation with the heating radiation, and hence SFR. Not only do the energetics of [C\,{\sc ii}] vary with ISM phase, but the timescale over which it measures SFR varies as well. In the ionized ISM, heating is by the photoionization of hydrogen caused by the extreme UV photons ($ >13.6$\,eV) from O-stars, and thus [C\,{\sc ii}] measures the SFR on timescales shorter than 10\,Myrs. In the neutral ISM, the photoelectric effect on dust that dominates the heating requires only far UV photons ($\gtrsim 6$\,eV). These photons are also emitted by B-stars and thus [C\,{\sc ii}] measures the SFR on much longer timescales. Therefore the phase of the ISM from which the [C\,{\sc ii}] line arises from plays a vital role in the relation between the SFR and [C\,{\sc ii}]. Various studies have addressed the origins of [C\,{\sc ii}] emission. \citet{Croxall2012} presented a pilot study on NGC\,1097 and NGC\,4559 -- two galaxies from the KINGFISH sample \citep[a {\em Herschel} Key program of 61 nearby $\lesssim 30$ Mpc galaxies;][]{Kennicutt2011}. They corrected for the [C\,{\sc ii}] emission from the ionized phase by using the [N\,{\sc ii}] 205\,\ensuremath{\,\mu\mbox{m}}\ and 122\,\ensuremath{\,\mu\mbox{m}}\ lines that arise only from ionized gas (as the ionization potential of nitrogen is 14.53\,eV). They found that the fraction of [CII] emission coming from photodissociation regions (PDRs) in these galaxies ranges from 0.35 up to 0.8 for regions with warm dust, depending on the assumed electron density. Also using the [N\,{\sc ii}] lines, \citet{Cormier2012} found for the starburst galaxy Haro\,11 that $\sim$\,40\% of [C\,{\sc ii}] emission arises from a diffuse low-ionization gas phase, $\sim$\,20\% from a diffuse neutral phase, and associated the remaining $\sim$\,40\% of emission with PDRs. In their observations of M\,51, \citet{Parkin2013} found a similar range of the ionized gas fractions of [C\,{\sc ii}], with values of 0.8, 0.7, 0.5 and 0.5 (with a typical uncertainty $\sim$\,0.2), for the nucleus, center, arm and interarm regions of the galaxy, respectively, based on the [N\,{\sc ii}] 122\,\ensuremath{\,\mu\mbox{m}}\ and 205\,\ensuremath{\,\mu\mbox{m}}\ line observations. Thus observational studies to date suggest $\sim$\,50\% of the [C\,{\sc ii}] line may arise from an ionized gas phase, which agrees with theoretical predictions of 10--50\% from \citet{Abel2006}. Even if the line arises purely from the neutral ISM, there are further complications in relating [C\,{\sc ii}] intensity to SFR. First, depending on the density and temperature of the gas, cooling via other emission lines can become more efficient than via [C\,{\sc ii}] and dominate overall gas cooling \citep[{e.g.~[O\,{\sc i}]\,63\,$\mu$m};][]{Tielens1985}. Second, the [C\,{\sc ii}] line may suffer from optical depth affects in dense gas \citep{Graf2012}. Finally, and most importantly, the efficiency with which UV photons heat the gas by the photoelectric heating effect can vary. The variations might occur due to changes in the dust properties, resulting in variable gas heating for a given UV field, or due to change in hardness of the spectrum, resulting in varying heating efficiency for a given dust volume. Thus, to properly calibrate the relation between [C\,{\sc ii}] and SFR, the origins of the [C\,{\sc ii}] line in galaxies must be understood. Given the observed close relation of the [C\,{\sc ii}] line with the UV radiation field, it was also expected that the line and far-IR continuum emission should be related. However, in several galaxies a deficit of the line in relation to the FIR was observed towards high FIR luminosities \citep{Malhotra2001,Helou2001}. This observed FIR line deficit is thought to be associated with a decrease in the efficiency of the photoelectric heating of the gas in FIR luminous objects, caused by changes in the dust grain properties \citep[see][ for a detailed description]{Luhman2003}. Based on over 240 luminous infrared galaxies (LIRGs), \citet{DiazSantos2013} found that the [C\,{\sc ii}]/FIR ratio decreased with increasing dust temperature. They suggest that this implies that the [C\,{\sc ii}] 158\,\ensuremath{\,\mu\mbox{m}}\ luminosity is not a good indicator of the SFR, as it does not scale linearly with the warm dust emission most likely associated with the youngest stars. \citet{GraciaCarpio2011} showed that not only [C\,{\sc ii}], but also other far-IR lines (e.g. [O\,{\sc i}] 63 and 145\,\ensuremath{\,\mu\mbox{m}}, [O\,{\sc iii}] 88\,\ensuremath{\,\mu\mbox{m}}, [N\,{\sc ii}] 122\,\ensuremath{\,\mu\mbox{m}}, from both neutral and ionized phases) exhibit deficits relative to the FIR emission for 44 local starbursts, Seyfert and LIRGs. Several theories have been proposed to explain the observed FIR line deficit, for example charged grains reducing the photoelectric effect \citep{Malhotra2001, Croxall2012}, or a reduction in the relative number of polycyclic aromatic hydrocarbons (PAHs) reducing the photoelectric efficiency \citep{Helou2001}, or ionization parameter \citep{GraciaCarpio2011}. Yet, our understanding of the underlying causes is still incomplete. Even after including both correction for diffuse cool neutral and ionized contributions to line emissions and TIR, PDR models used by \citet{Croxall2012} could not reproduce the observed line deficits in the two studied galaxies. \citet{GraciaCarpio2011} discuss the importance of the ionization parameter (higher starlight heating rate U can increase FIR and dust temperature relative to emission of lines), but it cannot alone explain the deficit of all observed lines relative to IR continuum. \citet{Croxall2012} argue that dusty H\,{\sc ii}\ regions (elevated dust levels in the ionized gas) are not responsible for line deficits, neither is increased gas density. In contrast to the standard FIR line deficit, \cite{Israel1996} studied bright H\,{\sc ii}\ complexes in LMC and found [C\,{\sc ii}]/FIR to be typically around 1\%, considerably higher than found in Galaxy and in most galactic nuclei \citep[i.e. $\sim0.1-1\%$;][and references therein]{Stacey1991}. \cite{Israel1996} explain these higher [C\,{\sc ii}]/FIR values as the result of the lower metallicity and lower dust-to-gas ratio in the LMC relative to Galactic regions. Nevertheless, when \cite{Rubin2009} revisited LMC with [C\,{\sc ii}] observations by \cite{Mochizuki1994} and FIR by Surveying the Agents of a Galaxy's Evolution \citep[SAGE;][]{Meixner2006, Bernard2008}, they found a flattening of [C\,{\sc ii}] as a function of FIR at the FIR high brightness end, a similar trend observed by \citet{Stacey1991}. One obvious route for understanding the multi-phase origins of [C\,{\sc ii}] is to observe the emission at high spatial resolution to resolve the different phases. This has been the goal of many surveys observing [C\,{\sc ii}] within the Milky Way, such as COBE FIRAS \citep{Wright1991, Bennett1994}, FILM \citep{Makiuti2002} and BICE \citep{Nakagawa1998}. However, these Galactic surveys had to deal with line-of-sight (LOS) confusion along the Galactic disk, making comparisons with stars and other gas tracers difficult. The recent GOT C+ survey \citep{Langer2010, Pineda2013, Langer2014} with Herschel-HIFI was able to limit this confusion using their high velocity resolution observations. By comparing their resolved [C\,{\sc ii}] data with similarly spectrally resolved CO and HI data they were able, to compare scale heights measured by [C\,{\sc ii}], atomic and molecular gas, associate [C\,{\sc ii}] emission to the spiral arms between 4 and 11\,kpc, and estimate CO-dark-H$_2$ fraction of the total molecular gas. Yet, while their study is extremely powerful with [C\,{\sc ii}] measured in $\sim$\,500 sight-lines in the Galactic plane, the large angular scale of the Milky Way meant that it is impossible to fully map [C\,{\sc ii}]. Conversely, in more distant galaxies, multiple ISM phases cannot be separated spatially. Early [C\,{\sc ii}] observations of nearby galaxies were typically galaxy averages, comparable to what is now being done with high redshift galaxies \citep{Stacey1991, Malhotra2001, Walter2009, Stacey2010, GraciaCarpio2011}. Recent nearby galaxy studies are only now reaching kiloparsec resolution \citep{Croxall2012, DiazSantos2013, Parkin2013, Herrera2014}. The Local Group represents an ideal compromise between high spatial resolution to study the correlation of [C\,{\sc ii}] with various ISM phases, and the simple LOS and galaxy-scale coverage necessary to address these questions. However, most of the Local Group galaxies are low metallicity objects \citep[dwarf galaxies:][]{Israel1996, Kim2002, Rubin2009, Israel2011, Lebouteiller2012}, or low-mass \citep[M\,33, HerM33es project,][]{Kramer2010, Braine2012}, which have significantly different ISM characteristics to massive galaxies in which most of the star formation in the Universe occurs at present. The Andromeda galaxy (M\,31) provides an ideal target to explore the origins of [C\,{\sc ii}] as the only massive, star-forming $L_{\star}$ spiral galaxy in the Local Group. Therefore, as part of a project to understand the heating and cooling of the ISM, we have carried out a \emph{Herschel Space Observatory}\footnote{Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.} far-IR and ground-based optical integral field emission line survey of M\,31 (SLIM; the Survey of lines in M\,31). Previous studies of [C\,{\sc ii}] in M\,31 with the \emph{Infrared Space Observatory} targeted the bulge \citep{Mochizuki2000} and a spiral arm on the minor axis \citep{Rodriguez2006}, respectively with far lower effective spatial resolution. The proximity of M\,31 \citep[$\sim$\,780 kpc,][]{Stanek1998} combined with the \emph{Herschel} resolution of 11\arcsec at 158\,\ensuremath{\,\mu\mbox{m}}\ (our limiting resolution), enables the study of ISM tracers at sub-kpc scales ($\sim$\,50\,pc), allowing us to spatially separate star forming from diffuse regions. The large amount of available ancillary data, and the simplicity provided by an external perspective, make M\,31 a unique target for understanding [C\,{\sc ii}] emission. Our survey targeted several different star forming regions across the disk of M\,31, enabling a study of the relative variation of [C\,{\sc ii}] emission (and heating and cooling of the ISM in general) over a wide scope of physical conditions such as stellar surface densities, SFRs, and metallicities. The only caveat is that due to high inclination of M\,31 \citep[$70^{\circ}$;][]{Dalcanton2012}, our data also suffer from some LOS confusion. Nevertheless, our analysis is valid within the known limitations, that [C\,{\sc ii}] emission line: 1) does not spatially resolve individual H\,{\sc ii}\ regions, nor PDRs, 2) is not spectrally resolved ($\sim200$km/s.) The paper is organized as follows: the [C\,{\sc ii}] and ancillary data acquisition is described in \S~\ref{sec:data}, along with further processing details. In \S~\ref{sec:result_cii_sfreg}, we present the spatial decomposition of several ISM and SFR tracers. In \S~\ref{sec:result_cii_sfr} we test the calibration of [C\,{\sc ii}] to SFR relation at high spatial resolution, and we compare to existing studies on different spatial scales. In \S~\ref{sec:fir_def}, we investigate the FIR line deficit and its relation to the ability of [C\,{\sc ii}] to track SFR. Finally, we discuss our results in \S~\ref{sec:disc}, and present our conclusions in \S~\ref{sec:concl}. \section{Data} \label{sec:data} M\,31 is the most massive external galaxy in the Local Group. It is a highly inclined \citep[i.e., 70$^{\circ}$;][and references therein]{Dalcanton2012} spiral galaxy classified as SA(s)b (see Tab.~\ref{tab:m31}) and presents ring-like structure. Due to its proximity \citep[$\sim$\,780\,kpc;][]{Stanek1998}, it is possible to reach a high spatial resolution with \emph{Herschel} ($\sim$\,50\,pc at 160\,\ensuremath{\,\mu\mbox{m}}). \begin{deluxetable}{cc} \tablewidth{0.3\textwidth} \tablecolumns{2} \tablecaption{M\,31 information} \tablehead{ } \startdata Nucleus position\tablenotemark{a} & $00^h42^m44.^s35$ \\ (J2000) & $+ 41^{\circ}16\arcmin08\farcs60$ \\ Inclination\tablenotemark{b} & 70$^{\circ}$ \\ Position angle\tablenotemark{b} & 43.2$^{\circ}$ \\ Distance\tablenotemark{c} & $780 \pm 40$\,kpc \\ Morphological type\tablenotemark{a} & SA(s)b \\ SFR\tablenotemark{d} & 0.4\,M$_{\odot}$/yr \enddata \label{tab:m31} \tablenotetext{a}{Based on NED data and references therein} \tablenotetext{b}{\citet{Dalcanton2012}} \tablenotetext{c}{\citet{Stanek1998}} \tablenotetext{d}{\citet{Barmby}} \end{deluxetable} The measured SFR in M\,31 is low, $\sim$\,0.4\,M$_{\odot}$ yr$^{-1}$, over the last 100\,Myr \citep{Barmby}, and is concentrated mostly in the spiral arms and rings. In addition, a large diffuse fraction of H$\alpha$ emission is also observed \citep{Walterbos1994,Azimlu2011}. We carried out a far-IR and optical survey of interstellar emission lines in M\,31 (SLIM; the Survey of lines in M\,31; PI K. Sandstrom) to study the cooling emission from a variety of ISM phases. The survey consists of five 3\arcmin$\times$3$\arcmin$ ($\sim$700$\times$700\,pc) Fields with Herschel PACS spectroscopy and optical integral field spectroscopy. This line survey is complemented with infrared photometry from {\em Herschel} and {\em Spitzer}. We describe the data we used in our present analysis in the following subsections. \subsection{{\em Herschel} PACS spectroscopy} \label{sec:data_pacs} \begin{figure*}[!htp] \begin{center} \includegraphics[width=1.\textwidth]{fig1.pdf} \caption{Position and orientation of the five Fields targeted in this study overlaid on a RGB image (blue - Galex FUV, green - MIPS 24\,\ensuremath{\,\mu\mbox{m}}\ and red - H$\alpha$), region numbers are labeled and the orientation is shown in the upper left. A scale bar indicates 1\,kpc (4.4\arcmin).} \label{fig:coverage} \end{center} \end{figure*} \begin{deluxetable}{ccccc} \tablewidth{0pt} \tablecolumns{5} \tablecaption{Coordinates of Field centers} \tablehead{\multicolumn{1}{c}{F} & \multicolumn{1}{c}{R.A.} & \multicolumn{1}{c}{Dec.} & \multicolumn{1}{c}{P.A.} & \multicolumn{1}{c}{Observation\tablenotemark{a}} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{(J2000)} & \multicolumn{1}{c}{(J2000)} & \multicolumn{1}{c}{[$^{\circ}$]} & \multicolumn{1}{c}{ID} } \startdata 1 & $00^h46^m29.^s17$ & $+42^{\circ}11\arcmin30\farcs89$ & 70.7 & 1342236285 \\ 2 & $00^h45^m34.^s79$ & $+41^{\circ}58\arcmin28\farcs54$ & 55.7 & 1342238390 \\ 3 & $00^h44^m36.^s49$ & $+41^{\circ}52\arcmin54\farcs21$ & 55.0 & 1342238391 \\ 4 & $00^h44^m59.^s26$ & $+41^{\circ}55\arcmin10\farcs47$ & 51.0 & 1342238726 \\ 5 & $00^h44^m28.^s76$ & $+41^{\circ}36\arcmin58\farcs91$ & 63.0 & 1342237597 \enddata \label{tab:obs} \tablenotetext{a}{[C\,{\sc ii}] \& [O\,{\sc i}] observations acquired between 28$^{\rm th}$ Feb and 1$^{\rm st}$ Mar, 2012} \end{deluxetable} Observations of the [C\,{\sc ii}] 158\,\ensuremath{\,\mu\mbox{m}}, [O\,{\sc i}] 63\,\ensuremath{\,\mu\mbox{m}}, and [N\,{\sc ii}] 122\,\ensuremath{\,\mu\mbox{m}}\ spectral lines were carried out using the unchopped mapping mode on the Photodetector Array Camera and Spectrometer \citep[PACS;][]{Poglitsch2010} on board of the ESA {\em Herschel} Space Observatory \citep{Pilbratt2010} for a total of 47.1 hours of observing time (OT1\_ksandstr\_1; PI K. Sandstrom). Due to the large angular extent of M\,31, galaxy emission fell within all available chopper-throws. Therefore we used the unchopped mode with an off position defined to be well outside of the body of M\,31, and visited 3 times during each AOR. The [C\,{\sc ii}] 158\,\ensuremath{\,\mu\mbox{m}}\ and [O\,{\sc i}] 63\,\ensuremath{\,\mu\mbox{m}}\ lines were mapped in five Fields of $3\arcmin$ by $3\arcmin$ (700\,pc $\times$ 700\,pc) using rastering in the instrument reference frame with 37\farcs5 and 23\farcs5 steps. We choose $3\arcmin$ by $3\arcmin$ Field sizes: (1) to cover the scale over which energy from star-forming regions should be deposited in the ISM and (2) to match the approximate size of a resolution element in the KINGFISH [C\,{\sc ii}] maps at the sample's average distance of 12\,Mpc. The five Fields probe different physical conditions along M\,31's major axis, sampling different H$\alpha$, FUV, and 24\,\ensuremath{\,\mu\mbox{m}}\ surface brightnesses and atomic to molecular gas ratios, while still focusing on regions of active star-formation. The Fields are shown in Figure~\ref{fig:coverage} and are tabulated in Table~\ref{tab:obs}. All Fields lie on the NE major axis of the galaxy as this side is covered by the Pan-chromatic Hubble Andromeda Treasury program \citep[PHAT;][]{Dalcanton2012}, providing a large, high-resolution, UV to NIR ancillary dataset. We label these Fields F1 (outermost) through F5 (innermost), though we note that this order is not entirely radial as F3 is at a slightly smaller galactocentric radius than F4. F1 covers a star-forming region in the outer spiral arm at $\sim$\,16\,kpc, while Fields F2 to F4 fall in the star forming ring at $\sim$\,10\,kpc, and F5 covers a region in the inner arm at $\sim$\,7\,kpc, from the center of the galaxy, respectively. The [N\,{\sc ii}] 122\,\ensuremath{\,\mu\mbox{m}}\ line was observed only in six smaller maps of $1\arcmin \times 1\arcmin$ that targeted the brightest H\,{\sc ii}\ regions in the [C\,{\sc ii}] Fields (2 in F1, 1 in F2, 2 in F3 and 1 in F4). Unfortunately, the [N\,{\sc ii}] line was found to be very weak and was detected only in very few spaxels. Similarly, though the [O\,{\sc i}] line was significantly detected in several regions in all Fields, it was found to be weaker than [C\,{\sc ii}] in all reliably detected regions, with typical values of [O\,{\sc i}]/[C\,{\sc ii}]\,$\sim 0.46$. We find these reliable detections only in the brightest regions (SF). We expect [O\,{\sc i}]/[C\,{\sc ii}] to be even less in the diffuse regions \citep[see Figure~9.2 in][]{Tielens2005}. In the following, we focus the analysis mostly on the [C\,{\sc ii}] 158\,\ensuremath{\,\mu\mbox{m}}\ line and leave a discussion of the [O\,{\sc i}] and [N\,{\sc ii}] lines for a future paper (Kapala et al. 2015 in prep). The [C\,{\sc ii}] data was reduced using the {\em Herschel} Interactive Processing Environment (HIPE) version 8.0 \citep{Ott2010}. Reductions applied the standard spectral response functions, flat field corrections, and flagged instrument artifacts and bad pixels \citep[see ][]{Poglitsch2010}. The dark current, determined from each individual observation, was subtracted during processing because it was not removed via chopping. Transient removal was performed using the surrounding continuum, as described in \citet{Croxall2012}. In-flight flux calibrations were applied to the data. After drizzling in the HIPE pipeline, the [C\,{\sc ii}] 158\ensuremath{\,\mu\mbox{m}}\ (FWHM\,$\sim$\,11\farcs0) line was integrated in velocity to produce maps with 2\farcs6 pixels. For the analysis, we rebinned the maps to have approximately half-beam spaced pixel size (5\farcs2). The final integrated intensity maps of the [C\,{\sc ii}] 158\,\ensuremath{\,\mu\mbox{m}}\ emission line for each Field are shown in the right side in Figures~\ref{fig:ha_cii_maps} and~\ref{fig:ha_cii_maps2}. The details of the observations; coordinates, AOR numbers, are listed in Table~\ref{tab:obs}. The mean 1-$\sigma$ [C\,{\sc ii}] surface brightness sensitivity of the line integrated intensity in all pixels in the overlapping regions of the [C\,{\sc ii}] and H$\alpha$ Fields is $1.46 \times 10^{38} \ {\rm erg \ s^{-1} kpc^{-2}}$ with standard deviation $5.51 \times 10^{37} \ {\rm erg \ s^{-1} kpc^{-2}}$. Note that individual points might have values below that average limit. That is because for an application of the significance cuts in \S~\ref{sec:result}, we use each pixel's 1-$\sigma$ noise measured from its spectrum, which accounts for the goodness of the line fit and PACS scanning flaws, not only instrumental sensitivity limit. The absolute [C\,{\sc ii}] flux calibration uncertainty is $\sim$\,30\%\footnote{http://herschel.esac.esa.int/twiki/pub/Public/PacsCalibration\\Web/PacsSpectroscopyPerformanceAndCalibration\_v2\_4.pdf}. We visually inspected the spectral cubes in low S/N regions and find that the quoted uncertainties on the line flux are reasonable. \begin{figure*}[ht!] \begin{center} \includegraphics[width=.9\textwidth]{figure2.pdf} \caption{{\bf Left column:} H$\alpha$ emission from SF regions in Fields 1 to 3, convolved to match [C\,{\sc ii}] resolution. Black circles indicate the H\,{\sc ii}\ regions defined in the \citet{Azimlu2011} catalog, with the radius of the circles set to the FWHM of the H\,{\sc ii}\ regions. White contour level indicates chosen level L$_0$ that delineates SF regions, two grey ones are ${\rm L_0 \pm 30\% \, L_0}$. Black boxes show [C\,{\sc ii}] pointings. The [C\,{\sc ii}] beam size (11.0\arcsec) and scale bar (66.1\arcsec) are indicated in the bottom left corner. {\bf Right column:} {\em Herschel} PACS [C\,{\sc ii}] 158\,\ensuremath{\,\mu\mbox{m}}\ maps of Fields 1 to 3 with L$_0$ H$\alpha$ contour overlaid. All maps have a common linear color scale given at the bottom of the figure. Note that only pixels present in both [C\,{\sc ii}] and H$\alpha$ maps are used in our analysis.} \label{fig:ha_cii_maps} \end{center} \end{figure*} \begin{figure*}[ht!] \begin{center} \includegraphics[width=.9\textwidth]{figure3.pdf} \caption{Same as Figure~\ref{fig:ha_cii_maps} for Fields 4 and 5.} \label{fig:ha_cii_maps2} \end{center} \end{figure*} \subsection{PPAK IFS} \label{sec:data_ppak} We obtained optical integral field spectroscopy (IFS) covering the same five Fields as the PACS spectral maps (PI K. Sandstrom), over nine nights in September 2011 using the Calar Alto 3.5m telescope with the PMAS instrument in PPAK mode with the V300 grating \citep{Roth2005, Kelz2006}. This setup provides 331 science fibers, each 2\farcs68 in diameter, that sample a spectral range from 3700--7000\,\AA\ with $\sim$\,200 km s$^{-1}$ instrumental resolution and hexagonally tile a $\sim$\,1\arcmin field of view. Our observation and reduction procedures follow very closely those outlined in \citet{Kreckel2013}, and we summarize here only the key steps and variations from that description. Each of the five Fields were mosaicked with 10 PPAK pointings. To completely recover the flux and fill in gaps between the fibers, each pointing was observed in three dither positions. We reduced these nearly 50,000 spectra using the {\tt p3d} software package \citep{Sandin2010}. All frames are bias subtracted, flat-field corrected and wavelength calibrated using standard calibration observations. Frames are cleaned of cosmic rays using the L. A Cosmic technique \citep{vanDokkum2001} as adapted within {\tt p3d}. Spectra are extracted using a modified optimal extraction method that simultaneously fits all line profiles with a Gaussian function \citep{Horne1986}. Relative flux calibration is applied using one of two standard stars observed during the night, where we choose the star that appears most centered within a single fiber. As M\,31 is quite extended on the sky, separate sky pointings were obtained and a best fit linear combination from the set of sky pointings observed that night are used to optimally subtract the sky emission from the science spectra. Seeing was sub-fiber ($<$3\arcsec) and astrometry for each mosaic position has been applied through comparison by eye of features in our H$\alpha$ maps with Local Group Galaxies Survey H$\alpha$ images \citep{Azimlu2011}. From comparison of stellar sources within the PPAK data and SDSS \citep{Aihara2011} broadband images we estimate our astrometry is accurate to within 1\arcsec, sufficient for comparison with the lower resolution PACS images. All observations were taken at airmass below 2, with nearly all below 1.5, so we neglect the effects of differential atmospheric refraction. Conditions were not consistently photometric, so we have re-calibrated the flux scaling of each dither position and scaled it to the continuum subtracted Local Group Galaxies Survey H$\alpha$ images \citep{Azimlu2011}. We expect our relative flux calibration to be accurate to within 5\%, however, we allow for a larger systematic uncertainty of 20\%. \begin{figure*}[!ht] \begin{center} \includegraphics[width=1.\textwidth]{figure4.pdf} \caption{PPAK IFS data in Field 1. Left panel - H$\alpha$ map at the native resolution, right panel - spectrum in the pixel marked as black circle (RA(J2000) $= 00^h46^m34.^s52$, Dec.(J2000) $= +42^{\circ}11\arcmin43\farcs82$).} \label{fig:ha_spec} \end{center} \end{figure*} The H$\alpha$ line was measured in each spectrum using the GANDALF package \citep{Sarzi2006}, which employs penalized pixel fitting \citep[{\tt pPXF};][]{Cappellari2004} to simultaneously fit both stellar continuum templates and gaussian emission line profiles. This allows us to deblend the contribution of [N\,{\sc ii}] from the H$\alpha$ emission line, as well as to correct for underlying stellar absorption. We use here simple stellar population (SSP) template spectra from the \citet{Tremonti2004} library of \citet{Bruzual2003} templates for a range of 10 ages (5 Myr - 11 Gyr) and two metallicities (1/5 and 1 solar), though using stellar templates selected from the MILES library \citep{MILES2011} does not significantly change our measured line fluxes. These H$\alpha$ line fluxes are interpolated onto a regular grid using Delaunay linear triangulation, resulting in images with $\sim$\,2\farcs5 resolution that reach 3$\sigma$ H$\alpha$ surface brightness sensitivities of ${\rm 2 \times 10^{-16}\ erg\ s^{-1} cm^{ -2} arcsec^{-2}}$. As in the case of [C\,{\sc ii}] data, one of the product of the reduction pipeline is noise map. The H$\alpha$ noise depends not only on the line strength, but also on the observing conditions. Due to atmospheric variations, we compare the values between the individual pointings. This approach returns median noise ${\rm 3.60\times10^{37} \ erg \ s^{-1} kpc^{-2}}$ and standard deviation ${\rm 3.15\times10^{37}\ erg \ s^{-1} kpc^{-2}}$. Figure~\ref{fig:ha_spec} shows the map for Field 1, as well as an extracted spectrum. \subsection{{\em Spitzer} \& {\em Herschel} Photometry} \label{sec:data_ir} We make use of the data from several infrared surveys of M\,31 using {\em Herschel} and {\em Spitzer}. M\,31 was observed with the Multiband Imaging Photometer \citep[MIPS;][]{Rieke2004} on board of the {\em Spitzer} Space Telescope \citep{Werner2004} by \citet{Gordon2006} and the InfraRed Array Camera \citep[IRAC;][]{Fazio2004} by \citet{Barmby}. In addition, M\,31 was observed by the {\em Herschel} Space Observatory (PI O. Krause) using the PACS and SPIRE instruments \citep[Spectral and Photometric Imaging Receiver]{Griffin2010}. For details of the PACS and SPIRE observations and processing see \citet{Groves2012} or \citet{Draine2014}. PACS images in all photometric bands (70, 100 and 160\,\ensuremath{\,\mu\mbox{m}}) were obtained in slow parallel mode, with a final image size of $\sim$\,3$^{\circ} \times 1^{\circ}$ aligned with the position angle of M\,31's major axis. This covers M\,31's full disk including the 16\,kpc ring. All images were reduced to level one using HIPE v6.0, and then SCANAMORPHOS v12.0 \citep{Roussel2012} was used to produce the final product. The FWHM is 5\farcs6 for 70\,\ensuremath{\,\mu\mbox{m}}, 6\farcs8 for PACS 100\,\ensuremath{\,\mu\mbox{m}}\ and 11\farcs4 for PACS 160\,\ensuremath{\,\mu\mbox{m}}. Beam sizes are taken from PACS Observer's Manual\footnote{${\rm http://herschel.esac.esa.int/Docs/PACS/html/pacs\_om.html}$} for 20\arcsec s$^{-1}$ scans. 1\arcsec pixel size was used for all PACS bands. For the {\emph Spitzer} bands we considered, the IRAC 8\,\ensuremath{\,\mu\mbox{m}}\ has a FWHM=2\farcs0 and the MIPS 24\,\ensuremath{\,\mu\mbox{m}}\ has a FWHM=6\farcs5. To remove the foreground emission from the Milky Way and any other foregrounds or backgrounds, we measured the median surface brightness in regions on the edges of the map, away from the main body of M\,31. These values showed no clear gradient, so a uniform background was subtracted from each image. The determined backgrounds were: 2.40 MJy ${\rm sr}^{-1} (8\,\ensuremath{\,\mu\mbox{m}})$, -0.0043 MJy ${\rm sr}^{-1} (24\,\ensuremath{\,\mu\mbox{m}})$, 3.17 MJy ${\rm sr}^{-1} (70\,\ensuremath{\,\mu\mbox{m}})$, 3.23 MJy ${\rm sr}^{-1} (100\,\ensuremath{\,\mu\mbox{m}})$ and 2.29 MJy ${\rm sr}^{-1} (160\,\ensuremath{\,\mu\mbox{m}})$. \subsection{Further processing} \label{sec:data_proc} All maps were convolved to match the PACS 160\,\ensuremath{\,\mu\mbox{m}}\ resolution of 11\farcs0, using the \citet{Astropy2013} package {\tt convolve\_fft} and kernels from \citet{Aniano2011} for each specific filter: IRAC 8\,\ensuremath{\,\mu\mbox{m}}, MIPS 24\,\ensuremath{\,\mu\mbox{m}}, PACS 70\,\ensuremath{\,\mu\mbox{m}}, PACS 100\,\ensuremath{\,\mu\mbox{m}}, and PACS 160\,\ensuremath{\,\mu\mbox{m}}. The latter kernel was also used for the [C\,{\sc ii}] map. For the H$\alpha$ images we assumed an intrinsic Gaussian PSF with a FWHM of 2\farcs5 \citep[see][]{Kreckel2013}, and used the corresponding convolution kernel. The convolved maps were all resampled to match the [C\,{\sc ii}] pixel size of 5\farcs2 using the {\tt Montage} \citep{Jacob2010} Python wrapper\footnote{http://www.astropy.org/montage-wrapper}. For any direct comparison, the units of the images were converted to erg s$^{-1}$ cm$^{-2}$ sr$^{-1}$ or erg s$^{-1}$ kpc$^{-2}$. \subsection{Derived Quantities} \label{sec:quan} We use the \citet{Calzetti2007} formula to calculate SFR surface density from a linear combination of H$\alpha$ and 24\,\ensuremath{\,\mu\mbox{m}}\ emission, which includes a correction for absorbed emission that is important when young stars are embedded in their natal clouds. At the physical resolution we reached in this work, we begin to resolve out the H\,{\sc ii}\ regions and diffuse gas, such that any SFR calibration based on linear combination of star-forming tracers no longer strictly holds \citep{Leroy2012, Simones2014}. Furthermore, SFR measurements at 50\,pc scales are problematic using indirect tracers like H$\alpha$ and dust emission for many reasons, including sampling of the stellar population \citep[others include drift of stars between pixels, star formation history, and others outlined in ][]{KennicuttEvans}. For the purposes of our work, we are primarily interested in the relative spatial distribution of these SFR tracers and not an accurate measurement of the 50\,pc scale SFR. \begin{equation} \begin{multlined} \rm{\Sigma_{SFR}} \ \rm{[ M_{\odot} yr^{-1} kpc^{-2}] =} \\ ( 634 \ I_{\rm H\alpha } + 19.65 \ I_{\rm 24\mu m} ) \ \rm{[ erg \ s^{-1} cm^{-2}sr^{-1}]} \end{multlined} \label{eq:sfr} \end{equation} This calibration assumes a constant SFR for 100\,Myr, solar metallicity, and the stellar population models and Kroupa initial mass function (IMF) from Starburst99 \citep[2005 update;][]{Leitherer1999}. In reality, the SFR in M\,31 has not been constant over the last 100\,Myr, with the star formation histories based on Hubble stellar photometry showing significant variation in this timescale, and across the disk in M\,31 \citep{Lewis2014}. Nevertheless, as H$\alpha$ is the dominant contributor to the SFR shown for most of the points in Figure~\ref{fig:ciisfr50}, therefore the effect of the variation in the star formation history to this calibration is limited. The average SFR surface densities, listed in Table~\ref{tab:sfr}, range between $2-7\times10^{-3}$ M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$, the estimated median uncertainty is $4.2\times10^{-4}$ M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$ with the standard deviation $3.3\times10^{-4}$ M$_{\odot}$ yr$^{-1}$ kpc$^{-2}$. \begin{deluxetable}{cccccc} \tablewidth{0pt} \tablecolumns{6} \tablecaption{The average [C\,{\sc ii}] emission, $\Sigma_{\rm SFR}$ and metallicity of the Fields} \tablehead{\multicolumn{1}{c}{Field} & \multicolumn{1}{c}{R\tablenotemark{a}} & \multicolumn{1}{c}{R} & \multicolumn{1}{c}{[C\,{\sc ii}]$_{\rm TOT}$\tablenotemark{b}} & \multicolumn{1}{c}{${\rm \Sigma_{SFR}}$\tablenotemark{c}} & \multicolumn{1}{c}{[O/H]\tablenotemark{d}} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{[kpc]} & \multicolumn{1}{c}{[$^{\circ}$]} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} } \startdata 1 & 16.03 & 1.177 & 8.26 & 4.740 & 0.774 \\ 2 & 12.28 & 0.902 & 5.81 & 1.735 & 0.943 \\ 3 & 11.31 & 0.831 & 17.01 & 7.136 & 0.993 \\ 4 & 11.45 & 0.841 & 7.93 & 2.916 & 0.985 \\ 5 & 6.86 & 0.504 & 4.17 & 1.944 & 1.361 \enddata \label{tab:sfr} \tablenotetext{a}{Galactocentric radius} \tablenotetext{b}{Average Field [C\,{\sc ii}] surf. brightness in ${\rm [ 10^{38} \ erg\ s^{-1} kpc^{-2}] }$} \tablenotetext{c}{Average Field SFR surface density in $[10^{-3} {\rm \ M_{\odot}yr^{-1}kpc^{-2}}]$} \tablenotetext{d}{Metallicity relative to solar with ${\rm \log(O/H)_{\odot}=-3.31}$} \end{deluxetable} The average gas phase metallicity for each Field is calculated from Equation 7 from \citet{Draine2014} and listed below, using the deprojected radial distance of the Field center. The \citet{Draine2014} formula is equivalent to the ``direct T$_e$-based method'' to derive metallicity gradient measured by \citet{Zurita2012} in range of radii where both have measurements. \citet{Zurita2012} use auroral optical line' ratios in H\,{\sc ii}\ regions (for $12+\log(O/H)_{\odot}=8.69$), while \citet{Draine2014} use the dust-to-gas ratio as a metallicity proxy: \begin{equation} \frac{(O/H)}{(O/H)_{\odot} } \approx \left\{ \begin{array}{l l} 1.8 \exp(-R/19\text{kpc}) & \text{for } R<8 \text{kpc} \\ 3.08 \exp(-R/8.4\text{kpc}) & \text{for } 8<R<18 \text{kpc.} \nonumber \end{array} \right.\ \end{equation} The metallicity ranges from 0.8 to 1.4 Z$_{\odot}$, and the individual results per Field are listed in Table~\ref{tab:sfr}. We calculate the total infrared emission (TIR) using a linear combination of IRAC 8 and 24\,\ensuremath{\,\mu\mbox{m}}, PACS 70 and 160\,\ensuremath{\,\mu\mbox{m}}\ (all in units of $\rm{erg \ s^{-1} cm^{-2}sr^{-1}}$), as described in \citet{DraineLi2007}: \begin{equation} TIR = 0.95 (\nu I_{\nu})_8 + 1.15(\nu I_{\nu})_{24} + (\nu I_{\nu})_{70} + (\nu I_{\nu})_{160}, \nonumber \end{equation} where the worst-case error in estimating TIR is $\sim$\,30\%, and dominated by noise and calibration uncertainties of its components. This TIR calibration is identical to what was assumed by \citet{Croxall2012}, whose results we compare in \S~\ref{sec:fir_def700}. \section{Results} \label{sec:result} \subsection{[C\,{\sc ii}] from SF Regions} \label{sec:result_cii_sfreg} As discussed above, the [C\,{\sc ii}] emission comes from multiple phases of the ISM, which includes H\,{\sc ii}\ regions and their bordering PDRs, as well as the diffuse ISM (cold neutral medium -- CNM, warm neutral medium -- WNM, and warm ionized medium -- WIM). Our [C\,{\sc ii}] maps have a resolution of $\sim$\,50\,pc. Individual H\,{\sc ii}\ regions in our Fields have typical sizes of $\sim$\,20--30 pc in diameter \citep{Azimlu2011} and PDRs give an additional $\sim$\,0.3--3\,pc layer, assuming carbon is ionized up to A$_V \sim 5$ \citep{Tielens2005} into the PDR for a typical density $n_H \sim 10^3-10^4 \ {\rm cm^{-3}}$. Therefore, H\,{\sc ii}\ regions and PDRs are blended together at our working resolution. Hereafter, we refer to these ``blended'' H\,{\sc ii}\ regions and PDRs as star-forming (SF) regions. Although we cannot separate H\,{\sc ii}\ regions and PDRs in this study, we can investigate the fraction of [C\,{\sc ii}] arising from the diffuse ISM versus SF regions. In the following Section, we describe how we delineate SF regions using our H$\ alpha$ maps. \subsubsection{Delineating SF Regions} \label{sec:result_reg} H\,{\sc ii}\ regions in M\,31 can be identified and resolved \citep{Azimlu2011} using H$\alpha$ emission. Because of the much lower resolution of the [C\,{\sc ii}] map, multiple H\,{\sc ii}\ regions might reside in a single pixel. Therefore, we use the H\,{\sc ii}\ region catalogs as a guide to define contours in H$\alpha$ emission, at a resolution matched to the [C\,{\sc ii}] maps, that enclose most of the massive star formation. H$\alpha$ emission was previously used to distinguish between H\,{\sc ii}\ regions and diffuse media for distinguishing the origin of [C\,{\sc ii}] in LMC by both \citet{Kim2002} \citep[following work of ][]{Kennicutt1986} with H$\alpha$ contour $5.09\times10^{39} \ {\rm \ erg\ s^{-1} kpc^{-2} }$, and by \citet{Rubin2009} with $1.20\times10^{39} \ {\rm \ erg\ s^{-1} kpc^{-2} }$. H$\alpha$ emission from H\,{\sc ii}\ regions in M\,31 is relatively modest, compared to other Local Group galaxies. \citet{Azimlu2011} report a total H$\alpha$ luminosity coming from H\,{\sc ii}\ regions in M\,31 $\sim$\,$1.77 \times 10^{40}$ erg s$^{-1}$, which is comparable to the luminosity of the 30 Doradus complex in the LMC alone $\sim$\,$1.5 \times 10^{40}$ erg s$^{-1}$ \citep{Kennicutt1984}. On average, \citet{Azimlu2011} deduced a $\sim$\,63\% diffuse ionized gas contribution from spatially separating H\,{\sc ii}\ regions from the surrounding emission. \citet{Walterbos1994} used [S\,{\sc ii}]/H$\alpha$ ratios to identify diffuse ionized gas and obtained a $\sim$\,40\% contribution. To start, we convolve our PPAK H$\alpha$ maps to the [C\,{\sc ii}] resolution. Then, we use the H\,{\sc ii}\ region catalog from the Local Group Survey \citep[LGS;][]{Azimlu2011} to identify the location and half-light radii of the H\,{\sc ii}\ regions. We use the LGS catalog rather than our PPAK maps because of the 2 times higher spatial resolution of their imaging, which allows for better identification of H\,{\sc ii}\ regions. Nevertheless, for the remaining analysis, we use our PPAK H$\alpha$ maps rather than the narrowband imaging from LGS for the following reasons: (1) [N\,{\sc ii}] 6548 and 6583\,\AA \ are blended with H$\alpha$ in the narrow band imaging (see Figure~\ref{fig:ha_spec}) and variations in the H$\alpha$/[N\,{\sc ii}] ratio make correcting for blending difficult and (2) removal of stellar continuum is significantly easier from the PPAK data compared to the narrowband imaging. We then define a surface brightness threshold in the convolved H$\alpha$ map above which the majority of the H\,{\sc ii}\ regions identified by \citet{Azimlu2011} are enclosed. We display the contour at the selected surface brightness threshold, $L_0= 4.19\times 10^{38}{\rm \ erg\ s^{-1} kpc^{-2} }$, in Figures \ref{fig:ha_cii_maps} and \ref{fig:ha_cii_maps2}. Our H$\alpha$ threshold is lower than the values used by \citet{Kennicutt1986,Kim2002} and \citet{Rubin2009}, as we wished to definitively encompass both the H\,{\sc ii}\ regions and their associated PDRS in our ``SF regions''. Table~\ref{tab:sfregions} lists the fraction of H\,{\sc ii}\ regions enclosed by this contour, along with the areal fraction enclosed, for each Field. In Field 5 several very faint H\,{\sc ii}\ regions lie outside the contour, but these regions make a negligible contribution to the overall H$\alpha$ emission. Finally, we use this contour and calculate the fractions of the pixel area within the boundary. ``Diffuse'' regions are defined as lying outside the H$\alpha$ contour. \begin{deluxetable*}{ccccccc} \tablewidth{0.68\textwidth} \tablecolumns{7} \tablecaption{Ratios of fraction of emission of the ISM tracers coming from SF regions over total emission per field, for a H$\alpha$ contour $L_0= 4.19\times 10^{38}{\rm \ erg\ s^{-1} kpc^{-2} }$ defining SF region} \tablehead{\multicolumn{1}{c}{Field} & \multicolumn{1}{c}{$\rm{\frac{H\alpha_{SF}}{H\alpha_{TOT}}}$} & \multicolumn{1}{c}{$\rm{\frac{[CII]_{SF}}{[CII]_{TOT}}}$ } & \multicolumn{1}{c}{$\rm{\frac{M24_{SF}\tablenotemark{a}}{M24_{TOT}}}$} & \multicolumn{1}{c}{$\rm{\frac{TIR_{SF}}{TIR_{TOT}}}$} & \multicolumn{1}{c}{$\frac{A_{SF}\tablenotemark{b}}{A_{TOT}}$} & \multicolumn{1}{c}{$\frac{N_{HII}(SF)\tablenotemark{c}}{N_{TOT}(F)}$} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{} } \startdata 1 & 82.57 & 63.04 & 70.31 & 34.72 & 64.41 & 40/41 \\ 2 & 44.52 & 20.15 & 14.40 & 7.04 & 13.18 & 17/20 \\ 3 & 82.35 & 80.49 & 79.63 & 60.02 & 77.53 & 39/41 \\ 4 & 56.23 & 30.97 & 26.40 & 13.47 & 23.07 & 20/25 \\ 5 & 21.90 & 12.72 & 7.85 & 4.33 & 6.67 & 4/11 \enddata \label{tab:sfregions} \tablenotetext{a}{MIPS 24\,\ensuremath{\,\mu\mbox{m}}} \tablenotetext{b}{SF region areal fraction of Field} \tablenotetext{c}{The number of H\,{\sc ii}\ regions enclosed by SF regions to the total number of H\,{\sc ii}\ regions in the Field} \end{deluxetable*} We apply the above procedure to calculate the fraction of our tracers (H$\alpha$, [C\,{\sc ii}], 24\,\ensuremath{\,\mu\mbox{m}}\, TIR) spatially associated with SF regions. To estimate the uncertainties on this fraction, we moved the contour level by $\pm30$\% of the chosen H$\alpha$ surface brightness and recalculate the fractions. We find that $\pm30$\% defines a reasonable range of potential contours surrounding the H\,{\sc ii}\ regions as can be seen in Figure~\ref{fig:ha_cii_maps}. The uncertainty in defining the contour is the dominant component of the total uncertainty on the diffuse fractions since calibration uncertainties divide out and the S/N is high. We note that the majority of the following analysis compares the relative concentration of H$\alpha$, [C\,{\sc ii}], 24\,\ensuremath{\,\mu\mbox{m}}\ and TIR emission in SF regions, rather than focusing on the absolute value of the fraction. For this reason, our results are not sensitive to the exact definition of the contour level, as it mostly represents a fiducial level for comparing the extent of the various SFR tracers. We note that the absolute values of the fractions are sensitive as well to the size of the maps, as diffuse emission might extend beyond the limits of the maps. However, this issue do not detract from our results, because our Field sizes are representative of individual resolution elements in nearby galaxy surveys, therefore can be directly compared. \subsubsection{Fraction of [C\,{\sc ii}] and other tracers from SF Regions} \label{sec:result_frac} To determine the emission fraction from ``SF regions'', ${\rm I_{SF}/I_{TOT}}$, we integrated the emission from each tracer within the contours and divided from the total emission in the map. The total emission only considered the area that had coverage in all relevant maps. In Figure~\ref{fig:fracvsrad}, we plot these estimates as a function of galactocentric radius. \begin{figure}[!ht] \begin{center} \includegraphics[width=.5\textwidth]{figure5.pdf} \caption{Emission of H$\alpha$, [C\,{\sc ii}], 24\ensuremath{\,\mu\mbox{m}}\ and TIR from the SF region vs. the total emission in a given Field as a function of galactocentric radius. Error bars translate uncertainties in the H$\alpha$ contour level used to define ``SF regions'' (see \S~\ref{sec:result_reg}). Note, that if a 30\% higher H$\alpha$ level is picked, points of all tracers for a given Field will systematically shift down to the value indicated by a respective error bar.} \label{fig:fracvsrad} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=.5\textwidth]{figure6.pdf} \caption{Same emission fraction as in Figure \ref{fig:fracvsrad}, but instead of radial distance, fractions are plotted as a function of star formation rate surface density.} \label{fig:frac} \end{center} \end{figure} The fractions arising from SF regions varies systematically with the choice of tracer. H$\alpha$ always has the highest fraction of its emission arising from SF regions, as one might expect due to the regions' definition. The fractions of H$\alpha$ from SF regions range between $\sim$\,20--80\%. Previous studies have identified a large component of diffuse H$\alpha$ emission from M\,31 \citep[$\sim$\,63\%,][]{Azimlu2011} and we recover a similar value in Field 4. On average our H$\alpha$ diffuse fraction is lower than 63\%, as our Fields are biased to high SFR surface densities by selection. In general, TIR and 24\,\ensuremath{\,\mu\mbox{m}}\ have similar fractions of their emission coming from SF regions, as shown in Figure~\ref{fig:fracvsrad}. In most of the cases, they also have the lowest SF region fractions compared to the other tracers. The fraction of [C\,{\sc ii}] from SF regions is intermediate between that of the IR tracers (TIR and 24\,\ensuremath{\,\mu\mbox{m}}) and H$\alpha$, except in F1. In all of the cases [C\,{\sc ii}] is closer to the IR than the H$\alpha$, aside from in F3 where all tracers give approximately the same fractions. It is somewhat surprising that the 24\,\ensuremath{\,\mu\mbox{m}}\ emission behaves similarly to TIR, since we expect it to trace warm dust dominantly heated by young stars whereas TIR traces all dust emission. We discuss this finding further in Section~\ref{sec:disc}. In Figure~\ref{fig:fracvsrad}, we explore the radial trends of the fractions of H$\alpha$, [C\,{\sc ii}], TIR and 24\,\ensuremath{\,\mu\mbox{m}}\ that arise from SF regions. Given that there are strong radial trends in metallicity, radiation field strength and stellar mass surface density in M\,31 we naively expect some trends, however no such radial trends are apparent. Fields 2, 3 and 4 lie at a similar galactocentric radius in the 10\,kpc ring, yet they span almost the whole range of SF fraction values. Given this, it is likely that local conditions are dominating the fractional contribution of SF in our Fields. The SFR and ISM surface densities in M\,31 do not show radial trends, but rather are dominated by the spiral arm structure and the 10\,kpc ring \citep{Draine2014}. In Figure (\ref{fig:frac}), we find a much clearer trend of increasing fraction of H$\alpha$, [C\,{\sc ii}], 24\ensuremath{\,\mu\mbox{m}}\ and TIR from SF regions with increasing ${\rm \Sigma_{SFR}}$, with Fields 2, 3 and 4 following this trend. This demonstrates the dominance of the local conditions to the SF fractions, with the weak trend of SF fractions with radius in Figure~\ref{fig:fracvsrad} most likely driven by a combination of SFR, opacity and metallicity. Even with these trends it is clear that there is always a substantial diffuse component to the [C\,{\sc ii}] emission in M\,31, with even the Field with the highest ${\rm \Sigma_{SFR}}$ fraction, Field 3, showing a 20\% contribution to [C\,{\sc ii}] from the diffuse phase. Based on Figures~\ref{fig:fracvsrad} and \ref{fig:frac}, the main result is that [C\,{\sc ii}] emission is more extended than H$\alpha$, but less extended than TIR and 24\,\ensuremath{\,\mu\mbox{m}}. We find that the large fractions of diffuse emission arising from outside the SF regions are anti-correlated with ${\rm \Sigma_{SFR}}$. We will discuss possible explanations in Section \ref{sec:disc}, including gas heating generated by a diffuse radiation Field and/or leaked photons from SF regions. \subsection{The Correlation Between [C\,{\sc ii}] and ${\rm \Sigma_{SFR}}$} \label{sec:result_cii_sfr} Due to its strength and accessibility at high redshifts with new sub-mm telescopes (e.g.~ALMA), the correlation between [C\,{\sc ii}] and SFR has recently been explored in nearby galaxies \citep[within 200\,Mpc, e.g.][]{deLooze2014,Herrera2014}, to provide a calibration for this line. These studies have found a tight correlation of [C\,{\sc ii}] surface brightness and $\Sigma_{\rm SFR}$ on kpc scales. However, from the previous section it is clear that even at these scales this will include contributions of diffuse emission to both the [C\,{\sc ii}] line and the $\Sigma_{\rm SFR}$ measurement. Using the high spatial resolution available in M\,31, we can investigate the correlation both on $\sim$\,50\,pc scales (our working resolution) and on $\sim$\,700\,pc scales \citep[i.e.~averaged over one full Field, matching the typical resolution of the KINGFISH galaxies in][]{Herrera2014}. On 50\,pc scales we can separate SF regions from diffuse emission as previously described in \S~\ref{sec:result_frac}. By degrading our resolution to match nearby galaxy studies, we can investigate how the SF and diffuse components participate in creating the observed correlation between [C\,{\sc ii}] and $\Sigma_{\rm SFR}$. \subsubsection{The Correlation Between [C\,{\sc ii}] and ${\rm \Sigma_{SFR}}$ on 50\,pc scales} \label{sec:result_cii_sfr50} We first investigate the correlation between [C\,{\sc ii}] and SFR for individual pixels in each of our Fields. The results are presented in Figure~\ref{fig:ciisfr50}. Each pixel has a physical scale of $\sim$\,20\,pc, meaning that we are slightly oversampling our physical resolution of $\sim$\,50\,pc. All pixels have $3\sigma$ significance in both the [C\,{\sc ii}] and $\Sigma_{\rm SFR}$ measurements. Note that at these physical scales, $\Sigma_{\rm SFR}$ is not truly indicative of the underlying star formation rate. Rather it represents the average H${\rm \alpha+ mid}$-IR flux in each pixel, which could arise from both stellar populations intrinsic to the pixel and photons leaked from stellar populations in nearby regions \citep[][]{Calzetti2007}. A clear correlation between [C\,{\sc ii}] and $\Sigma_{\rm SFR}$ is already visible in each of our Fields, with this supported by the Pearson's correlation coefficients presented in Table~\ref{tab:coeff}. To quantify this relation, we use orthogonal distance regression, which allows us to fit a linear function to data points (in logarithmic space) simultaneously accounting for both the $x$ and $y$ errors. Our fits show sublinear to linear relations of [C\,{\sc ii}] to ${\rm \Sigma_{SFR}}$, with the most linear slope found for F3 which has the highest ${\rm \left< \Sigma_{SFR} \right>}$. Details of the fits for all Fields are summarized in Table~\ref{tab:coeff}, and the best fit linear relation for each Field is overplotted (blue solid line) in Figure~\ref{fig:ciisfr50}. We have also included in each Field the fit determined by \citet{Herrera2014} for their integrated sample (dashed red line). From Figure~\ref{fig:ciisfr50} we see that F1 and F2 have the flattest slopes, with both Fields showing an excess of [C\,{\sc ii}] for the lowest ${\rm \Sigma_{SFR}}$. Note however that at [C\,{\sc ii}] $\sim\,1.76 \times 10^{38}$ erg s$^{-1}$ kpc$^{-2}$ we hit the $1 \sigma$ sensitivity limit, and thus this excess may be due to a larger dispersion of [C\,{\sc ii}] around ${\rm \Sigma_{SFR}}$, rather than a excess due to diffuse [C\,{\sc ii}] emission. Fields 3, 4 and 5 are consistent with the \citet{Herrera2014} trend, though F5 probes only a relatively small range in ${\rm \Sigma_{SFR}}$. The errors of the \citet{Herrera2014} slope and intercept of the fit, where SFR is based on H$\alpha$ and 24\,\ensuremath{\,\mu\mbox{m}}, are $\beta_{HC} = 0.8970\pm0.0078$, $\gamma_{HC} = 41.133\pm0.015$ (priv. com.). Our slopes are significantly flatter at a greater than 2-$\sigma$ level, except F3 which is significantly steeper. In general, the flatter slopes are consistent with the results in the previous section, as these slopes indicate a greater [C\,{\sc ii}] fraction at low ${\rm \Sigma_{SFR}}$ (and hence low H$\alpha$ surface brightnesses and more ``diffuse'' regions). However, the flat slopes indicate that, while there is still a correlation between [C\,{\sc ii}] and ${\rm \Sigma_{SFR}}$, the contribution of diffuse emission means that the calibration determined by \citet{deLooze2014} and \citet{Herrera2014} do not hold on these scales. \begin{figure*}[ht!] \begin{center} \includegraphics[width=1.\textwidth]{figure7.pdf} \caption{[C\,{\sc ii}] and SFR surface densities for individual 5.2\arcsec\ pixels ($\sim$\,20 pc physical scale) in all Fields. Only pixels with signal/noise per pixel greater than 3 in both measurements are included. Note, that 3-$\sigma$ limit is approximate, because we use noise maps. The solid blue line shows the linear fit (in log space) to the data in each Field. The red dashed line is the relation determined by \citet{Herrera2014} for the full KINGFISH sample using data of $\sim$\,1\,kpc resolution (with $\rm{\beta_{HC14}=0.94}$). Lower right panel: This figure reveals the [C\,{\sc ii}] and SFR surface density relation from all Fields together (grey density plot and grey dots), as well as the relation averaged over whole Fields (diamonds, $\sim$\,700\,pc scales). Note that number density plot, with grey levels 5-10, 10-20, 20-41 per bin size 0.05 dex, does not represent uncertainties. The blue line is the best fit to the individual pixels from all Fields together. The red dashed-dotted lines are 1$\sigma$ scatter around \citet{Herrera2014}'s fit. \\ \vspace{0.5cm}} \label{fig:ciisfr50} \end{center} \end{figure*} \begin{deluxetable}{ccccc} \tablewidth{0pt} \tablecolumns{5} \tablecaption{The average [C\,{\sc ii}] emission, $\Sigma_{\rm SFR}$ and metallicity of the Fields} \tablehead{\multicolumn{5}{c}{$\log_{10}(\Sigma_{\rm [CII]}) = \beta \ \log_{10}(\Sigma_{\rm SFR}) + \gamma$ } \\ \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{Pearson's} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \multicolumn{1}{c}{Field} & \multicolumn{1}{c}{correlation} & \multicolumn{1}{c}{$\beta$} & \multicolumn{1}{c}{$\gamma$} & \multicolumn{1}{c}{$\sigma$\tablenotemark{b}} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{coefficient $r$\tablenotemark{a}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{[dex]} } \startdata 1 & 0.80 & 0.67 $\pm$ 0.010 & 40.57 $\pm$ 0.019 & 0.17 \\ 2 & 0.68 & 0.68 $\pm$ 0.015 & 40.71 $\pm$ 0.037 & 0.17 \\ 3 & 0.89 & 1.03 $\pm$ 0.014 & 41.50 $\pm$ 0.031 & 0.13 \\ 4 & 0.80 & 0.84 $\pm$ 0.016 & 41.09 $\pm$ 0.037 & 0.15 \\ 5 & 0.64 & 0.76 $\pm$ 0.033 & 40.83 $\pm$ 0.084 & 0.16 \\ \hline F$\rm{_{all}}$& 0.82 & 0.77 $\pm$ 0.009 & 40.84 $\pm$ 0.023 & 0.18 \enddata \label{tab:coeff} \tablenotetext{a}{$r=1$ value of the Pearson's correlation coefficient indicates perfect correlation, on the contrary $r=0$ means no correlation. We found p-values extremely low, demonstrating that the correlation coefficients are significant. We found the highest p-value for F5 ($\sim10^{-41}$).} \tablenotetext{b}{the dispersion of data points around the fit} \end{deluxetable} \subsubsection{[C\,{\sc ii}] vs ${\rm \Sigma_{SFR}}$ on $\sim$\,700\,pc scales} \label{sec:result_cii_sfr700} To match the physical scales probed in the galaxy sample of \citet{Herrera2014}, we average our data in each Field to obtain the [C\,{\sc ii}] surface brightness and SFR surface density at $\sim$\,700\,pc scales. We also attempt to match the removal of the cirrus contribution to SFR as done in \citet{Herrera2014}. The cirrus is the ``diffuse'' contribution to the total 24\,\ensuremath{\,\mu\mbox{m}}\ emission caused by the heating of the dust by the interstellar radiation field arising from older stellar populations not associated with the recent star formation. This cirrus leads to an overestimation of the $\Sigma_{\rm SFR}$. \citet{Leroy2012} determined the fraction of cirrus in nearby galaxies on kpc scales, using both physical modeling of the IR spectral energy distribution to determine the local radiation field and maps of the HI to determine the diffuse gas fraction. They found that the cirrus fraction decreases with increasing $\Sigma_{\rm SFR}$, and is small above ${\rm 10^{-2}\ M_{\odot} yr^{-1} kpc^{-2}}$. However, below this value, they found the contribution of the cirrus to 24\,\ensuremath{\,\mu\mbox{m}}\ flux could be significant, albeit with a large scatter. As all of points are below this limit, we subtract a constant fraction of 24\,\ensuremath{\,\mu\mbox{m}}\ emission, $f_{cir}=0.4$, which is the average $f_{cir}$ in the range of $\Sigma_{\rm SFR}$ that we probe determined by \citet{Leroy2012}\footnote{ We do not to remove the cirrus on our smaller 50\,pc observations, as we are beginning to resolve out the ``diffuse'' emission of 24\,\ensuremath{\,\mu\mbox{m}}\ on these scales. The cirrus correction of \citet{Leroy2012} was derived on scales of $\sim$\,1\,kpc, and cannot be simply applied to these smaller scales. In any case, due to our 3-$\sigma$ cut, H$\alpha$ is the dominant contributor to the SFR we measure for most of the pixels on these scales, such that correcting for the cirrus does not make a significant difference}. The value we use is higher than the median $f_{cir}=0.17$ used by \citet{Herrera2014}, however they probe higher $\Sigma_{\rm SFR}$ values on average, and we wish to assume a conservative correction here. Our value of $f_{cir}$ is consistent with the high ``non-SF'' region contribution to the 24\,\ensuremath{\,\mu\mbox{m}}\ we find in Figure~\ref{fig:frac}. Removal of the cirrus makes the greatest difference to fields with low $\Sigma_{\rm SFR}$ (Fields 2, 4, and 5; $\sim 26\%$), and smaller differences to the higher $\Sigma_{\rm SFR}$ fields ($\sim 15\%$ for Fields 1 and 3). Nevertheless, we find that the details of the cirrus removal do not have a large impact on our results described below. In the lower right panel of Figure~\ref{fig:ciisfr50}, we show the average value for each Field overlayed on the greyscale density plot for the individual pixels from all Fields combined. We repeat the orthogonal distance regression on the pixels from all Fields and have overplotted the fit (solid blue line), with the fit results also presented in Table~\ref{tab:coeff}. While the fit to all pixels shows a similar flat slope to the individual Fields ($\beta_{\rm all}=0.79$)\footnote{ Note, that the fit (blue line) does not go through the densest part of the number density plot, because the latter does not represent the uncertainties of the points, while the ODR fitting method takes them into account.}, we find that once averaged to $\sim$\,700\,pc scales, the data are consistent with the \citet{Herrera2014} relation within uncertainties. Therefore, while the [C\,{\sc ii}] vs SFR relation is flatter at small physical scales ($\sim 50$\,pc) due to the separation of diffuse [C\,{\sc ii}] emitting regions from the star forming regions, on kpc scales this diffuse emission is correlated sufficiently with star formation to provide the observed linear relation. \subsection{Far-IR line deficit} \label{sec:fir_def} Another factor that affects the relation between [C\,{\sc ii}] emission and SFR is the efficiency of converting absorbed photons from the star forming region to [C\,{\sc ii}] emission. Depending on whether the gas is ionized or neutral, this efficiency will differ as a result of different heating processes (gas photoionization versus PE effect on dust grains). In our regions overall, we estimate the fraction of [C\,{\sc ii}] coming from ionized gas to be small. One region in Field 5 may have a larger than usual contribution from ionized gas, and is discussed in Section~\ref{sec:fir_def50}. To get a rough estimate on the ionized [C\,{\sc ii}] contribution we first make use of several locally significant [N\,{\sc ii}]$\lambda 122\,\mu {\rm m}$ measurements in Fields 1 and 3. The line is detected only in the brightest regions, with local $\log \Sigma_{\rm SFR} > -1.5$. For these bright SF regions, we can estimate the ionized contribution to the total emission, [C\,{\sc ii}]$_{\rm ion}$/[C\,{\sc ii}], by following the prescription in \citet[][]{Croxall2012}, particularly their Figure 11. As the [C\,{\sc ii}]/[N\,{\sc ii}] ratio is density dependent and we have no constraint on the density, we assume a low density gas of $ n_{\rm e} \sim 2 \ {\rm cm^{-3}}$, resulting in a high value of [C\,{\sc ii}]$_{\rm ion}$/[N\,{\sc ii}]$ = 6$. Even with this assumption that provides us with an upper limit on the [C\,{\sc ii}]$_{\rm ion}$/[C\,{\sc ii}] ratio and probing the highest SF regions, we still find the ionized contribution to [C\,{\sc ii}] to be less than 50\%, with [C\,{\sc ii}]$_{\rm ion}$/[C\,{\sc ii}]$ = 28\%$ and 40\% in the SF regions of Field 1 and Field 3, respectively. To further support the low [C\,{\sc ii}]$_{\rm ion}$/[C\,{\sc ii}] fraction, we estimate the contribution for the Fields as whole by using H$\alpha$ as a proxy. \citet{Groves2010} models predict [C\,{\sc ii}]$_{\rm ion}$/H$\alpha \sim 0.05-0.1$ in H\,{\sc ii}\ regions, assuming relative efficiency of ionized gas cooling by [C\,{\sc ii}] and H$\alpha$ lines. We measure everywhere the ratios of the surface brightness [C\,{\sc ii}]/H$\alpha > 0.5$. Thus, we infer from the models [C\,{\sc ii}]$_{\rm ion}$/[C\,{\sc ii}]$ \sim 20\%$ (for [C\,{\sc ii}]$_{\rm ion}$/H$\alpha=10\%$). However, we caution that our H$\alpha$ values were not corrected for extinction, and therefore we might slightly underestimate [C\,{\sc ii}]$_{\rm ion}$ contribution. Nevertheless, we expect 10-50\% of the [C\,{\sc ii}] could be from ionized gas, emphasizing that the majority of the [C\,{\sc ii}] arises from neutral gas. Therefore in M\,31 most of the neutral gas cooling occurs through the [C\,{\sc ii}] emission line, due to the weak [O\,{\sc i}] detection and low [C\,{\sc ii}]$_{\rm ion}$ fraction. Given this, and that the neutral gas is predominantly heated by the photoelectric effect on dust grains, the ratio of [C\,{\sc ii}] to the total IR emission (TIR, tracing total dust absorption) should trace the photoelectric (PE) heating efficiency. As described in the Introduction, [C\,{\sc ii}]/TIR has been found to vary globally with galaxy properties such as the total IR luminosity. In particular, a decreasing trend of [C\,{\sc ii}]/TIR versus $\nu I_{\nu} (70 \mu m)/\nu I_{\nu} (100 \mu m)$ (a proxy for the dust temperature), has been observed. Based on the study of global measurements of 60 normal galaxies, [C\,{\sc ii}]/TIR was found to decrease at high dust color values \citep[i.e. warm dust temperatures;][]{Malhotra2001}, but it first was referred to as the FIR-line deficit for [C\,{\sc ii}]/TIR falling below $10^{-3}$ at high dust color values by \citet{Helou2001}\footnote{As the FIR-line deficit definition is vague, hereafter, we adopt the one given by \citet{Helou2001}}. Generally, [C\,{\sc ii}]/TIR was found to be approximately constant (with some scatter) at low dust temperatures, but sharply decreasing in galaxies with warmer dust colors \citep[$\nu I_{\nu} (70 \mu m)/\nu I_{\nu} (100 \mu m)\gtrsim$ 0.95,][]{Croxall2012}. One of the most commonly given explanations for this deficit is grain charging, where warmer dust is more highly charged, increasing its PE threshold and thus decreasing the average energy returned to the gas per photon absorbed. Most of the studies of the deficit have concentrated on the [C\,{\sc ii}] line, because it is typically the brightest, but other lines have also been investigated, and show a similar deficit \citep[e.g. ${\rm [}$O\,{\sc i}${\rm], [O}$\,{\sc iii}${\rm], [}$N\,{\sc ii}${\rm]}$;][]{GraciaCarpio2011}. In our Fields, we already have hints that the [C\,{\sc ii}]/TIR ratio and thus PE efficiency is changing between our SF regions and more diffuse regions, with Figures \ref{fig:fracvsrad} and \ref{fig:frac} revealing that the TIR fraction from the diffuse gas is greater than that of [C\,{\sc ii}] in all fields. Thus, [C\,{\sc ii}]/TIR seems actually higher in SF regions, somewhat contrary to the results in previous works for galaxies as a whole. However, we can also explore [C\,{\sc ii}]/TIR versus dust colors using our higher resolution at $\sim$\,50\,pc scales. \subsubsection{Far-IR line deficit on $\sim$\,50\,pc scales} \label{sec:fir_def50} \begin{figure*}[ht!] \begin{center} \includegraphics[width=1.\textwidth]{figure8.pdf} \caption{[C\,{\sc ii}]/TIR surface brightness ratio (proxy for gas heating efficiency) versus $\nu I_{\nu}(70 \mu m)$/ $\nu I_{\nu}(100 \mu m)$ (a proxy for dust temperature) for individual 5.2\arcsec pixels ($\sim$\,20\,pc physical scale) for each Field (as labelled in top left corners of each panel). Only pixels with S/N above 3-$\sigma$ in all quantities are shown, typically the 70\,\ensuremath{\,\mu\mbox{m}}\ flux is the limiting quantity. The pixels are color coded by the [C\,{\sc ii}] surface brightness (with respect to the scale given in the top right panel). In the lower left of each panel we indicate the Pearson correlation coefficient $r$ is given for each Field. $p$-values (roughly indicate the probability of an uncorrelated system) are high for F2 and F4, therefore correlation estimates for these Fields are not significant, while for the other Fields are sufficiently low to indicate a significant correlation. Lower right panel shows the same figure from \citet{Croxall2012} for two galaxies with $\sim$\,kpc pixels; NGC\,4559 (grey crosses) and NGC\,1097 (black '$+$' signs). The large diamonds show the integrated results for each of our Fields at the same physical scale ($\sim$\,700\,pc), color coded by the Field number. Note, that the averages on the last panel take all the measurements in each Field, not only the 3-$\sigma$ significant points. \\ \vspace{0.5cm}} \label{fig:firdef50} \end{center} \end{figure*} In Figure~\ref{fig:firdef50}, we show the [C\,{\sc ii}]/TIR surface brightness ratio versus the $\nu I_\nu(70 \mu m)$/ $\nu I_\nu(100 \mu m)$ ratio for individual pixels in Fields 1 to 5. Each pixel measures a region of $\sim$\,20\,pc, which slightly oversample our physical resolution of $\sim$\,50\,pc. Only pixels with a signal-to-noise greater than 3 in all quantities are plotted, with the 70\,$\mu$m flux being generally the most limiting quantity at these scales. In Figure~\ref{fig:firdef50}, all points are color coded by their [C\,{\sc ii}] surface brightness as a proxy for diffuseness, where bluer pixels represent the more diffuse regions. Note that due to the limit of the 70\,$\mu$m flux, we do not probe the most diffuse regions explored in section \ref{sec:result_cii_sfreg}, as one can infer from the lower limit of the colorbar in the top right panel of Figure~\ref{fig:firdef50}. The [C\,{\sc ii}]/TIR ratios in all our fields span approximately an order of magnitude, between $\sim 2\times 10^{-2}-10^{-3}$, and span $\sim 0.3-2$ in dust color $\nu I_\nu(70 \mu m)$/ $\nu I_\nu(100 \mu m)$, a very similar phase space to where the bulk part of \citet{Malhotra2001} measurements for individual galaxies reside. All of our observed [C\,{\sc ii}]/TIR ratios lie above the $10^{-3}$ value which classically defines the `[C\,{\sc ii}]-deficient' objects \citet{Helou2001}, thus we could state here that there is no [C\,{\sc ii}] deficit regions across our fields in M\,31. The problem is that this classical deficit definition, as vague as it already is, was applied for global measurements, not for $\sim$\,50\,pc scales, at which scales it breaks down (manifested as a huge scatter in panels 1--5 in Figure~\ref{fig:firdef50}), due to different scale lengths (spatial distributions) of [C\,{\sc ii}] and TIR (see section~\ref{sec:result_frac}). However, we do find moderate to weak negative correlations of the [C\,{\sc ii}]/TIR surface brightness ratio with the $\nu I_\nu(70 \mu m)$/ $\nu I_\nu(100 \mu m)$ dust color in Fields 1 to 4 from the Pearson's coefficients' values ranging from -0.23 to -0.52 (see Figure~\ref{fig:firdef50}). These weak correlations are consistent with the trends measured on global scales in previous studies, with weak negative correlations \citep{Malhotra2001} or constant [C\,{\sc ii}]/TIR values found across 0.3--1 in dust color \citep{Helou2001,GraciaCarpio2011}. However, contrary to the other fields, we observe a very strong positive relation in Field 5, with Pearson's coefficient 0.78. This strong correlation is driven by the cluster of high [C\,{\sc ii}]/TIR $\sim$\,0.006 points at the IR color of $\nu I_\nu(70 \mu m)$/ $\nu I_\nu(100 \mu m) \sim 1.2$, visible in the bottom left panel of Figure~\ref{fig:firdef50}. All these pixels are located in the star forming region in the south-east in Field 5. This bright region contributes on its own to $\sim$\,25-30\% of the total [C\,{\sc ii}] emission in this $\sim$\,700\,pc Field. Despite a warm dust color, this region presents very weak IR emission, suggesting a low total dust mass. Given this low dust mass but relatively high [C\,{\sc ii}] emission, it is likely that this region is dominated by H\,{\sc ii}\ gas. Therefore, most of the [C\,{\sc ii}] emission arises from photoioized gas, and not from neutral gas where the PE effect dominates the gas heating, which leads to the abnormally high [C\,{\sc ii}]/TIR values for these dust colors. At a given dust color, it is clear that we see a large spread of the [C\,{\sc ii}]/TIR ratio, as great as any trends inferred over our observed IR color range. Interestingly, it is noticeable in all Fields that, at a given dust color, there is an increasing trend of [C\,{\sc ii}]/TIR with [C\,{\sc ii}] surface brightness. This clearly indicates that factors other than the dust temperature affect the PE efficiency. The details in each Field are sensitive to the geometry of the stars, dust and different phases of the gas. One possibility is that at this resolution, we can see the effects of softer radiation fields in non-SF regions, such that there are sufficient photons to heat the dust to the observed colors, but fewer photons of sufficient energy to eject electrons from the dust grains, leads to a relatively cooler ISM in the diffuse phase. \subsubsection{Far-IR line deficit on $\sim$\,700\,pc scales} \label{sec:fir_def700} In the lower right panel of Figure~\ref{fig:firdef50} we plot the results from \citet{Croxall2012} showing the [C\,{\sc ii}]/TIR surface brightness ratio versus the $\nu I_\nu(70 \mu m)$/ $\nu I_\nu(100 \mu m)$ ratio on $\sim700$\,pc scales for two nearby galaxies: NGC\,4559 and NGC\,1097. We used the same instruments, data reduction and methodology (i.e. TIR prescription) as in \citet{Croxall2012} paper, which makes the comparison straight forward. Their resolution element scales are similar to our integrated Fields. The galaxies in their paper represent clear examples of a galaxy without a FIR-line deficit (NGC\,4559) and with a deficit (NGC\,1097). These galaxies show the same range in both axes as we see with our higher physical resolution, but show a stronger trend of decreasing [C\,{\sc ii}]/TIR with IR color, with \citet{Croxall2012} clearly demonstrating a lower ratio for these objects with warmer colors. On top of the results for these galaxies, we plot the integrated measurements for each of the Fields matching the $\sim$\,700\,pc pixel sizes. This comparison shows that despite observing cooler dust colors in our Fields than those observed in \citet{Croxall2012}, we see a similar range in the [C\,{\sc ii}]/TIR ratio as in both NGC\,4559 and NGC\,1097. However, it is clear that F1 has a higher than average ratio. \begin{figure}[ht!] \begin{center} \includegraphics[width=.5\textwidth]{figure9.pdf} \caption{The integrated [C\,{\sc ii}]/TIR surface brightness ratio versus the galactic radius (bottom axis) and average gas-phase metallicity (top axis; oxygen abundance as a proxy for metallicity in units relative to solar ${\rm (O/H)/(O/H)_{\odot}}$) for our 5 Fields. Fields 2 -- 4 lie in the same 10\,kpc gas rich ring, and have similar [C\,{\sc ii}]/TIR values.} \label{fig:cii_metal} \end{center} \end{figure} With only 5 data points, we cannot conclusively identify a trend in the [C\,{\sc ii}]/TIR ratio with IR color on these scales in M\,31. However, on closer examination, there is a trend with radius, shown in Figure~\ref{fig:cii_metal}. We observe in Figure~\ref{fig:cii_metal} a strong radial relation with the [C\,{\sc ii}]/TIR ratio, with a lower value presented in the inner region, similar ratios seen for Fields 2 to 4 which all lie in the same 10\,kpc gas rich ring that dominates IR and ISM images of M\,31. A similar radial trend of the [C\,{\sc ii}]/TIR ratio is seen in M\,33 \citep{Kramer2013}. Smith et al. (in prep.) find a similar decreasing relation as a function of the gas-phase metallicity on $\sim$\,1\,kpc scales using the much larger sample of KINGFISH galaxies. However, not only the metallicity varies with radius in M\,31, but many properties vary across M\,31 and could possibly affect the [C\,{\sc ii}]/TIR ratio, such as the stellar density and radiation field strength, as already seen at the higher resolution. We discuss these, and the association with the diffuse and SF regions in the following section. \section{Discussion} \label{sec:disc} As the nearest massive spiral galaxy, M\,31 is an ideal laboratory to study the ISM on small scales while still being comparable to studies of similar galaxies at larger distances. Making use of this high physical resolution, we have explored the origins of [C\,{\sc ii}] and the dominant ISM heating processes by comparing the line emission strength and distribution against other tracers of ISM heating, the H$\alpha$ line, and the 24\ensuremath{\,\mu\mbox{m}}\ and TIR continuum. \subsection{[C\,{\sc ii}] on $\sim$\,50\,pc scales} \label{sec:disc50} At our limiting resolution of $11\arcsec$ ($\sim$\,50\,pc), we clearly see a strong correlation of the [C\,{\sc ii}] emission with H$\alpha$ through the spatial coincidence of the surface brightness peaks in the maps in Figures~\ref{fig:ha_cii_maps} and~\ref{fig:ha_cii_maps2}. However, the [C\,{\sc ii}] emission in the maps appear to be more spatially extended than the H$\alpha$. We quantified this spatially extended [C\,{\sc ii}] emission by separating each Field into ``SF regions'' and ``diffuse'', based on a H$\alpha$ surface brightness cut that encompassed the majority of H\,{\sc ii}\ regions. We find that [C\,{\sc ii}] is always more spatially extended than H$\alpha$, which arises predominantly from ``SF regions'', but less so than the dust emission (traced via the TIR), which had the highest diffuse fraction in all Fields. As our ``SF regions'' are 100's of pc in physical size they encompass several H\,{\sc ii}\ regions (as seen in Figures ~\ref{fig:ha_cii_maps} and~\ref{fig:ha_cii_maps2}), thus the larger diffuse fractions cannot simply be the result of us resolving out the H\,{\sc ii}\ regions and associated photodissociation regions, but indicative of a real diffuse emission. The association of [C\,{\sc ii}] with recent star formation is further supported by the significant correlation between the [C\,{\sc ii}] surface brightness and SFR surface density on $\sim$\,50\,pc scales (Figure \ref{fig:ciisfr50}). The presence of a diffuse contribution to the [C\,{\sc ii}] emission is indicated by the sub-linear slopes found for all Fields in the logarithmic relationship. Thus the resolved [C\,{\sc ii}] emission in Andromeda reveals that some fraction of the [C\,{\sc ii}] line originates from gas directly heated by UV photons from star-forming regions, with this fraction dependent upon the local SFR surface density. While this SF region [C\,{\sc ii}] includes both contribution from the ionized gas in H\,{\sc ii}\ regions and the neutral gas in the directly associated photodissociation regions which are unresolved in our observations, we find that the H\,{\sc ii}\ region contribution must be less than 40\% across our map based on our few [N\,{\sc ii}] 122\ensuremath{\,\mu\mbox{m}}\ detections and model-based estimates from the H$\alpha$ maps. The remaining [C\,{\sc ii}] flux arises from a diffuse phase, not spatially coincident with star-forming regions, that is also associated with a large fraction of the TIR surface brightness. The heating source for this diffuse phase is not directly obvious. We consider two mechanisms for the heating of this diffuse phase: (1) photon leakage from SF regions, and (2) a distinct diffuse UV radiation field. In the first mechanism, the diffuse gas and dust are heated primarily by photons that have escaped the immediate vicinity of the young, massive stars that emit them. Such a mechanism has been put forward to explain the diffuse H$\alpha$ emission. As the FUV photons that heat the dust and gas, are less energetic than ionizing photons and have a longer mean free path, will cause the spatial extent of the diffuse [C\,{\sc ii}] emission to be greater than that of H$\alpha$. In this scenario, even though we have seen a large diffuse [C\,{\sc ii}] component, it is still associated with the stars that heat SF regions. This means the use of [C\,{\sc ii}] as a SFR indicator should still be valid on scales large enough ($\sim$kpc) to average over these leakage effects. Possible evidence for this mechanism are the overdensity of points below the best-fit line at the bright end of the [C\,{\sc ii}]-SFR relation in F3, F4 and F5 (Figure~\ref{fig:ciisfr50}) that could indicate regions where the ionizing photons have been absorbed, but the FUV photons have leaked to adjacent regions. The second mechanism is where the diffuse UV radiation field arises from sources different to the massive stars powering the SF regions. The most likely source of this diffuse UV field would be B-stars, as in the solar neighborhood \citep{Mathis1983}. B-stars generate sufficient far-UV photons to heat the neutral gas, but negligible amounts of ionizing photons. In addition, they would have a more uniform spatial distribution than O-stars due to their longer lifetimes. If the [C\,{\sc ii}] is predominantly heated by these stars, it will still measure the SFR, but over longer timescales than H$\alpha$. Both of the suggested mechanisms will result in a softer radiation field outside of the SF regions. If the heating photons come purely from recent star-formation, we expect a softening of the radiation field with distance from the SF regions as the harder photons are preferentially absorbed. If the diffuse UV radiation field arises from a different stellar population, a softening of the radiation with distance from the SF regions requires that the mean age of the stellar population increases with distance from the youngest stars. Further evidence for the diffuse heating mechanisms can be seen in Figure~\ref{fig:firdef50}, where, in every Field, for a given dust color (i.e.~dust temperature) there is an increasing trend of [C\,{\sc ii}]/TIR with increasing [C\,{\sc ii}] surface brightness. This suggests that the brightest [C\,{\sc ii}] emitting regions (which we have shown to be associated with SF regions) have a higher heating efficiency. One way to produce such a trend is to have softening of the radiation field from SF regions to the more diffuse ISM, reducing the relative energy input to the gas (requiring $>6$\,eV photons) as compared to the dust heating (which absorbs all photons). Realistically, it is likely that both mechanisms play a role, with leaked FUV photons from SF regions gradually merging with the radiation field from an underlying diffuse stellar population. Only through the knowledge of the relative distribution of the stellar population in comparison with the observed dust emission could the relative contribution of each mechanism to the [C\,{\sc ii}] emission across our Fields be disentangled. \subsection{[C\,{\sc ii}] on $\sim$\,700\,pc} \label{sec:disc700} It is clear from the previous section (\S~\ref{sec:disc50}) that at the larger scales of our integrated Fields ($\sim$\,700\,pc), there will be contributions from diffuse emission to both the [C\,{\sc ii}] surface brightness and $\Sigma_{\rm SFR}$ (based on H$\alpha$ and 24\,\ensuremath{\,\mu\mbox{m}}). Based on Figure~\ref{fig:frac}, this diffuse fraction contribution will be greater than 50\% in some regions, depending upon the local SFR surface density. Nevertheless, when we look at these $\sim$\,kpc scales, we find that the integrated Fields are consistent with the [C\,{\sc ii}]--$\Sigma_{\rm SFR}$ relation found on similar scales by \citet{Herrera2014}. From this we can infer that this diffuse emission is sufficiently correlated with SFR on these scales that the relation holds, meaning that, for our two suggested mechanisms for the heating of the diffuse gas, on these scales we must capture all leaked photons or that the source of the diffuse radiation field, i.e.~B-stars, are co-located with star-formation on $\sim$kpc scales. On the other hand, we do find large scale variations of [C\,{\sc ii}]/TIR with galactocentric radius in M\,31 (Figure~\ref{fig:cii_metal}). If the [C\,{\sc ii}]/TIR ratio is tracing PE heating efficiency, then it is puzzling how the relationship with SF can stay the same as in \citet{Herrera2014}, when this efficiency appears to vary so dramatically, by a factor of $\sim 3$, between Fields 1 and 5. Taken together what this suggests is that there is a constant FUV energy fraction emitted by stars transferred to gas that is then cooled by [C\,{\sc ii}], but that the TIR must decrease relative to both [C\,{\sc ii}] and SFR with increasing galactocentric radius. While a changing ionized gas contribution to the [C\,{\sc ii}] emission could possibly explain the [C\,{\sc ii}]/TIR ratio, we have demonstrated that this contribution is less than 50\% in even the highest $\Sigma_{\rm SFR}$ pixels at 50\,pc scales, and thus will be much less on the scales of our integrated Fields. Thus this cannot explain the factor of $\sim 3$ variation. It is also possible that the fraction of the FUV energy absorbed by dust that goes into the gas can change (an increase in the physical photoelectric heating efficiency) and thus alter the [C\,{\sc ii}]/TIR ratio. However this also requires some form of ``conspiracy'' to maintain the [C\,{\sc ii}]--$\Sigma_{\rm SFR}$ relation, such as decreasing the FUV absorbed by dust relative to the SFR when the PE efficiency increases. Thus, while we cannot discount this possibility, we believe it to be unlikely. This is further supported by the non-monotonic radial trend in $q_{\rm PAH}$, the mass fraction of dust in PAHs, as determined by \citet{Draine2014}, in particular their Figure 11. A larger mass fraction of PAHs should lead to more efficient heating \citep{Bakes1998}, yet the determined trend in $q_{\rm PAH}$ peaks at $\sim$11.2\,kpc, unlike the monotonic [C\,{\sc ii}]/TIR ratio. A more likely physical explanation for the [C\,{\sc ii}]/TIR decrease towards galaxy center might lie in the fact that dust can be heated by both UV and optical photons, while both H$\alpha$ and [C\,{\sc ii}] require harder photons ($>13.6$ and $\gtrsim$6\,eV, respectively). Thus to reduce the TIR relative to [C\,{\sc ii}] as a function of radius, the FUV absorbed by dust must increase relative to the lower energy photons ($<6$\,eV) absorbed by dust. This is possible through changing either the intrinsic heating spectrum or by changing the dust properties. The hardness of the intrinsic stellar spectrum heating the dust will change with both stellar metallicity and local star formation history (parameterized through the mean stellar age). The gas-phase metallicity, and presumably the stellar metallicity, decreases monotonically with radius, with a factor of 2 change between Field 1 and Field 5 \citep[based on the dust-to-gas ratios work in][]{Draine2014}. To assess the impact of metallicity on spectral hardness, we compare two almost identical $10^6\,M_{\odot}$ clusters at the age of 10\,Myr, modelled with {\tt Starburst99} \citep{Leitherer1999}, with an instantaneous star-fromation burst and Kroupa initial mass function, differing only by solar ($Z=0.02$) and subsolar (LMC, $Z=0.004$) metallicities. The relative hardness increases by $\sim15$\% from the solar to subsolar metallicity cluster, as measured by comparing the change in the FUV ($\nu > 6\,eV$) to NUV ratio. While this will contribute to the observed trend in [C\,{\sc ii}]/TIR, it is not sufficient to explain the factor of 3 change. The local star formation history can strongly affect the heating spectrum seen by dust. A clear example of this is the work of \citet{Groves2012}, who showed that in the bulge of M\,31 an old stellar population dominates the heating radiation field, with its extremely soft radiation field meaning that optical light actually dominated the heating of dust. Thus we do expect a hardening of the radiation field with increasing radius in M\,31 as we move from older bulge dominated regions to younger disk dominated regions. Evidence for this can be seen in the observed radial FUV-NUV color in M\,31 shown in \citet{Thilker2005}, where the bluer color with radius may indicate a younger mean stellar age. The SFR, however, does not vary monotonically with radius, peaking in the 10\,kpc ring, and this will also affect the hardness of the local radiation field. Changing the dust properties can also change the [C\,{\sc ii}]/TIR ratio by changing the relative amount of FUV photons absorbed. One clear way is to change the dust absorption curve, which steepens with decreasing metallicity as seen through comparison of the extinction curves of the Milky Way versus the LMC \citep{Gordon2003}. The steepening from a Milky Way opacity to a LMC opacity, as plausibly expected from a decrease in the metallicity by a factor of 2, will mean relatively more FUV photons will be absorbed, however this increase is relatively small \citep[see Figure 10 in][]{Gordon2003} and will not contribute a significant amount to the [C\,{\sc ii}]/TIR increase. In addition to steepening the opacity curve, decreasing the metallicity will also decrease the dust-to-gas ratio (DGR), which has been found to decrease monotonically with radius in M\,31 \citep{Draine2014}. Decreasing the DGR will decrease the overall dust opacity for the same column of gas, increasing the mean free path of optical and UV photons, and decreasing the total amount of photons absorbed \citep[a similar argument was put forward by][for the high ratios observed in the LMC]{Israel1996}. Lower dust opacity will increase the average energy absorbed by dust, as preferentially the FUV photons are absorbed and the NUV-optical photons escape (due to the steep power-law nature of the dust opacity), and this decreasing opacity will lead to some part of the radial FUV-NUV color gradient observed in \citet{Thilker2005}. Supporting this idea, we find that the 24\ensuremath{\,\mu\mbox{m}}/H$\alpha$ ratio (a measure of the SF region extinction) increases with decreasing [C\,{\sc ii}]/TIR ratio, albeit in a non-linear manner. However, the overall dust opacity depends upon the total dust column, and this has been found to peak around the 10\,kpc ring in M\,31 \citep{Draine2014}, and is the region with the highest expected extinction \citep{Tempel2010}. This non-monotonic trend in total extinction goes against the simple trend seen in Figure~\ref{fig:cii_metal}. A similar explanation was suggested by \citet{Israel1996} for an increase of the [C\,{\sc ii}] emission relative to IR that they found with decreasing metallicity in an exploration of the LMC. In the low metallicity environment of the LMC ($Z_{\rm LMC}=0.004$), [C\,{\sc ii}]/FIR was found to be $\sim$10 times higher than found in Milky Way \citep[$Z_{\rm \odot}=0.02$; see ][and references therein]{Israel1996}. Their explanation for this was that at the low metallicities, the clumpy nature of the ISM caused deeper penetration of FUV photons into the molecular clouds, for the same A$_{\rm V}$, increasing the [C\,{\sc ii}] flux for the same absorbed radiation. Therefore, it is likely that a combination of a radial variation in both the dust opacity and star formation history causes the observed radial trend in the [C\,{\sc ii}]/TIR ratio in M\,31. While we favor the change of the radiation field due to mean stellar age as being primarily responsible for the observed radial trend in [C\,{\sc ii}]/TIR due to the observed FUV-NUV and extinction gradients, to correctly disentangle which of the mechanisms dominates in M\,31 will require a determination of the spatially resolved star formation history in M\,31. This is currently being undertaken by the Pan-chromatic Hubble Andromeda Treasury Survey \citep[PHAT][]{Dalcanton2012,Lewis2014}, covering all of our Fields. With the intrinsic heating stellar spectrum being determined from this analysis, we will be able to correctly account for this effect on the [C\,{\sc ii}]/TIR ratio and demonstrate whether this is indeed the dominant mechanism. For galaxies in the Local Group, we should be able to demonstrate the same effects, by comparing resolved star formation histories and extinction maps against the observed [C\,{\sc ii}]/TIR ratio. This should answer to what extent the observed [C\,{\sc ii}]/TIR trends in galaxies are due to heating effects or true changes in the photoelectric heating efficiency. \section{Conclusions} \label{sec:concl} In this paper we present an analysis of [C\,{\sc ii}] 158\,\ensuremath{\,\mu\mbox{m}}\ emission in five Fields in M\,31. Combined with ancillary H$\alpha$ and IR emission data, we studied the origins of [C\,{\sc ii}], its relation with the SFR and the ISM properties. In particular, we have found: \begin{itemize} \item Significant amounts of [C\,{\sc ii}] line emission are coming from outside the SF regions. \item Even though we measure a large diffuse [C\,{\sc ii}] fraction, integrated over $\sim$\,kpc scales, [C\,{\sc ii}] still traces the SFR very similarly to what we see in larger samples of more distant galaxies. We explore different mechanisms that could be responsible for this diffuse phase including leakage of photons from H\,{\sc ii}\ regions or diffuse UV radiation field generated by B stars. More diffuse [C\,{\sc ii}] than H$\alpha$ emission is consistent with flatter slopes. \item ${\rm [}$C\,{\sc ii}${\rm ]}$ and SFR are correlated, but with a shallower slope than seen on $\sim$\,kpc scales. This may be a result of the same diffuse [C\,{\sc ii}] emission. \item All of our observed [C\,{\sc ii}]/TIR ratios lie above the $10^{-3}$ value, which classically defines the `[C\,{\sc ii}]-deficient' objects. Yet this is not surprising as we explore much smaller scales than the global measurements that defined the deficit, and much more quiescent conditions than the centers of ULIRGs in which this deficit is clearly seen. On 700\,pc our Fields do show a tentative decreasing trend of [C\,{\sc ii}]/TIR with 70\,\ensuremath{\,\mu\mbox{m}}/100\,\ensuremath{\,\mu\mbox{m}}, however, with only 5 points, considerable scatter and large dust color uncertainties, it is not significant. On the smaller 50\,pc scales we do generally see a weak correlation of decreasing [C\,{\sc ii}]/TIR with warmer dust colors. However, this trend is inverted in F5 and in all fields we see a significant scatter ($\sim$\,order of magnitude) at a given dust color, that may be related to the [C\,{\sc ii}] surface brightness. \item We observe a large scale gradient of [C\,{\sc ii}]/TIR across the disk of M\,31. We explore potential causes for this trend and argue that a combination of effects due to changes in the dust-to-gas ratio, dust extinction curve, star formation history and radiation field are likely responsible. \end{itemize} Using [C\,{\sc ii}] to trace the massive SFR, one must consider possible contributions to ISM gas heating by older stellar populations that can lead to tracing longer timescales, and/or leaked photons from H\,{\sc ii}\ regions. The issue caused by the latter should go away when averaged over larger scales $\sim$\,few hundred pc. We will be able to shed some light in a following paper resolving stellar populations and their energy input in M\,31 using the PHAT survey (Kapala et al. in prep.). \section*{Acknowledgements} M. J. K. acknowledges funding support from the DLR through Grant 50 OR 1115. K. S. acknowledges funding from a Marie Curie International Incoming Fellowship. The authors thank A.~Bolatto, R.~Herrara-Camus, J.~D.~Smith, H.-W.~Rix, S.~Glover, S.~Meidt, and A.~Hughes for helpful conversations in the course of this project. We thank R.~Herrara-Camus for providing an early version of his paper for comparison. The authors would also like to thank the anonymous referee for providing us with very constructive comments. This research made use of (1) Montage, funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. Montage is maintained by the NASA/IPAC Infrared Science Archive. (2) the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was published in \citet{Ochsenbein2000}. (3) the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of NASA's Astrophysics Data System Bibliographic Services. PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). \bibliographystyle{apj}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Many different platforms are envisaged to process quantum information, corresponding to different ways of encoding qubits. All these implementations fall into two main categories: discrete variables (DV), based on observables with discrete spectra and continuous variables (CV), based on ones with continuous spectra. Both regimes present specific advantages and drawbacks: while DV show high fidelities, their efficiencies are in general low and the contrary applies for CV implementations \cite{van2011optical, qi_nonlinear_2016}. Hybridization between DV and CV states can take advantage of both encodings to implement certain quantum protocols \cite{takeda2019toward}. An example is near deterministic teleportation with high fidelities \cite{takeda_deterministic_2013, lee_near-deterministic_2013, lie2019limitations}, steering \cite{PhysRevLett.121.170403}, Bell protocols \cite{brask2012bell, quintino2012maximal, kwon2013violation,T_ppel_2015} and hybrid quantum repeaters \cite{brask2010hybrid, PhysRevA.99.032349Repeaters}. Quantum information processing using this technique is currently being developed both theoretically \cite{lim_loss-resilient_2016, andersen_hybrid_2015, kwon_generation_2015} and experimentally \cite{engineeringopticalhybrid, PhysRevLett.121.170403, guccione2020, sychev2018, gouzien2020hybrid}. Entanglement lies at the heart of quantum physics and is a key resource for quantum information and computation \cite{RevModPhys.91.025001, horodecki2009quantum}. Its detection is thus of crucial importance and has been studied extensively, notably with so-called entanglement witnesses (EW) \cite{horodecki2009quantum}. The fact that there exist EW for every entangled state \cite{horodecki_separability_1996} has raised their importance on a theoretical point of view even further \cite{chruscinski_entanglement_2014}, and links between entanglement witness and other important features of quantum physics such as Bell inequalities have been assessed \cite{hyllus_relations_2005}. Whenever one is interested in hybrid ressource, the issue of entanglement appears naturally, since we deal with a bipartite quantum system. As a consequence, the complementarity principle will involve producing entangled states. For this reason, entanglement detection is a foundational issue in hybrid encoding. Entanglement witnesses (EW) have been studied extensively for discrete \cite{guhne_entanglement_2009} and continuous \cite{sperling_necessary_2009} systems. Nevertheless, EW involving measurements of observables with a continuous spectrum seem harder to establish \cite{ qi_nonlinear_2016}. This is particularly true if the states considered are non-gaussian, which is precisely the case of all hybrid states \cite{kreis_classifying_2012}. The complete knowledge of the system's density matrix is a sufficient condition to compute EWs, ~\cite{peres_separability_1996,simon_peres-horodecki_2000,arkhipov2018negativity,hou_constructing_2010, guo_sufficient_2011}, but it is not necessary. Besides, this is not a practical solution since it requires time demanding quantum tomography techniques. One natural way for obtaining entanglement witness in CV systems is to use inseparability criteria based on matrices of moments \cite{miranowicz_inseparability_2009, gittsovich_non-classicality_2015}, an approach subsumed in Ref.~\cite{PhysRevLett.95.230502} and applied in~Refs.~\cite{PhysRevLett.84.2726, PhysRevLett.84.2722, PhysRevLett.88.120401, PhysRevA.67.052104, PhysRevLett.96.050503}, which can be generalised to hybrid system \cite{van2011optical}. Another approach was given in \cite{arkhipov2018negativity} where it was shown that the negativity volume of the generalised Wigner function can be used to detect entanglement for hybrid states. These approaches are however too sensitive to noise or too costly in terms of measurements with regard to our goals. In this work we introduce an implementable entanglement witness on a given quantum optics setup where hybrid entangled states are currently produced experimentally \cite{van2011optical, morin_remote_2014}. Our approach is inspired by the well known entanglement witness \cite{chruscinski_entanglement_2014} \begin{equation} W = \lambda\mathds{1} - \ket{\psi}\bra{\psi} \end{equation} where $\lambda \in\mathbb{R}$ is optimised such that $\Tr[W\sigma] >0$ for any separable state $\sigma$ and $\Tr[W\rho]<0$, for the largest possible set of entangled states including $\rho=\ket{\psi}\bra{\psi}$. We then adapt $W$ so that it is robust to noise using a realistic noise model, and require the measurement of only a few observables. We choose to stick to a specific experimental setup to produce a concrete and experimentally realistic example of an efficient hybrid entanglement detection. However, the construction of the witness enables its adaptation to other experimental platforms using different encodings, as for instance in \cite{gouzien2020hybrid}, as we will show. After introducing the set-up, we analyse the evolution of the hybrid entangled state of interest under a general noise model (II) and show that we can define a suitable witness using only measurable genuinely hybrid observables (III). We then discuss the efficiency of the introduced witness (IV) before concluding (V). \section{\label{stateanalysis} The setup } We start by introducing the family of entangled states we aim to characterise. \subsection{The target states} We consider the experimental quantum optics set-up described in details in Ref.~\cite{morin_remote_2014}. It is designed to produce, in the ideal scenario, the following pure state of the electromagnetic field: \begin{equation} \ket{\psi} = \frac{\ket{0} \ket{C^{-}(\alpha)} + \ket{1} \ket{C^{+}(\alpha) }}{\sqrt{2}}, \label{HybridEntangledState} \end{equation} where \begin{equation} \ket{C^{\pm}(\alpha)} = \frac{\ket{\alpha} \pm \ket{- \alpha}}{N^{\pm}(\alpha)} \label{eq:catStateDef} \end{equation} are the so-called symmetric and antisymmetric ``Schr\"odinger cat''-like states, with $\ket{\alpha}$ being a coherent state of amplitude $\alpha$, and $N^{\pm}(\alpha) = 2(1\pm \Re[e^{-2\alpha}])$, so that $\braket{\psi|\psi}=1$. Its specific advantage with respect to the hybrid state $\frac{\ket{0} \ket{\alpha}+ \ket{1}\ket{- \alpha}}{\sqrt{2}}$, which was considered in~\cite{van2011optical,jeong_generation_2014}, is that in Eq.~\eqref{eq:catStateDef} the two considered continuous variables states are orthogonal to each other for all values of $\alpha$. From now on, $\alpha$ will be taken real, without loss of generality. As for the discrete part of $\ket{\psi}$, we consider, as in Ref.~\cite{morin_remote_2014}, that $\ket{0}$ is the vacuum and $\ket{1}$ is the Fock state with one photon. However, the derivation of the witness that we present here can be adapted to other discrete encodings such as orthogonal polarization states of the photon \cite{kwon_generation_2015, FangPola}. In experiments, the produced state is noisy and should be described by a density matrix $\rho_{\text{noise}}$ instead of $\ket{\psi}$. A correct modelisation of $\rho_{\text{noise}}$ depends crucially on the type of encoding as well as on the specificities of the considered experimental setup. In the present context, we consider photon losses in both discrete and continuous channel as being the main source of noise. Such losses can be modelled by the action of a beam-splitter (BS) ~\cite{Leonhardt1993} which entangles an ideal incoming state $\ket{\psi}\bra{\psi}$ with an ancillary fluctuating quantum field. We note $\rho_{da}$ and $\rho_{ca}$ the ancillary fields, respectively on the discrete channel and the continuous channel, and these beam-splitters are refered to as TBS, for theoretical beam-splitters, in the scheme we propose in Fig.~\ref{schemeofsetupvacuum}. After recombination on the beam-splitter, two outputs are produced corresponding to the transmitted part of the beam-splitter and to the reflected one. We trace out the reflected one which corresponds to the losses, and obtain the mixed state $\rho_{\text{noise}} = \Tr_{\text{r}_{\text{c}},\text{r}_{\text{d}}}[\ket{\psi_{\text{noise}}}\bra{\psi_{\text{noise}}}]$ with \begin{align} &\ket{\psi_{\text{noise}}} \nonumber \\ &= \frac{\left( \ket{\sqrt{1\text{-}\eta}\alpha}_{\text{t}_{\text{c}}} \ket{\sqrt{\eta}\alpha}_{\text{t}_{\text{c}}} - \ket{\text{-} \sqrt{1\text{-}\eta}\alpha}_{\text{t}_{\text{c}}} \ket{\text{-}\sqrt{\eta}\alpha}_{\text{t}_{\text{c}}} \right) \ket{0}_{\text{t}_{\text{d}}} \ket{0}_{\text{r}_{\text{d}}}}{\sqrt{2}N^{-}(\alpha)} \nonumber \\ &+ \frac{ \left( \ket{\sqrt{1\text{-}\eta}\alpha}_{\text{t}_{\text{c}}} \ket{\sqrt{\eta}\alpha}_{\text{t}_{\text{c}}}\footnotesize{+}\ket{\text{-} \sqrt{1\text{-}\eta}\alpha}_{\text{t}_{\text{c}}} \ket{\text{-}\sqrt{\eta}\alpha}_{\text{t}_{\text{c}}} \right) \sqrt{1\text{-}\eta_d}\ket{1}_{\text{t}_{\text{d}}} \ket{0}_{\text{r}_{\text{d}}} }{\sqrt{2}N^{+}(\alpha)} \nonumber \\ &+ \frac{ \left( \ket{\sqrt{1\text{-}\eta}\alpha}_{\text{t}_{\text{c}}} \ket{\sqrt{\eta}\alpha}_{\text{t}_{\text{c}}}+\ket{\text{-} \sqrt{1\text{-}\eta}\alpha}_{\text{t}_{\text{c}}} \ket{\text{-}\sqrt{\eta}\alpha}_{\text{t}_{\text{c}}} \right) \sqrt{\eta_d}\ket{0}_{\text{t}_{\text{d}}} \ket{1}_{\text{r}_{\text{d}}} }{\sqrt{2}N^{+}(\alpha)} \end{align} where $\Tr_{\text{r}_{\text{c}},\text{r}_{\text{d}}}$ denotes the partial trace over the reflected modes, respectively in the continuous and discrete channels, $\text{t}_{\text{c}}$ and $ \text{t}_\text{d}$ are the transmitted modes respectively in the continuous and discrete channels and $\eta^2, \eta_d^2$ are the reflexivity of the theoretical beam-splitters, respectively for the continuous channel and for the discete channel. Therefore, $\eta$ and $\eta_d \in [0,1]$ characterise the noise in both channels $\eta_{(d)} = 0$ being the ideal case and $\eta_{(d)} =1$ the completely noisy channel. \begin{comment} \begin{figure}[h] \begin{tikzpicture} \draw(0,0)--(1,0)--(2,1)--(3,0)--(4,0) ; \draw(0,2)--(1,2)--(2,1)--(3,2)--(4,2); \draw (0,0) node[below]{vacuum} ; \draw (0,2) node[below]{$\rho_{\text{ideal}}$ }; \draw (4,2) node[below]{$\rho_{\text{noise}}$} ; \draw (4,0) node[below]{traced out}; \end{tikzpicture} \caption{A scheme of the theoretical beamsplitter on one of the channels.} \label{schemeofbs} \end{figure} \end{comment} \begin{figure} \begin{tikzpicture}[thick, scale=0.36] \draw(10,1)--(10,2) ; \draw(0,2)--(12,2) ; \draw(9,1)--(11,3) ; \draw(7,3)--(5,1) ; \draw(6,2)--(6,8) ; \draw(6,5)--(7,5) ; \draw(0,8)--(12,8) ; \draw(5,9)--(7,7) ; \draw(9,9)--(11,7) ; \draw(10,9)--(10,8) ; \draw (10,1) node[below]{$\rho_{ca}$} ; \draw (0,2) node[below]{Photon Pair} ; \draw (12,2) node[right]{Homodyne detector} ; \draw (12,8) node[right]{Homodyne detector} ; \draw (6,2) node[above left]{PBS} ; \draw (10,2) node[above left]{TBS} ; \draw (10,8) node[below left]{TBS} ; \draw (7,5) node[right]{Photon detector} ; \draw (10,9) node[above]{$\rho_{da}$} ; \draw (6,8) node[below left]{BS} ; \draw (0,8) node[above]{Squeezed vacuum} ; \end{tikzpicture} \caption{A scheme of the set-up, with the theoretical beam-splitters (TBS) which purpose are to take into account noise in the set up.} \label{schemeofsetupvacuum} \end{figure} The experimental setup we consider here uses optical fields at room temperature, so it is reasonable to take $\rho_{ca} = \rho_{da} = \ket{0}\bra{0}$. Indeed, for optical frequencies, the average number of thermal photon at room temperature is $\moy{n} = \frac{1}{e^{\frac{h \nu}{k_B T}} - 1} \approx 10^{-54}$. We nonetheless also considered the case where the fluctuating ancillary fields $\rho_{da}$ and $\rho_{ca}$ are thermal fields at finite temperature instead of vacuum, as shown in Appendix \ref{Thermalnoise}. It does not change our results qualitatively. An important aspect of the noise model we considered is that it does not increase the dimension of the pure state. Indeed, $\rho_{\text{noise}}$ can be represented as a $4\times 4$ matrix like the original $\ket{\psi}\bra{\psi}$, albeit in a different basis. The complete expression of $\rho_{\text{noise}}$ after performing the partial trace is given in Appendix \ref{ConcurrenceAppendix}. It is a ``mixed hybrid entangled states", according to the classification of Kreis and Van Loock in their seminal work \cite{kreis_classifying_2012,kreis_characterizing_2012}. Consequently, its entanglement can be studied analogously to a DV only system: one can define a subspace dependent Pauli-like algebra involving observables with a continuous spectrum in order to define an easy-to-implement EW. In order to simplify the expression of the noisy state, it is convenient to write it in the following orthonormal basis \begin{align} \{\ket{C^+(\sqrt{1-\eta}\alpha)}\ket{0},\ket{C^+(\sqrt{1-\eta}\alpha)}\ket{1}, \\ \ket{C^-(\sqrt{1-\eta}\alpha)}\ket{0},\ket{C^-(\sqrt{1-\eta}\alpha)}\ket{1}\} \label{dampedcat} \end{align} In this basis, $\rho_{\text{noise}}$ takes the following simple form: \begin{equation} \rho_{\text{noise}} = \begin{pmatrix} w & 0 & 0 & z \\ 0 & x_1 & c & 0 \\ 0 & c & x_2 & 0 \\ z & 0 & 0 & y \label{densitynoisymatrix1} \end{pmatrix} , \end{equation} where $w,z,x_1,f,x_2,z$ and $y$ are functions of $\eta$, $\eta_d$ and $\alpha$, that are given in Appendix \ref{ConcurrenceAppendix}. Another interesting aspect of being able to express the noisy state as a $4 \times 4$ system is that the photon loss noise model can be formulated as a quantum channel in terms of Kraus operators. For such, we write $\mathcal{U}(\eta)$ the operator performing the change of basis from $\left\{\ket{C^{\pm}(\alpha)}\right\}$ to the noise dependent basis $\left\{\ket{C^{\pm}(\eta\alpha)}\right\}$, for the continuous part. Then the state $\rho_{\text{noise}}$ given by Eq.~\eqref{densitynoisymatrix1} can be obtained from the ideal state $\ket{\psi}\bra{\psi}$, with the help of local Kraus operators $\mathcal{C}_i\mathcal{U}(\eta) \otimes \mathcal{D}_j (i,j = 1,2)$ as: \begin{equation} \rho_{\text{noise}} = \sum_{i,j=1}^2 \mathcal{C}_i\mathcal{U}(\sqrt{1 \text{-}\eta}\alpha) \otimes \mathcal{D}_j \ket{\psi}\bra{\psi} [\mathcal{C}_i\mathcal{U}(\sqrt{1\text{-}\eta}\alpha)]^{\dagger} \otimes \mathcal{D}_j^{\dagger} \label{IntroKrauss} \end{equation} where the operators ($ \mathcal{C}_i$) and ($\mathcal{D}_j$) are calculated in Appendix \ref{NaimarkAppendix}. For the discrete part we obtain: \begin{equation} \mathcal{D}_1 = \pmat{1}{0}{0}{\sqrt{1-\eta_d}}, \quad \mathcal{D}_2 = \pmat{0}{\sqrt{\eta_d}}{0}{0} \end{equation} which is an amplitude damping channel. The Krauss operators for the continuous part can be written as \begin{equation} \mathcal{C}_1 = \pmat{\cos{\alpha}}{0}{0}{\cos{\beta}} ,\quad \mathcal{C}_2 = \pmat{0}{\sin{\beta}}{\sin{\alpha}}{0} \end{equation} with $$\alpha = \arccos{\frac{\sqrt{(1+\exp{(- 2 (1-\eta) \alpha^2)})(1+\exp{(- 2 \eta \alpha^2)})}}{\sqrt{2+2 \exp{(- 2 \alpha^2)} }}}$$ and $$\beta = \arccos{\frac{\sqrt{(1-\exp{(- 2 (1-\eta) \alpha^2)})(1+\exp{(- 2 \eta \alpha^2)})}}{\sqrt{2-2 \exp{(- 2 \alpha^2)}}}}.$$ When $\alpha = \beta$ we obtain a dephasing channel, whereas when $\beta = 0$ we have an amplitude-damping channel~\cite{wolf2007quantum}, so for the continuous part, aside from the unitary transformation $\mathcal{U}$, the quantum channel is a combination of these two channels. An alternative encoding of DV quantum information for the discrete part of our hybrid state would use the polarisation degrees of freedom instead of the vacuum and one photon Fock state. In this case, the noise model would change, and it would be reasonable to consider instead a depolarizing channel on the discrete side. We can show that even in this case, the density matrix has the same form as the one presented in Equation \eqref{densitynoisymatrix1}. \subsection{Entanglement Characterization} As we have noted previously, for a given value of $\eta$ and $\eta_d$, the state $\rho_{\text{noise}}$ can be described by a $4 \times 4$ density matrix in an orthonormal basis which depends on the noise parameter $\eta$. This means that we can consider it as an effective $4 \times 4$ DV-system and completely characterise its entanglement \cite{van2001entangled, wang2001bipartite, kreis_characterizing_2012}. To this end, we choose the concurrence $C$ of $\rho_{\text{noise}}$, which takes the following simple form:\cite{santos2006direct}: \begin{equation} C(\rho_{\text{noise}}) = \max (0, 2c - 2 \sqrt{wy} ), \label{Concurrence} \end{equation} (See Appendix \ref{ConcurrenceAppendix}). For a 2 qubit system, as it is the case here, it is positive if and only if the state is entangled. We show in Figures~\ref{conccat1} and~\ref{concUniqueT} the variation of the concurrence $C(\rho_{\text{noise}})$ as a function of noise parameters $\eta$ and $\eta_d$ and the amplitude $\alpha$. Figure~\ref{conccat1} shows that the concurrence is decreasing with respect to the amount of noise on each channel. With $\alpha = 1$, the state is separable only when the noise is very important ($\eta = \eta_d \geq 0.8$). Now, if we set $\eta = \eta_d$, we observe in Figure~\ref{concUniqueT} that the concurrence decreases with respect to $\alpha$ and $\eta$. Besides, the entanglement of the state becomes more and more sensitive to the noise, as the amplitude $\alpha$ increases. \begin{figure} \begin{center} \includegraphics[width=1.\linewidth]{ConcurrenceHQ1F.pdf} \caption{Concurrence $C(\rho_\text{noise})$ as a function of noise parameters $\eta$ and $\eta_d$ for an amplitude $\alpha = 1$. The negative value are clipped, only the positive value indicating entanglement of $\rho_\text{noise}$ are plotted (in rainbow colours).} \label{conccat1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.\linewidth]{ConcurrenceHQMODF.pdf} \caption{Concurrence $C(\rho_\text{noise})$ as a function of the cat size $\alpha$ and the noise $\eta=\eta_d$. The negative value are clipped, only the positive value indicating entanglement of $\rho_\text{noise}$ are plotted (in rainbow colours).} \label{concUniqueT} \end{center} \end{figure} \section{ Entanglement witness} We now consider the entanglement witness $W = \frac{1}{2}\mathds{1} - \ket{\psi} \bra{\psi}$. $\Tr[W\sigma]$ is positive for all separable state since the Schmidt rank of $\ket{\psi}$ can not exceed 2. This is due to the fact that the Schmidt rank is bounded by the Hilbert space with the lowest dimension value: the one of the qubit. $W$ is well suited to detect the target state $\ket{\psi}$ since $\bra{\psi}W\ket{\psi} = -\frac{1}{2}<0$. \subsection{Noise robustness} The relevance and usefulness of $W$ is related to its ability to detect entanglement for a large set of $\rho_{\text{noise}}$ states. When computing $\Tr[W\rho_{\text{noise}}]$, we obtain \begin{equation} \Tr[W\rho_{\text{noise}}] = \omega + y - 2c. \end{equation} We show in Figures~\ref{witnesaprox} and~\ref{witnesaproxalpha} the variation of $-\Tr[W\rho_{\text{noise}}]$ (we changed the sign to compare it more easily to the concurrence) as a function of noise parameters $\eta$ and $\eta_d$ and the amplitude $\alpha$ of the cat state. Figure~\ref{witnesaprox} shows that $W$ detects entanglement even when both $\eta = \eta_d$ are equal to $0.5$ for $\alpha=1$. Since state-of-the-art optical set-ups can provide states with less than 20 \% of noise on each channel \cite{PhysRevLett.121.170403}, we consider that the robustness is satisfying. We can now discuss the witness implementation. By comparing Figure~\ref{witnesaproxalpha} with Figure~\ref{concUniqueT}, we see that for increasing $\alpha$, the region of non-detected entangled states in the form of (\ref{eq:catStateDef}) decreases: the witness tends more and more to become a necessary condition, \textit{i.e} $\Tr[W\rho_{\text{noise}}] \leq 0 \sim C(\rho_{\text{noise}})$. \begin{figure} \begin{center} \includegraphics[width=1.\linewidth]{Witness1HQF.pdf} \caption{$- \Tr[W\rho_{\text{noise}}]$ as a function of the noise parameters $\eta$ and $\eta_d$ for a cat size $\alpha=1$. The negative value are clipped, only the positive value indicating entanglement of $\rho_\text{noise}$ are plotted (in rainbow colours).} \label{witnesaprox} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.\linewidth]{WitnessHQFMod.pdf} \caption{$-\Tr[W\rho_{\text{noise}}]$ as a function of the cat size $\alpha$ and the noise $\eta=\eta_d$. The negative value are clipped, only the positive value indicating entanglement of $\rho_\text{noise}$ are plotted(in rainbow colours).} \label{witnesaproxalpha} \end{center} \end{figure} \subsection{Experimental Implementation} Measuring $W$ involves defining local projectors characterising $\ket{\psi}\bra{\psi}$ both on its discrete and its continuous parts. For the discrete part, we can safely consider the Pauli matrices $\sigma_z = \ket{0}\bra{0} - \ket{1}\bra{1}$, $\sigma_x = \ket{0}\bra{0} + \ket{1}\bra{1}$ and $\sigma_y = \frac{1}{2i}[\sigma_z,\sigma_x]$. For the continuous part, we can define analogous observables with a continuous spectrum, \textit{i.e} with the same matrix but in the \{$\ket{C^-(1-\eta)\alpha)}, \ket{C^+(1-\eta)\alpha)}$ \} basis. Specifically, \begin{align} X_C &= \ket{C^-(1-\eta)\alpha)} \bra{C^+(1-\eta)\alpha)} \nonumber \\ &+ \ket{C^+(1-\eta)\alpha)} \bra{C^-(1-\eta)\alpha)},\\ Z_C &= \ket{C^-(1-\eta)\alpha)} \bra{C^+(1-\eta)\alpha)} \nonumber \\ &+ \ket{C^+(1-\eta)\alpha)} \bra{C^-(1-\eta)\alpha)}, \\ Y_C &= \frac{1}{2i}[Z_C,X_C]. \end{align} This yields: \begin{equation} 4 \ket{\psi} \bra{\psi} = \left[ \mathds{1} + \sigma_x \otimes X_C - \sigma_y \otimes Y_C + \sigma_z \otimes Z_C \right] \label{witnessobservables} \end{equation} Observables $X_C$, $Y_C$ and $Z_C$ are non Gaussian and can not be experimentally measured in a straightforward way. In order to propose an easy way to measure the witness, we can replace them by observables that reproduce a Pauli algebra in the specific subspace of interest, that of states given in Eq. (\ref{eq:catStateDef}). In such a subspace, we can replace: \begin{equation} X_C \longrightarrow \frac{a+a^{\dagger}}{n_x},\quad Y_C \longrightarrow \frac{i(a-a^{\dagger})}{n_y} \label{paulialgebrax} \end{equation} \begin{equation} Z_C \longrightarrow \lambda_z a^{\dagger}a + \mu_z \end{equation} where $n_x, n_y, \mu_z, \lambda_z$ are normalisation factors depending weakly on the parameters $\alpha, \eta, \eta_d$ of the experiment. Such observables correspond to homodyne measurements at fixed angles. Hence, we define the new operator \begin{equation} \tilde{W} = \mathds{1} - \frac{1}{2}\left[ \mathds{1} + \sigma_x \otimes \frac{a+a^{\dagger}}{n_x} - \sigma_y\otimes \frac{i(a-a^{\dagger})}{n_y} \right] \label{finalwitness} \end{equation} which is now written in terms of observables which are currently measured in quantum optics experiments using homodyne detection ~\cite{van2011optical, morin_witnessing_2013, PhysRevLett.121.170403}. The term $\sigma_z \otimes Z_C$ has been discarded since it does not significantly change the value of $\Tr[W \rho_{\text{noise}}]$ and thus does not help to detect the entanglement of $\rho_{\text{noise}}$. As a matter of fact, it increases the difficulty to fulfil the condition $\Tr[ \tilde{W} \sigma] \geq 0 $ for all $\sigma$ separable. However, since $\tilde{W}$ is different from $W$, (their expectation values coincide only in the case of cat states) it is necessary to prove that $\tilde{W}$ is still an entanglement witness. Note that the values of $n_x$ and $n_y$ do not need anymore to be normalisation parameters: we can freely choose their values to optimise the witness. We calculate in Appendix \ref{appendixwitness} an upper bound of the expectation value of $\tilde{W}$ for separable states. It depends on the number of photons in the continuous channel and the noise parameters $\eta$ and $\eta_d$. The proof involves approximating the Hilbert space of the continuous part of the hybrid state by a finite dimensional Hilbert space spanned by the Fock states $\{\ket{n}; \widehat{N} \ket{n} = n\ket{n} \text{ and } n\leq N\}$ where $\widehat{N}$ is the photon number operator. The value of the considered cut-off $N$ must of course increase when the cat size $\alpha$ increases, but this will have an impact on the ability of the witness to detect entanglement. Therefore, a balance must be found between the parameters $\alpha , \eta, \eta_d$ in order to detect the entanglement of $\rho_{\text{noise}}$. The detection of entanglement can now be carried out according to the following procedures: we choose a cut-off $N$, compute $n_x$ and $n_y$ such that no separable states within the sub-Hilbert space can violate the upper bound of the witness, and consider that the states we produce are in this subspace. This method is easy to test experimentally but over-evaluates the upper bound for separable states, as detailed in Appendix \ref{appendixwitness}, and necessitates the assumption that the states produced experimentally have no components on the Fock states $\ket{n}$ for $n \geq N$. We propose a second method which requires additional measurements but does not necessitate to make this assumption, and that is more accurate with respect to the upper bound of the separable states. We explain it briefly here and more precisely in Appendix \ref{Control}. We use the method described in \cite{qi2020characterizing} to estimate the photon number distribution of the experimental states on the continuous channel. Using two conjugate homodyne detectors on this channel, we are able to measure simultaneously two orthogonal quadratures of the electromagnetic field. The sum of the square of these two output approximates sufficiently well the photon number operator $\widehat{N}$ to obtain the photon number distribution with a very good precision. Thanks to this knowledge, we are able to determine precisely the cut-off $N$ of the continuous channel without assumptions \textit{a priori}, and to compute an upper bound on the separable states more precise than the one obtained by Method 1. Finally, we also give in Appendix \ref{Control}, for experimental purposes, an alternative protocol for Method 2 which necessitates only one homodyne detector for the continuous channel but at the expense of the accuracy in the photon number distribution estimation. We summarise the two methods in the following table: \begin{table}[h!] \centering \begin{adjustbox}{max width=250 pt} \begin{tabular}{|l|c|c|c|} \hline \hspace{4cm} & Method 1 & \multicolumn{2}{c|}{Method 2} \\ \cline{3-4} \hline Assumption on the dimension & Yes & \multicolumn{2}{c|}{No} \\ \cline{3-4} \hline Evaluation of the photon statistic & No & \multicolumn{2}{c|}{Yes} \\ \cline{3-4} \hline Number of homodyne detectors & 1 & 1 & 2 \\ \hline Robustness to noise & Standard & Increased & Optimal\\ \hline \end{tabular} \end{adjustbox} \label{WitnessComparison} \end{table} We illustrate Method 1 with two plots. Figure \ref{witnestrunca} shows the evolution of $\Tr[\tilde{W}\rho_{\text{noise}}]$ as a function of the noise parameters $\eta = \eta_d$, with $\alpha= 1$ and a cut-off at $N=3$. We see that the critical $\eta$ parameter, $\eta_c$ is equal to $22 \%$. We plot in Figure~\ref{NoisevsFock} $\eta_c$ against $N$ for $\alpha =1$, $\alpha =1.3$ and $\alpha =1.6$ to show the sensibility of $\eta_c$ to $N$ and $\alpha$. Method 2 is intended to be used on experiments. In order to test its relevance, we simulated experiments, like in part IV of \cite{qi2020characterizing}, with very good precision in the photon number distribution, that showed we could obtain $\eta_c \approx 20 \%$ for $\alpha = 1$, which is reasonable. \begin{figure} \begin{center} \scalebox{.7}{\input{HEWcut.tex}} \caption{$\Tr[\tilde{W}\rho_{\text{noise}}]$ as a function of the noise parameters $\eta = \eta_d$ (same on both modes), with $\alpha= 1$ and cut-off at $N=3$. Entanglement is detected in the green zone, undetected in the red zone. $\eta_c = 0.24$.} \label{witnestrunca} \end{center} \end{figure} \begin{figure} \begin{center} \scalebox{.7}{\input{EWnoiseN.tex}} \caption{Critical percentage of noise vs cut-off in the Fock space. The gold band corresponds to typical values of noise observed in state-of-the-art experiments \cite{lejeannic:tel-01665496}. Green zone shows a zone where the detection is experimentally easy. Red zone shows values of noise harder to obtain. $\eta_c = 0$ corresponds to an ideal case.} \label{NoisevsFock} \end{center} \end{figure} \section{Discussion} In the present paper we considered the detection of a useful entangled state currently experimentally produced in quantum optics experiments. Our entanglement witness requires, to be evaluated, only the measurements of correlations between two Pauli matrices on the discrete side, and two quadratures of the field on the continuous side. Hence, contrary to the detection of a Wigner function or even of its negativity \cite{arkhipov2018negativity}, we do not need to measure displacement operators, nor do we need to use Photon Number Resolving (PNR) detectors \cite{laiho2009direct, sridhar2014direct}. The proposed witness can be measured using homodyne detectors in both sides, discrete and continuous. This would only require to lock the phase of the local oscillator at two angles, to obtain two orthogonal quadratures $\hat{x}$ and $\hat{p}$, whereas in a full tomography the measurement of all possible orthogonal quadratures is required. We summarise the proposed measurement protocol as follows : \begin{framed} \begin{enumerate} \item Lock the phase of the local oscillators on the homodyne detectors to detect \^x \item Record data on both sides \item Compute correlations $\moy{\sigma_X \otimes \frac{a+a^{\dagger}}{n_x(\alpha, \eta_X)}}_{\rho_{\text{exp}}}$ \item Lock the phase of the local oscillators on the homodyne detectors to detect \^p \item Record data on both sides \item Compute correlations $\moy{\sigma_Y \otimes \frac{a-a^{\dagger}}{n_y(\alpha, \eta_Y)}}_{\rho_{\text{exp}}}$ \item If Method 2 is chosen, compute the bound on the separable states \item Compute the value of the Witness \end{enumerate} \label{Protocol} \end{framed} \section{Conclusion} We have presented an implementable hybrid entanglement witness. that can be experimentally detected with only a few relatively easy to perform measurements. This was achieved, in a first step, by identifying observables with a continuous spectrum to Pauli matrices in a specific subspace. Such identification was possible thanks to the fact that noise, in the considered subspace, does not increase its dimension. In a second step, we replaced such observables by others, easier to measure, that coincide within the targeted subspace. We hope this work can help to understand better the subtle features of hybrid entanglement and, more generally, hybrid quantum protocols, both theoretically and experimentally. \section*{Acknowledgments} We acknowledge fruitful discussions with T. Darras, J. Laurat, L. Garbe and N. Fabre. G.M. acknowledges support from the French Agence Nationale de la Recherche (ANR-17-CE30-0006). \clearpage \small{\bibliographystyle{unsrt}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:intro} Radiative transfer equation describes the physical phenomenon of energy transport in radiation. It has a variety of applications, such as cooling glass and heat transfer in gas turbines. In this paper we consider a model of glass cooling with the radiative heat transfer equation coupled with a heat equation. The model is given by \begin{equation}\label{eq1} \begin{cases} \displaystyle c_{m}\rho_{m}\partial_{t}T=k_{h}\Delta T-\int_{0}^{\infty}\int_{\mathbb{S}^{2}}\kappa\left(\mathcal{B}-\psi\right)d\beta d\nu,~~t>0,x \in \Omega,\\ \displaystyle \frac{1}{c}\partial_{t}\psi +\beta \cdot\nabla \psi=\kappa\left(\mathcal{B}-\psi\right), ~~t>0, (x,\beta,\nu)\in \Omega\times\mathbb{S}^{2}\times\mathbb{R}_+. \end{cases} \end{equation} Here $\Omega \subset \mathbb{R}^3$ is a bounded domain, $\mathbb{S}^2$ is the unit sphere in $\mathbb{R}^3$. The function $T=T(t,x)$ denotes the temperature of the medium and $\psi=\psi(t,x,\beta,\nu)$ describes the specific radiation intensity at $x\in \Omega$ traveling in direction $\beta\in \mathbb{S}^{2}$ with frequency $\nu>0$ at time $t>0$. The constants $c_{m}$,$\rho_{m}$, $k_{h}$, $\kappa$ and $c$ are the specific heat, the density, the thermal conductivity, the opacity coefficient, and the speed of light, respectively. Furthermore, $\mathcal{B}=\mathcal{B}(\nu,T)$ denotes the Planck's function $$ \mathcal{B}\left(\nu,T\right):=\frac{2h_{p}\nu^{3}}{c^{2}\left(e^{\frac{h_{p}\nu}{k_{b}T}}-1\right)} $$ for black body radiation in glass. Here $h_{p}$ is the Planck's constant and $k_{b}$ is the Boltzmann's constant. We refer the reader to \cite{frank2010optimal}, \cite{modest2013radiative} and references therein for more radiative heat transfer models. In order to solve the glass cooling model \eqref{eq1}, we need to provide initial and boundary conditions for $T$ and $\psi$. The initial conditions are taken to be \begin{align*} T(t=0,x) = ~& T_0(x), \; \text{for any }x\in\Omega \\ \psi(t=0,x,\beta,\nu) = ~& \psi_0(x,\beta,\nu), \; \text{for any } (x,\beta,\nu)\in \Omega\times\mathbb{S}^2\times\mathbb{R}_+. \end{align*} The boundary condition for the temperature $T$ is the following Robin boundary condition \begin{align}\label{eq:robinfort} k n \cdot\nabla T(t,x)=h_{c}\left(T_{b}(t,x)-T(t,x)\right), \text{ for any }t>0,\; x\in\partial\Omega. \end{align} Here $T_b=T_b(t,x)>0$ is a nonnegative function, $n=n(x)$ is the outward unit normal vector to the boundary $\partial\Omega$, $h_{c}$ is the convective heat transfer coefficient and $k \ge 0$ is a constant. When $k=0$, this corresponds to a nonhomogenous Dirichlet boundary condition. To give the boundary condition for $\psi$, we define the boundary set $\Sigma = \partial\Omega\times\mathbb{S}^2$ and \begin{equation}\label{gt0.1} \Sigma_{-}:=\left\{(x,\beta)\in\partial\Omega\times\mathbb{S}^{2},\; \beta\cdot n(x)<0\right\}, \end{equation} \begin{equation}\label{gt2.1} \Sigma_{+}:=\left\{ (x,\beta)\in \partial \Omega \times\mathbb{S}^{2},\;\beta\cdot n(x)> 0\right\}. \end{equation} The boundary condition for the specific radiation intensity is taken to be the the following reflecting absorbing mixed condition \begin{align*} \psi \left(t,x,\beta,\nu\right)=\alpha \psi_b(t,x,\beta)+(1-\alpha) \psi(t,x,\beta',\nu), \quad t>0,\;\left(x,\beta\right)\in \Sigma_{-},\;\nu \in\mathbb{R}_+. \end{align*} Here $\psi_b=\psi_b(t,x,\beta)$ is a given function defined on the half surface $\Sigma_-$ and it describes the radiative intensity transmitted into the medium from outside. The coordinate $\beta^{'}\in\mathbb{S}^2$ is the exiting radius which specularly reflects into the incident radius $\beta$ as $ \beta' = \beta - 2 (n(x)\cdot \beta) n(x)$, and $\alpha \in (0,1)$ is a constant. Next, we give the dimensionless form of the system \eqref{eq1}. We introduce the nondimensional parameter $\varepsilon=1/\kappa_{r}x_{r}$, where $x_{r}$ and $\kappa_{r}$ are the length scale and reference absorption, respectively. Physically $\varepsilon$ represents the ratio of a typical photon mean free path to a typical length scale of the problem. The rescaled system is given by \begin{equation* \begin{cases} \displaystyle \varepsilon^{2}\partial_{t}T=\varepsilon^{2}k\Delta T-\int_{0}^{\infty}\int_{\mathbb{S}^{2}}\kappa\left(\mathcal{B}-\psi\right) d\beta d\nu,~~ t>0,\;x\in\Omega,\\ \displaystyle \varepsilon^{2}\frac{1}{c}\partial_{t}\psi +\varepsilon\beta \cdot\nabla \psi=\kappa\left(\mathcal{B}-\psi\right),~~ t>0,\;(x,\beta,\nu)\in\Omega\times\mathbb{S}^{2}\times\left(0,\infty\right). \end{cases} \end{equation*} See \cite{frank2010optimal} for mores details on the derivation. We consider the glass cooling in the grey medium, that is $\mathcal{B}$ does not depend on the frequency $\nu$. The specular black body intensity $\mathcal{B}$ is then given by $\mathcal{B}=\frac{\sigma}{\pi} T^{4}$, according to the Stefan-Boltzmann law. For simplicity, we take all the constants in \eqref{eq1} and \eqref{eq:robinfort} to be the same $k=\kappa=h_{c}=c=1$, and take $\sigma=\pi$. Since the solutions of the above system depends on $\varepsilon$, we introduce new notations $T_{\varepsilon} = T_\varepsilon(t,x)$ and $\psi_{\varepsilon}=\psi_\varepsilon(t,x,\beta)$ to represent the temperature and the radiative intensity, respectively. We introduce the notation $\langle \psi_{\varepsilon}\rangle :=\int_{\mathbb{S}^{2}}\psi_{\varepsilon} d\beta$ which is the radiative density, the system \eqref{eq1} then can be written as \begin{align} \partial_t T_\varepsilon = ~& \Delta T_\varepsilon +\frac{1}{\varepsilon^2}\langle \psi_\varepsilon - T_\varepsilon^4\rangle , \label{eq:Teps} \\ \partial_t \psi_\varepsilon + \frac{1}{\varepsilon} \beta \cdot \nabla \psi_\varepsilon = ~&-\frac{1}{\varepsilon^2}(\psi_\varepsilon - T_\varepsilon^4). \label{eq:psieps} \end{align} The initial conditions are taken to be \begin{align} T_\varepsilon(t=0,x) = ~& T_{\varepsilon 0}(x), ~~ \text{ for any } x\in\Omega \label{eq:ic1}\\ \psi_\varepsilon(t=0,x,\beta) = ~& \psi_{\varepsilon 0}(x,\beta), ~~ \text{ for any } x\in \Omega, \beta\in\mathbb{S}^2. \label{eq:ic2} \end{align} The boundary condition for $\psi_\varepsilon$ is taken to be \begin{align}\label{bpsi} \psi_\varepsilon(t,x,\beta) = \alpha \psi_b(t,x,\beta) + (1-\alpha)(L \psi_\varepsilon)(t,x,\beta),\,t>0, \quad (x,\beta) \in \Sigma_{-}, \end{align} where the operator $L$ is defined by \begin{align}\label{eq:reflectop} L(f(x,\beta)):=f(x,\beta')=f(x,\beta-2(n(x)\cdot \beta)n(x)). \end{align} The boundary data for $T_\varepsilon$ is taken to be one of the following three conditions: \begin{enumerate}[label=(\Alph*)] \item On the torus: \begin{align} \Omega=\mathbb{T}^3, \label{b1 \end{align} \item Dirichlet boundary condition: \begin{align} T_\varepsilon(t,x)=T_b(t,x), \text{ for any } x \in\partial \Omega, \label{b3} \end{align} \item Robin boundary condition: \begin{align} \varepsilon^r n \cdot \nabla T_\varepsilon(t,x) = -T_\varepsilon(t,x) + T_b(t,x), \text{ for any } x\in \partial \Omega. \label{b2} \end{align} \end{enumerate} Here $r \ge 0$ is a nonnegative constant. The parameter $\varepsilon$ is usually small in applications and it plays an important role in the system \eqref{eq:Teps}-\eqref{eq:psieps}. It is interesting and physically meaningful to study the behavior of its solutions as $\varepsilon \to 0$. We call such a limit the diffusive limit. The objective of this paper is to study the diffusive limit rigorously. First we derive the limit system formally. \subsection{Formal derivation of the limit system.} By equation \eqref{eq:psieps}, \begin{align}\label{eq:10102} \psi_{\varepsilon}=T_{\varepsilon}^{4}-\varepsilon\beta \cdot\nabla \psi_{\varepsilon} -\varepsilon^{2} \partial_{t}\psi_{\varepsilon}. \end{align} Therefore, for small $\varepsilon$, we have \[ \psi_{\varepsilon}=T_{\varepsilon}^{4}-\varepsilon\beta \cdot\nabla\left( T_{\varepsilon}^{4}-\varepsilon\beta \cdot\nabla \psi_{\varepsilon}-\varepsilon^2\partial_t \psi_\varepsilon\right)-\varepsilon^{2} \partial_{t}\left(T_{\varepsilon}^{4}-\varepsilon\beta \cdot\nabla \psi_{\varepsilon} -\varepsilon^{2} \partial_{t}\psi_{\varepsilon}\right). \] Combing the terms with the same order gives \[ \psi_{\varepsilon}=T_{\varepsilon}^{4}-\varepsilon \beta \cdot\nabla T_{\varepsilon}^{4}-\varepsilon^{2}\left(\partial_{t}T_{\varepsilon}^{4}-\beta \cdot\nabla\left(\beta \cdot\nabla \psi_\varepsilon\right)\right)+ \varepsilon^3(\beta \cdot \partial_t \psi_\varepsilon + \beta \cdot \nabla \partial_t \psi_\varepsilon) + \varepsilon^4 \partial_t^2 \psi_\varepsilon. \] We can use \eqref{eq:10102} again in the second term in the above equation and obtain \begin{align}\label{eq:psiasymp} \psi_\varepsilon = ~& T_\varepsilon^4 - \varepsilon \beta \cdot \nabla T_\varepsilon^4 - \varepsilon^2 \left(\partial_t T_\varepsilon^4 - \beta \cdot \nabla (\beta \cdot \nabla T_\varepsilon^4)\right) \nonumber\\ &+ \varepsilon^3(-\beta \cdot \nabla(\beta \cdot \nabla (\beta \cdot \nabla \psi_\varepsilon))+\beta \cdot \partial_t \psi_\varepsilon + \beta \cdot \nabla \partial_t \psi_\varepsilon) \nonumber\\ &+ \varepsilon^4 (- \beta \cdot\nabla(\beta \cdot \nabla \partial_t\psi_\varepsilon) +\partial_t^2 \psi_\varepsilon). \end{align} Assuming $T_\varepsilon \in C_{t,x}^2$ and $\psi_\varepsilon \in C_{t,x,\beta}^3$ are bounded, and assuming \[T_\varepsilon \to \overline{T}, \quad \psi_\varepsilon \to \overline{\psi} \quad \text{as }\varepsilon\to 0,\] we can pass to the limit $\varepsilon\to0$ in \eqref{eq:psiasymp} and get \[\overline{\psi} = \overline{T}^4.\] We can also use \eqref{eq:psiasymp} to find that the radiative density $\langle \psi_{\varepsilon}\rangle$ satisfies \begin{align*} \langle \psi_\varepsilon \rangle =~& 4\pi T_{\varepsilon}^{4}-\varepsilon^{2}4\pi\partial_{t}T_{\varepsilon}^{4}+\varepsilon^{2}\frac{4\pi}{3}\Delta T_{\varepsilon}^{4} \\ &+ \varepsilon^3\langle (-\beta \cdot \nabla(\beta \cdot \nabla (\beta \cdot \nabla \psi_\varepsilon))+\beta \cdot \partial_t \psi_\varepsilon + \beta \cdot \nabla \partial_t \psi_\varepsilon) \rangle \nonumber\\ &+ \varepsilon^4 \langle(- \beta \cdot\nabla(\beta \cdot \nabla \partial_t\psi_\varepsilon) +\partial_t^2 \psi_\varepsilon)\rangle. \end{align*} This enables us to pass to the limit on the last term in \eqref{eq:Teps}: \begin{align*} \frac{1}{\varepsilon^2}(\langle \psi_\varepsilon - T_\varepsilon^4\rangle) \to -4\pi\partial_t \overline{T}^4 + \frac{4\pi}{3}\Delta\overline{T}^4. \end{align*} We can then pass to the limit in \eqref{eq:Teps} and derive the following nonlinear limit system \begin{align}\label{hgm1.0} \partial_{t}\left(\overline{T}+4\pi \overline{T}^{4}\right)=~&\Delta\left(\overline{T}+\frac{4\pi}{3}\overline{T}^{4}\right),\\ \overline{T}(t=0,x) =~& \overline{T}_0(x) = \lim_{\varepsilon \to 0} T_{\varepsilon 0}(x), \quad x\in\Omega, \end{align} associated with suitable boundary conditions which will be given in section \ref{section3}. \subsection{Main results of the paper} Before introducing our main results in this work, we start by giving some assumptions on the initial and boundary values. \begin{itemize} \item \textbf{Well-prepared initial conditions} \begin{align}\label{eq:wellinitial} \lim_{\varepsilon \to 0}(\psi_{\varepsilon0}(x) - T_{\varepsilon0}^4)=0, \;\text{for all } x\in\Omega, \end{align} \item \textbf{Well-prepared boundary conditions} in the case of Dirichlet boundary condition \eqref{b3}, namely \begin{align}\label{eq:wellbc} \psi_b(t,x,\beta) = T_b(t,x)^4, \; \text{for all } t>0, \text{ and }(x,\beta)\in \Sigma_-. \end{align} Notice that for the case of Robin boundary condition \eqref{b2}, the well-prepared boundary condition assumption is not needed. The case of general initial and boundary conditions will be discussed in \cite{Bounadrylayer2019GHM2}. \end{itemize} We now state the main results in the following theorem. \begin{theorem} Suppose the initial condition \eqref{eq:ic1}-\eqref{eq:ic2} are bounded in the space $T_{\varepsilon 0} \in L^5(\Omega), \psi_{\varepsilon0} \in L^2(\Omega\times\mathbb{S}^2)$ and the boundary condition \eqref{bpsi} and \eqref{b1}-\eqref{b2} are bounded in in the space $T_b \in L_{\operatorname{loc}}^5([0,\infty);L^5(\partial \Omega))$ and $\psi_b \in L_{\operatorname{loc}}^2([0,\infty); L^2(\Sigma_-;|n\cdot\beta| d\beta d\sigma_x))$. Then the following statements hold. \begin{enumerate} \item[(1)] \textbf{Existence of weak solution: } There exists a weak solution for the system \eqref{eq:Teps}-\eqref{eq:psieps} with initial conditions \eqref{eq:ic1}-\eqref{eq:ic2} and boundary condition \eqref{bpsi} for $\psi_\varepsilon$ with boundary condition \eqref{b1},\eqref{b3} or \eqref{b2} for $T_\varepsilon$. \item[(2)] \textbf{Diffusive limit}: As $\varepsilon \to 0$, the weak solution $(T_\varepsilon,\psi_\varepsilon)$ to the system \eqref{eq:Teps}-\eqref{eq:psieps} converges to $(\overline{T},\overline{T}^4)$, where $\overline{T}$ is the weak solution of the system \eqref{hgm1.0} with boundary conditions that $\Omega = \mathbb{T}^3$ for the case (A) and $\overline{T}=T_b$ on the boundary for the case (B) and (C). \item [(3)] \textbf{Rate of convergence}: Assume $\overline{T}$ is a strong solution to the system \eqref{hgm1.0} which has a positive lower bound. Then \begin{align*} \|T_\varepsilon(t)-\overline{T}(t)\|_{L^4(\Omega)}^4+\|\psi_\varepsilon(t)-\overline{\psi}(t)\|_{L^2(\Omega)}^2 \le C \|T_{\varepsilon 0}-\overline{T}_0\|_{L^4(\Omega)}^4+C\varepsilon^s, \end{align*} where $s>0$ is a positive constant and takes the value $s=2,1,\min{(1,r)}$ for the case of boundary conditions \eqref{b1},\eqref{b3},\eqref{b2}, respectively. \end{enumerate} \end{theorem} The main contribution of the present work is to give a more rigorous study of the radiative heat transfer system and its diffusive limit. We prove the global existence of weak solutions for the system and the convergence of the weak solutions to a nonlinear diffusion model under the diffusive limit. Our work extend the analysis made by Klar and Schmeiser in \cite{klar2001numerical}, where the existence and diffusive limit were established for smooth solutions. In their work, some extra assumptions on the solutions (which are not known to hold) were needed. Here we do not need these assumptions. The major difficulties in our work lie in the nonlinearity and lack of compactness of the system \eqref{eq:Teps}-\eqref{eq:psieps}. To overcome the difficulties, we use Young measure theory and averaging lemmas. The Young measure is applied to deal with the nonlinearity and the avaraging lemma is applied to get the compactness. The diffusive limit can thus be rigorously justified. Assuming additional regularity on the limit system, the relative entropy method can be used to give the rate of convergence for the diffusive limit. However, this method does not work in the case of the Robin boundary condition ($r=0$), due to the boundary layers. A lot of literature is devoted to the mathematical analysis and numerical computations of the radiative heat transfer system \cite{larsen2002simplified, asllanaj2004transient, ghattassi2016galerkin,pinnau2007analysis,chebotarev2016nondegeneracy,ghattassi2018reduced}. Besides the work \cite{klar2001numerical} on the same model considered here, there are some works on similar models \cite{porzio2004application,golse2008rosseland,amosov2017unique,amosov2016unique, ghattassi2018existence}. For example, the existence and uniqueness of strong solutions for the non-grey coupled convection-conduction radiation system were proved in \cite{porzio2004application} using accretive operators theory. In \cite{golse2008rosseland}, the authors discussed the existence of weak solutions for a grey radiative transfer system without diffusion term in the temperature equation in a bounded domain with non-homogeneous Dirichlet boundary conditions. They supposed that the radiative boundary data do not depend on the direction in order to avoid the boundary layer. The main tools used to prove the existence of weak solutions are the compactness argument based on a maximum principle and velocity averaging lemma. Furthermore, the existence and uniqueness of weak solution for the stationary nonlinear heat equation and the integro-differential radiative transfer equation for semitransparent bodies were studied in \cite{amosov2017unique}, where the authors took into account the effects of reflection and refraction of radiation according to the Fresnel laws at the boundaries of bodies. More recently, in \cite{ghattassi2018existence}, the authors proved the local existence and uniqueness of strong solutions for the radiative heat transfer system under different types of boundary conditions by using the Banach fixed point theorem. The time derivative term in the radiative transfer equation \eqref{eq:psieps} was also neglected therein. The diffusion limit in radiative heat transfer system can be studied via Rosseland approximations \cite{bardos1987rosseland,bardos1988nonaccretive}. In \cite{bardos1987rosseland}, the authors derived the Rosseland approximation on a different radiative transfer equation where the solution also depends on the frequency variable $\nu$. Using the so-called Hilbert's expansion method, they proved the strong convergence of the solution of the radiative transfer equation to the solution of the Rosseland equation for well prepared boundary data. Then, in \cite{bardos1988nonaccretive}, under some weak hypotheses on the various parameters of the radiative transfer equation, the Rosseland approximation was proved in a weak sense. More recently, in \cite{debussche2015diffusion,debussche2016diffusion}, the authors studied the diffusive limit of a stochastic kinetic radiative transfer equation, which is nonlinear and includes a smooth random term. They used a stochastic averaging lemma to show the convergence in distribution to a stochastic nonlinear fluid model. Moreover, there exists a wide literature on the diffusion limits for other kinds of kinetic systems, with various viewpoints and applications \cite{diperna1979uniqueness,dafermos1979stability,dafermos1979second,carrillo2001entropy,masmoudi2007diffusion,dolbeault2007non,saint2009hydrodynamic,el2010diffusion,lattanzio2013relative}. For example, in \cite{masmoudi2007diffusion}, the authors studied the diffusive limit of a semiconductor Boltzmann-Possion system. The method of moments and a velocity averaging lemma were used to prove the convergence of its renormalized solution towards a global weak solution of a drift-diffusion-Poisson model. Similar methods have been used to study the hydrodynamic limit of Boltzmann equation \cite{saint2009hydrodynamic,lions2001boltzmanna,lions2001boltzmannb}. The hydrodynamic limit of Boltzmann equation can also be studied using the relative entropy method, for example to show the incompressible limit to Euler and Navier-Stokes equations \cite{saint2009hydrodynamic}. The origins of the relative entropy method come from continuum mechanis, see \cite{dafermos1979second} for more details. The principle of this method is to measure in a certain way the distance between two solutions in some given space. This method was also used in the stability and asymptotic limit for different type of PDEs, for instance see \cite{el2010asymptotic,demoulini2012weak,lattanzio2013relative,tzavaras2005relative}. The paper is organized as follows. In the next section, the Galerkin approximation is used to show the existence of global weak solution of the radiative heat transfer system. Then we prove the convergence of the weak solutions to a nonlinear parabolic equation in the diffusive limit in Section \ref{section3}, by using the averaging lemma and the theory of Young measures. Moreover, we recover the boundary condition for the nonlinear parabolic limit equation by using trace theorems. In Section \ref{section4}, we give the convergence rate of the diffusive limit by using the relative entropy method. {\bf Notations: } In this paper, we use $\|\cdot\|_{L^{p}}$ to denote the natural norm on $L^{p}(\Omega)$, for $p\in[1,\infty]$ and $\|\cdot\|_{H^{s}}$ is the norm on the sobolev space $H^{s}(\Omega)$, $s>0$. We use $\langle \cdot \rangle$ to denote the integral over $\beta \in \mathbb{S}^2$. $C_{t,x}$ is the space of continuous functions in time and space. \section{Global existence of weak solutions}\label{section2} In this section we prove the global existence of weak solutions for the radiative heat transfer system \eqref{eq:Teps}-\eqref{eq:psieps} under three different boundary conditions: torus, nonhomogeneous Dirichlet condition and Robin condition. We first consider the case of torus, i.e. $\Omega=\mathbb{T}^3$. \subsection{The case of torus} We first prove the existence theorem for the case of torus. The idea of the proof can be modified to deal with bounded domain, which will be done later in this section. Before stating the existence theorem, we first introduce the definition of weak solutions. \begin{definition}\label{df1} Let $T_{\varepsilon0} \in L^5(\mathbb{T}^3)$ and $\psi_{\varepsilon0} \in L^2(\mathbb{T}^3 \times \mathbb{S}^2)$. We say that $(T_\varepsilon,\psi_\varepsilon)$ is a weak solution of the system \eqref{eq:Teps}-\eqref{eq:psieps} with initial conditions \eqref{eq:ic1}-\eqref{eq:ic2} if \begin{align}\label{eq:funsp} &T_\varepsilon \in L^\infty (0,\infty;L^5(\mathbb{T}^3))\cap C_w([0,\infty);L^5(\mathbb{T}^3)),\;\nabla T_\varepsilon^{\frac{5}{2}} \in L^2([0,\infty);L^2(\mathbb{T}^3)),\\ &\psi_\varepsilon \in L^\infty(0,\infty; L^2(\mathbb{T}^3 \times \mathbb{S}^2))\cap C_w([0,\infty);L^2(\mathbb{T}^3\times\mathbb{S}^2)), \end{align} and it solves \eqref{eq:Teps}-\eqref{eq:psieps} in the sense of distributions, i.e., for any test functions $\varphi \in C^\infty([0,\infty)\times\mathbb{T}^3)$ and $\rho\in C^\infty([0,\infty)\times\mathbb{T}^3 \times \mathbb{S}^2)$, the following equations hold: \begin{align} &-\iint_{[0,\infty)\times\mathbb{T}^3} \left(T_\varepsilon\partial_t \varphi + T_\varepsilon \Delta \varphi + \frac{1}{\varepsilon^2}\int_{\mathbb{S}^2} \varphi(\psi_\varepsilon - T_\varepsilon^4) d\beta \right) dxdt = \int_{\mathbb{T}^3} T_{\varepsilon0} \varphi(0,\cdot)dx, \label{eq:weakt1}\\ &-\iiint_{[0,\infty)\times\mathbb{T}^3 \times \mathbb{S}^2} \left(\psi_\varepsilon \partial_t \rho + \frac{1}{\varepsilon} \psi_\varepsilon \beta \cdot \nabla \rho - \frac{1}{\varepsilon^2} \rho(\psi_\varepsilon-T_\varepsilon^4)\right) d\beta dxdt \nonumber\\ &\quad= \iint_{\mathbb{T}^3 \times \mathbb{S}^2} \psi_{\varepsilon0}\rho(0,\cdot) dx. \label{eq:weakt2} \end{align} \end{definition} Next we prove the following existence theorem: \begin{theorem}\label{thm:existencet} Let $T_{\varepsilon0} \in L^5(\mathbb{T}^3)$ and $\psi_{\varepsilon0} \in L^2(\mathbb{T}^3 \times \mathbb{S}^2)$. Then there exists a global weak solution $(T_\varepsilon,\psi_\varepsilon)$ to the system \eqref{eq:Teps}-\eqref{eq:psieps} with initial data \eqref{eq:ic1}-\eqref{eq:ic2}. Moreover the following energy inequality holds for all $t>0$: \begin{align} &\frac{1}{5}\|T_\varepsilon(t,\cdot)\|_{L^5(\mathbb{T}^3)}^5 + \frac{1}{2}\|\psi_\varepsilon(t,\cdot,\cdot)\|_{L^2(\mathbb{T}^3\times \mathbb{S}^2)}^2 + \frac{16}{25}\int_0^t \|\nabla T_\varepsilon^{\frac{5}{2}}(\tau,\cdot)\|_{L^2}^2 d\tau \nonumber \\ &\quad\quad+ \frac{1}{\varepsilon^2} \int_0^t \|\psi_\varepsilon(\tau,\cdot,\cdot) - T_\varepsilon^4(\tau,\cdot)\|_{L^2(\mathbb{T}^3 \times \mathbb{S}^2)}^2 d\tau \nonumber \\ &\quad \le \frac{1}{5} \|T_{\varepsilon0}\|_{L^5(\mathbb{T}^3)}^5 + \frac{1}{2} \|\psi_{\varepsilon0}\|_{L^2(\mathbb{T}^3\times \mathbb{S}^2)}^2. \label{eq:energythm1} \end{align} \end{theorem} \begin{proof} To prove Theorem \ref{thm:existencet}, we construct an approximate system using Galerkin approximations in finite dimensions, and then show the system converges as the dimension goes to inifinity with the limit satisfying \eqref{eq:weakt1}-\eqref{eq:weakt2}. \textbf{Construction of a Galerkin approximate system.} We first construct a finite dimensional approximations to the system \eqref{eq:Teps}-\eqref{eq:psieps} using Fourier series. We take the Fourier series of a $L^s$ ($s\ge1$) function to be \[f(x) = \sum_{k \in \mathbb{Z}^d} \hat{f}(k) e^{ik\cdot x},\] and define the operator $\mathbb{P}_m:L^s \mapsto L^s$ ($s \ge 1$) as \[\mathbb{P}_m f(x) = \sum_{|k| \le m} \hat{f}(k) e^{ik\cdot x}.\] Notice that $\mathbb{P}_m$ commutes with derivatives and convolutions. For function $g=g(x,\beta)$ defined on $\mathbb{T}^3\times \mathbb{S}^2$, \[\mathbb{P}_m g(x,\beta) =\sum_{|k| \le m} \hat{g}(k,\beta) e^{ik\cdot x}.\] We take the $m$-th Galerkin approximate system to be \begin{align} \partial_t T_\varepsilon^m =~& \Delta T_\varepsilon^m + \frac{1}{\varepsilon^2} \int_{\mathbb{S}^2} \left(\psi_{\varepsilon}^m - \mathbb{P}_m\left((T_\varepsilon^m)^4\right)\right) d\beta, \label{eq:gl1}\\ \partial_t \psi_\varepsilon^m + \frac{1}{\varepsilon} \beta \cdot \nabla \psi_\varepsilon^m =~& -\frac{1}{\varepsilon^2}\left(\psi_\varepsilon^m - \mathbb{P}_m\left((T_\varepsilon^m)^4\right)\right). \label{eq:gl2} \end{align} The initial data is taken to be \begin{align*} T_\varepsilon^m \big|_{t=0} = \mathbb{P}_mT_{\varepsilon0},\quad \psi_{\varepsilon}^m \big|_{t=0} = \mathbb{P}_m \psi_{\varepsilon0}. \end{align*} We make a change of variable $\xi=x-\frac{1}{\varepsilon}\beta t$ and equation \eqref{eq:gl2} changes into \[\frac{d}{dt} \psi_\varepsilon^m (t,\xi)= -\frac{1}{\varepsilon^2}\left(\psi_\varepsilon^m(t,\xi)- \mathbb{P}_m\left((T_\varepsilon^m)^4\right)\right),\] which is an ODE in finite dimensional space. Since \eqref{eq:gl1} is also an ODE in finite dimensional space, the system \eqref{eq:gl1}-\eqref{eq:gl2} has a unique solution $(T_\varepsilon^m,\psi_\varepsilon^m)$ on a maximal time interval $t_m$, according to the Cauchy-Lipschitz theorem. The maximal existence time $t_m$ is characterized by \begin{align*} \lim_{t\to t_m^{-}}\sup \left(\|T_\varepsilon^m\|_{L^5(\mathbb{T}^3)}^5 + \|\psi_\varepsilon^m\|_{L^2(\mathbb{T}^3\times\mathbb{S}^2)}^2 \right) = \infty. \end{align*} As will see next the norms above are bounded uniformly in time and so the Galerkin approximate system \eqref{eq:gl1}-\eqref{eq:gl2} is globally well-posed. \textbf{Uniform estimate of the Galerkin system.} Next we derive the energy estimate of the system \eqref{eq:gl1}-\eqref{eq:gl2}. Multiplying \eqref{eq:gl1} by $(T_\varepsilon^m)^4$ and \eqref{eq:gl2} by $\psi_\varepsilon^m$, adding the results together, and using the fact that $\mathbb{P}_m$ is a self-adjoint operator, we obtain \begin{align*} \frac{d}{dt} &\left(\frac{1}{5} \|T_\varepsilon^m\|_{L^5(\mathbb{T}^3)}^5 + \frac{1}{2} \|\psi_\varepsilon^m\|_{L^2(\mathbb{T}^3 \times \mathbb{S}^2)}^2\right) + \frac{16}{25} \|\nabla (T_{\varepsilon}^m)^{\frac{5}{2}}\|_{L^2(\mathbb{T}^3)}^2 \\ &+ \frac{1}{\varepsilon^2} \|\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4)\|_{L^2(\mathbb{T}^3\times \mathbb{S}^2)}^2 = \frac{1}{5} \|\mathbb{P}_mT_{\varepsilon0}\|_{L^5(\mathbb{T}^3)}^5 + \frac{1}{2} \|\mathbb{P}_m\psi_{\varepsilon0}\|_{L^2(\mathbb{T}^3 \times \mathbb{S}^2)}^2. \end{align*} Integrating it over $[0,t]$ and using the fact that $\|\mathbb{P}_m f\|_{L^s} \le \|f\|_{L^s}$, we obtain the energy inequality \begin{align} &\frac{1}{5} \|T_\varepsilon^m(t)\|_{L^5(\mathbb{T}^3)}^5 + \frac{1}{2} \|\psi_\varepsilon^m(t)\|_{L^2(\mathbb{T}^3 \times \mathbb{S}^2)}^2+ \frac{16}{25} \int_0^t \|\nabla (T_{\varepsilon}^m)^{\frac{5}{2}}(\tau)\|_{L^2(\mathbb{T}^3)}^2d\tau \nonumber\\ &\quad \quad + \frac{1}{\varepsilon^2}\int_0^t \|\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4)(\tau)\|_{L^2(\mathbb{T}^3\times \mathbb{S}^2)}^2 d\tau \nonumber \\ &\quad \le \frac{1}{5} \|T_{\varepsilon0}\|_{L^5(\mathbb{T}^3)}^5 + \frac{1}{2} \|\psi_{\varepsilon0}\|_{L^2(\mathbb{T}^3 \times \mathbb{S}^2)}^2, \label{eq:enegl} \end{align} for all $t>0$. It follows from the above energy inequality that, \begin{align} &\{T_\varepsilon^m\}_{m>0} \text{ is uniformly bounded in } L^\infty([0,\infty);L^5(\mathbb{T}^3)) , \label{eq:unibd1} \\ &\{\nabla (T_\varepsilon^m)^{\frac{5}{2}}\}_{m>0} \text{ is uniformly bounded in } L^2([0,\infty);L^2(\mathbb{T}^3)), \label{eq:unibd2}\\ &\{\psi_\varepsilon^m\}_{m>0} \text{ is uniformly bounded in } L^\infty([0,\infty);L^2(\mathbb{T}^3\times \mathbb{S}^2)), \label{eq:unibd3}\\ &\left\{\frac{1}{\varepsilon}\left(\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4)\right) \right\}_{m>0}\text{ is uniformly bounded in } L^2([0,\infty);L^2(\mathbb{T}^3\times \mathbb{S}^2)). \label{eq:unibd4} \end{align} Using \eqref{eq:unibd2} and the Sobolev inequality \[\|(T_\varepsilon^m)^{\frac{5}{2}} \|_{L^6(\mathbb{T}^3)} \le C \|\nabla (T_{\varepsilon}^m)^{\frac{5}{2}}\|_{L^2} + C \|(T_\varepsilon^m)^{\frac{5}{2}}\|_{L^2} \le C \|(\nabla T_{\varepsilon}^m)^{\frac{5}{2}}\|_{L^2} + C \|T_\varepsilon^m\|_{L^5}^{\frac{5}{2}},\] we have \begin{align}\label{eq:l15} \int_0^t &\|T_\varepsilon^m\|_{L^{15}(\mathbb{T}^3)}^5 d\tau \nonumber\\ =& \int_0^t \|(T_\varepsilon^m)^{\frac{5}{2}}\|_{L^6(\mathbb{T}^3)}^2 d\tau \nonumber\\ \le& C \int_0^t \|\nabla (T_{\varepsilon}^m)^{\frac{5}{2}}\|_{L^2}^2 + C\int_0^t \|T_\varepsilon^m\|_{L^5}^2d\tau \nonumber\\ \le& C \|\nabla (T_\varepsilon^m)^{\frac{5}{2}}\|_{L^2([0,\infty);L^2(\mathbb{T}^3))}d\tau + C t \|T_\varepsilon^m\|_{L^\infty([0,\infty);L^5(\mathbb{T}^3))}^2, \end{align} which is bounded. Therefore, \begin{align} &\{(T_\varepsilon^m)^{\frac{5}{2}}\}_{m>0}\text{ is uniformly bounded in }L^2_{\operatorname{loc}}([0,\infty);L^{6}(\mathbb{T}^3)),\label{eq:bdl52}\\ &\{T_\varepsilon^m\}_{m>0} \text{ is uniformly bounded in }L^5_{\operatorname{loc}}([0,\infty);L^{15}(\mathbb{T}^3)).\label{eq:bdl15} \end{align} \textbf{Passing to the limit in the Galerkin system.} Using \eqref{eq:unibd1}-\eqref{eq:unibd4} and \eqref{eq:bdl52}-\eqref{eq:bdl15}, there exists subsequences $\{T_\varepsilon^{m_k}\}_{k>0}$ and $\{\psi_\varepsilon^{m_k}\}_{k>0}$ such that \begin{align} &T_\varepsilon^{m_k} \rightharpoonup T_\varepsilon, \text{ weakly in } L^2_{\operatorname{loc}}([0,\infty);L^5(\mathbb{T}^5))\cap L^{5}_{\operatorname{loc}}([0,\infty);L^{15}(\mathbb{T}^3)), \label{eq:wkconv1} \\ &T_\varepsilon^{m_k} \rightharpoonup^* T_\varepsilon, \text{ weakly star in } L^\infty([0,\infty);L^5(\mathbb{T}^5)), \label{eq:wkstarconvt} \\ &(T_\varepsilon^{m_k})^{\frac{5}{2}} \rightharpoonup \overline{(T_\varepsilon^{m_k})^{\frac{5}{2}}}, \text{ weakly in }L^2_{\operatorname{loc}}([0,\infty);H^1(\mathbb{T}^3)),\label{eq:asm2}\\ &\psi_\varepsilon^{m_k} \rightharpoonup \psi_\varepsilon, \text{ weakly in } L^2_{\operatorname{loc}}([0,\infty);L^2(\mathbb{T}^3\times\mathbb{S}^2)), \label{eq:wkconv2} \\ &\psi_\varepsilon^{m_k} \rightharpoonup^* \psi_\varepsilon, \text{ weakly star in } L^\infty([0,\infty);L^2(\mathbb{T}^3\times\mathbb{S}^2)), \label{eq:wkstarconvpsi} \\ &\frac{1}{\varepsilon}(\psi_\varepsilon^{m_k} - \mathbb{P}_m((T_\varepsilon^{m_k})^4)) \rightharpoonup A, \text{ weakly in } L^2_{\operatorname{loc}}([0,\infty);L^2(\mathbb{T}^3\times\mathbb{S}^2)), \label{eq:weakconv3b} \end{align} as $k\to \infty$. Notice that here $A \in L^2_{\operatorname{loc}}([0,\infty);L^2(\mathbb{T}^3\times\mathbb{S}^2))$ is a bounded function. Due to the property of the operator $\mathbb{P}_m$, \begin{align*} (T_\varepsilon^m)^4 - \mathbb{P}_m((T_\varepsilon^m)^4) \to 0, \end{align*} as $m\to \infty$, so with \eqref{eq:weakconv3b}, we can conclude that \begin{align} &\frac{1}{\varepsilon}(\psi_\varepsilon^{m_k} - (T_\varepsilon^{m_k})^4) \rightharpoonup A, \text{ weakly in } L^2_{\operatorname{loc}}([0,\infty);L^2(\mathbb{T}^3\times\mathbb{S}^2)). \label{eq:weakconv3} \end{align} From the energy estiamte \eqref{eq:enegl}, we have \begin{align*} &\partial_t T_\varepsilon^m \text{ is uniformly bound in } L^2_{\operatorname{loc}}([0,t];H^{-2}(\mathbb{T}^3)),\\ &\partial_t \psi_\varepsilon^m \text{ is uniformly bound in } L^2_{\operatorname{loc}}([0,t];H^{-1}(\mathbb{T}^3)), \end{align*} these, together with \eqref{eq:unibd1} and \eqref{eq:unibd2}, imply that \begin{align} &T_\varepsilon^m \to T_\varepsilon, \text{ strongly in }L_{\operatorname{loc}}^2([0,t];L^2(\mathbb{T}^3)), \label{eq:tconv1}\\ &\psi_\varepsilon^m \to \psi_\varepsilon, \text{ strongly in }L_{\operatorname{loc}}^2([0,t];L^2(\mathbb{T}^3\times\mathbb{S}^2)),\label{eq:psiconv1}\\ &\partial_t T_\varepsilon^m \rightharpoonup \partial_t T_\varepsilon \text{ weakly in }L^2_{\operatorname{loc}}([0,\infty);H^{-2}(\mathbb{T}^3)), \\ &\partial_t \psi_\varepsilon^m \rightharpoonup \partial_t \psi_\varepsilon \text{ weakly in }L^2_{\operatorname{loc}}([0,\infty);H^{-1}(\mathbb{T}^3\times\mathbb{S}^2)). \end{align} Therefore, the Galerkin approximate system converges. It also follows from the above fact that \begin{align}\label{eq:wkcontinuity} T_\varepsilon \in C_{w}([0,\infty);L^2(\mathbb{T}^3)), \quad \psi_\varepsilon \in C_{w}([0,\infty);L^2(\mathbb{T}^3\times\mathbb{S}^2)). \end{align} This means that $T_\varepsilon$ and $\psi_\varepsilon$ are weakly continuous with values in $L^5(\mathbb{T}^3)$ and $L^2(\mathbb{T}^3\times\mathbb{S}^2)$, respectively. \textbf{The limit satisfies the system \eqref{eq:Teps}-\eqref{eq:psieps}.} To show the limit satisfies the system \eqref{eq:Teps}-\eqref{eq:psieps} in the sense of distributions, we apply test functions on \eqref{eq:gl1}-\eqref{eq:gl2}. We take the convergence subsequence obtained in the previous step. Here we will drop the subscript $k$ for simplicity. Fix $t>0$ and apply smooth function $\varphi \in C^\infty([0,t]\times \mathbb{T}^3)$ and $\rho \in C^\infty([0,t]\times \mathbb{T}^3\times \mathbb{S}^2)$ to the equations \eqref{eq:gl1} and \eqref{eq:gl2}, respectively, we arrive at \begin{align} &\int_{\mathbb{T}^3} T_\varepsilon^m(t) \cdot \varphi(t) dx - \int_0^t \int_{\mathbb{T}^3} T_\varepsilon^m \partial_t \varphi dxd\tau - \int_0^t\int_{\mathbb{T}^3} T_\varepsilon^m \Delta \varphi dxd\tau \nonumber\\ &\quad- \int_0^t\iint_{\mathbb{T}^3\times \mathbb{S}^2} \frac{1}{\varepsilon^2} \varphi (\psi_\varepsilon^m-\mathbb{P}_m((T_\varepsilon^m)^4))d\beta dxd\tau = \int_{\mathbb{T}^3} \mathbb{P}_mT_{\varepsilon0} \cdot \varphi(0) dx, \label{eq:glw1}\\ &\iint_{\mathbb{T}^3\times\mathbb{S}^2} \psi_\varepsilon^m (t) \cdot \rho(t) d\beta dx - \int_0^t \iint_{\mathbb{T}^3\times\mathbb{S}^2} \psi_\varepsilon^m \partial_t \rho d\beta dxd\tau \nonumber \\ &\qquad- \int_0^t \iint_{\mathbb{T}^3\times\mathbb{S}^2} \frac{1}{\varepsilon} \psi_\varepsilon^m \beta \cdot \nabla \rho d\beta dxd\tau - \int_0^t\iint_{\mathbb{T}^3\times \mathbb{S}^2} \frac{1}{\varepsilon^2}\rho (\psi_\varepsilon^m-\mathbb{P}_m((T_\varepsilon^m)^4))d\beta dxd\tau \nonumber\\ &\quad= \iint_{\mathbb{T}^3\times\mathbb{S}^2} \mathbb{P}_m \psi_{\varepsilon0} \cdot \rho(0) d\beta dx. \label{eq:glw2} \end{align} From the property of the operator $\mathbb{P}_m$, $\|f-\mathbb{P}_mf\|_{L^2} \to 0$ as $m\to\infty$, we get \begin{align*} &\int_{\mathbb{T}^3} \mathbb{P}_mT_{\varepsilon0} \cdot \varphi(0) dx \to \int_{\mathbb{T}^3} T_{\varepsilon0} \cdot \varphi(0) dx, \\ & \iint_{\mathbb{T}^3\times\mathbb{S}^2} \mathbb{P}_m \psi_{\varepsilon0} \cdot \rho d\beta dx \to \iint_{\mathbb{T}^3\times\mathbb{S}^2} \psi_{\varepsilon0} \cdot \rho(0) d\beta dx. \end{align*} For the terms involving $(T_\varepsilon^m)^4$, we can use the strong convergence of $T_\varepsilon^m$ in \eqref{eq:tconv1} to get \begin{align*} &\int_0^t\iint_{\mathbb{T}^3\times \mathbb{S}^2}\varphi (\mathbb{P}_m((T_\varepsilon^m)^4)-T_\varepsilon^4)d\beta dxd\tau \\ &\quad = \int_0^t\iint_{\mathbb{T}^3\times \mathbb{S}^2}\varphi (\mathbb{P}_m((T_\varepsilon^m)^4)- \mathbb{P}_m T_\varepsilon^4 + \mathbb{P}_m T_\varepsilon^4- T_\varepsilon^4)d\beta dxd\tau \\ &\quad= \int_0^t\iint_{\mathbb{T}^3\times \mathbb{S}^2}\mathbb{P}_m\varphi \cdot ((T_\varepsilon^m)^4 - T_\varepsilon^4) d\beta dxd\tau + \int_0^t\iint_{\mathbb{T}^3\times \mathbb{S}^2} T_\varepsilon^4(\mathbb{P}_m \varphi - \varphi) d\beta dxd\tau\\ &\le C(\|T_\varepsilon^m\|^{3}_{L^\infty([0,t];L^3(\mathbb{T}^3))} + \|T_\varepsilon\|^{3}_{L^\infty([0,t];L^3(\mathbb{T}^3))}) \|\mathbb{P}_m\varphi\|_{L^2(\mathbb{T}^3\times \mathbb{S}^2)}\|T_\varepsilon^m - T_\varepsilon\|_{L^2([0,t];L^2(\mathbb{T}^3))} \\ &\quad + C\|T_\varepsilon\|^{4}_{L^8([0,t];L^8(\mathbb{T}^3))} \|\mathbb{P}_m\varphi - \varphi\|_{L^2([0,t];L^2(\mathbb{T}^3))}. \end{align*} From the strong convergence \eqref{eq:tconv1}, the first term on the right hand side of the above inequality goes to zero as $m \to \infty$. From the property of $\mathbb{P}_m$, the second also goes to zero. Therefore, we conclude that \begin{align*} \int_0^t\iint_{\mathbb{T}^3\times \mathbb{S}^2}\varphi \mathbb{P}_m((T_\varepsilon^m)^4)d\beta dxd\tau \to \int_0^t\iint_{\mathbb{T}^3\times \mathbb{S}^2}\varphi T_\varepsilon^4 d\beta dxd\tau. \end{align*} The following convergence result can obtained in a similar way: \begin{align*} \int_0^t\iint_{\mathbb{T}^3\times \mathbb{S}^2}\rho \mathbb{P}_m((T_\varepsilon^m)^4)d\beta dxd\tau \to \int_0^t\iint_{\mathbb{T}^3\times \mathbb{S}^2}\rho T_\varepsilon^4 d\beta dxd\tau. \end{align*} Lastly, from \eqref{eq:tconv1} and \eqref{eq:psiconv1}, we have \begin{align*} &\int_{\mathbb{T}^d} T_\varepsilon^m(t) \varphi(t) dx \to \int_{\mathbb{T}^d} T_\varepsilon (t) \varphi(t) dx, \\ &\int_{\mathbb{T}^d} \psi_\varepsilon^m(t) \rho(t) dx \to \int_{\mathbb{T}^d} \psi_\varepsilon (t) \rho(t) dx. \end{align*} Notice that the weakly continuity \eqref{eq:wkcontinuity} get rid of the possible bad zero measure set in time. Using \eqref{eq:tconv1} and \eqref{eq:psiconv1}, we can pass to the limit in the other terms in equations \eqref{eq:glw1} and \eqref{eq:glw2}. Finally we arrive at \eqref{eq:weakt1} and \eqref{eq:weakt2}. Thus, for any $t>0$, ($T_\varepsilon$, $\psi_\varepsilon$) solves the system \eqref{eq:Teps}-\eqref{eq:psieps} in the sense of distributions and it satisfies \eqref{eq:funsp}. \textbf{The energy inequality.} To show the energy inequality, we consider the inequality \eqref{eq:enegl}. Notice that since we have the strong convergence of $T_{\varepsilon}^m,\psi_\varepsilon^m$ according to \eqref{eq:tconv1} and \eqref{eq:psiconv1}, we can take a subsequence that converges everywhere. Let's take this subsequence to recover the energy estimate. The weak star convergences in \eqref{eq:wkstarconvt} and \eqref{eq:wkstarconvpsi} imply that \begin{align}\label{eq:lsuptpsi} \|T_\varepsilon(t)\|_{L^5(\mathbb{T}^3)}^5 \le \limsup_{m\to\infty}\|T_\varepsilon^m\|_{L^5(\mathbb{T}^3)}^5, ~~ \|\psi_\varepsilon(t)\|_{L^2(\mathbb{T}^3\times\mathbb{S}^2)}^2 \le \limsup_{m\to\infty}\|\psi_\varepsilon^m\|_{L^2(\mathbb{T}^3\times\mathbb{S}^2)}^2. \end{align} The weak convergence in \eqref{eq:weakconv3} implies that \[\psi_\varepsilon^m - (T_\varepsilon^m)^4 \rightharpoonup \psi_\varepsilon -T_\varepsilon^4 \quad \text{in} \quad L^2([0,t];L^2(\mathbb{T}^3\times \mathbb{S}^2)). \] Therefore, \begin{align}\label{eq:liminfpsi-t4} \int_0^t \|\psi_\varepsilon - T_\varepsilon^4\|_{L^2(\mathbb{T}^3\times\mathbb{S}^2)}^2 d\tau \le \liminf_{m\to\infty} \int_0^t \|\psi_\varepsilon^m -\mathbb{P}_m((T_\varepsilon^m))^4\|_{L^2(\mathbb{T}^3\times \mathbb{S}^2)}^2 d\tau. \end{align} From \eqref{eq:unibd2} and the strong convergence \eqref{eq:tconv1}, we have \[(T_\varepsilon^m)^{\frac{5}{2}} \rightharpoonup (T_\varepsilon)^{\frac{5}{2}}\quad \text{in} \quad L^2([0,\infty);{H}^1(\mathbb{T}^3)), \] and thus \begin{align}\label{eq:liminfnabla} \int_0^t \|(\nabla T_\varepsilon)^{\frac{5}{2}}\|_{L^2(\mathbb{T}^3)}^2 d\tau \le \liminf_{m\to \infty} \int_0^t \|(\nabla T_\varepsilon^m)^{\frac{5}{2}}\|_{L^2(\mathbb{T}^3)}^2 d\tau. \end{align} Taking the $\limsup_{m\to\infty}$ in the energy inequality \eqref{eq:enegl} and using the above estimates, we arrive at \eqref{eq:energythm1} and finish the proof. \end{proof} \subsection{Case of nonhomogeneous Dirichlet boundary condition} We now consider the case of nonhomogeneous Dirichlet boundary condition \eqref{b3} under the assumption of well-prepared initial and boundary conditions. For simplicity, here we assume the boundary data $T_b$ and $\psi_b$ are time independent such that $T_b=T_b(x)$ and $\psi_b = \psi_b(x)$. The case of time dependent boundary can be treated similarily. Before we give the definition of weak solutions, we introduce the trace operators that extends the functions in Sobolev spaces to the boundary. We take $\gamma^1:H^1(\Omega) \to L^2(\partial\Omega)$ to be the trace operator. We learn from the trace theorem (see \cite{evans1997partial}) that if $\Omega$ is bounded and $\partial\Omega \in C^1$, then there exists a trace operator $\gamma^1$ such that \begin{align*} \gamma^1 f = f|_{\partial\Omega}, \text{ if } f \in H^1(\Omega), \end{align*} and \begin{align*} \|\gamma^1 f\|_{L^2(\partial\Omega)} \le C\|f\|_{H^{1}(\Omega)}. \end{align*} Here the constant $C$ only depends on $\Omega$. For the weak formulation of the equation \eqref{eq:Teps} we can apply the trace operator on $T_\varepsilon$ to get \begin{align*} \gamma^1 T_\varepsilon = T_b. \end{align*} To consider the boundary condition \eqref{bpsi}, we define the trace operator following \cite[Appendix (B.1)]{saint2009hydrodynamic} as \begin{align}\label{eq:gamma2def} \gamma^2: \psi \in W^2(\mathbb{R}_+\times\Omega\times\mathbb{S}^2) \mapsto \psi|_{\partial \Omega} \in L^2(\mathbb{R}_+ \times \partial \Omega \times \mathbb{S}^2, |n\cdot \beta| d\beta d\sigma_x dt). \end{align} Here $W^2$ is the space \begin{align}\label{eq:w2def} W^2(\mathbb{R}\times\Omega\times\mathbb{S}^2) := \{\psi \in L^2(\mathbb{R}_+\times\Omega\times\mathbb{S}^2): (\varepsilon \partial_t + \beta \cdot \nabla)\psi \in L^2(\mathbb{R}_+\times\Omega\times\mathbb{S}^2)\}. \end{align} The following lemma holds (See \cite[Proposition B.1]{saint2009hydrodynamic}): \begin{lemma}\label{lm:trace} The trace operator defined in \eqref{eq:gamma2def} is continuous. \end{lemma} \begin{proof} For any bounded function $\rho \in C^1(\bar{\Omega}\times \mathbb{S}^2)$, we use Green's formula to get \begin{align} 2\iiint& \rho(x,\beta) \psi (\varepsilon\partial_t + \beta \cdot \nabla_x)\psi d\beta dxdt + \iiint (\beta \cdot \nabla_x) \rho(x,\beta) \psi^2 d\beta dxdt \nonumber\\=& \iiint \rho(x,\beta) \psi^2(t,x,\beta) (n\cdot \beta) d\beta d\sigma_xdt. \label{eq:psibcal} \end{align} Choose $\rho(x,v)=n\cdot \beta/|n\cdot \beta|$ and we get \[\|\psi|_{\partial\Omega}\|_{L^2(\mathbb{R}_+\times\Omega\times\mathbb{S}^2,|n\cdot \beta|d\beta d\sigma_xdt)} \le C(\|\psi\|_{L^2(\mathbb{R}_+\times\Omega\times\mathbb{S}^2)} + \|(\varepsilon\partial_t + \beta \cdot \nabla_x )\psi)\|_{L^2(\mathbb{R}_+\times\Omega\times\mathbb{S}^2)}).\] \end{proof} \begin{definition}\label{def3} Assume $\Omega\in C^1$. Let $T_{\varepsilon0} \in L^5(\Omega)$ and $\psi_{\varepsilon0} \in L^2(\Omega \times \mathbb{S}^2)$. Let $T_b \in L_{\operatorname{loc}}^5([0,\infty);L^5(\partial \Omega))$ and $\psi_b \in L_{\operatorname{loc}}^2([0,\infty); L^2(\Sigma_-;|n\cdot\beta| d\beta d\sigma_x))$. We say that $(T_\varepsilon,\psi_\varepsilon)$ is a weak solution of the system \eqref{eq:Teps}-\eqref{eq:psieps} with initial conditions \eqref{eq:ic1}-\eqref{eq:ic2} and boundary conditions \eqref{bpsi}, \eqref{b3} if \begin{align*} &T_\varepsilon \in L_{\operatorname{loc}}^\infty (0,\infty;L^5(\Omega)), \quad T_\varepsilon^{\frac{5}{2}} \in L^2_{\operatorname{loc}}(0,\infty;\dot{H}^1(\Omega)),\\ &\psi_\varepsilon \in L_{\operatorname{loc}}^\infty(0,\infty; L^2(\Omega \times \mathbb{S}^2))\cap W^2_{\operatorname{loc}}([0,\infty)\times\Omega\times\mathbb{S}^2), \end{align*} and it solves \eqref{eq:Teps}-\eqref{eq:psieps} in the sense of distributions, i.e. for any test functions $\varphi \in C_0^\infty([0,\infty)\times\Omega)$ and $\rho\in C_0^\infty([0,\infty)\times\Omega\times \mathbb{S}^2)$, the following equations hold: \begin{align} &-\iint_{[0,\infty)\times\Omega} \left(T_\varepsilon\partial_t \varphi + T_\varepsilon \Delta \varphi + \frac{1}{\varepsilon^2}\int_{\mathbb{S}^2} \varphi(\psi_\varepsilon - T_\varepsilon^4) d\beta \right) dxdt \nonumber\\ &\quad + \iint_{[0,\infty)\times \partial \Omega} (\gamma^1 T_\varepsilon) n\cdot \nabla \varphi|_{\partial\Omega} d\sigma_x dt = \int_{\mathbb{T}^3} T_{\varepsilon0} \varphi(0,\cdot)dx, \label{eq:weakd1}\\ &-\iiint_{[0,\infty)\times\Omega \times \mathbb{S}^2} \left(\psi_\varepsilon \partial_t \rho + \frac{1}{\varepsilon} \psi_\varepsilon \beta \cdot \nabla \rho - \frac{1}{\varepsilon^2} \rho(\psi_\varepsilon-T_\varepsilon^4)\right) d\beta dxdt \nonumber \\ &\quad= \iint_{\mathbb{T}^3 \times \mathbb{S}^2} \psi_{\varepsilon0}\rho(0,\cdot) dx, \label{eq:weakd2} \end{align} where \begin{align*} &\gamma^1 T_\varepsilon \big|_{\partial\Omega} = T_b,\\ &\gamma^2 \psi_\varepsilon \big|_{\Sigma_{-}} = \alpha \psi_b + (1-\alpha) L\psi_\varepsilon\big|_{\Sigma_{+}}. \end{align*} \end{definition} Next we prove the following existence theorem. \begin{theorem}\label{thmexistd} Assume $\partial\Omega\in C^1$. Let $T_{\varepsilon0} \in L^5(\Omega)$ and $\psi_{\varepsilon0} \in L^2(\Omega \times \mathbb{S}^2)$. $T_b \in L_{\operatorname{loc}}^5([0,\infty);L^5(\partial \Omega))$ and $\psi_b \in L^2_{\operatorname{loc}}([0,\infty);L^2(\Sigma_-;|n\cdot\beta|d\beta d\sigma_x))$. Let the boundary condition to be well prepared such that $\psi_b = T_b^4$. Then there exists a global weak solution $(T_\varepsilon,\psi_\varepsilon)$ of the system \eqref{eq:Teps}-\eqref{eq:psieps} with initial conditions \eqref{eq:ic1}-\eqref{eq:ic2} and boundary conditions \eqref{bpsi}, \eqref{b3}. Furthermore, the following inequality holds \begin{align}\label{eq:estdirichlet} &\|T_{\varepsilon}(t)\|_{L^5(\Omega))}^5 + \|\psi_\varepsilon(t)\|_{L^2(\Omega\times\mathbb{S}^2)}^2 + \int_0^t \|\nabla (T_\varepsilon)^{\frac{5}{2}}\|_{L^2(\Omega)}^2 d\tau \nonumber\\ &\qquad+ \frac{1}{\varepsilon^2} \int_0^t \|\psi_\varepsilon -T_\varepsilon^4)\|_{L^2(\Omega\times\mathbb{S}^2)}^2 d\tau \nonumber \\ &\qquad+\frac{2\alpha - \alpha^2}{2\varepsilon} \int_0^t \|\psi_\varepsilon - \psi_b\|_{L^2(\Sigma_+;|n\cdot\beta| d\beta d\sigma_x)}^2 d\tau \nonumber\\ &\quad \le C(\|T_{\varepsilon0}\|_{L^5}^5 + \|\psi_{\varepsilon0}\|_{L^2(\Omega\times\mathbb{S}^2)}^2), \end{align} for any $t>0$. Here $C$ depends on $t$ and $\Omega$ but is independent of $\varepsilon$. \end{theorem} \begin{proof} Since the boundary conditions \eqref{b3} and \eqref{bpsi} for $T_\varepsilon$ and $\psi_\varepsilon$ are not homogeneous, we need to lift up the boundary data and make the Galerkin approximations after subtracting the lifted data. For the boundary condition \eqref{eq:Teps}, we introduce $\widetilde{T}=\widetilde{T}(x)$ as the solution to the problem \begin{align} & \Delta \widetilde{T} = 0,\quad x\in \Omega, \label{eq:Gequation}\\ &\widetilde{T}(x) = T_b(x), \quad x\in\partial \Omega. \nonumber \end{align} Owing to the well prepared boundary assumption \eqref{eq:wellbc}, we take $\widetilde{\psi}=\widetilde{T}^4$. After we introducing these variables, we can see that on the boundary, \begin{align*} &T_\varepsilon - \widetilde{T} =0, \text{ on } \partial\Omega,\\ &\psi_\varepsilon - \widetilde{\psi}= (1-\alpha) (L(\psi_\varepsilon - \widetilde{\psi}))(t,x,\beta), \text{ on } \in \Sigma_-. \end{align*} In order to find the Galerkin apprpximations of \eqref{eq:Teps}-\eqref{eq:psieps}, we also need to define some truncation operators. We can take the complete set of the eigenvectors $\{w_k=w_k(x)\}_{k=1}^\infty$ of $H_0^1(\Omega)$ which is also a orthonormal basis in $L^2(\Omega)$. We take the operator $\mathbb{P}_m:L^2(\Omega)\to L^2(\Omega)$ as \begin{align*} \mathbb{P}_m f = \sum_{k\le m} (f,w_k) w_k(x), \end{align*} where $(\cdot,\cdot)$ is the inner product in $L^2(\Omega)$. Notice that $\mathbb{P}_m f =0$ on the boundary $\partial\Omega$. By Lemma \ref{lem:basis} in Appendix \ref{appendixb}, we can also find an orthonormal basis $\{\varphi_k\}_{k=1}^\infty$ in $L^2(\Omega\times\mathbb{S}^2)$. We define the operator $\mathbb{Q}_m$ as \begin{align}\label{eq:operatorQ} \mathbb{Q}_m\psi(x,\beta)=\sum_{k=1}^m ((\psi,\varphi_k))\varphi_k(x,\beta). \end{align} Here $((\cdot,\cdot))$ denotes the inner product in the space $L^2(\Omega\times\mathbb{S}^2)$. After these preparations, now we proceed to prove Theorem \ref{thmexistd}. As in the proof of Theorem \ref{thm:existencet}, we take the Galerkin approximations to be \begin{align} \partial_t T_\varepsilon^m =~& \Delta T_\varepsilon^m + \frac{1}{\varepsilon^2}\left(\int_{\mathbb{S}^2} \mathbb{P}_m \psi_\varepsilon^m - \mathbb{P}_m ((T_\varepsilon^m)^4)d\beta\right), \label{eq:gl21}\\ \partial_t \psi_\varepsilon^m + \frac{1}{\varepsilon}\beta \cdot \nabla \psi_\varepsilon^m =~& -\frac{1}{\varepsilon^2}\left(\psi_\varepsilon^m - \mathbb{Q}_m\mathbb{P}_m((T_\varepsilon^m)^4)\right), \label{eq:gl22} \end{align} with initial conditions \begin{align*} T_{\varepsilon}^m(t=0,x) =~& \mathbb{P}_mT_{\varepsilon0}(x), \\ \psi_{\varepsilon}^m(t=0,x,\beta) =~&\mathbb{Q}_m \psi_{\varepsilon0}(\beta,x), \end{align*} and boundary conditions \begin{align} T_{\varepsilon}^m(t,x) =~& T_b, \text{ for } x\in \partial\Omega, \label{eq:btgl}\\ \psi_\varepsilon^m |_{\Sigma_-}=~&\alpha \psi_b + L\psi_\varepsilon^m|_{\Sigma_+}.\label{eq:bpsigl} \end{align} We can take \begin{align*} &T_\varepsilon^m - \widetilde{T} = \sum_{k=1}^m d_k(t)\langle \varphi_k \rangle(x) , \\ &\psi_\varepsilon^m - \widetilde{\psi}= \sum_{k=1}^m \phi_k(t)\varphi_k(\beta,x), \end{align*} into \eqref{eq:gl21} and \eqref{eq:gl22}, and get a system of ordinary differential equations for $d_k$ and $\phi_k$. From the Cauchy-Lipschitz theorem, the ODE system has a global unique solution if $T_\varepsilon^m$ and $\psi_\varepsilon^m$ is bounded uniformly in time, which will be shown below. It follows that the system \eqref{eq:gl21}-\eqref{eq:gl22} has a global unique solutions. Next we derive the energy estimate for the system \eqref{eq:gl21}-\eqref{eq:gl22}. We multiply equation \eqref{eq:gl21} by $(T_\varepsilon^m)^4 - \widetilde{T}^4$ and equation \eqref{eq:gl22} by $\psi_\varepsilon^m - \widetilde{\psi}$, integrate over $[0,t]\times\Omega$ and $[0,t]\times\Omega\times\mathbb{S}^2$ respectively, and add the results together. We obtain \begin{align} \int_{\Omega}&\left(\frac{(T_\varepsilon^m)^5}{5} - \widetilde{T}^4T_\varepsilon^m\right)(t) dx + \frac{1}{2} \iint_{\Omega \times \mathbb{S}^2}(\psi_\varepsilon^m - \widetilde{\psi})^2(t) d\beta dx \nonumber \\ =&\int_{\Omega}\frac{(T_{\varepsilon0}^m)^5}{5} dx + \frac{1}{2} \iint_{\Omega \times \mathbb{S}^2}(\psi_{\varepsilon0}^m)^2 d\beta dx - \frac{1}{2\varepsilon} \int_0^t\iint_{\Omega\times\mathbb{S}^2} \beta \cdot \nabla (\psi_\varepsilon^m - \widetilde{\psi})^2 d\beta dx d\tau \nonumber\\ &+ \int_0^t \iint_{\Omega\times\mathbb{S}^2} \beta \cdot \nabla \widetilde{\psi} (\psi_\varepsilon^m - \widetilde{\psi}) d\beta dxd\tau+\int_0^t \int_{\Omega} ((T_\varepsilon^m)^4 - \widetilde{T}^4)\Delta T_\varepsilon^m dxd\tau \nonumber\\ &- \frac{1}{\varepsilon^2}\int_0^t\iint_{\Omega\times\mathbb{S}^2}\left(\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4)\right) (\psi_\varepsilon^m - \widetilde{\psi} - \mathbb{P}_m ((T_\varepsilon^m)^4) + \widetilde{T}^4)d\beta dxd\tau \nonumber\\ =& I_1+I_2+I_3+I_4+I_5+I_6. \label{eq:i123456} \end{align} The first term on the left hand side of the above equation can be estimted by \begin{align} \int_{\Omega}&\left(\frac{(T_\varepsilon^m)^5}{5} - \widetilde{T}^4T_\varepsilon^m\right) dx \ge \int_{\Omega}\frac{(T_\varepsilon^m)^5}{5} dx - \int_{\Omega}\frac{1}{2}\frac{(T_\varepsilon^m)^5}{5} dx - 2\int_\Omega \frac{(\widetilde{T}^4)^{\frac{5}{4}}}{\frac{5}{4}} dx \nonumber \\ \ge& \frac{1}{10} \|T_\varepsilon^m\|_{L^5(\Omega)}^5 - \frac{8}{5} \|\widetilde{T}\|_{L^5(\Omega)}^5. \label{eq:leftest} \end{align} The second term on the left can be estimated by \begin{align} \frac{1}{2} \iint_{\Omega \times \mathbb{S}^2}(\psi_\varepsilon^m - \widetilde{\psi})^2d\beta dx \ge& \frac{1}{2} \iint_{\Omega \times \mathbb{S}^2}(\psi_\varepsilon^m)^2d\beta dx - \frac{1}{2} \iint_{\Omega \times \mathbb{S}^2}(\widetilde{\psi})^2d\beta dx \nonumber\\ \ge&\frac{1}{2} \iint_{\Omega \times \mathbb{S}^2}(\psi_\varepsilon^m)^2d\beta dx - 2\pi \int_{\Omega} \widetilde{T}^8 dx . \label{eq:left2nd} \end{align} Next we estimate the right terms of \eqref{eq:i123456}. First, for $I_1$ and $I_2$, we can use the property of the operators $\mathbb{P}_m$ and $\mathbb{Q}_m$ to get \begin{align} I_1 + I_2 =& \int_{\Omega}\frac{(T_{\varepsilon0}^m)^5}{5} dx + \frac{1}{2} \iint_{\Omega \times \mathbb{S}^2}(\psi_{\varepsilon0}^m)^2 d\beta dx \le \frac{1}{5}\|T_{\varepsilon0}\|_{L^5(\Omega)}^5 + \frac{1}{2}\|\psi_{\varepsilon0}\|_{L^2(\Omega\times\mathbb{S}^2)}^2. \end{align} Using the boundary condition \eqref{eq:bpsigl}, $I_3$ can be calculated as \begin{align} I_3 =& - \frac{1}{2\varepsilon} \int_0^t\iint_{\Omega\times\mathbb{S}^2} \beta \cdot \nabla (\psi_\varepsilon^m - \widetilde{\psi})^2 d\beta dx d\tau = -\frac{1}{2\varepsilon} \int_0^t \iint_{\Sigma} \beta \cdot n (\psi_\varepsilon^m - \widetilde{\psi})^2 d\beta d\sigma_xd\tau \nonumber \\ =& -\frac{1}{2\varepsilon} \int_0^t \iint_{\Sigma_+} \beta \cdot n (\psi_\varepsilon^m - \widetilde{\psi})^2 d\beta d\sigma_x d\tau -\frac{1}{2\varepsilon} \int_0^t \iint_{\Sigma_-} \beta \cdot n (\psi_\varepsilon^m - \widetilde{\psi})^2 d\beta d\sigma_x d\tau \nonumber\\ =&-\frac{1}{2\varepsilon} \int_0^t \iint_{\Sigma_+} \beta \cdot n (\psi_\varepsilon^m - \psi_b)^2 d\beta d\sigma_x d\tau \nonumber\\ &-\frac{1}{2\varepsilon} \int_0^t \iint_{\Sigma_-} \beta \cdot n (1-\alpha)^2(\psi_\varepsilon^m(\beta') -\psi_b)^2 d\beta d\sigma_x d\tau \nonumber\\ =&-\frac{2\alpha - \alpha^2}{2\varepsilon} \int_0^t \iint_{\Sigma_+} \beta \cdot n (\psi_\varepsilon^m - \psi_b)^2 d\beta d\sigma_x d\tau . \label{eq:i3est} \end{align} The term $I_4$ can be estimated by \begin{align} I_4 =& \int_0^t \iint_{\Omega\times\mathbb{S}^2} \beta \cdot \nabla \widetilde{\psi} (\psi_\varepsilon^m - \widetilde{\psi}) d\beta dxd\tau \nonumber\\ =& \int_0^t \iint_{\Omega\times\mathbb{S}^2} \beta \cdot \nabla \widetilde{\psi} (\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4)) d\beta dxd\tau \nonumber \\ \le& 2\varepsilon^2 \int_0^t \iint_{\Omega\times\mathbb{S}^2} |\beta \cdot \nabla \widetilde{\psi}|^2 d\beta dxd\tau + \frac{1}{2\varepsilon^2} \int_0^t \iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4))^2 d\beta dxd\tau \nonumber \\ \le& 8\pi \varepsilon^2 \int_0^t \iint_{\Omega\times} | \nabla \widetilde{T}^4|^2 dxd\tau + \frac{1}{2\varepsilon^2} \int_0^t \iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4))^2 d\beta dxd\tau .\label{eq:i4est} \end{align} We estimate the term $I_5$ by \begin{align*} I_5 =& \int_0^t \int_{\Omega} ((T_\varepsilon^m)^4 - \widetilde{T}^4)\Delta T_\varepsilon^m dxd\tau\\ =& -\int_0^t \int_{\Omega} \nabla ((T_\varepsilon^m)^4 - \widetilde{T}^4) \nabla T_\varepsilon dxd\tau \\ =& - \int_0^t\int_\Omega \frac{16}{25} |\nabla (T_\varepsilon^m)^{\frac{5}{2}}|^2dxd\tau - \int_{0}^t \int_\Omega T_\varepsilon^m \Delta \widetilde{T}^4 dxd\tau + \int_0^t \int_{\partial\Omega} T_b n\cdot \nabla \widetilde{T}^4 d\sigma_x \\ \le& - \int_0^t\int_\Omega \frac{16}{25} |\nabla (T_\varepsilon^m)^{\frac{5}{2}}|^2dxd\tau + \int_{0}^t \int_\Omega \frac{(T_\varepsilon^m)^5}{5} dxd\tau + \int_0^t\int_{\Omega} \frac{(\Delta \widetilde{T}^4)^{\frac{5}{4}}}{\frac{5}{4}} dxd\tau \\ &+ \int_0^t \int_{\partial\Omega} 4T_b^4 n\cdot \nabla \widetilde{T} d\sigma_x. \end{align*} Multiplying equation \eqref{eq:Gequation} by $\widetilde{T}^4$ and integrating over $[0,t]\times \Omega$ leads to \begin{align*} \int_{\Omega} \frac{\widetilde{T}^5}{5} (t)dx + \frac{16}{25} \int_0^t\int_{\Omega} |\nabla \widetilde{T}^{\frac{5}{2}}|^2 dxd\tau - \int_0^t\int_{\partial\Omega} T_b^4 n\cdot \nabla \widetilde{T} d\sigma_x d\tau=0, \end{align*} thus \begin{align} I_5 \le& - \frac{1}{\varepsilon^2} \int_0^t\int_\Omega \frac{16}{25} |\nabla (T_\varepsilon^m)^{\frac{5}{2}}|^2dxd\tau + \int_{0}^t \int_\Omega \frac{(T_\varepsilon^m)^5}{5} dxd\tau + \int_0^t\int_{\Omega} \frac{(\Delta \widetilde{T}^4)^{\frac{5}{4}}}{\frac{5}{4}} dxd\tau \nonumber\\ &+ \frac{4}{5} \int_{\Omega} {\widetilde{T}^5} (t)dx + \frac{64}{25}\int_0^t \int_{\Omega} |\nabla \widetilde{T}^{\frac{5}{2}}|^2 dxd\tau. \label{eq:i5est} \end{align} The term $I_6$ can be treated by \begin{align}\label{eq:i6est} I_6=&- \frac{1}{\varepsilon^2}\int_0^t\iint_{\Omega\times\mathbb{S}^2}\left(\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4)\right) (\psi_\varepsilon^m - \widetilde{\psi} - \mathbb{P}_m ((T_\varepsilon^m)^4) + \widetilde{T}^4)d\beta dxd\tau \nonumber\\ =& -\frac{1}{\varepsilon^2} \int_0^t \iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4))^2d\beta dxd\tau. \end{align} Here we use $\tilde{\psi}=\widetilde{T}^4$ in the above equality. Taking the estimates \eqref{eq:leftest}-\eqref{eq:i6est} into \eqref{eq:i123456} leads to the estimate \begin{align*} &\frac{1}{10}\|T_\varepsilon^m(t)\|_{L^5(\Omega)}^5 + \frac{1}{2}\|\psi_\varepsilon^m(t)\|_{L^2(\Omega\times\mathbb{S}^2)} + \frac{16}{25}\int_0^t \|\nabla (T_\varepsilon^m)^{\frac{5}{2}}\|_{L^2(\Omega)}^2 d\tau \\ &\qquad+ \frac{1}{\varepsilon^2} \int_0^t (\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4))^2 d\beta dxd\tau +\frac{2\alpha - \alpha^2}{2\varepsilon} \int_0^t \iint_{\Sigma_+} \beta \cdot n (\psi_\varepsilon^m - \psi_b)^2 d\beta d\sigma_x d\tau \nonumber \\ \le& \frac{1}{5} \|T_{\varepsilon0}\|_{L^5}^5 + \frac{1}{2}\|\psi_{\varepsilon0}\|_{L^2(\Omega\times\mathbb{S}^2)}^2+ \frac{8}{5}\|\widetilde{T}\|_{L^5(\Omega)}^5 + 2\pi \|\widetilde{T}\|_{L^8(\Omega)}^8 \nonumber \\ &+ 8\pi \varepsilon^2 \int_0^t \iint_{\Omega} | \nabla \widetilde{T}^4|^2 dxd\tau + \frac{1}{5} \int_0^t\int_{\Omega} \|T_\varepsilon^m\|_{L^5(\Omega)}^5 + \frac{4}{5}\int_0^t\|\Delta \widetilde{T}^4\|_{L^{\frac{5}{4}}(\Omega)}^{\frac{5}{4}}d\tau \nonumber\\ &+ \frac{4}{5} \|\widetilde{T}\|_{L^5(\Omega)}^5 + \frac{64}{25}\int_0^t \|\nabla \widetilde{T}^{\frac{5}{2}}\|_{L^2(\Omega)}^2d\tau. \end{align*} Since $\widetilde{T}$ is the solution of the heat equation \eqref{eq:Gequation}, it is smooth and thus the terms including $\widetilde{T}$ of the above inequality is bounded. We can apply Gronwall's inequality to obtain \begin{align}\label{eq:gl2est} &\|T_{\varepsilon}^m(t)\|_{L^5(\Omega))}^5 + \|\psi_\varepsilon^m(t)\|_{L^2(\Omega\times\mathbb{S}^2)}^2 + \int_0^t \|\nabla (T_\varepsilon^m)^{\frac{5}{2}}\|_{L^2(\Omega)}^2 d\tau \nonumber\\ &\qquad+ \frac{1}{\varepsilon^2} \int_0^t \|\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4)\|_{L^2(\Omega\times\mathbb{S}^2)}^2 d\tau \nonumber \\ &\qquad+\frac{2\alpha - \alpha^2}{2\varepsilon} \int_0^t \|\psi_\varepsilon^m - \psi_b\|_{L^2(\Sigma_+;|n\cdot\beta| d\beta d\sigma_x)}^2 d\tau \nonumber\\ &\quad \le C e^{Ct}(\|T_{\varepsilon0}\|_{L^5}^5 + \|\psi_{\varepsilon0}\|_{L^2(\Omega\times\mathbb{S}^2)}^2). \end{align} From this estimate, we can get the same bounds on $T_\varepsilon^m$ and $\psi_\varepsilon^m$ inside the domain, i.e. \eqref{eq:unibd1}-\eqref{eq:unibd4} still hold. We can follow the same proof of Theorem \ref{thm:existencet} to get \begin{align} &T_\varepsilon^m \to T_\varepsilon \text{ almost everywhere}, \label{eq:gl2strongt}\\ &\psi_\varepsilon^m \rightharpoonup \psi_\varepsilon \text{ weakly in }L^2_{\operatorname{loc}}([0,\infty);L^2(\Omega\times\mathbb{S}^2)),\label{eq:gl2strongpsi} \end{align} and that $(T_\varepsilon,\psi_\varepsilon)$ satisfies \eqref{eq:weakd1}-\eqref{eq:weakd2}. Next we consider the boundary conditions. According to \eqref{eq:gl2est}, we have \begin{align*} (T_\varepsilon^m)^{\frac{5}{2}} \text{ is uniformly bounded in }L^2_{\operatorname{loc}}([0,\infty);{H}^1(\Omega)]), \end{align*} so \begin{align*} (T_\varepsilon^m)^{\frac{5}{2}} \to T_\varepsilon^{\frac{5}{2}}, \text{ strongly in } L^2_{\operatorname{loc}}([0,\infty);{H}^{1-\delta}(\Omega)]) \end{align*} for $\delta>0$ small. We can thus use the continuity of the trace operator to get \begin{align}\label{eq:strongtracethm2} (\gamma^1 T_\varepsilon^m)^{\frac{5}{2}} = T_b^{\frac{5}{2}} \to \gamma^1 T_\varepsilon^{\frac{5}{2}} = (\gamma^1 T_\varepsilon)^{\frac{5}{2}}, \text{ strongly in }L^2_{\operatorname{loc}}([0,\infty);{H}^{\frac{1}{2}-\delta}(\Omega)]). \end{align} Therefore, we get \begin{align*} \gamma^1 T_\varepsilon = T_b. \end{align*} To pass to the limit on the boundary of $\psi_\varepsilon^m$, we learn from Lemma \ref{lm:trace} that \begin{align*} \|\psi_\varepsilon^m|_{\partial\Omega}\|_{L^2(\mathbb{R}_+\times\Omega\times\mathbb{S}^2,|n\cdot \beta|d\beta d\sigma_xdt)} \end{align*} is bounded. Therefore, there exists a subsequence $\{\psi_\varepsilon^{m_k}\}_{k>0}$ such that \begin{align*} \gamma^2 \psi_\varepsilon^{m_k} \rightharpoonup \overline{\gamma^2 \psi_\varepsilon^{m_k}} \text{ weakly in } L^2_{\operatorname{loc}}([0,\infty);L^2(\Sigma;|n\cdot\beta| d\beta d\sigma_x d\tau)). \end{align*} We can thus take the weak limit in \eqref{eq:bpsigl} to obtain \begin{align}\label{eq:Mbd} \overline{\gamma^2 \psi_\varepsilon^{m_k}} |_{\Sigma_-} = \alpha \psi_b + L \overline{\gamma^2 \psi_\varepsilon^{m_k}} |_{\Sigma_+}. \end{align} To show $ \overline{\gamma^2 \psi_\varepsilon^{m_k}} =\gamma \psi_\varepsilon$, we use the fact that $(T_\varepsilon,\psi_\varepsilon)$ solves \begin{align*} \varepsilon \partial_t \psi_\varepsilon + \beta \cdot \nabla \psi_\varepsilon = -\frac{1}{\varepsilon}(\psi_\varepsilon - T_\varepsilon^4). \end{align*} We apply test function $\rho \in C^\infty([0,\infty)\times\Omega\times\mathbb{S}^2)$ on the above equation and deduce \begin{align*} \iint_{\Omega\times\mathbb{S}^2}&\psi_\varepsilon(t) \rho(t) d\beta dx - \iint_{\Omega}\psi_{\varepsilon0} \rho(0) d\beta dx - \int_0^t\iint_{\Omega\times\mathbb{S}^2} \psi_\varepsilon \partial_t \rho d\beta dxd\tau \\ &- \frac{1}{\varepsilon} \int_0^t\int_{\Omega\times\mathbb{S}^2} \psi_\varepsilon \beta \cdot \nabla \rho d\beta dxd\tau + \frac{1}{\varepsilon}\int_0^t \iint_{\Sigma} (n\cdot\beta) \gamma^2 \psi_\varepsilon \rho d\beta dxd\tau\\ & = -\frac{1}{\varepsilon^2}\int_0^t\int_{\Omega\times\mathbb{S}^2}(\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon)^4)) \rho d\beta dxd\tau. \end{align*} We can also apply the same test function on the equation \eqref{eq:gl22} and get \begin{align*} \iint_{\Omega\times\mathbb{S}^2}&\psi_\varepsilon^m(t) \rho(t) d\beta dx - \iint_{\Omega}\psi_{\varepsilon0}^{m} \rho(0) d\beta dx - \int_0^t\iint_{\Omega\times\mathbb{S}^2} \psi_\varepsilon^m \partial_t \rho d\beta dxd\tau \\ &- \frac{1}{\varepsilon} \int_0^t\int_{\Omega\times\mathbb{S}^2} \psi_\varepsilon^m \beta \cdot \nabla \rho d\beta dxd\tau + \frac{1}{\varepsilon}\int_0^t \iint_{\Sigma} (n\cdot\beta) \gamma^2\psi_\varepsilon^m \rho d\beta dxd\tau\\ & = -\frac{1}{\varepsilon^2}\int_0^t\int_{\Omega\times\mathbb{S}^2}(\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4)) \rho d\beta dxd\tau. \end{align*} Passing $m\to \infty$ in the above equation and comparing with the previous one lead to \begin{align*} \overline{\gamma^2 \psi_\varepsilon^m} = \gamma^2 \psi_\varepsilon. \end{align*} This combing with \eqref{eq:Mbd} implies that $\psi_\varepsilon$ satisfies \begin{align} {\gamma^2 \psi_\varepsilon} |_{\Sigma_-} = \alpha \psi_b + L \gamma^2 \psi_\varepsilon |_{\Sigma_+}, \end{align} on the boundary. The energy inequality can be shown as in the proof of Theorem \ref{thm:existencet}, except that here we need to use \begin{align*} \int_0^t \|\psi_\varepsilon - \psi_b \|_{L^2(\Sigma_+;|n\cdot \beta|d\beta d\sigma_x)} d\tau \le \liminf_{m\to\infty} \int_0^t \|\psi_\varepsilon^m - \psi_b \|_{L^2(\Sigma_+;|n\cdot \beta|d\beta d\sigma_x)} d\tau, \end{align*} which is due to the weak convergence of $\psi_\varepsilon^m$ on the boundary. \end{proof} \begin{rem} In the above proof we assume the boundary condition to be well prepared $\psi_b=T_b^4$. The term \eqref{eq:i3est} can be estimated by \begin{align*} I_3 =& - \frac{1}{2\varepsilon} \int_0^t\iint_{\Omega\times\mathbb{S}^2} \beta \cdot \nabla (\psi_\varepsilon^m - \widetilde{\psi})^2 d\beta dx d\tau = -\frac{1}{2\varepsilon} \int_0^t \iint_{\Sigma} \beta \cdot n (\psi_\varepsilon^m - \widetilde{\psi})^2 d\beta d\sigma_xd\tau \nonumber \\ =& -\frac{1}{2\varepsilon} \int_0^t \iint_{\Sigma_+} \beta \cdot n (\psi_\varepsilon^m - \widetilde{\psi})^2 d\beta d\sigma_x d\tau -\frac{1}{2\varepsilon} \int_0^t \iint_{\Sigma_-} \beta \cdot n (\psi_\varepsilon^m - \widetilde{\psi})^2 d\beta d\sigma_x d\tau \nonumber\\ =&-\frac{1}{2\varepsilon} \int_0^t \iint_{\Sigma_+} \beta \cdot n (\psi_\varepsilon^m - T_b^4)^2 d\beta d\sigma_x d\tau \nonumber\\ &-\frac{1}{2\varepsilon} \int_0^t \iint_{\Sigma_-} \beta \cdot n \left(\alpha(\psi_b - T_b^4)+(1-\alpha)(\psi_\varepsilon^m(\beta') -T_b^4) \right)^2 d\beta d\sigma_x d\tau \nonumber\\ \le&-\frac{(2\alpha-\alpha^2)}{2\varepsilon} \int_0^t \iint_{\Sigma_+} \beta \cdot n (\psi_\varepsilon^m - T_b^4)^2 d\beta d\sigma_x d\tau \nonumber\\ &+\frac{\alpha^2}{2\varepsilon} \int_0^t \iint_{\Sigma_+} \beta \cdot n (\psi_b - T_b^4)^2 d\beta d\sigma_x d\tau \nonumber\\ &+\frac{2\alpha (1-\alpha)}{2\varepsilon} \int_0^t \iint_{\Sigma_+} \beta \cdot n (\psi_\varepsilon^m - T_b^4)(\psi_b-T_b^4) d\beta d\sigma_x d\tau\nonumber \\ =& -\frac{(2\alpha-\alpha^2)}{2\varepsilon} \int_0^t \iint_{\Sigma_+} \beta \cdot n \left(\psi_\varepsilon^m-T_b^4- \frac{1-\alpha}{2-\alpha}(\psi_b - T_b^4)\right)^2 d\beta d\sigma_x d\tau \nonumber\\ &+\frac{\alpha}{2(2-\alpha)} \int_0^t \iint_{\Sigma_+} \beta \cdot n (\psi_b - T_b^4)^2 d\beta d\sigma_x d\tau. \end{align*} So the estimate \eqref{eq:estdirichlet} holds with a bounded depending on $\varepsilon$. We then get the existence of the weak solutions for fixed $\varepsilon$. However, the well-prepared assumption on the boundary conditions is required to study the diffusive limit in the section \ref{section3}. \end{rem} \subsection{Case of Robin boundary condition} We now proceed to consider the case of Robin boundary condition \eqref{b2}. We first give the definition of the weak solution. \begin{definition}\label{def2} Assume $\partial\Omega\in C^1$. Let $T_{\varepsilon0} \in L^5(\Omega)$ and $\psi_{\varepsilon0} \in L^2(\Omega \times \mathbb{S}^2)$. Let $T_b \in L_{\operatorname{loc}}^5([0,\infty);L^5(\partial \Omega))$ and $\psi_b \in L_{\operatorname{loc}}^2([0,\infty); L^2(\Sigma_-;|n\cdot\beta| d\beta d\sigma_x))$ We say that $(T_\varepsilon,\psi_\varepsilon)$ is a weak solution of the system \eqref{eq:Teps}-\eqref{eq:psieps} with initial conditions \eqref{eq:ic1}-\eqref{eq:ic2} and boundary conditions \eqref{bpsi}, \eqref{b2} if \begin{align*} &T_\varepsilon \in L_{\operatorname{loc}}^\infty (0,\infty;L^5(\Omega)), \quad T_\varepsilon^{\frac{5}{2}} \in L^2_{\operatorname{loc}}(0,\infty;{H}^1(\Omega)),\\ &\psi_\varepsilon \in L_{\operatorname{loc}}^\infty(0,\infty; L^2(\Omega \times \mathbb{S}^2))\cap W^2_{\operatorname{loc}}([0,\infty)\times\Omega\times\mathbb{S}^2), \end{align*} and it solves \eqref{eq:Teps}-\eqref{eq:psieps} in the sense of distributions, i.e. for any test functions $\varphi \in C^\infty([0,\infty),\Omega)$ and $\rho\in C^\infty([0,\infty);\Omega\times \mathbb{S}^2)$, the following equations hold: \begin{align} &-\iint_{[0,\infty)\times\Omega} \left(T_\varepsilon\partial_t \varphi + T_\varepsilon \Delta \varphi + \frac{1}{\varepsilon^2}\int_{\mathbb{S}^2} \varphi(\psi_\varepsilon - T_\varepsilon^4) d\beta \right) dxdt \nonumber\\ &\quad- \iint_{[0,\infty)\times \partial \Omega} \varphi \cdot \frac{T_b-(\gamma^1T_\varepsilon)}{\varepsilon^r} d\sigma_x dt + \iint_{[0,\infty)\times \partial \Omega} (\gamma^1 T_\varepsilon) n\cdot \nabla \varphi d\sigma_x dt \nonumber\\ &\qquad= \int_{\mathbb{T}^3} T_{\varepsilon0} \varphi(0,\cdot)dx, \label{eq:weakr1}\\ &-\iiint_{[0,\infty)\times\Omega \times \mathbb{S}^2} \left(\psi_\varepsilon \partial_t \rho + \frac{1}{\varepsilon} \psi_\varepsilon \beta \cdot \nabla \rho - \frac{1}{\varepsilon^2} \rho(\psi_\varepsilon-T_\varepsilon^4)\right) d\beta dxdt \nonumber \\ &\quad +\iiint_{[0,\infty)\times \Sigma} (n \cdot \beta)\rho \cdot (\gamma^2\psi_\varepsilon) d\beta d\sigma_xdt = \iint_{\mathbb{T}^3 \times \mathbb{S}^2} \psi_{\varepsilon0}\rho(0,\cdot) dx, \label{eq:weakr2} \end{align} where \begin{align}\label{eq:thm3bd} \gamma^2 \psi_\varepsilon \big|_{\Sigma_{-}} = \alpha \psi_b + (1-\alpha) \gamma^2 L\psi_\varepsilon\big|_{\Sigma_{+}}, \end{align} with the reflection operator $L$ defined in \eqref{eq:reflectop}. \end{definition} Next, we prove the following existence theorem. \begin{theorem}\label{thmexistr} Assume $\partial\Omega \in C^1$. Let $T_{\varepsilon0} \in L^5(\Omega)$ and $\psi_{\varepsilon0} \in L^2(\Omega \times \mathbb{S}^2)$. Suppose $T_b\in L_{\operatorname{loc}}^5([0,\infty);L^5(\partial \Omega))$ and $\psi_b \in L_{\operatorname{loc}}^2([0,\infty); L^2(\Sigma_-;|n\cdot\beta| d\beta d\sigma_x))$. Let $\psi_b=\psi_b(t,x)$ independent of $\beta$. Then there exists a global weak solution $(T_\varepsilon,\psi_\varepsilon)$ of the system \eqref{eq:Teps} and \eqref{eq:psieps} with initial conditions \eqref{eq:ic1}-\eqref{eq:ic2} and boundary conditions \eqref{bpsi}, \eqref{b2}. Moreover, the following energy inequality holds for all $t>0$: \begin{align}\label{eq:energyrobin} &\|T_{\varepsilon}(t)\|_{L^5(\Omega))}^5 + \|\psi_\varepsilon(t)\|_{L^2(\Omega\times\mathbb{S}^2)}^2 + \int_0^t \|\nabla T_\varepsilon^{\frac{5}{2}}\|_{L^2(\Omega)}^2 d\tau \nonumber\\ &\qquad + \frac{1}{\varepsilon^2} \int_0^t \|\psi_\varepsilon - T_\varepsilon^4\|_{L^2(\Omega\times\mathbb{S}^2)}^2 d\tau+\frac{1}{\varepsilon^r} \int_0^t \|\gamma^1 T_\varepsilon - T_b\|_{L^5(\partial\Omega)}^5d\tau \nonumber \\ &\qquad+\frac{2\alpha - \alpha^2}{2\varepsilon} \int_0^t \|\gamma^2\psi_\varepsilon - \psi_b\|_{L^2(\Sigma_+;|n\cdot\beta| d\beta d\sigma_x)}^2 d\tau \nonumber\\ &\quad \le C (\|T_{\varepsilon0}\|_{L^5}^5 + \|\psi_{\varepsilon0}\|_{L^2(\Omega\times\mathbb{S}^2)}^2). \end{align} Here $C$ is a positive constant independent on $\varepsilon$. \end{theorem} \begin{proof} To deal with the case of Robin boundary condtion \eqref{b2}, for $s\ge -\frac{1}{2}$, we define the Robin map $\mathcal{R}:H^s(\partial\Omega) \to H^{s+\frac{3}{2}}(\Omega)$ (for example, see \cite{guo2014systems}) with $f=\mathcal{R} g$ as the weak solution for the equation \begin{align} &\Delta f = 0, \text{ in }\Omega, \label{eq:robinf} \\ &\varepsilon^r n\cdot \nabla f + f = g, \text{ on } \partial\Omega. \end{align} We define the operator $\Delta_r$ in $L^2(\Omega)$ by \begin{align*} &\Delta_r: \mathcal{D}(\Delta_r) \subset L^2(\Omega) \to L^2(\Omega),\\ & \Delta_r = -\Delta,\,\, \mathcal{D}(\Delta_r)=\Big\{f\in H^1(\Omega):\Delta f \in L^2(\Omega),\;\varepsilon^r n\cdot \nabla f+f=0 \text{ on }\partial\Omega \Big\}. \end{align*} The space $\mathcal{D}_r$ is equipped with the norm \begin{align*} \|f\|_{\mathcal{D}(\Delta_r)} = (\|\nabla f\|_{L^2(\Omega)}^2 + \frac{1}{\varepsilon^r}\|\gamma f\|_{L^2(\partial\Omega)}^2)^{\frac{1}{2}}. \end{align*} for all $f,h\in H^1(\Omega)$. With the above definitions, we can see that $T_\varepsilon - \mathcal{R}T_b$ satisfies the following condition on the boundary \begin{align*} \varepsilon^r n\cdot \nabla (T_\varepsilon - \mathcal{R}T_b) + T_\varepsilon - \mathcal{R}T_b = 0, \text{ on }\partial \Omega. \end{align*} We take $\{w_m(x)\}_{m=1}^\infty$ to be an orthogonal basis in $\mathcal{D}(\Delta_r)$, for example we can take the complete set of the eigenvectors of $-\Delta_r$ as the basis. It is also orthonormal in $L^2(\Omega)$. We take the operator $\mathbb{P}_m$ to be \begin{align*} \mathbb{P}_m f= \sum_{k=1}^m (f,w_k)w_k(x).\\ \end{align*} Here we take $\mathbb{Q}_m$ as the same with \eqref{eq:operatorQ}. We consider the Galerkin approximate system \begin{align} &\partial_t T_\varepsilon^m = \Delta_r(T_\varepsilon^m - \mathcal{R}T_b) + \frac{1}{\varepsilon^2} \int_{\mathbb{S}^2} (\mathbb{P}_m\psi_\varepsilon^m- \mathbb{P}_m ((T_\varepsilon^m)^4))d\beta, \label{eq:gl31}\\ &\partial_t \psi_\varepsilon^m + \frac{1}{\varepsilon}\beta \cdot\nabla \psi_\varepsilon^m = - \frac{1}{\varepsilon^2}(\psi_\varepsilon^m-\mathbb{Q}_m\mathbb{P}_m(T_\varepsilon^m)^4)).\label{eq:gl32} \end{align} We take $\widetilde{T}$ and $\widetilde{\psi}$ as defined in \eqref{eq:Gequation} but with boundary data \begin{align*} T = \psi_b^{\frac{1}{4}}, \text{ on }\partial\Omega. \end{align*} and by $\widetilde{\psi}=\widetilde{T}^4$, respectively, as before. We take \begin{align* &T_\varepsilon^m - \mathcal{R}T_b = \sum_{k=1}^m d_k(t) w_k(x),\\ &\psi_\varepsilon^m - \widetilde{\psi} = \sum_{k=1}^m \phi_k(t)\varphi_k(x,\beta), \end{align*} into the above system to get an ODE system of $d_k(t)$ and $\phi_k(t)$ with $k=1,\ldots,m$. The existence of the ODE system is guaranteed by the Cauchy-Lipschitz theorem. Next we derive the energy estimate. We multiply \eqref{eq:gl31} by $(T_\varepsilon^m)^4 - \widetilde{T}^4$ and \eqref{eq:gl32} by $\psi_\varepsilon^m - \widetilde{\psi}$ and integrate over time and space to get (same as \eqref{eq:i123456}) \begin{align} \int_{\Omega}&\left(\frac{(T_\varepsilon^m)^5}{5} - \widetilde{T}^4T_\varepsilon^m\right)(t) dx + \frac{1}{2} \iint_{\Omega \times \mathbb{S}^2}(\psi_\varepsilon^m - \widetilde{\psi})^2(t) d\beta dx \nonumber \\ =&\int_{\Omega}\frac{(T_{\varepsilon0}^m)^5}{5} dx + \frac{1}{2} \iint_{\Omega \times \mathbb{S}^2}(\psi_{\varepsilon0}^m)^2 d\beta dx - \frac{1}{2\varepsilon} \int_0^t\iint_{\Omega\times\mathbb{S}^2} \beta \cdot \nabla (\psi_\varepsilon^m - \widetilde{\psi})^2 d\beta dx d\tau \nonumber\\ &+ \int_0^t \iint_{\Omega\times\mathbb{S}^2} \beta \cdot \nabla \widetilde{\psi} (\psi_\varepsilon^m - \widetilde{\psi}) d\beta dxd\tau+\int_0^t \int_{\Omega} ((T_\varepsilon^m)^4 - \widetilde{T}^4)\Delta T_\varepsilon^m dxd\tau \nonumber\\ &- \frac{1}{\varepsilon^2}\int_0^t\iint_{\Omega\times\mathbb{S}^2}\left(\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4)\right) (\psi_\varepsilon^m - \widetilde{\psi} - \mathbb{P}_m ((T_\varepsilon^m)^4) + \widetilde{T}^4)d\beta dxd\tau \nonumber\\ =& I_1+I_2+I_3+I_4+I_5+I_6. \end{align} The terms can be treated as in the proof of Theorem \ref{thmexistd} except $I_5$, which can be estimated by \begin{align*} I_5 =& \int_0^t \int_{\Omega} ((T_\varepsilon^m)^4 - \widetilde{T}^4)\Delta T_\varepsilon^m dxd\tau \\ =&-\frac{16}{25}\int_0^t \int_\Omega |\nabla (T_\varepsilon^m)^{\frac{5}{2}}|^2dxd\tau + \int_0^t\int_{\partial\Omega}((T_\varepsilon^m)^4-\widetilde{T}^4) n\cdot \nabla T_\varepsilon^m d\sigma_x d\tau \\ =& -\frac{16}{25}\int_0^t \int_\Omega |\nabla (T_\varepsilon^m)^{\frac{5}{2}}|^2dxd\tau+\int_0^t \int_{\partial\Omega} ((T_\varepsilon^m)^4-T_b^4) \frac{1}{\varepsilon^r}(-T_\varepsilon^m + T_b) d\sigma_x d\tau\\ =&-\frac{16}{25}\int_0^t \int_\Omega |\nabla (T_\varepsilon^m)^{\frac{5}{2}}|^2dxd\tau - \frac{1}{\varepsilon^r} \int_0^t \int_{\partial\Omega} ((T_\varepsilon^m)^2+T_b^2)(T_\varepsilon^m + T_b) (T_\varepsilon^m - T_b)^2 d\sigma_x d\tau. \end{align*} Using the positivity of $T_{b}$, we get \begin{align*} &((T_\varepsilon^m)^2+T_b^2)(T_\varepsilon^m + T_b) (T_\varepsilon^m - T_b)^2 - {(T_\varepsilon^m-T_b)^5} \\ &\quad= {(T_\varepsilon^m - T_b)^2}2T_b(2(T_\varepsilon^m)^2-T_\varepsilon^m T_b + T_b^2) \ge 0, \end{align*} and \begin{align*} &((T_\varepsilon^m)^2+T_b^2)(T_\varepsilon^m + T_b) (T_\varepsilon^m - T_b)^2 + {(T_\varepsilon^m-T_b)^5} \\ &\quad= {(T_\varepsilon^m - T_b)^2}2T_b((T_\varepsilon^m)^2-T_\varepsilon^m T_b + 2T_b^2) \ge 0, \end{align*} so that \begin{align}\label{EstiL5} ((T_\varepsilon^m)^2+T_b^2)(T_\varepsilon^m + T_b) (T_\varepsilon^m - T_b)^2 \ge |T_\varepsilon^m-T_b|^5. \end{align} Consequently, we obtain \begin{align*} I_5 \le& -\frac{16}{25}\int_0^t \int_\Omega |\nabla (T_\varepsilon^m)^{\frac{5}{2}}|^2dxd\tau -\frac{1}{\varepsilon^r} \int_0^t \|\gamma^1 T_\varepsilon - T_b\|_{L^5(\partial\Omega)}^5d\tau. \end{align*} The energy inequality \eqref{eq:gl2est} then becomes \begin{align}\label{eq:gl3est} &\|T_{\varepsilon}^m(t)\|_{L^5(\Omega))}^5 + \|\psi_\varepsilon^m(t)\|_{L^2(\Omega\times\mathbb{S}^2)}^2 + \int_0^t \|\nabla (T_\varepsilon^m)^{\frac{5}{2}}\|_{L^2(\Omega)}^2 d\tau \nonumber\\ &\qquad + \frac{1}{\varepsilon^2} \int_0^t \|\psi_\varepsilon^m - \mathbb{P}_m((T_\varepsilon^m)^4)\|_{L^2(\Omega\times\mathbb{S}^2)}^2 d\tau+\frac{1}{\varepsilon^r} \int_0^t \|\gamma^1 T_\varepsilon - T_b\|_{L^5(\partial\Omega)}^5d\tau \nonumber \\ &\qquad+\frac{2\alpha - \alpha^2}{2\varepsilon} \int_0^t \|\gamma^2\psi_\varepsilon^m - \psi_b\|_{L^2(\Sigma_+;|n\cdot\beta| d\beta d\sigma_x)}^2 d\tau \nonumber\\ &\quad \le C e^{Ct}\left(\|T_{\varepsilon0}\|_{L^5}^5 + \|\psi_{\varepsilon0}\|_{L^2(\Omega\times\mathbb{S}^2)}^2\right). \end{align} We can pass to the limit $m\to \infty$ and use trace theorem like in the proof of Theorem \ref{thmexistd} to get \begin{align*} &\gamma^1 T_\varepsilon^m \to \gamma^1 T_\varepsilon, \text{ strongly in }L^5_{\operatorname{loc}}([0,\infty);L^5(\partial\Omega))\\ &\gamma^2\psi_\varepsilon^m \rightharpoonup \gamma^2 \psi_\varepsilon, \text{ weakly in }L^2_{\operatorname{loc}}([0,\infty);L^2(\Sigma;|n\cdot\beta| d\beta d\sigma_x d\tau)). \end{align*} With this we can apply test functions on the Galerkin system \eqref{eq:gl31}-\eqref{eq:gl32} and pass to the limit $m\to \infty$ to show \eqref{eq:weakr1}-\eqref{eq:weakr2} holds. To show the energy inequality \eqref{eq:energyrobin}, we can pass to the limit $m\to \infty$ in the above energy estimate and follow the proof of Theorem \ref{thm:existencet} and Theorem \ref{thmexistd}, except here we additionally need \begin{align*} \int_0^t \|\gamma^1 T_\varepsilon - T_b\|_{L^5(\partial\Omega)}^5d\tau \le \liminf_{m\to \infty}\int_0^t \|\gamma^1 T_\varepsilon^m - T_b\|_{L^5(\partial\Omega)}^5 d\tau \end{align*} which comes from the convergence of $\gamma^1 T_\varepsilon$. \end{proof} \section{Passage to the limit with weak compactness method} \label{section3} Now we state the main theorem of this paper. \begin{theorem}\label{LimitProof} Consider a family of weak solution $\left(T_\varepsilon,\psi_\varepsilon \right)$ of \eqref{eq:Teps}-\eqref{eq:psieps} with initial conditions \eqref{eq:ic1}-\eqref{eq:ic2} and boundary condition \eqref{bpsi} for $\psi_\varepsilon$ and boundary conditions \eqref{b1}, or \eqref{b3} or \eqref{b2} for $T_\varepsilon$, defined in Definition \ref{df1}, Definition \ref{def3} and Definition \ref{def2}, respectively. Assume the initial data satisfies \[ \|T_{\varepsilon0}-\overline{T}_{0}\|_{L^{5}(\Omega)} \rightarrow 0\quad \text{and}\quad \|\psi_{\varepsilon0}-\overline{T}_0^4\|_{L^{2}(\Omega\times\mathbb{S}^{2})} \rightarrow 0,\; \quad\text{as} \; \varepsilon \rightarrow 0, \] where $\overline{T}_0 \in L^8(\Omega)$. We also suppose that the well prepared data condition \eqref{eq:wellbc} is satisfied. Then, when $\varepsilon \to 0$, we can extract a subsequence of $\left(T_\varepsilon,\psi_\varepsilon \right)$ such that for $t>0$, \begin{align} &T_\varepsilon \to \overline{T} \quad \text{almost everywhere}, \label{eq:tstrong}\\ &\psi_\varepsilon \to \overline{T}^{4} \quad \text{strongly in } L^2([0,t]\times\Omega\times\mathbb{S}^{2}).\label{eq:psistrong} \end{align} Moreover, $\overline{T}=\overline{T}(t,x)$ is the weak solution of the limit equation \begin{align}\label{Limitsystem} \partial_{t}\left(\overline{T}+4\pi \overline{T}^{4}\right)=\Delta\left(\overline{T}+\frac{4\pi}{3}\overline{T}^{4}\right), \end{align} with initial condition \begin{align}\label{eq:inicond} \overline{T}(0,x)=\overline{T}_{0}(x),\,& x \in \Omega, \end{align} and boundary condition \begin{align}\label{eq:Boundarycond} \overline{T}(t,x)= T_{b}(t,x), \, t>0 \text{ and } x\in\partial \Omega. \end{align} \end{theorem} \begin{proof} The proof can be divided into two steps. First we show the convergence of the solutions of the system \eqref{eq:Teps}-\eqref{eq:psieps}, i.e. \eqref{eq:tstrong} and \eqref{eq:psistrong} hold. Then we show that the limit $\overline{T}$ satisfies the equation \eqref{Limitsystem} as well as the initial and boundary conditions. \textbf{Convergence of the solutions for system \eqref{eq:Teps}-\eqref{eq:psieps}.} From Theorem \ref{thm:existencet}, or Theorem \ref{thmexistd} or Theorem \ref{thmexistr}, we get, under any of the three type boundary conditions considered in the above theorems, \begin{align}\label{eq:estC} &\|T_\varepsilon(t)\|_{L^5(\Omega)}^5 +\|\psi_\varepsilon(t)\|_{L^2(\Omega\times\mathbb{S}^2)}^2 + \int_0^t \|\nabla T_\varepsilon^{\frac{5}{2}}\|_{L^2}^2 d\tau \nonumber\\ &\quad+ \int_0^t \left\|\frac{1}{\varepsilon}(\psi_\varepsilon-T_\varepsilon^4)\right\|_{L^2(\Omega\times\mathbb{S}^2)}^2d\tau \le C. \end{align} Here $C$ does not depend on $\varepsilon$. Therefore, it follows that up to a subsequence, \begin{align} &\psi_\varepsilon \rightharpoonup \overline{\psi}, \text{ weakly in }L^2([0,t];L^2(\Omega\times\mathbb{S}^2)), \label{eq:wk1}\\ &T_\varepsilon \rightharpoonup \overline{T}, \text{ weakly in }L^5([0,t];L^5(\Omega)), \label{eq:wk2}\\ &T_\varepsilon^{\frac{5}{2}} \rightharpoonup \overline{T_\varepsilon^{\frac{5}{2}}}, \text{ weakly in } L^2([0,t];H^1(\Omega)),\label{eq:wk3}\\ &\psi_\varepsilon - T_\varepsilon^4 \to 0, \text{ strongly in }L^2([0,t];L^2(\Omega\times\mathbb{S}^2)),\label{eq:wk4} \\ &\frac{1}{\varepsilon}(\psi_\varepsilon-T_\varepsilon) \rightharpoonup A, \text{ weakly in }L^2([0,t];L^2(\Omega\times\mathbb{S}^2)). \label{eq:wk5} \end{align} Here and below, $\overline{f_\varepsilon}$ denotes the weak limit of $\{f_\varepsilon\}_{\varepsilon>0}$ while $\varepsilon\to 0$. Since \begin{align*} 4\pi \|T_\varepsilon^4\|_{L^2(\Omega)}=\|T_\varepsilon^4\|_{L^2(\Omega\times\mathbb{S}^2)} \le \|\psi_\varepsilon-T_\varepsilon^4\|_{L^2(\Omega\times\mathbb{S}^2)} + \|\psi_\varepsilon\|_{L^2(\Omega\times\mathbb{S}^2)}, \end{align*} we have \begin{align*} T_\varepsilon \text{ in uniformly bounded in }L^8([0,t];L^8(\Omega)). \end{align*} It follows that \begin{align}\label{eq:wklp} T_\varepsilon^p \rightharpoonup \overline{T_\varepsilon^p}, \text{ weakly in }L^{q_1}([0,t];L^{q_2}(\Omega)), \end{align} for any $1\le p\le 8$ and $q_1 \le \frac{8}{p}, q_2 \le \frac{8}{p}$. Taking the integral of \eqref{eq:psieps} over $\beta\in\mathbb{S}^2$ and adding \eqref{eq:Teps}, we get \begin{align} \partial_t \left(T_\varepsilon +\langle \psi_\varepsilon \rangle \right) + \frac{1}{\varepsilon} \nabla \cdot \langle \psi_\varepsilon \beta\rangle = \Delta T_\varepsilon. \end{align} Since for all $t>0$, \begin{align*} \|T_\varepsilon\|_{L^1([0,t];L^1(\Omega))} \le C \|T_\varepsilon\|_{L^5([0,t];L^5(\Omega))} \end{align*} is uniform bounded in $\varepsilon$, we get that $ \Delta T_\varepsilon$ is bounded in ${L^{1}([0,t]; W^{-2,1}(\Omega))}$. Moreover, using \eqref{eq:estC}, we have \begin{align*} \int_0^t& \int_\Omega \int_{\mathbb{S}^{2}} \frac{1}{\varepsilon} \psi_\varepsilon \beta d\beta dx d\tau \\ =& \int_0^t \int_\Omega \int_{\mathbb{S}^{2}} \frac{1}{\varepsilon} (\psi_\varepsilon -T_\varepsilon^4)\beta d\beta dxd\tau\\ \le& \left(\frac{1}{\varepsilon^2}\int_0^t \int_\Omega \int_{\mathbb{S}^{2}} (\psi_\varepsilon-T_\varepsilon^4)^2 d\beta dxd\tau\right)^{\frac{1}{2}} \left(\int_0^t \int_\Omega \int_{\mathbb{S}^{2}} |\beta|^2 d\beta dxd\tau\right)^{\frac{1}{2}} \\ \le & C \int_0^t \left\|\frac{1}{\varepsilon}(\psi_\varepsilon-T_\varepsilon^4)^2 d\beta dxd\tau\right\|_{L^2(\Omega\times\mathbb{S}^2)} \le C. \end{align*} Therefore, $\frac{1}{\varepsilon} \nabla \cdot \langle \psi_\varepsilon \beta \rangle $ is bounded in ${L^{1}([0,t]; W^{-1,1}(\Omega))}$. Consequently, we have \begin{align}\label{eq:partialtl1} \partial_t (T_\varepsilon + \langle \psi_\varepsilon \rangle) \in L^{1}([0,t]; W^{-2,1}(\Omega)). \end{align} In addition, from \eqref{eq:estC}, we can get \begin{align}\label{eq:tpsimoml2} T_\varepsilon +\langle \psi_\varepsilon \rangle \in L^2([0,t]\times\Omega). \end{align} From \eqref{eq:wk1} and \eqref{eq:wk2}, we deduce \begin{align}\label{eq:tpluspsiconver} T_\varepsilon + \langle \psi_\varepsilon \rangle \rightharpoonup \overline{T}+\langle \overline{\psi} \rangle, \quad \text{ weakly in } L^2([0,t]\times\Omega). \end{align} On the other hand, from \eqref{eq:wk3}, we have \begin{align}\label{eq:wealconvT52} T_{\varepsilon}^{\frac{5}{2}} \rightharpoonup \overline{T_\varepsilon ^{\frac{5}{2}}},\quad \text{ weakly in } L^2([0,t]\times\Omega). \end{align} Then Lemma \ref{Lemmalions1996mathematical}, with its assumptions verified by \eqref{eq:partialtl1}- \eqref{eq:wealconvT52}, implies that \begin{align}\label{eq:weakprod} \left(T_\varepsilon + \langle \psi_\varepsilon \rangle\right) T_\varepsilon^{\frac{5}{2}} \rightharpoonup \left(\overline{T} + \langle \overline{\psi} \rangle \right) \overline{T_\varepsilon ^{\frac{5}{2}}}, \quad \text{in the sense of distributions}. \end{align} Moreover, due to \eqref{eq:wk4}, we have $\overline{\psi} - \overline{T_\varepsilon ^4}=0$. Taking this into \eqref{eq:weakprod}, we conclude that \begin{align}\label{eq:13523} &\left(T_{\varepsilon}+\langle \psi_\varepsilon \rangle \right)T_\varepsilon^{\frac{5}{2}} \rightharpoonup (\overline{T} + 4\pi\overline{T_\varepsilon^4}) \overline{T_\varepsilon^{\frac{5}{2}}}, \quad \text{in the sense of distributions.} \end{align} On the other hand, using the weak convergence \eqref{eq:wklp} with $p=\frac{7}{2},\frac{13}{2}$ and the strong convergence \eqref{eq:wk4}, we get \begin{align*} &T_\varepsilon^{\frac{7}{2}} \rightharpoonup \overline{T_\varepsilon^{\frac{7}{2}}}, \quad \text{weakly in }L^2([0,t]\times\Omega), \\ & \langle \psi_\varepsilon\rangle T_\varepsilon^{\frac{5}{2}} = \langle \psi_\varepsilon - T_\varepsilon^4 \rangle T_\varepsilon^{\frac{5}{2}} + 4\pi T_\varepsilon^{\frac{13}{2}} \rightharpoonup 4\pi \overline{T_\varepsilon^{\frac{13}{2}}}, \quad \text{weakly in }L^{\frac{16}{13}}([0,t]\times\Omega). \end{align*} Therefore, \[\left(T_{\varepsilon}+\int_{\mathbb{S}^{2}} \psi_\varepsilon \right)T_\varepsilon^{\frac{5}{2}} \rightharpoonup \overline{T_\varepsilon^{\frac{7}{2}}}+\overline{T_\varepsilon^{\frac{13}{2}}}, \text{ in the sense of distributions}.\] Comparing \eqref{eq:13523} and using the uniqueness of weak limits, we arrive at \begin{align}\label{eq:weaklimiteq} \overline{T_\varepsilon^{\frac{7}{2}}}+4\pi\overline{T_\varepsilon^{\frac{13}{2}}}=\overline{T}\overline{T_\varepsilon^{\frac{5}{2}}}+4\pi\overline{T_\varepsilon^{4}}\overline{T_\varepsilon^{\frac{5}{2}}}. \end{align} Next we use the family of Young measures $\{\nu_{x}\}_{x\in \Omega}$ (see \cite[Theorems 2.2,2.3]{chen2000compactness} and \cite{tartar1979compensated,balder1995lectures,ball1989version}) associated with the $\{{T_{\varepsilon_{n}},}_{n\in \mathbb{N}} \}$ to prove that \eqref{eq:weaklimiteq} implies the strong convergence of $\overline{T_\varepsilon}$ to $\overline{T}$. Indeed, we have \begin{align} (T_\varepsilon^{m_k})^p \rightharpoonup \int_{\mathbb{R}} \lambda^p d\nu_x(\lambda). \end{align} for any $p \ge 1$. Hence, \begin{align*} &\overline{T^{\frac{7}{2}}_{\varepsilon}}+\overline{T^{\frac{13}{2}}_{\varepsilon}}\overline{T^{5/2}_{\varepsilon}}=\int_{\mathbb{R}} \lambda^{\frac{7}{2}} d\nu_x(\lambda) + \int_{\mathbb{R}} \lambda^{\frac{13}{2}} d\nu_x(\lambda),\\ & \overline{T}\overline{T_\varepsilon^{\frac{5}{2}}}+\overline{T_\varepsilon^{4}}\overline{T_\varepsilon^{\frac{5}{2}}} = \int_{\mathbb{R}}\int_{\mathbb{R}} \mu\lambda^{\frac{5}{2}} d\nu_x(\lambda) d\nu_x(\mu) + \int_{\mathbb{R}}\int_{\mathbb{R}} \lambda^4\mu^{\frac{5}{2}} d\nu_x(\lambda) d\nu_x(\mu) . \end{align*} From \eqref{eq:weaklimiteq}, the above two equations equal, that is \begin{align*} \int_{\mathbb{R}}\int_{\mathbb{R}} \left(\lambda^{\frac{5}{2}}(\lambda-\mu) + \lambda^4(\lambda^{\frac{5}{2}}-\mu^{\frac{5}{2}})\right)d\nu_x(\lambda)d\nu_x(\nu) =0. \end{align*} Using the symmetric property of the above formula leads to \begin{align*} 0 \le\int_{\mathbb{R}}\int_{\mathbb{R}} \left((\lambda^{\frac{5}{2}}-\mu^{\frac{5}{2}})(\lambda-\mu) + (\lambda^4-\mu^4)(\lambda^{\frac{5}{2}}-\mu^{\frac{5}{2}})\right)d\nu_x(\lambda)d\nu_x(\nu) =0. \end{align*} Since the function inside the integral is strictly positive unless $\lambda=\mu$, we can conclude that $\nu_x(\lambda)$ reduces almost all points of $x$ to a family of Dirac masses concentrated at $\nu_x=\delta_{T_\varepsilon(x)}$. Therefore, according to \cite[Theorem 2.3]{chen2000compactness}, we can conclude that \begin{align}\label{eq:tc} T_\varepsilon \to \overline{T}, \text{ almost everywhere.} \end{align} From this, we get $T_\varepsilon^4 \to \overline{T}^4$ almost everywhere. This combines \eqref{eq:wk4} implies that \begin{align}\label{eq:psic} \psi_\varepsilon \to \overline{T}^4, \quad \text{strongly in } L^2([0,t]\times\Omega \times\mathbb{S}^2). \end{align} Therefore, we have proved \eqref{eq:tstrong} and \eqref{eq:psistrong}. \bigskip \textbf{The limiting system.} To show the limit function $\overline{T}$ satisfy equation \eqref{Limitsystem}, we define \[\rho_\varepsilon = \int_{\mathbb{S}^2} \psi_\varepsilon d\beta, \; j_\varepsilon = \frac{1}{\varepsilon} \int_{\mathbb{S}^2} \psi_\varepsilon\beta d\beta \cdot \] We have \begin{align}\label{eq:rho} \partial_t \rho_\varepsilon + \nabla \cdot j_\varepsilon = -\frac{1}{\varepsilon^2} \int_{\mathbb{S}^{2}} (\psi_\varepsilon-T_\varepsilon^4) d\beta \cdot \end{align} Comparing the equation \eqref{eq:Teps} and \eqref{eq:rho}, we get \begin{align* \partial_t T_\varepsilon - \Delta T_\varepsilon = -(\partial_t \rho_\varepsilon +\nabla \cdot j_\varepsilon). \end{align*} Using \eqref{eq:tc}-\eqref{eq:psic} and $\rho_\varepsilon \to 4\pi \overline{T}^4$ we can pass to the limit in the above equation to get \begin{align}\label{eq:tbarjbar} \partial_t \overline{T} - \Delta \overline{T} = -4\pi\partial_t \overline{T}^4 + \nabla \cdot \overline{j_\varepsilon}, \end{align} in the sense of distributions. Here we use $\overline{j_\varepsilon} $ to denote the weak limit of $j_\varepsilon$. Next we find the weak limit of $j_\varepsilon$. Using equation \eqref{eq:psieps}, we can get \begin{align*} j_\varepsilon = \frac{1}{\varepsilon} \int_{\mathbb{S}^2} \psi_\varepsilon\beta d\beta = \frac{1}{\varepsilon} \int_{\mathbb{S}^2} (\psi_\varepsilon-T_\varepsilon^4)\beta d\beta = -\varepsilon \partial_t\int_{\mathbb{S}^2} \psi_\varepsilon \beta d\beta -\nabla\cdot\int_{\mathbb{S}^2} (\psi_\varepsilon \beta\otimes \beta) d\beta. \end{align*} From the convergence of $\psi_\varepsilon$, we can see the \begin{align*} \varepsilon \partial_t \int_{\mathbb{S}^2} \psi_\varepsilon \beta d\beta \rightharpoonup 0, \text{ weakly in }L^2([0,t];L^2(\Omega)) \end{align*} as $\varepsilon \to 0$, and \begin{align*} \int_{\mathbb{S}^2} \psi_\varepsilon \beta \otimes\beta d\beta \to \overline{T}^4 \int_{\mathbb{S}^2} \beta \otimes\beta d\beta = \frac{4\pi}{3} \overline{T}^4 I \end{align*} where $I$ is the identity matrix in $\mathbb{R}^3$. Therefore, we get \begin{align*} \nabla \cdot j_\varepsilon \rightharpoonup -\Delta (4\pi T_\varepsilon^4), \text{ in the sense of distributions.} \end{align*} It follows from this and \eqref{eq:tbarjbar} that \begin{align*} \partial_t \overline{T} -\Delta \overline{T} = -4\pi \partial_t \overline{T}^4 +\frac{4\pi}{3} \Delta \overline{T}^4 \end{align*} holds in the sense of distributions, i.e. equation \eqref{Limitsystem} holds. \bigskip \textbf{The initial condition.} Next we show the initial condition \eqref{eq:inicond} of the limit system \eqref{Limitsystem} holds in a weak sense. We consider \begin{align*} \int_0^t \int_{\Omega} \left(\frac{1}{\varepsilon} \int_{\mathbb{S}^2} \psi_\varepsilon \beta d\beta\right)^2 dxd\tau =& \int_0^t \int_{\Omega} \left(\frac{1}{\varepsilon} \int_{\mathbb{S}^2} (\psi_\varepsilon- T_\varepsilon^4) \beta d\beta\right)^2 dxd\tau \\ \le&\frac{1}{\varepsilon^2}\int_0^t \int_{\Omega} \int_{\mathbb{S}^2} \left(\psi_\varepsilon-T_\varepsilon^4 \right)^2 d\beta dxd\tau \end{align*} which is uniformly bounded due to \eqref{eq:estC}. This leads to \begin{align*} \partial_t (T_\varepsilon + \langle \psi_\varepsilon \rangle) \in L^2([0,t];H^{-2}(\Omega)), \end{align*} which combing with \eqref{eq:tc}-\eqref{eq:psic} implies that \begin{align*} \overline{T} + 4\pi \overline{T}^4 \in C_w([0,t];L^2(\Omega)). \end{align*} Consequently, we get \[ \overline{T}(t=0)+4\pi \overline{T}^4(t=0)= \lim_{\varepsilon\to0} \left(T_{\varepsilon0}+<\psi_{\varepsilon0}>\right)=\overline{T}_{0}+4\pi \overline{T}_{0}^4. \] Hence, from \eqref{eq:wellinitial} it follows \begin{align*} \overline{T}(t=0) = \overline{T}_0 = \lim_{\varepsilon\to0} T_{\varepsilon0}, \end{align*} in a weak sense. \bigskip Now let us deal with the boundary conditions. \textbf{Dirichlet boundary condtion.} For the boundary condition of $\overline{T}$, due to \eqref{eq:wk3} and \eqref{eq:tc}, there exists a subsequence $\{T_{\varepsilon_k}\}_{k=1}^\infty$ satisfying \begin{align*} T_{\varepsilon_k}^{\frac{5}{2}} \to \overline{T}^{\frac{5}{2}}, \text{ strongly in } L^2([0,t];H^{1-\delta}(\Omega)) \end{align*} for $0<\delta<\frac12$ small. Then we can use the continuity of the trace operator to get \begin{align}\label{eq:traceconv} \gamma^1 T_{\varepsilon_k}^{\frac{5}{2}}\to \gamma^1 \overline{T}^{\frac{5}{2}}, \text{ strongly in } L^2([0,t];L^2(\partial\Omega)). \end{align} For the Dirichlet boundary condition \eqref{b3}, we have $\gamma^1 T_\varepsilon = T_b$, hence \begin{align*} \gamma^1 \overline{T} = T_b. \end{align*} This verifies \eqref{eq:Boundarycond} in the Dirichlet case. \textbf{Robin boundary condition with $r>0$.} We can use the following inequality from the energy inequality \eqref{eq:energyrobin}: \begin{align*} \frac{1}{\varepsilon^r} \int_0^t \|\gamma^1 T_\varepsilon - T_b\|_{L^5(\partial\Omega)}^5d\tau \le C, \end{align*} to deduce that \begin{align*} \gamma^1 T_\varepsilon \to T_b, \text{ strongly in }L^5([0,t];L^5(\partial\Omega)). \end{align*} This combines \eqref{eq:traceconv} leads to \begin{align*} \gamma^1 \overline{T} = T_b. \end{align*} The case of Robin boundary condition \eqref{b2} with $r=0$ will be shown later. \textbf{Boundary condition for $\overline{\psi}$.} To show the boundary condition for $\overline{\psi}$. From \eqref{eq:estC}, we get that \begin{align*} &\psi_\varepsilon \in L^2([0,t];L^2(\Omega\times\mathbb{S}^2)),\\ &(\varepsilon\partial_t + \beta \cdot \nabla) \psi_\varepsilon = \frac{1}{\varepsilon} (\psi_\varepsilon - T_\varepsilon^4) \in L^2([0,t];L^2(\Omega\times\mathbb{S}^2)), \end{align*} we can use the definition of the trace operator $\gamma^2$ to get \begin{align*} \gamma^2 \psi_\varepsilon \rightharpoonup \overline{\gamma^2 \psi_\varepsilon} \text{ weakly in }L^2([0,t];L^2(\Sigma;|n\cdot\beta|d\beta d\sigma_x)). \end{align*} To show $\overline{\gamma^2 \psi_\varepsilon} = \gamma^2 \overline{T}^{4}$, we multiply \eqref{eq:psieps} by $\varepsilon \rho$ with $\rho \in C^\infty([0,t]\times\Omega)$ and integrate over time and space, we get \begin{align* &\varepsilon \iint_{\Omega\times\mathbb{S}^2} \psi_\varepsilon(t) \rho(t) d\beta dx d\tau -\varepsilon \iint_{\Omega\times\mathbb{S}^2} \psi_{\varepsilon0} \rho(0) d\beta dx d\tau-\varepsilon\int_{0}^{t}\iint_{\Omega\times\mathbb{S}^2}\psi_\varepsilon\partial_{t}\rho d\beta dx\nonumber\\ &-\int_{0}^{t}\iint_{\Omega\times\mathbb{S}^2}\psi_\varepsilon\beta\cdot\nabla\rho d\beta dx d\tau+\int_{0}^{t}\iint_{\Sigma}(\beta\cdot n)\gamma^{2}\psi_\varepsilon\cdot\rho d\beta d\sigma_x d\tau\nonumber\\&=-\frac{1}{\varepsilon}\int_{0}^{t}\int_{\Omega\times\mathbb{S}^2}(\psi_\varepsilon-T_\varepsilon^4)\rho d\beta dx d\tau. \end{align*} We pass to the limit $\varepsilon\to 0$ in the above equation and use \eqref{eq:psic} and \eqref{eq:wk5} to get \begin{align} \label{eq:bdbarpsiarho} &-\int_{0}^{t}\iint_{\Omega\times\mathbb{S}^2}\overline{\psi}\beta\cdot\nabla\rho d\beta dx d\tau+\int_{0}^{t}\iint_{\Sigma}(\beta\cdot n)\overline{\gamma^{2}\psi_\varepsilon}\cdot\rho d\beta d\sigma_x d\tau\nonumber\\ &\quad=-\int_{0}^{t}\int_{\Omega\times\mathbb{S}^2}A\cdot \rho d\beta dx d\tau. \end{align} On the other hand, since we take the weak limit in \begin{align*} \varepsilon \partial_t \psi_\varepsilon + \beta \cdot \nabla \psi_\varepsilon = -\frac{1}{\varepsilon}(\psi_\varepsilon - T_\varepsilon^4), \end{align*} and get \begin{align*} \beta \cdot \nabla \overline{\psi} = A. \end{align*} Applying the test function $\rho$ on this equation leads to \begin{align*} &-\int_{0}^{t}\iint_{\Omega\times\mathbb{S}^2}\overline{\psi}\beta\cdot\nabla\rho d\beta dx d\tau+\int_{0}^{t}\iint_{\Sigma}(\beta\cdot n)\gamma^{2}\overline{\psi}\cdot\rho d\beta d\sigma_x d\tau\nonumber\\ &\quad=-\int_{0}^{t}\int_{\Omega\times\mathbb{S}^2}A\cdot \rho d\beta dx d\tau. \end{align*} Comparing the above equation with \eqref{eq:bdbarpsiarho} leads to \begin{align*} \gamma^2\overline{\psi} = \overline{\gamma^2\psi_\varepsilon}. \end{align*} On the other hand, we can use the boundary term in the energy inequality \eqref{eq:estdirichlet} or \eqref{eq:energyrobin}: \begin{align*} &\qquad\frac{2\alpha - \alpha^2}{2\varepsilon} \int_0^t \|\gamma^2\psi_\varepsilon - \psi_b\|_{L^2(\Sigma_+;|n\cdot\beta| d\beta d\sigma_x)}^2 d\tau \le C, \end{align*} to deduce that \begin{align*} \gamma^2 \psi_\varepsilon \to \psi_b, \text{ strongly in }L^2(\Sigma_+;|n\cdot\beta|d\beta d\sigma_x). \end{align*} It follows that \begin{align*} \gamma^2 \overline{\psi}|_{\Sigma_+} = \psi_b. \end{align*} In addition, we can pass to the limit $\varepsilon\to 0$ in the boundary condition \eqref{bpsi} to get \begin{align*} \gamma^2 \overline{\psi}|_{\Sigma_-} = \alpha \psi_b + (1-\alpha)\gamma^2 L\psi_\varepsilon|_{\Sigma_+} = \psi_b. \end{align*} Therefore, \begin{align} \gamma^2 \overline{\psi} = \psi_b = T_b^4. \end{align} \textbf{Robin boundary condition with $r=0$.} We can use the above formula to get \begin{align*} \gamma^2 \overline{T}^4 = T_b^4, \end{align*} from which we can deduce that \begin{align*} \gamma^2 \overline{T} =T_b, \end{align*} i.e. \eqref{eq:Boundarycond} also holds in the case of Robin boundary condition with $r=0$. Hence we finish the proof. \end{proof} \begin{rem} Here in the proof we use the continuity of $T_\varepsilon^{\frac{5}{2}}$ and Lemma \ref{Lemmalions1996mathematical} to show the strong convergence of $T_\varepsilon$. If we drop the Laplacian term in the equation \ref{eq:Teps}, we can no longer show this by the above proof. However, thanks to the averaging lemma, i.e. Lemma \ref{AveragingLemma}, we have that for any $\eta \in C^\infty(\mathbb{S}^2)$, \begin{align} \left\|\int_{\mathbb{S}^2} (\psi_\varepsilon(\cdot,\cdot+y,\beta) - \psi_\varepsilon(\cdot,\cdot,\beta))d\beta\right\|_{L^2([0,t];L^2(\mathbb{T}^3))} \to 0, \end{align} as $y\to 0$ uniformly in $\varepsilon$. Thus we can take $h$ to be \begin{align*} \langle \psi_\varepsilon \rangle = \int_{\mathbb{S}^2} \psi_\varepsilon d\beta \end{align*} instead of $(T_\varepsilon)^{\frac{5}{2}}$ in Lemma \ref{Lemmalions1996mathematical} and the equation \eqref{eq:weakprod} becomes \begin{align}\label{eq:prodwk123} \left(T_\varepsilon +\langle \psi_\varepsilon \rangle \right)\langle \psi_\varepsilon \rangle \rightharpoonup \left(\overline{T} + \langle \overline{\psi} \rangle \right) \langle \overline{\psi} \rangle. \end{align} Due to the strong convergence in \eqref{eq:wk4}, \begin{align*} \langle \psi_\varepsilon - T_\varepsilon^4 \rangle \to 0, \end{align*} which leads to \begin{align*} &T_\varepsilon \langle \psi_\varepsilon \rangle = T_\varepsilon \langle \psi_\varepsilon - T_\varepsilon^4 \rangle + 4\pi T_\varepsilon^5 \rightharpoonup \langle A \rangle \overline{T} + 4\pi \overline{T_\varepsilon^5}, \\ &\langle \psi_\varepsilon \rangle \langle \psi_\varepsilon \rangle = \langle \psi_\varepsilon - T_\varepsilon^4 \rangle \left(\langle \psi_\varepsilon - T_\varepsilon^4 \rangle + 4\pi T_\varepsilon^4\right) +4\pi T_\varepsilon^4 \langle \psi_\varepsilon - T_\varepsilon^4 \rangle + 16\pi^{2}T_\varepsilon^8 \\ &\qquad \rightharpoonup \langle A\rangle (\langle A\rangle + 4\pi \overline{T_\varepsilon^4}) + 4\pi \overline{T_\varepsilon^4}\langle A\rangle + 16\pi^2 \overline{T_\varepsilon^8}, \end{align*} weakly in $L^2_{\operatorname{loc}}([0,\infty);L^2(\mathbb{T}^3))$ as $\varepsilon \to 0$. Thus we can also pass to the limit $\varepsilon \to 0$ in \eqref{eq:prodwk123} and get the same limit with \begin{align*} \langle A \rangle & \overline{T} + 4\pi \overline{T_\varepsilon^5} + \langle A\rangle(\langle A\rangle + 4\pi \overline{T_\varepsilon^4})+4\pi \overline{T_\varepsilon^4}\langle A\rangle + 16\pi^2 \overline{T_\varepsilon^8}\\ =& (\overline{T} + (\langle A\rangle + 4\pi \overline{T_\varepsilon^4}))(\langle A\rangle + 4\pi \overline{T_\varepsilon^4}), \end{align*} which implies that \begin{align*} 4\pi \overline{T_\varepsilon^5} + 16\pi^2 \overline{T_\varepsilon^8} = \overline{T}4\pi \overline{T_\varepsilon^4} + 16\pi^2 \overline{T_\varepsilon^4}\cdot\overline{T_\varepsilon^4}. \end{align*} We can also apply the Young measure theory to get \begin{align*} 4\pi& \int_{\mathbb{R}} \lambda^5 d\nu_x(\lambda) + 16\pi^2 \int_{\mathbb{R}} \lambda^8 d\nu_x(\lambda)\\ =& 4\pi \int_{\mathbb{R}} \int_{\mathbb{R}} \lambda \mu^4 d\nu_x(\lambda)d\nu_x(\mu) + 16\pi^2 \int_{\mathbb{R}}\int_{\mathbb{R}} \lambda^4\mu^4 d\nu_x(\lambda) d\nu_x(\mu). \end{align*} It implies that \begin{align*} 4\pi& \int_{\mathbb{R}}\int_{\mathbb{R}} \lambda(\lambda^4-\mu^4) d\nu_x(\lambda) d\nu_x(\mu) + 16\pi^2 \int_{\mathbb{R}}\int_{\mathbb{R}} \lambda^4(\lambda^4-\mu^4) d\nu_x(\lambda) d\nu_x(\mu) \\ =& 4\pi \int_{\mathbb{R}}\int_{\mathbb{R}} (\lambda-\mu)(\lambda^4-\mu^4) d\nu_x(\lambda) d\nu_x(\mu) + 16\pi^2 \int_{\mathbb{R}}\int_{\mathbb{R}}(\lambda^4-\mu^4)^2 d\nu_x(\lambda) d\nu_x(\mu) \\ =&0. \end{align*} Therefore, $\nu_x$ is concentrated at $\delta_{T_\varepsilon(x)}$. We can thus conclude that \eqref{eq:tc} holds. Therefore, the above theorem also holds for the system \begin{align*} \partial_t T_\varepsilon =~&\frac{1}{\varepsilon^2}(\langle \psi_\varepsilon\rangle - 4\pi T_\varepsilon^4), \\ \partial_t \psi_\varepsilon + \frac{1}{\varepsilon} \beta \cdot \nabla \psi_\varepsilon =~& -\frac{1}{\varepsilon^2}(\psi_\varepsilon - T_\varepsilon^4). \end{align*} \end{rem} \section{The relative entropy method}\label{section4} The compactness method gives a clear justification of the diffusive limit of the system \eqref{eq:Teps}-\eqref{eq:psieps}. In this section, we give the rate of convergence of the diffusive limit under assumption on regularity of the limit system \eqref{hgm1.0}. We will introduce a relative entropy function to compare the solutions between \eqref{eq:Teps}-\eqref{eq:psieps} and \eqref{hgm1.0}. The difference of their solutions are estimated using this relative entropy function. To compare solutions of the equations \eqref{eq:Teps}-\eqref{eq:psieps} and \eqref{hgm1.0}, we notice that the limit system \eqref{hgm1.0} does not include the equation for $\psi_\varepsilon$ is not included. To use the relative entropy method, we define $\overline{\psi}$ as follows \begin{align*} \overline{\psi} = \overline{T}^4 - \varepsilon \beta \cdot \nabla \overline{T}^4 -\varepsilon^2 \partial_t \overline{T}^4 + \varepsilon^2 \beta \cdot \nabla (\beta \cdot \nabla \overline{T}^4). \end{align*} so that $\overline{T}$ and $\overline{\psi}$ satisfies \begin{align} \partial_t \overline{T} =~& \Delta \overline{T} + \frac{1}{\varepsilon^2} \int_{\mathbb{S}^{2}} ( \overline{\psi}-\overline{T}^4) d\beta, \label{eq_1}\\ \partial_t \overline{\psi} + \frac{1}{\varepsilon} \beta \cdot \nabla \overline{\psi} =~& - \frac{1}{\varepsilon^2}(\overline{\psi}-\overline{T}^4)+\overline{R}. \label{eq_2} \end{align} where \begin{align*} \overline{R} =& \partial_t \overline{\psi} + \frac{1}{\varepsilon} \beta \cdot \nabla \overline{\psi} + \frac{1}{\varepsilon^2} (\overline{\psi} - \overline{T}^4) \\ =& \partial_t \overline{T}^4 + \frac{1}{\varepsilon} \beta \cdot \nabla \overline{T}^4-\beta \cdot \nabla (\beta \cdot \nabla \overline{T}^4) - \frac{1}{\varepsilon} \beta \cdot \nabla \overline{T}^4 -\partial_t \overline{T}^4 +\beta \cdot \nabla (\beta \cdot \nabla \overline{T}^4) \\ &-\varepsilon \beta \cdot \nabla \partial_t \overline{T}^4+\varepsilon \beta \cdot \nabla (-\partial_t \overline{T}^4+\beta \cdot \nabla (\beta \cdot \nabla \overline{T}^4)) -\varepsilon^2\partial_t^2 \overline{T}^4+\varepsilon^2 \beta \cdot \nabla (\beta \cdot \nabla \overline{T}^4) \\ =&\varepsilon \beta \cdot \nabla (-2 \partial_t \overline{T}^4+ \beta \cdot \nabla (\beta \cdot \nabla \partial_t\overline{T}^4)) - \varepsilon^2(\partial_t \overline{T}^4-\beta \cdot \nabla (\beta \cdot \nabla \partial_t \overline{T}^4)). \end{align*} We define the energy function \begin{align*} E(T_\varepsilon,\psi_\varepsilon):=\int_{\Omega}& \frac{T_\varepsilon^5}{5} dx + \iint_{\Omega\times\mathbb{S}^2} \frac{\psi_\varepsilon^2}{2} d\beta dx. \end{align*} The relative energy function is defined to be \begin{align*} H(T_\varepsilon&,\psi_\varepsilon|\overline{T},\overline{\psi})\\ :=& E(T_\varepsilon,\psi_\varepsilon) - E(\overline{T},\overline{\psi}) - \left\langle\frac{\delta E}{\delta T}(\overline{T},\overline{\psi}),T_\varepsilon-\overline{T}) \right \rangle - \left\langle\frac{\delta E}{\delta \psi}(\overline{T},\overline{\psi}),\psi_\varepsilon-\overline{\psi}) \right \rangle \\ =& \int_\Omega \frac{T_\varepsilon^5-\overline{T}^5-5\overline{T}^4 (T_\varepsilon - \overline{T})}{5} dx + \int_\Omega\int_{\mathbb{S}^2} \frac{(\psi_\varepsilon - \overline{\psi})^2}{2} d\beta dx. \end{align*} We will apply the relative entropy method to compare the solutions $(T_\varepsilon,\psi_\varepsilon)$ and $(\overline{T},\overline{\psi})$ under three different boundary conditions: in the torus \eqref{b1}, Dirichlet boundary condition \eqref{b3} and Robin boundary condtion \eqref{b2}. The main result of this section is the following theorem. \begin{theorem}\label{thmre} Assume $(T_\varepsilon,\psi_\varepsilon)$ is a weak solution of the system \eqref{eq:Teps}-\eqref{eq:psieps} and $\overline{T}$ is a strong solution of the equation \eqref{eq:Teps} with $\overline{T}\in H^2(\Omega)$. Assume the well-prepared boundary condition \eqref{eq:wellbc} holds. Suppose $ \overline{T} \ge c >0$. Then the following inequality holds: \begin{align}\label{eq:Heps} \int_\Omega& (T_\varepsilon-\overline{T})^2 + (T_\varepsilon-\overline{T})^4 \bigg|_tdx + \iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi})^2 \bigg|_t d\beta dx \nonumber\\ &\le \int_\Omega (T_{\varepsilon0}-\overline{T}_0)^2 + (T_{\varepsilon0}-\overline{T}_0)^4 dx + \iint_{\Omega\times\mathbb{S}^2} (\psi_{\varepsilon0}-\overline{\psi}_0)^2 d\beta dx + C\varepsilon^s. \end{align} Here $s=2$ for the case of torus, $s=\min\{1,r\}$ for the case of Robin boundary condition \eqref{b2} with $r>0$ and $s=1$ for the case of nonhomogeneous Dirichlet condition \eqref{b3}. Furthermore, if the initial data is well-prepared that \eqref{eq:wellinitial} holds, and $T_{\varepsilon0}-\overline{T} \to 0$ as $\varepsilon \to 0$, then $T_\varepsilon \to \overline{T}$ and $\psi_\varepsilon \to \overline{\psi}$ strongly in $L^2(\Omega)$ and $L^2(\Omega\times\mathbb{S}^2)$, respectively for any $t>0$. \end{theorem} \subsection{The case of torus} We next derive the relative entropy inequality for the case of torus $\Omega=\mathbb{T}^3$. \begin{lemma} \label{lm1} Assume $T_\varepsilon$ is a weak solution of the system \eqref{eq:Teps}-\eqref{eq:psieps}, and $\overline{T}$ is a smooth solution of the equation \eqref{hgm1.0}. The following inequality holds: \begin{align}\label{eq:Hevol} H(&T_\varepsilon,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{t}+ \frac{16}{25}\int_0^t\int_\Omega (\nabla T_\varepsilon^{\frac{5}{2}} - \nabla \overline{T}^{\frac{5}{2}})^2 dxd\tau \nonumber\\ \le& H(T_\varepsilon,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{0} +\frac{32}{25} \int_0^t\int_\Omega (T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau \nonumber\\ &+\frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2}\left(T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T})\right) \left(\overline{\psi} - \overline{T}^4\right) d\beta dxd\tau \nonumber\\ & - \int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi}) \overline{R} d\beta dxd\tau- \frac{1}{\varepsilon^2}\int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon - T_\varepsilon^4 - (\overline{\psi}-\overline{T}^4))^2 d\beta dxd\tau . \end{align} \end{lemma} \begin{proof} First, we recall that from \eqref{eq:energythm1}, the energy function for the equations \eqref{eq:Teps}-\eqref{eq:psieps} satisfies \begin{align}\label{eq:ene1} E(T_\varepsilon,\psi_\varepsilon) \bigg|_{0}^t + \frac{16}{25}\int_0^t \int_{\Omega} \left|\nabla (T_\varepsilon)^{\frac{5}{2}}\right|^2dxd\tau + \frac{1}{\varepsilon^2}\int_0^t\int_{\Omega} \int_{\mathbb{S}^{2}} (\psi_\varepsilon-(T_\varepsilon)^4 )^2 d \beta dxd\tau = 0. \end{align} The function $(\overline{T}, \overline{\psi})$ also satisfies a similar equality: \begin{align}\label{eq:ene2} &E(\overline{T},\overline{\psi}) \bigg|_{0}^t + \frac{16}{25}\int_0^t \int_{\Omega} \left|\nabla (\overline{T})^{\frac{5}{2}}\right|^2dxd\tau + \frac{1}{\varepsilon^2}\int_0^t\int_{\Omega} \int_{\mathbb{S}^{2}} (\overline{\psi}-\overline{T}^4 )^2 d \beta dxd\tau \nonumber\\ &\quad= \int_0^t\iint_{\Omega\times\mathbb{S}^2} \overline{\psi}\cdot \overline{R} d\beta dxd\tau. \end{align} Next we consider the equations of the difference $(T_\varepsilon - \overline{T},\,\psi_\varepsilon-\overline{\psi})$ using the definition of weak solutions \eqref{eq:weakt1}-\eqref{eq:weakt2} : \begin{align} -\int_0^\infty&\int_\Omega \varphi_t(T_\varepsilon-\overline{T})dxdt - \int_\Omega \varphi (T_\varepsilon-\overline{T}) \bigg|_{t=0}dx \nonumber\\ =& \int_0^\infty \int_\Omega \Delta \varphi (T_\varepsilon - \overline{T}) dxdt + \frac{1}{\varepsilon^2} \int_0^\infty \iint_{\Omega\times\mathbb{S}^2}(\psi_\varepsilon - \overline{\psi} - T_\varepsilon^4 + \overline{T}^4) \varphi d\beta dxdt, \label{eq:we1} \\ -\int_0^\infty&\iint_{\Omega\times\mathbb{S}^2} \rho_t (\psi_\varepsilon - \overline{\psi})d\beta dxdt - \int_\Omega\int_{\mathbb{S}^2} \rho(\psi_\varepsilon- \overline{\psi}) \bigg|_{t=0} d\beta dx \nonumber\\&\quad- \frac{1}{\varepsilon} \int_0^\infty\iint_{\Omega\times\mathbb{S}^2}(\psi_\varepsilon-\overline{\psi}) \beta \cdot \nabla \rho d\beta dxdt \nonumber\\ =& -\frac{1}{\varepsilon^2} \int_0^\infty\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon - \overline{\psi} - T_\varepsilon^4 + \overline{T}^4)\rho d\beta dxdt - \int_0^\infty\iint_{\Omega\times\mathbb{S}^2} \rho \overline{R} d\beta dxdt.\label{eq:we2} \end{align} We introduce the following test function \begin{align}\label{eq:testfun} \varphi = \theta(\tau) \overline{T}^4,\quad \rho = \theta(\tau) \overline{\psi}, \end{align} where \begin{align*} \theta(\tau) := \left\{\begin{array}{cl} 1, &\text{ for } 0\le \tau < t, \\ \frac{t-\tau}{\delta} + 1, &\text{ for } t \le \tau < t+\delta, \\ 0, &\text{ for } \tau \ge t+\delta. \end{array}\right. \end{align*} Taking these test functions into \eqref{eq:we1}-\eqref{eq:we2} and let $\delta \to 0$, we obtain \begin{align*} \int_{\Omega} &\overline{T}^4(T_\varepsilon-\overline{T}) \bigg|_{\tau=0}^t dx - \int_0^t \int_\Omega \partial_\tau(\overline{T}^4) (T_\varepsilon-\overline{T}) dxd\tau \\ &= \int_0^t\int_\Omega \Delta \overline{T}^4(T_\varepsilon-\overline{T})dxd\tau + \frac{1}{\varepsilon^2} \int_0^t \int_\Omega\int_{\mathbb{S}^2} \overline{T}^4(\psi_\varepsilon-\overline{\psi}-T_\varepsilon^4+\overline{T}^4) d\beta dxd\tau. \\ \int_\Omega &\int_{\mathbb{S}^2} \overline{\psi} (\psi_\varepsilon-\overline{\psi}) \bigg|_{\tau=0}^t d\beta dx - \int_0^t\int_\Omega\int_{\mathbb{S}^2} (\partial_\tau \overline{\psi})(\psi_\varepsilon-\overline{\psi}) d\beta dxd\tau \\ &\qquad -\frac{1}{\varepsilon} \int_0^t\int_\Omega\int_{\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi})\beta \cdot \nabla \overline{\psi} d\beta dx d\tau \\ &= - \frac{1}{\varepsilon^2}\int_0^t\int_\Omega\int_{\mathbb{S}^2} \overline{\psi}(\psi_\varepsilon-\overline{\psi}-T_\varepsilon^4+\overline{T}^4) d\beta dxd\tau - \int_0^t \int_\Omega\int_{\mathbb{S}^2} \overline{\psi} \cdot \overline{R} d\beta dxd\tau. \end{align*} Using the equations \eqref{eq_1} and \eqref{eq_2}, the above equations become \begin{align*} \int_{\Omega}& \overline{T}^4(T_\varepsilon-\overline{T}) \bigg|_{\tau=0}^t dx \\ =& \int_0^t \int_\Omega 4\overline{T}^3 \Delta \overline{T} (T_\varepsilon-\overline{T}) dxd\tau + \int_0^t\int_\Omega \Delta \overline{T}^4(T_\varepsilon-\overline{T})dxd\tau \\ & +\frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2}4\overline{T}^3(\overline{\psi}-\overline{T}^4)(T_\varepsilon-\overline{T}) d\beta dxd\tau \\ &+ \frac{1}{\varepsilon^2} \int_0^t \int_\Omega\int_{\mathbb{S}^2} \overline{T}^4(\psi_\varepsilon-\overline{\psi}-T_\varepsilon^4+\overline{T}^4) d\beta dxd\tau, \\ \int_\Omega& \int_{\mathbb{S}^2} \overline{\psi} (\psi_\varepsilon-\overline{\psi}) \bigg|_{\tau=0}^t d\beta dx \\ = & \int_0^t\int_\Omega\int_{\mathbb{S}^2} (-\frac{1}{\varepsilon} \beta \cdot \nabla \overline{\psi})(\psi_\varepsilon-\overline{\psi}) d\beta dxd\tau +\frac{1}{\varepsilon} \int_0^t\int_\Omega\int_{\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi})\beta \cdot \nabla \overline{\psi} d\beta dx d\tau \\ &-\frac{1}{\varepsilon^2} \int_0^t\int_\Omega\int_{\mathbb{S}^2}(\overline{\psi}-\overline{T}^4)(\psi_\varepsilon-\overline{\psi})d\beta dxd\tau + \int_0^t \int_\Omega\int_{\mathbb{S}^2} \overline{R} (\psi_\varepsilon-\overline{\psi}) d\beta dxd\tau\\ & - \frac{1}{\varepsilon^2}\int_0^t\int_\Omega\int_{\mathbb{S}^2} \overline{\psi}(\psi_\varepsilon-\overline{\psi}-T_\varepsilon^4+\overline{T}^4) d\beta dxd\tau - \int_0^t \int_\Omega\int_{\mathbb{S}^2} \overline{\psi} \cdot \overline{R} d\beta dxd\tau. \end{align*} Adding them together gives \begin{align*} \int_{\Omega} &\overline{T}^4(T_\varepsilon-\overline{T}) \bigg|_{\tau=0}^t dx + \iint_{\Omega\times\mathbb{S}^2} \overline{\psi} (\psi_\varepsilon-\overline{\psi}) \bigg|_{\tau=0}^t d\beta dx \\ =& \int_0^t \int_\Omega 4\overline{T}^3 \Delta \overline{T} (T_\varepsilon-\overline{T}) + \Delta \overline{T}^4(T_\varepsilon-\overline{T})dxd\tau \\ & -\frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2}(\overline{\psi}-\overline{T}^4)(T_\varepsilon^4-\overline{T}^4-4\overline{T}^3(T_\varepsilon-\overline{T}) )d\beta dxd\tau \\ &- \frac{2}{\varepsilon^2} \int_0^t \int_\Omega\int_{\mathbb{S}^2} (\overline{\psi}-\overline{T}^4)(\psi_\varepsilon-\overline{\psi}-T_\varepsilon^4+\overline{T}^4) d\beta dxd\tau \\ &+ \int_0^t\int_\Omega\int_{\mathbb{S}^2} \overline{R}(\psi_\varepsilon-\overline{\psi}) d\beta dxd\tau -\int_0^t\int_\Omega\int_{\mathbb{S}^2} \overline{\psi} \cdot \overline{R} d\beta dxd\tau. \end{align*} We substract the above equation from the difference between \eqref{eq:ene1} and \eqref{eq:ene2} and arrive at the following inequality: \begin{align}\label{eq:Hincal} H(T_\varepsilon&,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{t} \nonumber \\ \le& H(T_\varepsilon,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{0} - \frac{16}{25} \int_0^t\int_\Omega \left(\left|\nabla(T_\varepsilon)^{\frac{5}{2}}\right|^2 - \left|\nabla(\overline{T})^{\frac{5}{2}}\right|^2\right) dxd\tau \nonumber\\ &-\int_0^t \int_\Omega 4\overline{T}^3 \Delta \overline{T} (T_\varepsilon-\overline{T}) + \Delta \overline{T}^4(T_\varepsilon-\overline{T})dxd\tau \nonumber\\ & - \int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi}) \overline{R} d\beta dxd\tau- \frac{1}{\varepsilon^2}\int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon - T_\varepsilon^4 - (\overline{\psi}-\overline{T}^4))^2 d\beta dxd\tau \nonumber\\ &+\frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2}\left(T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T})\right) \left(\overline{\psi} - \overline{T}^4\right) d\beta dxd\tau. \end{align} To simplify the inequality, we rewrite the third term on the right hand side as: \begin{align} -\int_0^t& \int_\Omega 4\overline{T}^3 \Delta \overline{T} (T_\varepsilon-\overline{T}) + \Delta \overline{T}^4(T_\varepsilon-\overline{T})dxd\tau \nonumber\\ =& \int_0^t\int_\Omega (T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T}))\Delta \overline{T} dxd\tau \nonumber\\ &- \int_0^t\int_\Omega (T_\varepsilon^4 - \overline{T}^4) \Delta \overline{T} + \Delta \overline{T}^4 (T_\varepsilon - \overline{T}) dxd\tau \nonumber\\ =& \int_0^t\int_\Omega (T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T}))\Delta \overline{T} dxd\tau -\frac{32}{25} \int_0^t\int_\Omega \left|\nabla \overline{T}^{\frac{5}{2}}\right|^2 dxd\tau\nonumber\\& - \int_0^t \int_\Omega T_\varepsilon^4 \Delta \overline{T} + T_\varepsilon \overline{T}^4 dxd\tau.\label{eq:4t3d} \end{align} Here we use the fact that \begin{align}\label{eq:calt52} \int_\Omega \overline{T}^4 \Delta \overline{T} dx = \int_\Omega \overline{T} \Delta \overline{T}^4 dx = - \frac{16}{25} \int_\Omega \left|\nabla \overline{T}^{\frac{5}{2}}\right|^2 dx. \end{align} We calculate the last term in \eqref{eq:4t3d} as \begin{align*} -\int_{0}^t \int_\Omega (T_\varepsilon^4 \Delta \overline{T} + T_\varepsilon \Delta \overline{T}^4) dxd\tau =& -\int_0^t\int_\Omega T_\varepsilon^4 \Delta \overline{T} + 4T_\varepsilon \overline{T}^3 \Delta \overline{T} + 12 T_\varepsilon \overline{T}^2 |\nabla \overline{T}|^2 dxd\tau. \end{align*} Using \[\Delta \overline{T}^{\frac{5}{2}} = \nabla \cdot \left(\frac{5}{2} \overline{T}^{\frac{3}{2}}\nabla \overline{T} \right) = \frac{5}{2} \overline{T}^{\frac{3}{2}}\Delta\overline{T} + \frac{15}{4} \overline{T}^{\frac{1}{2}}|\nabla \overline{T}|^2,\] we obtain \begin{align} -\int_{0}^t &\int_\Omega (T_\varepsilon^4 \Delta \overline{T} + T_\varepsilon \Delta \overline{T}^4) dxd\tau \nonumber \\ =& -\int_0^t\int_\Omega T_\varepsilon^4 \Delta \overline{T} + 4T_\varepsilon \overline{T}^3\Delta \overline{T} + 12 T_\varepsilon \overline{T}^{\frac{3}{2}} \cdot \frac{4}{15}(\Delta \overline{T}^{\frac{5}{2}} - \frac{5}{2}\overline{T}^{\frac{3}{2}}\Delta \overline{T})dxd\tau \nonumber\\ =& -\int_0^t \int_\Omega T_\varepsilon^4\overline{T} + 4T_\varepsilon \overline{T}^3 \Delta \overline{T} + \frac{16}{5} T_\varepsilon \overline{T}^{\frac{3}{2}} \Delta \overline{T}^{\frac{5}{2}} - 8 T_\varepsilon \overline{T}^3\Delta \overline{T} dxd\tau \nonumber\\ =& -\int_0^t \int_\Omega T_\varepsilon^4\overline{T} - 4T_\varepsilon \overline{T}^3 \Delta \overline{T} + \frac{16}{5} T_\varepsilon \overline{T}^{\frac{3}{2}} \Delta \overline{T}^{\frac{5}{2}}dxd\tau \nonumber\\ =& -\int_0^t\int_\Omega (T_\varepsilon^4 - \overline{T}^4 - 4\overline{T}^3(T_\varepsilon - \overline{T}))\Delta \overline{T} dxd\tau + 3\int_0^t\int_\Omega \overline{T}^4 \Delta \overline{T} dxd\tau\nonumber\\ & - \frac{16}{5} \int_0^t\int_\Omega T_\varepsilon \overline{T}^{\frac{3}{2}}\Delta \overline{T}^{\frac{5}{2}} dxd\tau. \label{eq:T4Tcal} \end{align} The last term in the above equation can be calculate as \begin{align*} -\frac{16}{5} \int_0^t\int_\Omega T_\varepsilon \overline{T}^{\frac{3}{2}}\Delta \overline{T}^{\frac{5}{2}} dxd\tau =&\frac{16}{5}\int_0^t\int_\Omega \frac{2}{5}(T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau \\ & - \frac{32}{25} \int_0^t\int_\Omega T_\varepsilon^{\frac{5}{2}} \Delta \overline{T}^{\frac{5}{2}} dxd\tau - \frac{16}{5} \frac{3}{5} \int_0^t\int_\Omega \overline{T}^{\frac{5}{2}} \Delta \overline{T}^{\frac{5}{2}} dxd\tau. \end{align*} Taking this equation into \eqref{eq:T4Tcal} and using \eqref{eq:calt52}, we obtain \begin{align*} -\int_{0}^t& \int_\Omega (T_\varepsilon^4 \Delta \overline{T} + T_\varepsilon \Delta \overline{T}) dxd\tau\\ =& -\int_0^t\int_\Omega (T_\varepsilon^4 - \overline{T}^4 - 4\overline{T}^3(T_\varepsilon - \overline{T}))\Delta \overline{T} dxd\tau \\ & +\frac{32}{25} \int_0^t\int_\Omega (T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau -\frac{32}{25}\int_0^t\int_\Omega T_\varepsilon^{\frac{5}{2}} \Delta \overline{T}^{\frac{5}{2}} dxd\tau . \end{align*} Taking it into \eqref{eq:4t3d} and using \begin{align*} &-\frac{16}{25} \int_0^t \left|\nabla T_\varepsilon^{\frac{5}{2}}\right|^2-\frac{16}{25} \int_0^t \left|\nabla \overline{T}^{\frac{5}{2}}\right|^2dxd\tau + \frac{32}{25} \int_0^t \nabla T_\varepsilon^{\frac{5}{2}}\cdot\nabla \overline{T}^{\frac{5}{2}} \\ &\quad= -\frac{16}{25}\int_0^t\int_\Omega (\nabla T_\varepsilon^{\frac{5}{2}} - \nabla \overline{T}^{\frac{5}{2}})^2 dxd\tau, \end{align*} inequality \eqref{eq:Hincal} becomes \eqref{eq:Hevol} and finished the proof. \end{proof} We now prove Theorem \ref{thmre}. \begin{proof}[Proof of Theorem \ref{thmre}] From Lemma \ref{lm1}, we have \begin{align}\label{relpf} H(T_\varepsilon&,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{t}+ \frac{16}{25}\int_0^t\int_\Omega (\nabla T_\varepsilon^{\frac{5}{2}} - \nabla \overline{T}^{\frac{5}{2}})^2 dxd\tau \nonumber\\ \le& H(T_\varepsilon,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{0} +\frac{32}{25} \int_0^t\int_\Omega (T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau \nonumber\\ &+\frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2}\left(T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T})\right) \left(\overline{\psi} - \overline{T}^4\right) d\beta dxd\tau \nonumber\\ & - \int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi}) \overline{R} d\beta dxd\tau- \frac{1}{\varepsilon^2}\int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon - T_\varepsilon^4 - (\overline{\psi}-\overline{T}^4))^2 d\beta dxd\tau \nonumber\\ =& I_1 + I_2 + I_3 + I_4 + I_5. \end{align} To control the relative entropy, we need the following lemma. \begin{lemma}\label{lmtg} Let $c>0$. Suppose $A\ge c,\, A+g\ge 0$, then \begin{align*} (A+g)^5 - A^5 - 5A^4g \ge (c^3|g|^2+c|g|^4). \end{align*} \end{lemma} \begin{proof} We can prove this lemma by direct calculations: \begin{align*} (A+g)^5&- A^5 - 5A^4g \\ =& A^5 + 5 A^4g + 10 A^3 g^2+10A^2g^3+5Ag^4+g^5-A^5-5A^4g \\ =&10 A^3g^2+10 A^2g^3+5Ag^4+g^5 \\ \ge& 10A^3g^2+10 A^2g^3+5Ag^4-Ag^4 \\ =& 10A^3 g^2+10 A^2g^3+4A g^4 \\ =&A^3g^2 +\left(9A^3g^2 + 10 A^2g^3 + \frac{25}{9} Ag^4\right) + \frac{11}{9}A g^4 \\ \ge& A^3g^2 + A^5\left(3\frac{g}{A} + \frac{5}{3}\frac{g^2}{A^2}\right)^2 + \frac{11}{9} Ag^4 \\ \ge & c^3 g^2 + cg^4 \end{align*} \end{proof} By applying Lemma \ref{lmtg} with $g:=T_\varepsilon-\overline{T}$ and $A=\overline{T}\ge c$, we have \begin{align}\label{eq:Hes} H(T_\varepsilon,\psi_\varepsilon|\overline{T},\overline{\psi}) \ge& \int_\Omega \frac{T_{\varepsilon}^5}{5} - \frac{\overline{T}^5}{5}-\overline{T}^4(T_\varepsilon-\overline{T}) dx \ge C \int_\Omega (T_\varepsilon-\overline{T})^2 + (T_\varepsilon-\overline{T})^4 dx. \end{align} Now we estimate the right hand side of inequality \eqref{relpf}. We first consider $I_2$. Using the mean value theorem, we obtain \begin{align*} T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon-\overline{T}) =& \frac{15}{4}\int_0^1\int_0^r \left(s(T_\varepsilon-\overline{T}) + \overline{T} \right)^{\frac{1}{2}} ds dr \cdot (T_\varepsilon - \overline{T})^2.\\ \le& \frac{15}{4} \int_0^1\int_0^r ((s|T_\varepsilon-\overline{T}|)^{\frac{1}{2}} + \overline{T}^{\frac{1}{2}} )dsdr \cdot (T_\varepsilon - \overline{T})^2 \\ \le & C |T_\varepsilon - \overline{T}|^2 + C|T_\varepsilon - \overline{T}|^4. \end{align*} Therefore, we have \begin{align}\label{I1} I_2 =& \frac{32}{25} \int_0^t\int_\Omega (T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau\nonumber\\ \le& C \int_0^t \int_\Omega (T_\varepsilon - \overline{T})^2 + (T_\varepsilon - \overline{T})^4 dxd\tau. \end{align} Next we consider $I_3$. From the property that \begin{align*} T_\varepsilon^4 - \overline{T}^4-4\overline{T}^3(T_\varepsilon-\overline{T}) =& 6\overline{T}^2(T_\varepsilon-\overline{T})^2 + 4\overline{T}(T_\varepsilon-\overline{T})^3 + (T_\varepsilon-\overline{T})^4\\ \le& C(T_\varepsilon-\overline{T})^2 + C(T_\varepsilon-\overline{T})^4, \end{align*} we have \begin{align*} \int_0^t\int_\Omega(T_\varepsilon^4 - \overline{T}^4 - 4\overline{T}^3(T_\varepsilon-\overline{T}))\Delta \overline{T} dxd\tau \le C \int_0^t\int_\Omega (T_\varepsilon-\overline{T})^2 + (T_\varepsilon-\overline{T})^4 dxd\tau. \end{align*} So $I_3$ can be estimated as \begin{align}\label{I3} I_3=& \frac{1}{\varepsilon^2} \int_0^t\int_\Omega\int_{\mathbb{S}^2} \left(T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T})\right) \left(\overline{\psi} - \overline{T}^4\right) d\beta dx d\tau \nonumber\\ =&\frac{1}{\varepsilon^2}\int_0^t\int_\Omega\int_{\mathbb{S}^2} (T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T}) \nonumber\\ &\qquad \cdot (\overline{T}^4 - \varepsilon \beta \cdot \nabla \overline{T}^4 -\varepsilon^2 \partial_t \overline{T}^4 + \varepsilon^2 \beta\cdot \nabla(\beta \cdot \nabla \overline{T}^4) - \overline{T}^4) d\beta dx d\tau \nonumber\\ =& \int_0^t\int_\Omega \left(T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T})\right)(-4\pi \partial_t \overline{T}^4 + \frac{4}{3}\pi \Delta\overline{T}^4) dxd\tau \nonumber\\ \le& C(\|\partial_t \overline{T}\|_{L^\infty} + \|\Delta \overline{T}\|_{L^\infty})\int_0^t\int_\Omega\left(T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T})\right) dxd\tau \nonumber\\ \le& C\int_0^t \int_\Omega (6\overline{T}^2(T_\varepsilon-\overline{T})^2 + 4\overline{T}(T_\varepsilon-\overline{T})^3 + (T_\varepsilon-\overline{T})^4)dxd\tau \nonumber\\ \le& C\int_0^t \int_\Omega (T_\varepsilon-\overline{T})^2 + (T_\varepsilon-\overline{T})^4 dxd\tau. \end{align} For $I_4$, we have \begin{align}\label{I2} I_4 =&\int_0^t\int_\Omega\int_{\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi}) \overline{R} d\beta dxd\tau\nonumber \\ \le& \int_0^t\int_\Omega\int_{\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi})^2 d\beta dx d\tau +\int_0^t\int_\Omega\int_{\mathbb{S}^2}\overline{R}^2 d\beta dx d\tau\nonumber\\ \le&\int_0^t\int_\Omega\int_{\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi})^2 d\beta dxd\tau + C \varepsilon^2 . \end{align} Taking the above estimate and \eqref{I1}-\eqref{I2} into \eqref{relpf}, we get the estimate \begin{align*} \int_\Omega& (T_\varepsilon-\overline{T})^2 + (T_\varepsilon-\overline{T})^4 \bigg|_tdx + \iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi})^2 \bigg|_t d\beta dx \\ &\quad+ \frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon - T_\varepsilon^4 - (\overline{\psi}-\overline{T}^4))^2 d\beta dxd\tau \\ &\quad +\frac{16}{25} \int_0^t\int_\Omega \left(\nabla (T_\varepsilon)^{\frac{5}{2}} -\nabla (\overline{T})^{\frac{5}{2}} \right)^2 dxd\tau\\ &\le C\int_0^t \int_\Omega (T_\varepsilon-\overline{T})^2 + (T_\varepsilon-\overline{T})^4dxd\tau + \int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi})^2 d\beta dx d\tau +C\varepsilon^2. \end{align*} Applying Gronwall's lemma to the above inequality leads to \eqref{eq:Heps} and finishes the proof. \end{proof} \subsection{Dirichlet boundary conditions.} In this case, we can do the similar calculations as before and use the boundary condition \[T_\varepsilon =\overline{T}=T_b, \quad \text{for } x\in\partial \Omega,\] to get the relative entropy inequality. \begin{lemma} Assume $T_\varepsilon$ is the weak solution of the system \eqref{eq:Teps}-\eqref{eq:psieps} with boundary conditions \eqref{bpsi} and \eqref{b3}, $\overline{T}$ is a smooth solution of the equation \eqref{hgm1.0} with boundary condition $\overline{T} (t,x) =T_b $ for $x \in \partial \Omega$. We have the following inequality: \begin{align}\label{eq:reformuladirichlet} H(T_\varepsilon&,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{t}+ \frac{16}{25}\int_0^t\int_\Omega (\nabla T_\varepsilon^{\frac{5}{2}} - \nabla \overline{T}^{\frac{5}{2}})^2 dxd\tau \nonumber\\ \le& H(T_\varepsilon,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{0} +\frac{32}{25} \int_0^t\int_\Omega (T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau \nonumber\\ &+\frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2}\left(T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T})\right) \left(\overline{\psi} - \overline{T}^4\right) d\beta dxd\tau \nonumber\\ & - \int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi}) \overline{R} d\beta dxd\tau- \frac{1}{\varepsilon^2}\int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon - T_\varepsilon^4 - (\overline{\psi}-\overline{T}^4))^2 d\beta dxd\tau \nonumber\\ &-\frac{1}{2\varepsilon}\int_0^t\iint_{\Sigma_{+}} (\beta \cdot n)(\psi_\varepsilon- \overline{\psi})^2 d\sigma_xdxd\tau \nonumber\\ &- \frac{1}{2\varepsilon} \int_0^t\iint_{\Sigma_{-}} (\beta \cdot n)(\alpha T_b^4 + (1-\alpha)\psi_\varepsilon' - \overline{\psi})^2 d\sigma_xdxd\tau. \end{align} \end{lemma} Here, the above inequality does not include boundary terms of $T_\varepsilon$ since $T_\varepsilon-\overline{T}$ vanishes on the boundary. \begin{proof} We can slightly modify the proof of Theorem \ref{thmexistd} to show that the following energy inequality holds \begin{align}\label{eq:ene1b} E(T_\varepsilon&,\psi_\varepsilon) \bigg|_{0}^t + \frac{16}{25}\int_0^t \int_{\Omega} \left|\nabla (T_\varepsilon)^{\frac{5}{2}}\right|^2dxd\tau + \frac{1}{\varepsilon^2}\int_0^t\int_{\Omega} \int_{\mathbb{S}^{2}} (\psi_\varepsilon-(T_\varepsilon)^4 )^2 d \beta dxd\tau\nonumber\\ &+ \int_0^t\int_{\partial \Omega} T_b^4 n\cdot \nabla T_\varepsilon d\sigma_x d\tau+ \frac{1}{2\varepsilon} \int_0^t\iint_{\Sigma_{+}} (\beta \cdot n) \psi_\varepsilon^2 d\sigma_xdx d\tau\nonumber\\ &+ \frac{1}{2\varepsilon} \int_0^t\iint_{\Sigma_{-}} (\beta \cdot n) (\alpha T_b^4 + (1-\alpha) \psi_\varepsilon')^2d\sigma_xdxd\tau\le 0. \end{align} Similarily the equations \eqref{eq_1} and \eqref{eq_2} also satisfy \begin{align}\label{eq:ene2b} E(\overline{T}&,\overline{\psi}) \bigg|_{0}^t + \frac{16}{25}\int_0^t \int_{\Omega} \left|\nabla (\overline{T})^{\frac{5}{2}}\right|^2dxd\tau + \frac{1}{\varepsilon^2}\int_0^t\int_{\Omega} \int_{\mathbb{S}^{2}} (\overline{\psi}-\overline{T}^4 )^2 d \beta dxd\tau \nonumber\\ &- \int_0^t \int_{\partial \Omega} T_b^4 n \cdot \nabla \overline{T} d\sigma_x d\tau + \frac{1}{2\varepsilon}\int_0^t \iint_{\Sigma} (\beta \cdot n) \overline{\psi}^2 d\sigma_xdx d\tau\nonumber\\ =& \int_0^t\iint_{\Omega\times\mathbb{S}^2} \overline{\psi}\cdot \overline{R} d\beta dxd\tau. \end{align} We recall that from the definition of weak solutions \eqref{eq:weakd1}-\eqref{eq:weakd2}, the difference $T_\varepsilon-\overline{T},\psi_\varepsilon-\overline{\psi}$ satisfy \begin{align} -&\int_0^\infty\int_\Omega \varphi_t(T_\varepsilon-\overline{T})dxdt - \int_\Omega \varphi (T_\varepsilon-\overline{T}) \bigg|_{t=0}dx \nonumber\\ &= \int_0^\infty \int_\Omega \Delta \varphi (T_\varepsilon - \overline{T}) dxdt + \int_0^\infty \int_{\partial \Omega} n\cdot \nabla T_\varepsilon \varphi d\sigma_x dt \nonumber\\ &\quad- \int_0^\infty \int_{\partial \Omega} \varphi n \cdot \nabla \overline{T} d\sigma_x dt - \int_0^\infty \int_{\partial \Omega} (T_\varepsilon - \overline{T}) n \cdot \nabla \varphi dxdt \nonumber\\ &\quad + \frac{1}{\varepsilon^2} \int_0^\infty \iint_{\Omega\times\mathbb{S}^2}(\psi_\varepsilon - \overline{\psi} - T_\varepsilon^4 + \overline{T}^4) \varphi d\beta dxdt, \label{eq:we1b} \\ -&\int_0^\infty\iint_{\Omega\times\mathbb{S}^2} \rho_t (\psi_\varepsilon - \overline{\psi})d\beta dxdt - \int_\Omega\int_{\mathbb{S}^2} \rho(\psi_\varepsilon- \overline{\psi}) \bigg|_{t=0} d\beta dx\nonumber \\ &\quad- \frac{1}{\varepsilon} \int_0^\infty\iint_{\Omega\times\mathbb{S}^2}(\psi_\varepsilon-\overline{\psi}) \beta \cdot \nabla \rho d\beta dxdt + \frac{1}{\varepsilon}\int_0^\infty\iint_{\Sigma_{+}} (\beta \cdot n) \psi_\varepsilon \rho d\sigma_xdxdt \nonumber \\ &\quad+ \frac{1}{\varepsilon}\int_0^t\iint_{\Sigma_{-}} (\beta \cdot n) (\alpha T_b^4 + (1-\alpha) \psi_\varepsilon) \rho d\sigma_xdxdt - \frac{1}{\varepsilon} \int_0^t\iint_{\Sigma} (\beta \cdot n) \overline{\psi} \rho d\sigma_xdxdt \nonumber\\ &= -\frac{1}{\varepsilon^2} \int_0^\infty\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon - \overline{\psi} - T_\varepsilon^4 + \overline{T}^4)\rho d\beta dxdt - \int_0^\infty\iint_{\Omega\times\mathbb{S}^2} \rho \overline{R} d\beta dxdt.\label{eq:we2b} \end{align} We choose the test function same as \eqref{eq:testfun} and let $\delta \to 0$. We will get \begin{align*} \int_{\Omega}& \overline{T}^4 (T_\varepsilon-\overline{T}) \bigg|_{\tau=0}^t dx \\ =& \int_0^t \int_\Omega 4\overline{T}^3 \Delta \overline{T} (T_\varepsilon-\overline{T}) dxd\tau + \int_0^t\int_\Omega \Delta \overline{T}^4(T_\varepsilon-\overline{T})dxd\tau \\ & +\frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2}4\overline{T}^3(\overline{\psi}-\overline{T}^4)(T_\varepsilon-\overline{T}) d\beta dxd\tau \\ &+ \frac{1}{\varepsilon^2} \int_0^t \int_\Omega\int_{\mathbb{S}^2} \overline{T}^4(\psi_\varepsilon-\overline{\psi}-T_\varepsilon^4+\overline{T}^4) d\beta dxd\tau \\ &+ \int_0^t \int_{\partial \Omega} n\cdot \nabla T_\varepsilon\overline{T}^4 d\sigma_x d\tau - \int_0^t \int_{\partial \Omega} T_b^4 n \cdot \nabla \overline{T} d\sigma_x d\tau \nonumber\\ &- \int_0^t \int_{\partial \Omega} (T_\varepsilon - \overline{T}) n \cdot \nabla \overline{T}^4 dxd\tau , \\ \iint_{\Omega\times\mathbb{S}^2} &\overline{\psi} (\psi_\varepsilon-\overline{\psi}) \bigg|_{\tau=0}^t d\beta dx \\ = & - \frac{1}{\varepsilon}\int_0^t\iint_{\Sigma_{+}} (\beta \cdot n) \psi_\varepsilon \overline{\psi} d\sigma_xdxd\tau \nonumber\\ &- \frac{1}{\varepsilon}\int_0^t\iint_{\Sigma_{-}} (\beta \cdot n) (\alpha T_b^4 + (1-\alpha) \psi_\varepsilon) \overline{\psi} d\sigma_xdxd\tau \\ &+ \frac{1}{\varepsilon} \int_0^t\iint_{\Sigma} (\beta \cdot n) \overline{\psi} \cdot\overline{\psi} d\sigma_xdxd\tau -\frac{1}{\varepsilon^2} \int_0^t\int_\Omega\int_{\mathbb{S}^2}(\overline{\psi}-\overline{T}^4)(\psi_\varepsilon-\overline{\psi})d\beta dxd\tau \\&+ \int_0^t \int_\Omega\int_{\mathbb{S}^2} \overline{R} (\psi_\varepsilon-\overline{\psi}) d\beta dxd\tau - \frac{1}{\varepsilon^2}\int_0^t\int_\Omega\int_{\mathbb{S}^2} \overline{\psi}(\psi_\varepsilon-\overline{\psi}-T_\varepsilon^4+\overline{T}^4) d\beta dxd\tau \\&- \int_0^t \int_\Omega\int_{\mathbb{S}^2} \overline{\psi} \cdot \overline{R} d\beta dxd\tau. \end{align*} We substract the summation of the above two equations from the difference of equations \eqref{eq:ene1b} and \eqref{eq:ene2b} to get \begin{align}\label{eq:Hincalb} H(T_\varepsilon&,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{t}\\ \le& H(T_\varepsilon,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{0} - \frac{16}{25} \int_0^t\int_\Omega \left(\left|\nabla(T_\varepsilon)^{\frac{5}{2}}\right|^2 - \left|\nabla(\overline{T})^{\frac{5}{2}}\right|^2\right) dxd\tau \nonumber\\ &-\int_0^t \int_\Omega 4\overline{T}^3 \Delta \overline{T} (T_\varepsilon-\overline{T}) + \Delta \overline{T}^4(T_\varepsilon-\overline{T})dxd\tau \nonumber\\ & - \int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi}) \overline{R} d\beta dxd\tau- \frac{1}{\varepsilon^2}\int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon - T_\varepsilon^4 - (\overline{\psi}-\overline{T}^4))^2 d\beta dxd\tau \nonumber\\ &+\frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2}\left(T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T})\right) \left(\overline{\psi} - \overline{T}^4\right) d\beta dxd\tau \nonumber \\ & + \int_0^t \int_{\partial \Omega} (T_\varepsilon - \overline{T}) n\cdot \nabla \overline{T}^4 d\sigma_xd\tau \nonumber\\ &-\frac{1}{2\varepsilon}\int_0^t\iint_{\Sigma_{+}} (\beta \cdot n)(\psi_\varepsilon- \overline{\psi})^2 d\sigma_xdxd\tau \nonumber\\&- \frac{1}{2\varepsilon} \int_0^t\iint_{\Sigma_{-}}(\beta \cdot n) (\alpha T_b^4 + (1-\alpha)\psi_\varepsilon' - \overline{\psi})^2 d\sigma_xdxd\tau. \end{align} By considering the boundary conditions, the equation \eqref{eq:4t3d} becomes \begin{align*} -\int_0^t& \int_\Omega 4\overline{T}^3 \Delta \overline{T} (T_\varepsilon-\overline{T}) + \Delta \overline{T}^4(T_\varepsilon-\overline{T})dxd\tau \nonumber\\ =& \int_0^t\int_\Omega (T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T}))\Delta \overline{T} dxd\tau + \int_0^t\int_\Omega (\overline{T}^4 \Delta \overline{T} + \overline{T} \Delta \overline{T}^4)dxd\tau \\ &- \int_0^t \int_\Omega( T_\varepsilon^4 \Delta \overline{T} + T_\varepsilon \Delta \overline{T}^4) dxd\tau. \end{align*} The last term is \begin{align*} -\int_{0}^t& \int_\Omega (T_\varepsilon^4 \Delta \overline{T} + T_\varepsilon \Delta \overline{T}^4) dxd\tau \\ =& -\int_0^t\int_\Omega (T_\varepsilon^4 - \overline{T}^4 - 4\overline{T}^3(T_\varepsilon - \overline{T}))\Delta \overline{T} dxd\tau \\ & +\frac{32}{25} \int_0^t\int_\Omega (T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau -\frac{32}{25}\int_0^t\int_\Omega T_\varepsilon^{\frac{5}{2}} \Delta \overline{T}^{\frac{5}{2}} dxd\tau \\ & + 3\int_0^t\int_\Omega \overline{T}^4\Delta \overline{T} dxd\tau - \frac{48}{25}\int_0^t\int_\Omega \overline{T}^{\frac{5}{2}} \Delta \overline{T}^{\frac{5}{2}} dxd\tau. \end{align*} Adding the above two equations and using the integration-by-parts formulas \begin{align*} \int_\Omega \overline{T}^4 \Delta \overline{T} dx =& - \frac{16}{25}\int_\Omega \left|\nabla \overline{T}^{\frac{5}{2}}\right|^2 dx + \int_{\partial \Omega} \overline{T}^4 n \cdot \nabla \overline{T} d\sigma_x, \\ \int_\Omega \overline{T} \Delta \overline{T}^4 =& - \frac{16}{25}\int_\Omega \left|\nabla \overline{T}^{\frac{5}{2}}\right|^2 dx + \int_{\partial \Omega} \overline{T} n \cdot \nabla \overline{T}^4 d\sigma_x, \\ \int_{\Omega} \overline{T}^{\frac{5}{2}}\Delta \overline{T}^{\frac{5}{2}} dx =& -\int_\Omega \left|\nabla \overline{T}^{\frac{5}{2}}\right|^2 dx + \frac{5}{2} \int_{\partial \Omega} \overline{T}^4 n \cdot \nabla \overline{T} dx, \\ \int_{\Omega} {T}_\varepsilon^{\frac{5}{2}}\Delta \overline{T}^{\frac{5}{2}} dx =& -\int_\Omega \nabla {T}_\varepsilon^{\frac{5}{2}} \cdot \nabla \overline{T}^{\frac{5}{2}} dx + \frac{5}{2} \int_{\partial \Omega} T_\varepsilon^{\frac{5}{2}} \overline{T}^{\frac{3}{2}} n \cdot \nabla \overline{T} dx, \end{align*} we obtain \begin{align}\label{eq:4t3DeltaT} -\int_0^t& \int_\Omega 4\overline{T}^3 \Delta \overline{T} (T_\varepsilon-\overline{T}) + \Delta \overline{T}^4(T_\varepsilon-\overline{T})dxd\tau \nonumber\\ =& \frac{32}{25} \int_0^t\int_\Omega (T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau \\ &- \frac{32}{25} \int_0^t\int_{\Omega} \left|\nabla \overline{T}^{\frac{5}{2}}\right|^2 dxd\tau + \frac{32}{25} \int_0^t\int_{\Omega} \nabla {T}_\varepsilon^{\frac{5}{2}} \cdot \nabla \overline{T}^{\frac{5}{2}} dxd\tau \nonumber\\ &+\frac{16}{5}\int_0^t\int_{\partial \Omega} \overline{T}^4 n\cdot \nabla \overline{T} d\sigma_xd\tau - \frac{16}{5} \int_0^t\int_{\partial \Omega} T_\varepsilon^{\frac{5}{2}} \overline{T}^{\frac{3}{2}} n\cdot \nabla \overline{T} d\sigma_x d\tau \nonumber\\ =& \frac{32}{25} \int_0^t\int_\Omega (T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau - \frac{32}{25} \int_0^t\int_{\Omega} \left|\nabla \overline{T}^{\frac{5}{2}}\right|^2 dxd\tau \nonumber\\ &+ \frac{32}{25} \int_0^t\int_{\Omega} \nabla {T}_\varepsilon^{\frac{5}{2}} \cdot \nabla \overline{T}^{\frac{5}{2}} dxd\tau \nonumber\\ & -\frac{16}{5}\int_0^t\int_{\partial \Omega} \overline{T}^{\frac{3}{2}}(T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T})) n\cdot \nabla \overline{T} d\sigma_xd\tau \nonumber \\&- 2 \int_0^t\int_{\partial \Omega} (T_\varepsilon - \overline{T}) n\cdot \nabla \overline{T}^4d\sigma_xd\tau\nonumber\\ =& \frac{32}{25} \int_0^t\int_\Omega (T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau - \frac{32}{25} \int_0^t\int_{\Omega} \left|\nabla \overline{T}^{\frac{5}{2}}\right|^2 dxd\tau \nonumber\\ &+ \frac{32}{25} \int_0^t\int_{\Omega} \nabla {T}_\varepsilon^{\frac{5}{2}} \cdot \nabla \overline{T}^{\frac{5}{2}} dxd\tau. \end{align} Taking the above equation into \eqref{eq:Hincal} will lead to the inequality and finishes the proof. \end{proof} We now proceed to prove Theorem \ref{thmre}. \begin{proof}[Proof of Theorem \ref{thmre}.] Notice that the relative entropy formula \eqref{eq:reformuladirichlet} only differs from \eqref{eq:Hevol} of the torus case by the last two boundary terms on the right hand side of \eqref{eq:reformuladirichlet}. To control these two terms, we recall \[\overline{\psi}|_{\partial \Omega} =T_b^4-\varepsilon\beta \cdot \nabla \overline{T}^4-\varepsilon^2\partial_t\overline{T}^4 + \varepsilon^2 \beta \cdot \nabla(\beta \cdot \nabla \overline{T}^4)=T_b^4+\varepsilon\overline{R}_b,\] with $\overline{R}_b = -\beta \cdot \nabla \overline{T}^4-\varepsilon\partial_t\overline{T}^4 + \varepsilon \beta \cdot \nabla(\beta \cdot \nabla \overline{T}^4)$ bounded. So we have \begin{align*} -\frac{1}{2\varepsilon}&\int_0^t\iint_{\Sigma_{+}} (\beta \cdot n)(\psi_\varepsilon- \overline{\psi})^2 d\sigma_xdxd\tau \\ =& -\frac{1}{2\varepsilon}\int_0^t\iint_{\Sigma_{+}} (\beta \cdot n)(\psi_\varepsilon- T_b^4 - \varepsilon \overline{R}_b)^2 d\sigma_xdxd\tau \\ =& - \frac{1}{2\varepsilon} \int_0^t\iint_{\Sigma_{+}} (\beta \cdot n)(\psi_\varepsilon - T_b^4)^2 d\sigma_xdxd\tau +\int_0^t\iint_{\Sigma_{+}} (\beta \cdot n) (\psi_\varepsilon - T_b^4)\overline{R}_b d\sigma_xdxd\tau \\ &- \frac{\varepsilon}{2} \int_0^t\iint_{\Sigma_{+}} \overline{R}_b^2 d\sigma_xdxd\tau \\ \end{align*} From the coordinate transform $\beta' = \beta - 2n(n\cdot \beta)$, we get $n \cdot \beta = -n\cdot \beta'$. We can also get $\beta = \beta' + 2n(n \cdot \beta')$ and $\overline{R}_b$ can be also expressed using $\beta'$. We denote $\overline{R}_b'[\beta'] = \overline{R}_b[\beta]$. We have \begin{align*} - \frac{1}{2\varepsilon}& \int_0^t\iint_{\Sigma_{-}}(\beta \cdot n) (\alpha T_b^4 + (1-\alpha)\psi_\varepsilon' - \overline{\psi})^2 d\sigma_xdxd\tau \\ =& - \frac{1}{2\varepsilon} \int_0^t\iint_{\Sigma_{-}}(\beta \cdot n) (\alpha T_b^4 + (1-\alpha)\psi_\varepsilon' - T_b^4 - \varepsilon \overline{R}_b)^2 d\sigma_xdxd\tau\\ =& \frac{1}{2\varepsilon} \int_0^t\iint_{\Sigma_{+}}(\beta' \cdot n) ( (1-\alpha)(\psi_\varepsilon' - T_b^4) - \varepsilon \overline{R}_b')^2 dS_{\beta',x}d\tau \\ =& \frac{(1-\alpha)^2}{2\varepsilon}\int_0^t\iint_{\Sigma_{+}} (\beta \cdot n) (\psi_\varepsilon - T_b^4)^2 d\sigma_xdx d\tau \\ &- \int_0^t\iint_{\Sigma_{+}}(\beta\cdot n)(\psi_\varepsilon - T_b^4) \overline{R}_b' d\sigma_xdx d\tau+\frac{\varepsilon}{2} \int_0^t\iint_{\Sigma_{+}} (\overline{R}_b')^2 d\sigma_xdxd\tau. \end{align*} Adding the above two equations together, we have \begin{align}\label{eq:I3b} -\frac{1}{2\varepsilon}&\int_0^t\iint_{\Sigma_{+}} (\beta \cdot n)(\psi_\varepsilon- \overline{\psi})^2 d\sigma_xdxd\tau \nonumber\\&\quad- \frac{1}{2\varepsilon} \int_0^t\iint_{\Sigma_{-}}(\beta \cdot n) (\alpha T_b^4 + (1-\alpha)\psi_\varepsilon' - \overline{\psi})^2 d\sigma_xdxd\tau \nonumber\\ \le& -\frac{(2\alpha - \alpha^2)}{2\varepsilon} \int_0^t\iint_{\Sigma_{+}} (\beta \cdot n) (\psi_\varepsilon - T_b^4)^2 d\sigma_xdx d\tau \nonumber\\ &+ \int_0^t\iint_{\Sigma_{+}} (\beta \cdot n) (\psi_\varepsilon - T_b^4)(\overline{R}_b - \overline{R}_b') d\sigma_xdx d\tau + C\varepsilon \nonumber\\ \le& -\frac{(2\alpha - \alpha^2)}{2\varepsilon} \int_0^t\iint_{\Sigma_{+}} (\beta \cdot n) (\psi_\varepsilon - T_b^4)^2 d\sigma_xdx d\tau \nonumber\\ &+ \frac{(2\alpha-\alpha^2)}{4\varepsilon} \int_0^t\iint_{\Sigma_{+}} (\beta \cdot n) (\psi_\varepsilon - T_b^4)^2 d\sigma_xdx d\tau \nonumber\\ &+ \frac{C\varepsilon}{2\alpha-\alpha^2} \int_0^t\iint_{\Sigma_{+}} (\overline{R}_b-\overline{R}_b')^2 d\sigma_xdxd\tau + C\varepsilon \nonumber\\ \le& -\frac{(2\alpha - \alpha^2)}{4\varepsilon} \int_0^t\iint_{\Sigma_{+}} (\beta \cdot n) (\psi_\varepsilon - T_b^4)^2 d\sigma_xdx d\tau + C\varepsilon. \end{align} Combining the above estimates with the estimates of other terms in proof of the torus case and applying Gronwall's inequality lead to \eqref{eq:Heps} with $s=1$ and finishes the proof. \end{proof} \subsection{Robin boundary condition with $r>0$} For the case of Robin boundary condition \eqref{b2}, a similar relative entropy inequality can also be derived like for the Dirichlet case. The result is \begin{align}\label{eq:Hevolb} H(T_\varepsilon&,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{t}+ \frac{16}{25}\int_0^t\int_\Omega (\nabla T_\varepsilon^{\frac{5}{2}} - \nabla \overline{T}^{\frac{5}{2}})^2 dxd\tau \nonumber\\ \le& H(T_\varepsilon,\psi_\varepsilon | \overline{T},\overline{\psi}) \bigg|_{0} +\frac{32}{25} \int_0^t\int_\Omega (T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T}))\Delta \overline{T}^{\frac{5}{2}} dxd\tau \nonumber\\ &+\frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2}\left(T_\varepsilon^4 - \overline{T}^4 - 4 \overline{T}^3(T_\varepsilon - \overline{T})\right) \left(\overline{\psi} - \overline{T}^4\right) d\beta dxd\tau \nonumber\\ & - \int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi}) \overline{R} d\beta dxd\tau- \frac{1}{\varepsilon^2}\int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon - T_\varepsilon^4 - (\overline{\psi}-\overline{T}^4))^2 d\beta dxd\tau \nonumber\\ & + \int_0^t\int_{\partial \Omega} (T_\varepsilon^4 - \overline{T}^4) \frac{T_b - T_\varepsilon}{ \varepsilon^r} d\sigma_xd\tau - \int_0^t \int_{\partial \Omega} (T_\varepsilon - \overline{T}) n\cdot \nabla \overline{T}^4 d\sigma_xd\tau \nonumber\\ &-\frac{16}{5}\int_0^t\int_{\partial \Omega} \overline{T}^{\frac{3}{2}}(T_\varepsilon^{\frac{5}{2}} - \overline{T}^{\frac{5}{2}} - \frac{5}{2} \overline{T}^{\frac{3}{2}}(T_\varepsilon - \overline{T})) n\cdot \nabla \overline{T} d\sigma_xd\tau \nonumber \\ &-\frac{1}{2\varepsilon}\int_0^t\iint_{\Sigma_{+}} (\beta \cdot n)(\psi_\varepsilon- \overline{\psi})^2 d\sigma_xdxd\tau \nonumber\\ &- \frac{1}{2\varepsilon} \int_0^t\iint_{\Sigma_{-}} (\beta \cdot n)(\alpha T_b^4 + (1-\alpha)\psi_\varepsilon' - \overline{\psi})^2 d\sigma_xdxd\tau . \end{align} With the above relative entropy inequality we now proceed to prove Theorem \ref{thmre}. \begin{proof}[Proof of Theorem \ref{thmre}] Compare the relative entropy inequality \eqref{eq:Hevolb} with \eqref{eq:reformuladirichlet} of the Dirichlet case, there are two additional terms with the boundary of $T_\varepsilon$ that need to control. First, by \eqref{EstiL5} we have \begin{align}\label{eq:I1b} \int_0^t&\int_{\partial \Omega} (T_\varepsilon^4 - \overline{T}^4) \frac{T_b - T_\varepsilon}{ \varepsilon^r} d\sigma_x d\tau \nonumber\\ =& - \int_0^t\int_{\partial \Omega} (T_\varepsilon^4 - T_b^4) \frac{T_b - T_\varepsilon}{ \varepsilon^r} d\sigma_x d\tau \nonumber\\ =& -\frac{1}{\varepsilon^r}\int_0^t\int_{\partial \Omega} (T_\varepsilon - T_b)^2 (T_\varepsilon + T_b)(T_\varepsilon^2 + T_b^2) d\sigma_x d\tau \nonumber\\ \le& -\frac{1}{ \varepsilon^r} \int_0^t\int_{\partial \Omega} |T_\varepsilon - T_b|^5 d\sigma_x d\tau, \end{align} The second bounary term is \begin{align}\label{eq:I2b} \int_0^t& \int_{\partial \Omega} (T_\varepsilon - \overline{T}) n\cdot \nabla \overline{T}^4 d\sigma_xd\tau \nonumber\\=& \int_0^t \int_{\partial \Omega} (T_\varepsilon - T_b) n\cdot \nabla \overline{T}^4 d\sigma_xd\tau \nonumber\\ \le& \frac{1}{2 \varepsilon^r} \int_0^t\int_{\partial \Omega} |T_\varepsilon - T_b|^5 d\sigma_x d\tau + C\varepsilon^r \int_0^t\int_{\partial \Omega} |n\cdot \nabla \overline{T}^4|^2 d\sigma_xd\tau \nonumber \\ \le& \frac{1}{2 \varepsilon^r} \int_0^t\int_{\partial \Omega} |T_\varepsilon - T_b|^5 d\sigma_x d\tau + C\varepsilon^r . \end{align} Combining \eqref{eq:I1b}, \eqref{eq:I2b}, and result of the Dirichlet case, we can conclude that the following inequality holds: \begin{align*} \int_\Omega& (T_\varepsilon-\overline{T})^2 + (T_\varepsilon-\overline{T})^4 \bigg|_tdx + \iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi})^2 \bigg|_t d\beta dx \\ &\quad + \frac{1}{\varepsilon^2} \int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon - T_\varepsilon^4 - (\overline{\psi}-\overline{T}^4))^2 d\beta dxd\tau \\&\quad + \int_0^t\int_\Omega \left(\nabla (T_\varepsilon)^{\frac{5}{2}} -\nabla (\overline{T})^{\frac{5}{2}} \right)^2 dxd\tau + \frac{1}{2\varepsilon^r} \int_0^t\int_{\partial \Omega} |T_\varepsilon -T_b|^5 d\sigma_x d\tau \\ &\quad+ \frac{(2\alpha - \alpha^2)}{\varepsilon} \int_0^t\iint_{\Sigma_{+}} (\beta \cdot n) (\psi_\varepsilon - T_b^4)^2 d\sigma_xdx d\tau\\ &\le C\int_0^t \int_\Omega (T_\varepsilon-\overline{T})^2 + (T_\varepsilon-\overline{T})^4dxd\tau + \int_0^t\iint_{\Omega\times\mathbb{S}^2} (\psi_\varepsilon-\overline{\psi})^2 d\beta dx d\tau +C\varepsilon + C\varepsilon^r. \end{align*} Applying Gronwall's inequality leads to \eqref{eq:Heps} and finishes the proof. \end{proof} For the case of Robin boundary condition with $r=0$, a boundary layer exists for $T_\varepsilon$, thus we can not apply the above relative entropy method directly to show the convergence of $T_\varepsilon$, although with the compactness method this is done in Theorem \ref{LimitProof}. This boundary layer problem will be investigated in our future paper. \bibliographystyle{siam}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In recent years, with the success of deep learning, researchers on graph learning have gradually turned their attention to Graph Neural Networks (GNN). In some previous works, GNNs yield reasonable representations for graphs \cite{hamilton2017inductive, velivckovic2017graph} and provide promising results over methods based on shallow embeddings \cite{grover2016node2vec}. By using the spatial construction of graph convolutional filters \cite{kipf2016semi, hamilton2017inductive, velivckovic2017graph}, the node embeddings can be computed by iteratively aggregating and transforming neighbors' embeddings. Thus, the local structure of the nodes' can be summarized into low-dimensional vectors. The trained GNNs can also support inductive learning for unseen graphs with similar types of features. However, one common limitation of existing GNN models is that the edge features on the graphs are often ignored or poorly supported. One should expect that edge attributes carry information about a pair of adjacent nodes, which can be instrumental in graph learning. Following this intuition, some works \cite{gong2019exploiting, simonovsky2017dynamic} extend the GNN framework by incorporating both node and edge features of a static graph and result in substantial performance improvement. However, graphs for real-world applications are usually time-evolving, where there can be ongoing interactions between a pair of nodes that define their changing relationships over an extended period of time. One common way to address this problem is to decompose a temporal graph into multiple static graph snapshots by a regular time interval \cite{yu2018spatio, manessi2020dynamic}. Some other methods further attempt to yield embeddings that are adaptive to continuous-time space \cite{xu2020inductive, zhang2020learning, kumar2019predicting}. Although these works have made substantial advances in temporal interaction graph learning, they suffer from the following limitations: \textbf{First}, all previous works consider all interactions associated with a node as a single time-series when generating temporal node embeddings. Therefore, they fail to explicitly capture multi-dimensional temporal interaction patterns and relationships for each individual node-pair, which should be beneficial for generating discriminative node representations. Consider the example in Fig. \ref{fig:example} where node $A$ is a gambler who has regular betting-related payment interactions with node $C$ while behaving normally otherwise with the rest of its neighbors. In this case, the abnormal behavior can be readily captured by modeling the pairwise temporal interactions between node $A$ and $C$ explicitly, which in turn, helps to identify the role/ illicit activities of these nodes. \textbf{Second}, although attention mechanism has been introduced to distinguish different neighbors in some works \cite{velivckovic2017graph,xu2020inductive}, a large number of neighbors with small but non-zero attention weight can still overwhelm the few important ones. This is particularly true for real-world applications, e.g., mobile payment networks, where most target nodes do have a large number of (normal) interacting neighbors. This motivates the need for filtering out irrelevant neighbors to reduce the effect of noisy information during the learning process. \textbf{Third}, some previous work \cite{zhang2020learning} simply concatenate node and edge representations, which holds inferior support to integrate node and edge information as neighbor node embeddings and the corresponding edge embeddings are summed independently during aggregation. \begin{figure}[t] \centering \includegraphics[scale=0.2]{fig/example.png} \caption{ An illustrated example of the motivation of capturing pairwise relationships in a temporal interaction graph. Nodes can have different kinds of interaction events with their neighbors, e.g., node $A$ behaves normally with most of its neighbors, while conducting regular gambling activities with node $C$. } \label{fig:example} \end{figure} To tackle these challenges, we propose \textbf{GTEA} (\textbf{G}raph \textbf{T}emporal \textbf{E}dge \textbf{A}ggregation), a framework of representation learning for temporal interaction graphs. The pipeline of GTEA is depicted in Fig. \ref{fig:pipeline}, under which a sequence model is first introduced to learn the temporal dynamics of pairwise nodal interactions. As a result, a low-dimensional edge embedding is generated to encode the temporal interaction patterns between different node pairs. Topological dependencies and relationships among different nodes are then captured through the GNN-based backbone, which incorporates both node features and the learned edge embeddings. To filter out irrelevant information, a sparsity-inducing self-attention mechanism is used when conducting graph neighborhood aggregations. The entire model can be trained end-to-end and all proposed modules are jointly optimized to yield discriminative representations for node classification tasks. We have implemented multiple variants of different sequence models for GTEA and experimental results on four large-scale real-world datasets demonstrate the effectiveness, scalability and state-of-the-art performance of GTEA by conducting binary and multi-class node classifications. \begin{figure}[t] \centering \includegraphics[scale=0.35]{fig/pipeline.png} \caption{The pipeline of the proposed GTEA framework.} \label{fig:pipeline} \end{figure} \section{Related Work} In recent years, the development of graph learning has been well propelled by Graph Neural Network (GNN), which achieve great success in tasks including node classification, clustering, link prediction, etc. \cite{kipf2016semi, hamilton2017inductive, velivckovic2017graph}. However, real-world graphs are typically time-evolving with node and edge attributes, which can be informative for graph representation learning but usually separately processed by previous works. EGNN \cite{gong2019exploiting} constructs a weighted graph for each edge features' dimension. Node embeddings of each weighted graph are obtained by applying a GCN model and they are concatenated together to form the final node embeddings of the original graph. As such, edge features are only regarded as connectivity/ aggregation weights for nodes. ECConv \cite{simonovsky2017dynamic} uses a filter generating network which takes multiple edge features as input and generates a set of edge-specific weights for the GCN. Although different patterns can be learned through the model, the dense filter in ECConv can result in huge memory consumption, which hinders its scalability with large graphs. One common way to handle temporal graphs is to decompose it into multiple static graph snapshots by a regular time interval \cite{yu2018spatio, singer2019node}. DCRNN \cite{li2018dcrnn_traffic} embeds the graph convolution into the LSTM model, which learns to capture relational dependencies within a period of time. EvolveGCN \cite{pareja2020evolvegcn} utilizes RNN-based variants to generate different GCN weights for each snapshot, which learns to extract different relational patterns at different time. However, a sequence of graph snapshots is a coarse approximation of the continuous time-evolving graph in the real-world. This results in quantization and temporal information loss as it groups all data within a time interval into a single static graph. In addition, the setting of regular time interval prevents these models from learning relational patterns with different time-scales, which makes it hard to capture variable interactional relationships of a node pair across time. Such drawbacks prevent them from generalizing to more complicated temporal interaction graphs. Instead of learning from graph snapshots, JODIE\cite{kumar2019predicting} uses two recurrent neural networks to update node embeddings in each interaction. \cite{zhang2020learning} further utilizes the edge features to enhance the representation of nodes. TGAT \cite{xu2020inductive} adopts a continuous-time kernel encoder with a self-attention layer to aggregate interactions among neighbors. Know-Evolve \cite{trivedi2017know} and DyRep \cite{trivedi2018dyrep} learn evolving entity representations on temporal knowledge graphs using temporal point process. However, all models treat the interactions between a target node and all of its neighbors as one single time-series, which makes it difficult for the model to learn relational patterns that are specific to individual node-pairs. Different from the aforementioned works, our proposed model learns the temporal interaction patterns for each specific node pair, which leads to a more fine-grained pairwise nodal relationship modeling for temporal interaction graphs. \section{The GTEA Framework} \begin{figure}[t] \centering \includegraphics[scale=0.4]{fig/framework.png} \caption{The framework of the proposed GTEA. For a pair of nodes, temporal dynamics are built through a sequence model, which yields a fine-grained temporal edge embedding. Besides, we apply one more sequence model with sparse attention mechanism to catch important temporal behaviors. The learned edge embeddings will be aggregated together with nodes attributes recursively and finally generate a discriminative representation for each node, which can be generalized to some downstream tasks, e.g., node classification.} \label{fig_framework} \end{figure} \subsection{Problem Formulation} Definition: a \textbf{Temporal Interaction Graph} is an attributed graph $\ensuremath{\mathcal{G}} = (\ensuremath{\mathcal{V}}, \ensuremath{\mathcal{E}})$ where $\ensuremath{\mathcal{V}}$ is the set of vertices and $\ensuremath{\mathcal{E}}$ is a set of edges. Let $N=|\ensuremath{\mathcal{V}}|$ be the total number of vertices and $M=|\ensuremath{\mathcal{E}}|$ be the total number of edges. $\mathcal{N}(u)$ stands for the set of neighbors that node $u$ interacts with. Let $x_{u} \in\mathbb{R}^{D_N}$ be the feature vector of node $u$ with $D_N$ dimension. An edge $(u,v)$ in $\ensuremath{\mathcal{E}}$ corresponds to a sequence of interaction events between node $u$ and $v$ taking over an extended period of time. The $k$-th interaction event between $u$ and $v$ occurs at time $t^k_{uv}$ and is represented by $\vctr{e}_{uv}^{k}=(t^k_{uv}, \vctr{f}^k_{uv})$ where $\vctr{f}^k_{uv} \in \mathbb{R}^{D_E}$ is a $D_E$-dimension event feature-vector. Let $S_{uv}$ be the number of interaction events between node $u$ and $v$ with $t^1_{uv} \leq t^2_{uv} \ldots \leq t^{S_{uv}}_{uv}$. Our definition of temporal interaction graph is similar to that of \cite{zhang2020learning} except that we also consider node features and the graphs of interest are not restricted to be bipartite. In contrast to the temporal graphs in \cite{yan2018spatial}, which contain a sequence of graph snapshots with only node features, our model considers temporal interactions that take place over continuous time and supports graphs with multi-dimensional node and edge attributes. \subsection{Learning Edge Embedding via Sequence Model} To capture the temporal pattern and relationship of the interactions between a pair of nodes, we propose to use state-of-the-art sequence models, including LSTM, Transformer and their time-aware variants, to generate the edge embeddings for a temporal interaction graph. The pairwise nodal interactions are viewed as a time-series that will be fed into the sequence model. For the node pair $(u, v)$, we denote $\vctr{\tilde{e}_{uv}}$ as the edge embedding generated by the sequence model named $M_t$, where: \begin{align} \vctr{\tilde{e}}_{uv}=M_t([\vctr{e}_{uv}^1, ..., \vctr{e}_{uv}^{S_{uv}}]) \label{eq:nbr-repr-lstm}. \end{align} \subsubsection{Temporal Dynamics Modeling with LSTM} We apply LSTM model \cite{hochreiter1997long} to characterize the interactions between a pair of nodes. The full LSTM model can be described as follows (where we drop the subscript ($uv$) for notational convenience): \begin{linenomath} \begin{align*} \vctr{i}[k] &= \sigma\left(\matr{W}_i \vctr{e}^k+\matr{U}_i\vctr{h}[k-1]+\vctr{b}_i\right) \\ \vctr{f}[k] &= \sigma\left(\matr{W}_f\vctr{e}^k+\matr{U}_f\vctr{h}[k-1]+\vctr{b}_f\right) \\ \overline{\vctr{c}}[k] &= \tanh\left(\matr{W}_c \vctr{e}^k+\matr{U}_c \vctr{h}[k-1]+\vctr{b}_c\right) \\ \vctr{c}[k] &= \vctr{f}[k]\odot\vctr{c}[k-1] + \vctr{i}[k] \odot \overline{\vctr{c}}[k] \\ \vctr{o}[k] &= \sigma\left(\matr{W}_o \vctr{e}^k+\matr{U}_o \vctr{h}[k-1]+\vctr{b}_o\right) \\ \vctr{h}[k] &= \vctr{o}[k] \odot\tanh\left(\vctr{c}[k] \right) \end{align*} \end{linenomath} \sloppy For the temporal interaction sequences between edge $(u, v)$, $\vctr{e}^k$ represents the current input of the LSTM. $\vctr{i}[k]$, $\vctr{f}[k]$, and $\vctr{o}[k]$ denote as state vectors of input gates, forget gates and output gates respectively, while $\vctr{c}[k]$ is the memory cell and $\vctr{h}[k]$ is the hidden state. $\sigma$ and $\tanh$ represent the sigmoid and hyperbolic tangent activation functions respectively. $\matr{W}_i, \matr{W}_f, \matr{W}_c, \matr{W}_o, \matr{U}_i, \matr{U}_f, \matr{U}_c, \matr{U}_o, \vctr{b}_i, \vctr{b}_f, \vctr{b}_c, \vctr{b}_o$ are trainable parameters, which are shared for all node-pairs in the network. We take the last hidden output $\vctr{h}_{uv}[S]$ of the LSTM as the edge embeddings $\vctr{\tilde{e}_{uv}}$. \subsubsection{Temporal Dynamics Modeling with Transformer} Apart from LSTM, Transformer is another popular architecture for sequence modeling, under which sequential, hard-to-parallelize computation during training and inference can be greatly reduced. The key component of the transformer architecture is the self-attention mechanism where a self-attention function maps a query and a set of key-value pairs to an output. In particular, the ``Scaled Dot-Product Attention'' function can be defined as: \begin{linenomath} $$\text{Attention}(\matr{Q,K,V}) = \text{softmax}(\frac{\matr{QK^T}}{\sqrt{D_{E}}})\matr{V}$$ \end{linenomath} where $\matr{Q, K, V}$ denotes queries, keys and values matrix, respectively. Given a pair of nodes $(u, v)$, we denote the event matrix by: $\matr{E}_{uv}=[\vctr{e}^1, ..., \vctr{e}^{S}]^\intercal$. Then the queries, keys and values matrix can be computed as: $\matr{Q}=\matr{E}_{uv}\matr{W}_{Q}$, $\matr{K}=\matr{E}_{uv}\matr{W}_{K}$, $\matr{V}=\matr{E}_{uv}\matr{W}_{V}$, (i.e. they are the projection of $\matr{E}_{uv}$). With the self-attention layer, the edge embeddings $\vctr{\tilde{e}}_{uv}$ of the edge $(u,v)$ is given by the S-th row of the matrix $\text{Attention}(\matr{Q,K,V})$. \subsubsection{Temporal Dynamics Modeling with Irregular Temporal Encoding} One drawback of using existing sequence models, e.g., vanilla LSTM or Transformer, as in the previous two subsections, is that it cannot handle irregular time-series. In real-world applications, the interaction sequence between any pair of users are irregular in nature, namely, the update interval between events is not fixed. In this work, in order to learn edge embedding in irregular and continuous time-space, we consider integrating state-of-the-art sequence models with the recent time representation learning method - Time2Vec \cite{mehran2019time2vec} (\text{\textbf{T2V}}). For any given time $t$, we will generate its time embeddings (denoted as $\text{\textbf{T2V}}(t) \in \mathbb{R}^(l+1)$) as follows: \begin{linenomath} \begin{equation} \label{eq:t2vec_def} \text{\textbf{T2V}}(t)[i]= \begin{cases} \omega_i t + \varphi_i, & \text{if~~$i=0$}. \\ \cos{(\omega_i t + \varphi_i)}, & \text{if~~$1\leq i \leq l$}. \end{cases} \end{equation} \end{linenomath} where $\omega_0', ... \omega_l'$ and $\varphi_0', ..., \varphi_l'$ are trainable parameters. \sloppy On the theoretical side, $\text{\textbf{T2V}}$ can also be viewed as an extension of Random Fourier Features (RFF) \cite{rahimi2008random}. If we define $\mathbf{\gamma}(t) = \sqrt{\frac{2}{k}}[\cos(\omega_1't + \varphi_1') ... \cos(\omega_k't + \varphi_k')]^\intercal$ where $\omega_1', ... \omega_k'$ are iid samples from some probability distribution $p(\omega)$ and $\varphi_1', ..., \varphi_k'$ are iid samples from the uniform distribution on $[0, 2\pi]$. Then, as a consequence of the Bochner's theorem from harmonic analysis, it can be shown that $\mathbb{E}[\mathbf{\gamma}(t_1)^\intercal \mathbf{\gamma}(t_2)] = \phi(t_1, t_2)$ for some positive definite shift-invariant kernel $\phi(t_1, t_2) = \phi(t_1 - t_2)$. Thus, $\text{\textbf{T2V}}$ can be regarded as RFF with tunable phase-shifts and frequencies. It is worth noting that sinusoidal activation is also found to be suited for representing complex natural signals \cite{sitzmann2019siren} with high precision. In addition, sinusoidal functions with fixed frequencies and phase-shifts have also been used in the Transformer model \cite{vaswani2017attention} as positional encodings. However, it has been shown that learning the frequencies and phase-shifts as in $\text{\textbf{T2V}}$ rather than fixing them achieves higher performance. By using $\text{\textbf{T2V}}(t)$ instead of $t$, Eq.~(\ref{eq:nbr-repr-lstm}) can be extended as follows: \begin{linenomath} \begin{align} \vctr{\tilde{e}}_{uv}&=M_t([\vctr{e}_{uv}^1, ..., \vctr{e}_{uv}^{S_{uv}}])\notag\\ &= M_t([\vctr{f}_{uv}^1||\text{\textbf{T2V}}(t_{uv}^1), ..., \vctr{f}_{uv}^{S_{uv}}||\text{\textbf{T2V}}(t_{uv}^{S_{uv}})]) \label{eq:seq-t2v} \end{align} \end{linenomath} \subsection{Sparsity-Inducing Attention Mechanism} In practice, each target node can interact with many neighbors but only a few of them are involved in interactions that are of interest, while other interactions are routine or irrelevant to our data analytic task. To distinguish significant behaviors from non-interesting activities, such as the detection of illicit behaviors within an E-payment network, we adopt another sequence model denoted by $M_a$ with a sparse self-attention mechanism to learn attention scores which reflect the importance of temporal interactions with each individual neighbors of a target node. Neighbors with relatively low attention scores will be filtered out during the GNN-based neighborhood aggregation. Formally, for the edge between node $u$ and node $v$, let $\vctr{\tilde{h}}_{uv}$ be the output of the sequence model and apply the self-attention mechanism with a trainable weight vector $\vctr{a}$. As in \cite{velivckovic2017graph}, one can use the softmax function to normalize the attention score across the neighbors of a target node $u$ in the graph. However, softmax can result in many small but non-zero attention weights. If the target node interacts with many neighbors and most of them are irrelevant, those small but non-zero attention values can result in considerable noises during attention-based neighborhood aggregation. To further reduce such noises, we adopt the Sparsemax transformation proposed by \cite{martins2016softmax} to obtain a sparse attention-weight vector. The operation of sparsemax are shown in Algorithm \ref{alg:sparsemax}, where $[x]_+$ represents the following function: \begin{equation} {[x]_+}= \begin{cases} x& {x > 0}\\ 0& {x \le 0} \end{cases} \end{equation} \begin{algorithm}[tb] \caption{Sparsemax Transformation} \label{alg:sparsemax} \begin{algorithmic}[1] \State \textbf{Input:} $\boldsymbol{z}$ \State Sort $\boldsymbol{z}$ as $z_{(1)}\geq ... \geq z_{(K)} $ \State Find $k(\vctr{z}):=$max$\{k \in [K] | 1+kz_{(k)} \geq \sum_{j \leq k} z_{(j)}\}$ \State Define $\tau(\vctr{z})=\frac{\sum_{j \leq k} z_{(j)}-1}{k(\vctr{z})}$ \State \textbf{Output:} $\boldsymbol{p}$ s.t. $p_i=[z_i-\tau(\boldsymbol{z})]_{+}$ \end{algorithmic} \end{algorithm} After the Sparsemax transformation, unimportant neighbors will be assigned attention scores with zero value and make no contribution to the neighborhood aggregation. \begin{linenomath} \begin{ceqn} \begin{align} \vctr{\tilde{h}}_{uv}&=M_a([\vctr{e}_{uv}^1, ..., \vctr{e}_{uv}^{S}]) \\ \tilde{\alpha}_{uv}&=\vctr{a}^\intercal \vctr{\tilde{h}}_{uv} \\ \alpha_{uv}&=\text{sparsemax}_v(\tilde{\alpha}_{u:}) \label{attention-weight} \end{align} \end{ceqn} \end{linenomath} \subsection{Node Representation with Edge Aggregation} To obtain an expressive node representation, we integrate edge embeddings with neighbor node embeddings. \cite{xu2020inductive, zhang2020learning} simply concatenate the node embedding with the edge embedding. However, as shown in Equation~(\ref{eq:cat-lin}), the weight matrix can be split into two different matrices $W=[W_1, W_2]$. \begin{linenomath} \begin{align} \vctr{z}_{\mathcal{N}(u)}^{(l)} &= \sum_{v \in \mathcal{N}(u)}\mathbf{W}([\vctr{z}_v^{(l-1)} || \vctr{\tilde{e}}_{uv}]]) \notag\\ &=\mathbf{W}_1\sum_{v \in \mathcal{N}(u)}\vctr{z}_v^{(l-1)} + \mathbf{W}_2\sum_{v \in \mathcal{N}(u)} \vctr{\tilde{e}}_{uv} \label{eq:cat-lin} \end{align} \end{linenomath} Thus applying naive concatenation cannot have good integration of node and edge embedding because the neighbor node embeddings and corresponding edge embeddings are summed independently \cite{li2019gcn}. We propose to apply nonlinear activation function, e.g., a Multilayer Perceptron after the concatenation to address this problem. Specifically, the node embedding of any given node $u \in \ensuremath{\mathcal{V}}$ at layer $l$, denoted by $\vctr{z}_u^{(l)}$ is defined recursively as follows: \begin{linenomath} \begin{align} \vctr{z}_{\mathcal{N}(u)}^{(l)} &= \sum_{v \in \mathcal{N}(u)}\alpha_{uv}\text{MLP}_1([\vctr{z}_v^{(l-1)} || \vctr{\tilde{e}}_{uv}]]) \label{nbr-repr}\\ \vctr{z}_u^{(l)} &= \text{MLP}_2([\vctr{z}_u^{(l-1)} || \vctr{z}_{\mathcal{N}(u)}^{(l)}]) \label{node-emb} \end{align} \end{linenomath} Equation~(\ref{nbr-repr}) applies attention weighted sum to aggregate both the node embeddings $\vctr{z}_u^{(l-1)}$ of its neighbors at layer $(l-1)$ and the associated edges representation vector $\vctr{\tilde{e}}_{uv}$. After computing the neighboring feature vectors $\vctr{z}_{\mathcal{N}(v)}^{(l)}$, we concatenate it with the target node's embeddings in the previous layer $\vctr{z}_v^{(l-1)}$ and feed it through another Multilayer Perceptron $\text{MLP}_2$ as in Equation~(\ref{node-emb}). \subsection{Loss Function} By combining the GCN module and the pairwise temporal sequence model, we can recursively stack the framework at a depth of $L$ layers. We use the standard Cross-entropy loss to drive the end-to-end training of the entire pipeline: \begin{linenomath} $$\mathcal{L} = - \sum_{l \in \mathcal{Y}_L} \vctr{y}_l^T \ln(\text{softmax}(\vctr{z}_i^{(L)}))$$ \end{linenomath} where $\mathcal{Y}_L$ denotes the set of node indices that have labels and $\vctr{y}_l$ denotes the one-hot representation of the ground-truth label. We summarize the workflow of GTEA framework in Algorithm~\ref{alg:GTEA}. \begin{algorithm}[tb] \caption{ Node Classification using GTEA} \label{alg:GTEA} \begin{algorithmic}[1] \Require Graph $\ensuremath{\mathcal{G}}=(\ensuremath{\mathcal{V}}, \ensuremath{\mathcal{E}})$; node features $x_{u}, \forall u\in\ensuremath{\mathcal{V}}$; temporal edge sequences $\matr{e}_{uv}, \forall (u, v)\in\ensuremath{\mathcal{E}}$; depth of GCN module $L$; aggregation functions $\text{MLP}_1$ and $\text{MLP}_2$; neighborhoods of node $u$: $\mathcal{N}(u)$; sequence model $M_t$ and $M_a$; trainable attention weights $\vctr{a}$ \Ensure Classification result of node $u$; \State $\vctr{z}_u^{(0)} \leftarrow x_{u}$; \For{$l = 1, 2, ..., L$} \State $\vctr{\tilde{e}}_{uv} \leftarrow M_t(e_{uv}), \forall v\in\mathcal{N}(u)$; \State $\vctr{\tilde{h}}_{uv} \leftarrow M_a(e_{uv}), \forall v\in\mathcal{N}(u)$; \State $\tilde{\alpha}_{uv}\leftarrow \vctr{a}^\intercal\vctr{\tilde{h}}_{uv}$ \State $\vctr{\alpha}_{uv} \leftarrow \text{sparsemax}_v(\tilde{\alpha}_{u:})$ \State $\vctr{z}_{\mathcal{N}(u)}^{(l)} \leftarrow \sum_{v \in \mathcal{N}(u)}\alpha_{uv}\text{MLP}_1([\vctr{z}_v^{(l-1)} || \vctr{\tilde{e}}_{uv}]])$; \State $\vctr{z}_u^{(l)} \leftarrow \text{MLP}_2([\vctr{z}_u^{(l-1)} || \vctr{z}_{\mathcal{N}(u)}^{(l)}])$ \EndFor \State $y_u \leftarrow \text{softmax}(\vctr{z}_u^{(L)})$; \\ \Return $y_u$; \end{algorithmic} \end{algorithm} \section{Experiments} In this section, we evaluate the performance of GTEA using four large-scale real-world datasets whose statistics are summarized in Table \ref{tab:DS}. Experiments are performed to answer the following five questions: \begin{enumerate} \item How effective is GTEA in binary/ multi-class node classification for temporal interaction graphs? \item To what extent can GTEA automatically learn from temporal edge features to improve the classification performance? \item Does the modeling of pairwise nodal interactions/ relationships benefit node classification tasks? \item How much improvement can the sparsity-inducing attention mechanism achieve over the softmax operation? \item Is it possible for GTEA to achieve high classification performance without the help of handcrafted or derived interaction/ topology-related node features? \end{enumerate} \begin{table*}[tbp] \begin{center} \caption{Dataset Summary} \label{tab:DS} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \cline{2-6} Dataset & \# Nodes & \# Edges & \# Interactions & \thead{Labeled Node \\ Avg. Degree} & \thead{\# Ground \\Truths} & \thead{\# Node \\Features} & \thead{\# Edge \\Features}\\ \hline Eth-Role & 2,180,689 & 3,745,858& 55,322,096 & 306387.1 & 445 & 23 & 5 \\ \hline Phishing-Small & 1,329,729 & 2,161,573& 6,794,521 &80.2 & 3360 & 3& 3 \\ \hline Phishing-Large & 2,973,489 & 5,355,155& 184,398,820 &302.0 & 6165& 3& 3\\ \hline Mobile-Payment & 2,143,844 & 4,568,936 & 21,326,122 & 66.8& 6688 & 68& 4\\ \hline \end{tabular} \end{center} \end{table*} \subsection{Datasets} Experiments are conducted on Ethereum-Role Dataset (Eth-Role), Ethereum Phishing Large Dataset (Phishing-Large), Ethereum Phishing Large Dataset (Phishing-Small) and Mobile Payment Network Dataset (Mobile-Payment). \textbf{Eth-Role} is a multi-class role classification dataset based on the Ethereum Smart Contract network. Unlike Bitcoin, the address in Ethereum is an account that can be used repeatedly. We collect Ethereum transactions by using Google BigQuery and ground truth labels are collected from: https://etherscan.io. \textbf{Phishing-Large} and \textbf{Phishing-Small} are collected from Ethereum Phishing Datasets\footnote{\scriptsize{Raw data can be downloaded from: https://www.kaggle.com/xblock/ethereum-phishing-transaction-network}}, which is provided by \cite{xblockEthereum}. Binary classification is conducted on these two datasets to detect phishing accounts in the Ethereum payment network. \textbf{Mobile-Payment} is collected from a major mobile payment provider. Models should conduct binary classification to identify abnormal users in the payment network. To build the graph, each account/ user is taken as a node and transaction sequences between two nodes indicate an edge. Detailed descriptions of datasets can be found in the supplementary materials. \subsection{Baselines and Variants} To evaluate the performance of GTEA, we compare our model with state-of-the-art GNN methods that are able to handle large-scale graphs. Different variants of advanced sequence modeling approaches have been implemented under the GTEA framework for comparison. Specifically, there are eight baseline models and five GTEA variants, which can be categorized as follows: \begin{itemize} \item \textbf{Models without graph topology}. In this group, we choose XGBoost \cite{chen2016xgboost}, which only uses node features without explicitly considering the graph topology. \item \textbf{GNN with only node features}. These methods are state-of-the-art GNN that learn node embeddings based on node features. We take GCN\cite{kipf2016semi}, GraphSAGE \cite{hamilton2017inductive}, GAT \cite{velivckovic2017graph} , and APPNP\cite{klicpera2018predict} as the representative schemes for comparison. \item \textbf{GNN incorporates edge features}. This category of models incorporates node features as well as statistical edge features that are manually derived from temporal interactions between each node pair. We choose ECConv\cite{simonovsky2017dynamic} and EGNN\cite{gong2019exploiting} as representatives for comparison. We also implement another variant of GTEA named GTEA-ST, which replaces the automatically learned edge-embedding with handcrafted interaction-statistics-based edge features, including the averaged, minimum, maximum and standard deviation amount of transaction values of each node pair. \item \textbf{GNN with temporal sequences learning}. We also compare GTEA with TGAT (Xu et al. 2020), a state-of-the-art representation learning scheme for temporal interaction graphs, which has shown to consistently outperform various recent schemes in this category, e.g. GAT+T and GraphSAGE+T. \item \textbf{GTEA variants} are implemented with different sequence model. Specifically, the variants equipped with LSTM and Transformer are denoted by GTEA-LSTM and GTEA-Trans respectively. Besides, the variants contain $\text{\textbf{T2V}}$ are denoted by GTEA-LSTM+T2V and GTEA-Trans+T2V respectively. \end{itemize} \begin{table*}[t] \begin{center} \caption{Node Classification Results} \label{tab_CR} \resizebox{\textwidth}{28mm}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{Datasets} & \multicolumn{2}{c|}{Eth-Role} & \multicolumn{2}{c|}{Phishing-Small} & \multicolumn{2}{c|}{Phishing-Large} & \multicolumn{2}{c|}{Mobile-Payment}\\ \hline \textbf{Metrics} & \textbf{Accuracy} & \textbf{Macro F1}& \textbf{Accuracy} & \textbf{Macro F1}& \textbf{Accuracy} & \textbf{Macro F1}& \textbf{Accuracy} & \textbf{Macro F1} \\ \hline XGBoost & 0.9211 & 0.9164 &0.8482 &0.8481 & 0.8694 & 0.7573& 0.7152 & 0.7152\\ \hline GCN & 0.8084 & 0.8145 & 0.9077 & 0.9077 & 0.9298 & 0.8683 & 0.7481 & 0.7480\\ GraphSAGE & 0.9868 & 0.9857 & 0.9405 & 0.9405 & 0.9753 &0.9569 & 0.7474 & 0.7472 \\ GAT & 0.9868 & 0.9857 & 0.9405 & 0.9405 & 0.9631 & 0.9375 & 0.7265 & 0.7264 \\ APPNP & 0.9868 & 0.9857 & 0.9241 & 0.9241 & 0.9084 & 0.8338 & 0.7716 & 0.7716 \\ \hline ECConv & 0.9079 & 0.8875 & 0.9301 & 0.9300& 0.8962& 0.7926 &0.7399 & 0.7399\\ EGNN & 0.9605 & 0.9497 & 0.8839 & 0.8823 & 0.9465 & 0.9035 & 0.7549 & 0.7538 \\ GTEA-ST & 0.9737 & 0.9708 & 0.9673 & 0.9673 & 0.9777 & 0.9615 & 0.7519 & 0.7516 \\ \hline TGAT & 0.9737 & 0.9708 & 0.9673 & 0.9673 & 0.9623 & 0.9344 & 0.7212 & 0.7212 \\ \hline GTEA-LSTM & 0.9868 & 0.9857 & 0.9836 & 0.9836 & \textbf{0.9805} & \textbf{0.9668}& 0.7848 & 0.7847\\ GTEA-LSTM+T2V & \textbf{1.0000} & \textbf{1.0000}& 0.9777 & 0.9777 &0.9789 &0.9640 & \textbf{0.7990} & \textbf{0.7990} \\ GTEA-Trans & 0.9737 & 0.9708 & \textbf{0.9851} & \textbf{0.9851} & 0.9801 & 0.9658& 0.7676 & 0.7670 \\ GTEA-Trans+T2V & 0.9868 & 0.9857 & 0.9792 & 0.9792 & 0.9769 & 0.9603 & 0.7758 & 0.7758\\ \hline \end{tabular}} \end{center} \end{table*} \subsection{Experimental Setup} In our experiments, each dataset is randomly split into a training set (60\%), a validation set (20\%) and a test set (20\%). All models are tuned on the validation set and we report the classification accuracy and macro F1 score on the test set. For a fair comparison, each model is tuned independently with grid search to find the optimal hyperparameter combination. Only the best performance of each model is reported in this paper. Detailed information including hyperparameters, network configurations and the training process can be found in the supplementary materials. \subsection{Experimental Results on Node Classification} Table \ref{tab_CR} shows the performance of the proposed model as well as all other compared methods. Some key observations are summarized as follow: \subsubsection{Overall Performance} We observe that different variants of GTEA perform well in all four datasets and GTEA-LSTM consistently outperforms all other methods in the experiments. Such quantitative results demonstrate the effectiveness of the proposed GTEA framework. Different from other methods that learn from only part of available information, GTEA captures pairwise interaction dynamics through a temporal sequence model and incorporates the learned edge embeddings using GNNs. By making full use of all available information (from node features, features of temporal events and the graph topology), and learn the relationships and dependencies among all these features efficiently, GTEA delivers better node classification performance over all the dataset under test. An additional observation is that LSTM variants perform slightly better than that of the Transformer. This may owe to the step-wise time-series processing and the sophisticated gate mechanism. In contrast to the Transformer, which looks into the entire interaction sequence at one go, LSTM incrementally updates its hidden states using successive interaction information. This enables the LSTM variant to capture patterns with time continuousness and model more fine-grained temporal dependencies for an edge, which leads to better performance. We also observe that in Eth-Role and Mobile-Payment dataset, GTEA variants with $\text{\textbf{T2V}}$ outperform those without $\text{\textbf{T2V}}$. This implies that periodic and non-periodic temporal patterns captured by $\text{\textbf{T2V}}$ can benefit the node classification task. On the contrary, the performance of GTEA-LSTM and GTEA-Trans outperform their corresponding $\text{\textbf{T2V}}$ variants in Phishing-Small and Phishing-Large dataset. This may be due to the fact that periodic patterns are not highly correlated with phishing events, where $\text{\textbf{T2V}}$ can only make limited effect. \subsubsection{Effect of Edge Features} From Table \ref{tab_CR}, we can observe that different GTEA variants outperform ECConv and EGNN, which both incorporate handcrafted edge features in their framework. In contrast, GTEA learns edge embeddings from pairwise interactions through a sequence model. The performance gap between GTEA and ECConv/ EGNN demonstrates that learning edge embeddings automatically can substantially improve the performance on node classifications. We also observe that ECConv/ EGNN is hard to dominate methods trained with only node features, especially in the Eth-Role dataset. This is because node features in Eth-Role are elaborately derived from interaction sequences (e.g., number of days the account have activity records and the amount of mining reward received by the accounts), which can be informative and discriminative in classification tasks. Even though, GTEA still yields a better performance over these methods in all datasets, which reflects that our model can extract useful patterns from interaction sequences and make full use of them to improve the performance. \subsubsection{Effect of Pairwise Relationships Modeling} Among all models that incorporate temporal information, GTEA performs the best, which answers the Question\#3. Note that in TGAT, interactions among a target node and all its neighbors are grouped as one single time-series, which ignores pairwise temporal patterns and mutual relationships. On the contrary, the proposed GTEA explicitly models relation dependencies for pairwise nodal interaction-event sequence. This enables GTEA to capture more informative patterns and be superior to other competitors. \iffalse \fi \begin{figure}[tb] \centering \subfigure{ \graphicspath{{charts/}} \def0.20\textwidth{0.1\textwidth} \input{charts/barplot_color_legend_sparsemax_eth_phish2.pdf_tex} } \addtocounter{subfigure}{-1} \subfigure{ \graphicspath{{charts/}} \def0.20\textwidth{0.15\textwidth} \input{charts/barplot_hatch_legend_sparsemax_eth_phish2.pdf_tex} } \addtocounter{subfigure}{-1} \subfigure[Phishing-Small Accuracy]{ \label{bar:accuracy:sparsemax:eth:phish2} \graphicspath{{charts/}} \def0.20\textwidth{0.20\textwidth} \input{charts/barplot_Accuracy_sparsemax_eth_phish2.pdf_tex} } \subfigure[Phishing-Small Macro F1]{ \label{bar:f1:sparsemax:eth:phish2} \graphicspath{{charts/}} \def0.20\textwidth{0.20\textwidth} \input{charts/barplot_MacroF1_sparsemax_eth_phish2.pdf_tex} } \subfigure[Mobile-Payment Accuracy]{ \label{bar:accuracy:sparsemax:tencent} \graphicspath{{charts/}} \def0.20\textwidth{0.20\textwidth} \input{charts/barplot_Accuracy_sparsemax_tencent.pdf_tex} } \subfigure[Mobile-Payment Macro F1]{ \label{bar:f1:sparsemax:tencent} \graphicspath{{charts/}} \def0.20\textwidth{0.20\textwidth} \input{charts/barplot_MacroF1_sparsemax_tencent.pdf_tex} } \caption{Ablation Study on Phishing-Small and Mobile-Payment Network dataset (with/ without sparsemax)} \label{bar:ab:sparsemax} \end{figure} \begin{figure}[tb] \centering \subfigure{ \graphicspath{{charts/}} \def0.20\textwidth{0.1\textwidth} \input{charts/barplot_color_legend_nodefts_eth_phish1.pdf_tex} } \addtocounter{subfigure}{-1} \subfigure{ \graphicspath{{charts/}} \def0.20\textwidth{0.15\textwidth} \input{charts/barplot_hatch_legend_nodefts_eth_phish1.pdf_tex} } \addtocounter{subfigure}{-1} \subfigure[Phishing-Small Accuracy]{ \label{bar:accuracy:nodefts:eth:phish:large:1} \graphicspath{{charts/}} \def0.20\textwidth{0.20\textwidth} \input{charts/barplot_Accuracy_nodefts_eth_phish1.pdf_tex} } \subfigure[Phishing-Small Macro F1]{ \label{bar:f1:nodefts:eth:phish:large:1} \graphicspath{{charts/}} \def0.20\textwidth{0.20\textwidth} \input{charts/barplot_MacroF1_nodefts_eth_phish1.pdf_tex} } \subfigure[Mobile-Payment Accuracy]{ \label{bar:accuracy:nodefts:tencent} \graphicspath{{charts/}} \def0.20\textwidth{0.20\textwidth} \input{charts/barplot_Accuracy_nodefts_tencent.pdf_tex} } \subfigure[Mobile-Payment Macro F1]{ \label{bar:f1:nodefts:tencent} \graphicspath{{charts/}} \def0.20\textwidth{0.20\textwidth} \input{charts/barplot_MacroF1_nodefts_tencent.pdf_tex} } \caption{Ablation Study on Phishing-Small and Mobile-Payment Dataset (with/ without node features)} \label{bar:ab:nodefts} \end{figure} \subsubsection{Effect of Sparsemax} Fig. \ref{bar:ab:sparsemax} shows the comparisons when GTEA is equipped with Sparsemax or Softmax (i.e., without Sparsemax), where the former can ususally achieve a better performance. By introducing sparsemax, unimportant neighbors and noises can be filtered out before aggregation, which yields more refined and discriminative embeddings that can benefit the classification task. \subsubsection{Effect of Temporal Learning without Node Features} Fig. \ref{bar:ab:nodefts} shows that the performance gap for all GTEA variants is much smaller. This implies that some critical information carried by handcrafted or derived interaction/ topology-related node features can be automatically learned by GTEA when generating edge embeddings. We also observe that the performance gap for GTEA is smaller than that of TGAT, which shows GTEA can learn the node representation better by capturing pairwise temporal temporal patterns and relationships. These facts also demonstrate the advantages of GTEA over TGAT. \section{Conclusion and Future Work} In this work, we have presented GTEA - a new message passing mechanism which learns node and edge embeddings by exploiting the multi-dimensional, pairwise temporal patterns as well as complicated topological information in the temporal interaction graph. This method is designed to address the common drawbacks of existing representation learning methods for temporal interaction graph where complex temporal edge features and their interactions are either partially ignored or poorly supported. Empirical results show that GTEA consistently outperforms current state-of-the-art models. We also demonstrate that temporal edge features are important for node classification tasks. Ablation studies reveal that learning edge embeddings by the sophisticated neural sequence model with self-attention mechanism can give us better performance over static, handcrafted interaction-statistics-based features. Developing a more scalable, parallelized training platform for GTEA will be one promising direction for our future work. \clearpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec1} It is well known that in nature there exist not only second order phase transitions (SOPT) but also crossovers. The former is characterized by its order parameter which goes to zero exactly at a certain critical temperature $T=T_c$. This is illustrated in Fig.\ref{Fig1}a (solid line), where $T_c$ is always interpreted as a critical temperature. The latter also has its own ``order parameter", which diminishes asymptotically at high temperatures (see Fig.\ref{Fig1}b, solid curve). Such kind of phase transitions are observed in spin-gapped quantum magnets, whose magnetization versus temperature can be explained by BEC to normal phase transition of triplons (magnons) \cite{Zapf}. In this work, we address the question of determining the critical temperature in a crossover from the Bose-Einstein condensed phase to the normal phase. To illustrate our proposal on the characterization of the crossover, we study the temperature dependence of magnetization of spin-gapped quantum magnets described by BEC of triplons. In this context, triplons are bosonic quasi-particles that describe the singlet-triplet excitations by external magnetic field in spin-gapped magnetic materials. As there are singlet-triplet excitations, they are referred as ``triplons'' instead of magnons \cite{Zapf}. The triplon gas has its own specifications being compared with an atomic gas \cite{ourctan2}. Particularly, in tasks related to atomic Bose gases the number of particles $N$ is assumed to be fixed, while the chemical potential $\mu(N,T)$ is to be calculated e.g., by the relation $N\sim\sum_{k}/[e^{\beta(\varepsilon _k-\mu)}-1]$, where $\beta$ is the inverse temperature\footnote{Here and below we adopt the units $k_B=1$ for the Boltzmann constant, $\hbar=1$ for the Planck constant, and V=1 for the unit cell volume.}. As to the triplon gas, the chemical potential characterizes an additional direct contribution to the triplon energy due to the external magnetic field $H$, giving $\mu=g\mu_B(H-H_c)$ where $g$ is the electron Land{\'e} factor, $\mu_B=0.672$ K/T is the Bohr magneton and $H_c$ is the critical magnetic field which defines the gap $\Delta_{ST}=g\mu_BH_c$ between singlet and triplet states. In the field induced BEC, $\mu$ is assumed to be an input parameter, from which the total number of triplons can be calculated. Besides, for homogenous atomic gases one may use simple quadratic bare dispersion $\varepsilon _k=k^2/2m$ with a good accuracy, while for spin-gapped quantum magnets a more complicated form of bare dispersion is needed \cite{ouraniz1}. Moreover, it has been established that in some magnetic compounds such as Ba$_3$Cr$_2$O$_8$ with a good isotropic symmetry, the phase transition from BEC into a normal phase of triplons is of a second order \cite{aczel,ourmce}, while the existence of anisotropies for example in TlCuCl$_3$ smears out the transition into a crossover \cite{Zapf,Sirker1,ouraniz2part1,ouraniz2part2,tanaka}. In contrast to trapped atomic gases, the fraction of particles, $N_0$, in the condensate can easily be measured as $N_0\sim {M_{stag}^2} $, and the number of triplons is $N\sim M$, where $M_{stag}$ and $M$ are the staggered and total magnetizations, respectively, caused by the external magnetic filed $H$ \cite{Sirker1}, which defines the chemical potential. \begin{figure}[h]% \begin{minipage}[H]{0.4\linewidth} \hfill {\includegraphics[width=1.4\textwidth]{fig1a.pdf}} \end{minipage} \qquad\qquad \begin{minipage}[H]{0.4\linewidth} {\includegraphics[width=1.4\textwidth]{fig1b.pdf}} \hfill \end{minipage} \caption{The number fluctuations (dashed lines) and condensed fractions (solid lines) in a SOPT (a) and a smeared phase transitions (b).} \label{Fig1} \end{figure} The first problem we address is how to define an effective critical temperature for such crossover. In other words, it is curious to know if there is a physical quantity, which has at least a local extremum near a specific temperature ($T_c$) both for the second order phase transitions and crossover. We will show that the answer is positive: it is the particle number fluctuations versus temperature. For the SOPT this is illustrated in Fig.\ref{Fig1}a (dashed line). The second problem which we discuss in this work is related to the scaling of fluctuations with atom number which can be classified either as normal or anomalous \cite{wilhelm, yukfluc}. The rest of this paper is organized as follows. In Section 2 we outline the definition of fluctuations and present some of their general properties. In Section 3 we outline our main methodology used for evaluation of particle number fluctuations and related physical quantities for the system of triplons in spin-gapped magnets and present our results in Section 4. The discussions of results and conclusion will be presented in the last section. The explicit derivation of the main equations are presented in the Appendix. \section{Normal and anomalous fluctuations} For the convenience of a reader we start with the main definitions and properties of fluctuations. A fluctuation or a dispersion of any physical quantity is a measure of deviation from its average value derived from many identical random processes. Fluctuations are important to our understanding of classical and quantum systems \cite{patash}. They are ubiquitous in physics: from the primordial quantum fluctuations in the early universe that reveal themselves as fluctuations in the cosmic microwave background, to current fluctuations in every-day conductors. Even the Casimir effect is caused by fluctuations (of vacuum), where the average electromagnetic field is zero: $\langle{\hat{\bf E}}\rangle=\langle{\hat {\bf H}}\rangle=0$. One may differentiate between normal and anomalous fluctuations. Let an observable quantity $A$ is given by statistical average $\langle \hat{A}\rangle$ of a Hermitian operator $\hat{A}$. Its fluctuation is characterized by the dispersion: \begin{equation} \Delta^2(\hat{A})= \langle\hat{A}^2\rangle- \langle \hat{A}\rangle^2\, \end{equation}\normalsize Further, let $\Delta^2(\hat{A})\sim N ^{1+\alpha}$, where $N$ is the number of particles. Fluctuation is referred to as normal, if $\alpha=0$ and anomalous otherwise, $\alpha \neq 0$. It can be shown that a system with an anomalous fluctuation is unstable. In fact, one may write the general form of the necessary stability condition as \cite{yukfluc} \begin{equation} \label{stab} 0<\frac{\Delta^{2}(\hat{A})}{N}<\infty \end{equation}\normalsize This condition is required for any stable equilibrium systems at finite temperature and for all $N$, including the limit $N\rightarrow\infty$. The value of (\ref{stab}) can be zero only at $T=0$. Additionally, one can see that while $\Delta^{2}(\hat{A})$ has the order of $N$, the dispersion is normal and stability condition is maintained. But when $\Delta^{2}(\hat{A})\sim N^{\alpha}$, with $\alpha>1$, then such a dispersion is called anomalous and the fluctuation of $\hat{A}$ is anomalous, since equation (\ref{stab}) becomes proportional to $N^{\alpha-1}$ and goes to infinity as $N\rightarrow\infty$. As a result, the stability condition is broken and anomalous fluctuations are unstable. Using Maxwell relations, one can express a fluctuation of a physical quantities in terms of thermodynamic observables \cite{yuktutorial}. Below we provide some examples. (i) Energy fluctuations: \begin{equation} \frac{\Delta^2 (\hat H)}{N}=C_VT^2 \end{equation}\normalsize where $\hat H$ is Hamiltonian operator and $C_V$ is the heat capacity (at constant volume). (ii) Particle number fluctuations: \begin{equation} \frac{\Delta^2 (\hat N)}{N}=\kappa_T \rho T \end{equation}\normalsize where $\kappa_T$ is the isothermal compressibility and $\rho$ is the density of particles. (iii) Magnetization fluctuations: \begin{equation} \label{flucM} \frac{\Delta^2 (\hat M)}{N}=\frac{\langle {\hat M}^2\rangle - \langle \hat M\rangle^2}{N}=\chi T \end{equation}\normalsize in which $\chi $ is the magnetic susceptibility $\chi=dM/dH$. Note that all of these fluctuations are normal and these expressions are relevant to any thermodynamically stable system in the equilibrium \cite{yuksyms}. \subsection{Anomalous fluctuation in BEC} However, it is well known that, near the point of phase transition the system become unstable. A phase transition occurs exactly because one phase becomes unstable and has to change to another stable phase and further, after a phase transition has occurred, the system becomes stable again and susceptibilities become finite.. We know that exactly at the critical temperature of a second order phase transition the fluctuations become anomalous \cite{yuksyms}. However, after the phase transition has occured, the system becomes stable again. In Fig.\,1a (dashed line) we present the example of number fluctuations. As it is seen from Fig.\ref{Fig1}a (solid line), at $T=T_c$ the condensed fraction $N_0$ vanishes, but the fluctuation in number of particles becomes very large. Moreover, at exactly $T=T_c$ the fluctuations become anomalous. It is interesting to explore whether the fluctuations remain anomalous below the critical temperature also. That is, whether $\alpha(T<T_c ) \neq 0$ or not. Unfortunately, in the literature it has not still been established if the fluctuations in condensed fraction are normal or anomalous. Actually, on the one hand, Giorgini et al. \cite{giorgini} and Christensen et al. \cite{Christensen} found for the fluctuations in condensed fraction $\alpha\approx 0.134$, which means that the fluctuations remain anomalous even in the whole BEC phase. Thus, in many works on Bose systems, it is argued that fluctuations remain anomalous far below the condensation point, in the whole region of the Bose-condensed system. On the other hand, Yukalov \cite{yuksyms,yukfluc} has proven that there are no anomalous fluctuations in stable equilibrium systems. Briefly, he rigorously proved the following theorem. The dispersion of a global observable is normal if and only if all partial dispersions of its terms are normal, and it is anomalous if and only if at least one of the partial dispersions is anomalous. In other words, if ${\hat A}={\hat A}_1 + {\hat A}_2+....+{\hat A}_n $ and ${\hat A}$ is normal, then each ${\hat A}_i $ should be normal and vice versa. Particularly, for a BEC system we have ${\hat N} = {\hat N}_0 +{\hat N}_1$, with $N_0$ the number of condensed particles. Since ${\hat N}$ is normal i.e. has a normal fluctuation, then ${\hat N}_0$ as well as ${\hat N}_1$ should be normal. In the next section we shall consider the consequences of this statement on the example of triplon BEC. \section{Magnetization fluctuations in BEC of triplons in spin-gapped magnets} Presently, it has been established that \cite{pitbook, yukquas} not only atoms but also quasiparticles may undergo BEC. The experiments on magnetization of spin-gapped magnets, (see the review by Zapf et al. \cite{Zapf}) can be explained by BEC of triplons, with $ M=g\mu_B N$. At low temperatures, $(T\leq T_c)$ triplons are condensed, which leads to increase of magnetization of the antiferromagnets, e.g., TlCuCl$_3$. Moreover, the experiments on staggered magnetization $M_{stag}$ by Tanaka et al. \cite{tanaka} have revealed that staggered magnetization and hence the condensed fraction in TlCuCl$_3$ diminishes smoothly, not abruptly as in the case of pure BEC. This means that the condensate density is non-zero for all temperatures due to the relation $M_{stag}^2=(g\mu_B)^2 N_0$. Thus, one may conclude that the phase transition of triplons from BEC into the normal phase, in general, is not of second order. In fact, it is a smeared phase transition, so that there is no fixed temperature where the order parameter exactly would be equal to zero. Below we briefly outline the methodology we used to evaluate magnetizations and number fluctuations. \subsection{Methodology} It has been shown that \cite{Sirker1, ouraniz2part1}, the reason of this phenomena is the existence of exchange (EA) and Dzyaloshinsky-Moriya (DM) anisotropies. The effective Hamiltonian of a triplon gas can be presented as the sum of ``isotropic'' and ``anisotropic'' terms \cite{ ouraniz2part1, ouraniz2part2} \begin{subequations} \begin{align} &{\cal H}=H_{iso}+H_{aniso}, \label{eq:H}\\ &H_{iso}=\int d{\bf r}\left[\psi^{\dagger}({\bf r}) (\hat{K}-\mu) \psi({\bf r})+ \frac{U}{2} (\psi^{\dagger}({\bf r})\psi({\bf r}))^2\right], \label{eq:H1}\\ &H_{aniso}=H_{EA}+ H_{DM},\label{eq:H2}\\ &H_{EA}=\frac{\gamma}{2} \int d{\bf r} \left[\psi^{\dagger}({\bf r})\psi^{\dagger}({\bf r}) + \psi({\bf r})\psi({\bf r}) \right], \label{eq:H3} \\ &H_{DM}= i\gamma'\int d{\bf r} \left[\psi({\bf r})-\psi^{\dagger}({\bf r}) \right], \label{eq:H4} \end{align} \label{eq:Htot} \end{subequations} where $\psi({\bf r})$ is the bosonic field operator, $U, \gamma, \gamma'$ are the interaction strengths ($U\geq 0, \gamma \geq 0, \gamma' \geq 0$) and $\hat{K}$ is the kinetic energy operator which defines the bare triplon dispersion $\varepsilon_k$ in momentum space. The integration is performed over the unit cell of the crystal with corresponding momenta defined in the first Brillouin zone \cite{ouraniz1}. The linear Hamiltonian, an external source, in Eq.\,\eqref{eq:H4} corresponds to a simple case when singlet-triplet mixing is neglected and DM vector is chosen as $D \parallel x$ and $H\parallel z$ \cite{Sirker1}. Thus, once the Hamiltonian is given, one first separates fluctuations as $\psi=\xi\sqrt{\rho_0}+\tilde{\psi}$, where $\xi=e^{i\Theta}$ and $\rho_0$ are the phase of the condensate wave function and its magnitude, respectively; and then introducing second quantization, $\tilde{\psi}=\sum_ke^{i{\bf k}{\bf r}}a_k$, $\tilde{\psi}^{\dagger}=\sum_ke^{-i{\bf k}{\bf r}}a_k^{\dagger}$, makes an attempt to diagonalize the Hamiltonian $H$ with respect to creation ($a^{\dagger}$) and annihilation ($a$) operators. As a result, analytical expressions for quasiparticle (bogolon) dispersion $E_k$ and some other quantities may be obtained. In the present work, we shall take into account anomalous averages $\sigma=\sum_{k}\sigma_{k}=\frac{1}{2}\sum_k \left(\langle a_{k}a_{-k}\rangle+\langle a_{k}^{\dag}a_{-k}^{\dag}\rangle\right) $ ($\sigma$-anomalous density) based on Hartree-Fock-Bogoliubov approach, which was neglected in \cite{Sirker1}. This allows one to obtain continuous magnetization across the BEC transition, which would be discontinuous otherwise, in the so-called Hartree-Fock-Popov (HFP) approximation with $\sigma=0$ \cite{ourANN}. In order to get more information about thermodynamics of the system we exploit the grand canonical thermodynamic potential $\Omega$, which may be evaluated in the path integral formalism \cite{cooper,andersen,ouryee,klbookfi}. This will be convenient to study the modification of the condensate wave function, entropy $S=-(\partial\Omega/\partial T)$, heat capacity $C_H=T(\partial S/\partial T)$, magnetization $M=-(\partial\Omega/\partial H)$ as well as magnetization fluctuations given by Eq.\,\eqref{flucM}. For the convenience of a reader the detailed calculations and the explicit expressions for these quantities are moved to the Appendix. \section{Results} Clearly, to obtain realistic numerical results one should find optimal input parameters of the Hamiltonian. The effective Hamiltonian in \eqref{eq:Htot} has mainly the following input parameters: $U, \gamma$ and $\gamma'$. Unfortunately, their optimal values have been fairly known in the literature \footnote{The set of parameters, proposed by Sirker et.al. \cite{Sirker1} could not be reliable, since in their HFP approximation they didn't take into account the anomalous density $\sigma$}. There, using the method of least squares in data fitting we have found optimal values as $U=367$K, $\gamma=0.05$K and $\gamma'=0.001$K, corresponding to the best description of experimental data on magnetizations \cite{tanaka, tanaka1,recenttan} of TlCuCl$_3$ as it is shown in Figs. 2. This set of parameters can be considered as one of our main results, since it may be used in theoretical description of spin-gapped magnets such as TlCuCl$_3$ and KCuCl$_3$. \begin{figure}[h]% \centering \begin{minipage}[H]{0.45\linewidth} \center{\includegraphics[width=1.25\textwidth]{fig2a.pdf}} \end{minipage} \hfill \begin{minipage}[H]{0.5\linewidth} \center{\includegraphics[width=1.25\textwidth]{fig2b.pdf}} \end{minipage} \caption {Uniform (a) and staggered (b) magnetizations for TlCuCl$_3$, $H // b$. Solid and dashed lines correspond to the present approximation and approximation in Ref.\cite{Sirker1}, respectively. Experimental data are taken from Ref.\cite{tanaka,recenttan}. Following set of input parameters are used: $U=367$K, $\gamma=0.05$K and $\gamma'=0.001$K} \label{Fig2} \end{figure} Now, we are on the stage of studying magnetic fluctuations and heat capacity at constant external magnetic field ($C_H$) in TlCuCl$_3$. In Fig.\ref{Fig3} we present the fluctuations in magnetization (solid lines) and heat capacity (dashed lines) for TlCuCl$_3$ at $H=9$ T (Fig.\ref{Fig3}a) and $H=10$ (Fig.\ref{Fig3}b). It is seen that near phase transition the heat capacity is rather smeared, while the fluctuations in magnetizations has a sharp maximum. Therefore, e.g. $T\simeq5.4$K may be considered as a critical temperature of this crossover for $H=9$T. It is also seen that $\Delta M$ increases at high temperatures as it is expected from the Eq.\,\eqref{flucM}. On the other hand, it should be noted that the fluctuation vanishes at exactly $T=0$, owing to the relations $\Delta \hat{M}\sim \Delta \hat{N}$ and $\Delta\hat{N}\mid_{T=0}=0$ \cite{yuksyms}. \begin{figure}[h]% \centering \begin{minipage}[H]{0.45\linewidth} \center{\includegraphics[width=1.25\textwidth]{fig3a.pdf}} \end{minipage} \hfill \begin{minipage}[H]{0.45\linewidth} \center{\includegraphics[width=1.25\textwidth]{fig3b.pdf}} \end{minipage} \caption {The fluctuations in magnetization (solid lines) and the heat capacity (dashed lines) for TlCuCl$_3$ at $H=9$\,T (a) and $H=10$\,T (b), respectively. It is seen that, in the region of phase transition the heat capacity is smeared, but the fluctuation has a sharp maximum at a definite temperature, which may be interpreted as a critical temperature of such crossover.} \label{Fig3} \end{figure} In some cases, a crossover has its precursor effect. For example, BCS-BEC crossover includes a pseudogap region \cite{nature}. However, there are no precursor effects in spin-gapped quantum magnets under consideration. Note that, our general assumption on the relation between the critical temperature and maximum of fluctuations is in good agreement with pair fluctuation approximation \cite{prb66} where the critical temperature corresponds to the pole of pair fluctuation parameter. \section{Discussions and Conclusion} In this work, we looked at the problem of defining a critical temperature in a crossover from the BEC phase to the normal phase by studying the temperature dependence of magnetization of spin-gapped quantum magnets described by BEC of triplons. We have calculated the heat capacity $C_H$ at constant field and fluctuations in magnetization using the Hartree-Fock-Bogouliubov approximation and found optimized parameters of the Hamiltonian of triplon gas. In the region of phase transition, the heat capacity $C_H$ is smeared out due to the Dzyaloshinsky-Moriya interaction. The sharp maximum of the fluctuations in the magnetization is identified as the critical temperature of the crossover. We found that there is no anomalous fluctuation in the condensed fraction. This is simply because in the mean-field approximation one always uses Bogoliubov shift $ {\psi}= \sqrt{\rho_0}+ { {\tilde{\psi}}}$ where $ {\psi}$ and ${\tilde{\psi}}$ are operators, however, the first term here is not an operator, but just a function. Thus, from the definition of the dispersion one gets $ \Delta ^2 (\rho_0)=\langle\rho_{0}^{2}\rangle - \langle\rho_{0}\rangle^2 =0$. We have also found optimized values of coupling constants, $U$, $\gamma$ and $\gamma'$ for TlCuCl$_3$, which are the strengths of the contact triplon-triplon interaction, exchange anisotropy and DM interactions, respectively. These parameters may be used to further study of the physical properties of this material. We have shown that, for the crossover phase transition the heat capacity is smeared, while, the number fluctuation, which is proportional to the magnetization has a maximum. This point may be interpreted as a critical temperature of the crossover. It would be interesting to study the number (or density) fluctuations for other smeared phase transitions such as BCS-BEC \cite{Burovski,Klimin} and magnetic transitions in sodium rich materials, e.g., Na$_x$CoO$_2$ \cite{baran}. It is beyond the scope of the present paper to have a detailed discussion of BCS to BEC, thus the reader is referred to recent lectures by Zwerger \cite{wilhelm2}. There are also smeared phase transitions due to disorder in binary alloys. Such materials consist of substances A and B. Material A is in the magnetic phase while material B is in the paramagnetic phase. The smeared phase transition, as well as quantum ones, from a nonmagnetic to a magnetic phase can be tuned by substituting magnetic atoms A for nonmagnetic atoms B in the binary alloy A$_{1-x}$B$_{x}$ \cite{nozadze}. It would be interesting to study fluctuations in such materials, to see if they have a maximum at some value of $x$ or temperature. \acknowledgements We are indebted to participants of the conference ICSM-21 for useful discussions and comments. This work is supported by the Ministry of Innovative Development of the Republic of Uzbekistan and the Scientific and the Technological Research Council of Turkey (TUBITAK) under Grant No.\,119N689. \section*{Appendix: Derivation of thermodynamic quantities} \defA.\arabic{equation}{A.\arabic{equation}} \setcounter{equation}{0} \begin{eqnarray} \psi(\mathbf{r},t)= i\sqrt{\rho_0} + \tilde{\psi}(\mathbf{r},t), \quad \psi^\dagger(\mathbf{r},t)= -i\sqrt{\rho_0} + \tilde{\psi}^\dagger(\mathbf{r},t) \label{eq:psi} \end{eqnarray} where $\rho_0$ is density of condensed particles and $\tilde{\psi}$ is fiedd operator of uncondensed particles. One of our main goals is to find an analytical expression for thermodynamic potential $\Omega$, which contains almost all the information about the equilibrium statistical system \cite{yuktutorial}. For this purpose we use the path integral formalism where $\Omega$ is given by \begin{equation} \Omega = - T\ln{Z}, \qquad Z = \int D \tilde\psi^\dagger D \tilde\psi e^{-A[\psi,\psi^\dagger]}. \label{eq:entropy} \end{equation} Here, $A[\psi,\psi^\dagger]$ is the action and its form can be chosen by the total Hamiltonian (\ref{eq:Htot}) as follows \begin{eqnarray} &A[\psi,\psi^\dagger]= \nonumber \\ &\int_{0}^{\beta}d{\tau} d\mathbf{r}\left( \psi^\dagger K_{id} \psi + \frac{U}{2}\left( \psi^\dagger\psi\right)^2 +\frac{\gamma}{2}\left(\psi^\dagger\psi^\dagger+ \psi\psi\right) +i \gamma'\left( \psi-\psi^\dagger\right)\right) \label{eq:entropy1} \end{eqnarray} where $K_{id}=\frac{\partial}{\partial\tau}-\mathbf{\hat{K}}-\mu$. For simplicity, we dropped arguments of wave function operators. In Eq.\,\eqref{eq:entropy1} the fluctuating fields $\tilde{\psi}({\bf r},\tau)$ and $\tilde{\psi}^\dagger({\bf r},\tau)$ satisfy the bosonic commutation relations and periodic in $\tau$ with period $\beta=1/T$. Clearly, this path integral can not be evaluated exactly, so an approximation is needed. In the present work, we used an approach, which is called the variational perturbation theory \cite{stancu}, or $\delta$-expansion method. To apply this method, we maked following replacements in the action (\ref{eq:entropy1}): $U\rightarrow \delta U$, $\gamma\rightarrow \delta \gamma$, $\gamma'\rightarrow \sqrt{\Delta^2}\gamma' $. And added to the action the term \begin{eqnarray} A_\Sigma= (1-\delta)\int d{\tau} d\mathbf{r}\left[ \Sigma_n\tilde{\psi}^\dagger\tilde{\psi} + \frac{1}{2} \Sigma_{an}\left(\tilde{\psi}^\dagger\tilde{\psi}^\dagger+\tilde{\psi}\tilde{\psi} \right) \right] \label{eq:Asigma} \end{eqnarray} where the variational parameters $\Sigma_n$ and $\Sigma_{an}$ may be interpreted as the normal and anomalous self-energies, respectively. They are defined as \cite{andersen}: \begin{subequations} \begin{align} &\Sigma_{n}=(\Pi_{11}(0,0)+\Pi_{22}(0,0))/2,\\ &\Sigma_{an}=(\Pi_{11}(0,0)-\Pi_{22}(0,0))/2, \\ &\Pi_{ab}(\omega_{n},\mathbf{k})=(G(\omega_{n},\mathbf{k}))^{-1}_{ab}-(G^{0}(\omega_{n},\mathbf{k}))^{-1}_{ab} \label{eq:9} \end{align} \end{subequations} and the Green functions $G(\omega_n, \mathbf{k})$, $G^{0}(\omega_n, \mathbf{k})$ are given further below. Now, we insert (\ref{eq:psi}) into the action (\ref{eq:entropy1}) and divide this action into five parts according to orders of $\tilde{\psi}$ (see \cite{ouraniz2part1} for more detailed calculations) \begin{equation} A=A_0+A_1+A_2+A_3+A_4. \end{equation}\normalsize Then, writing $\tilde{\psi} $, $\tilde{\psi}^\dagger$ in Cartesian form as \begin{subequations} \begin{align} \tilde{\psi}&= \frac{1}{\sqrt{2}}(\psi_1 + i \psi_2).\\ \tilde{\psi}^\dagger&= \frac{1}{\sqrt{2}}(\psi_1 - i \psi_2). \end{align} \end{subequations} the form of grand partition function $Z$ can be obtained in (\ref{eq:entropy}) and that helps us to calculate $\Omega$. The perturbation scheme may be considered as an expansion in powers of $\delta$ by using the Greens functions \begin{eqnarray} G_{ab}(\tau, \mathbf{r}; \tau', \mathbf{r}')=\frac{1}{\beta}\sum_{n,k} e^{i\omega_n(\tau-\tau')+i\mathbf{k}(\mathbf{r}-\mathbf{r}')} G_{ab}(\omega_n, \mathbf{k}) \label{eq:Green1} \end{eqnarray} $(a, b= 1,2)$, where $\omega_n=2\pi nT$ is the $n$th bosonic Matsubara frequency and \begin{eqnarray} G_{ab}(\omega_n, \mathbf{k}) = \frac{1}{\omega_n^2 + E_k^2}\begin{bmatrix} \epsilon_k+X_2 & \omega_n \\ -\omega_n & \epsilon_k+X_1 \end{bmatrix}. \label{eq:Gab} \end{eqnarray} In Eq.\,\eqref{eq:Gab} $E_k$ corresponds to the dispersion of quasi-particles (Bogolons) \begin{eqnarray} E_k = \sqrt{\epsilon_k+X_1}\sqrt{\epsilon_k+X_2}. \label{eq:energy} \end{eqnarray} where is the bare dispersion of triplons and the self-energies $X_1$ and $X_2$ which are given by \begin{subequations} \begin{align} X_1 =\Sigma_{n}+\Sigma_{an}-\mu, \label{eq: x1}\\ X_2 = \Sigma_{n}-\Sigma_{an}-\mu. \label{eq: x2} \end{align} \end{subequations} Finally, the following expression for $\Omega$ can be obtained including EA and DM interactions \begin{subequations} \label{eq:omega} \begin{equation} \Omega= \Omega_{ISO} + \Omega_{EA} + \Omega_{DM}. \end{equation} \begin{align} \Omega_{ISO}= -\mu \rho_0 + \frac{U\rho_0^2}{2} +\frac{1}{2}\sum_k(E_k-\epsilon_k) +T \sum_k \ln (1-e^{-\beta E_k})\nonumber \\ \qquad\qquad\qquad\qquad + \frac{1}{2}(\beta_1 B +\beta_2 A) +\frac{U}{8}(3A^2+3B^2+2AB), \label{eq:omega1}\\ \Omega_{EA}= -\gamma \rho_0 +\frac{\gamma}{2}(B-A), \label{eq:omega2} \\ \Omega_{DM} = -2\gamma'\sqrt{\rho_0} -\frac{\gamma'^2}{X_2}. \label{eq:omega3} \end{align} \end{subequations} where \begin{subequations} \begin{align} \beta_1& =-\mu -X_1 + U\rho_0\\ \beta_2& = - \mu -X_2 + 3U\rho_0\\ A&= T\sum_{k,n} \frac{\epsilon_k+X_1}{\omega_n^2+E_k^2}= \sum_k W_k \frac{\epsilon_k+X_1}{E_k}\\ B&= T\sum_{k,n} \frac{\epsilon_k+X_2}{\omega_n^2+E_k^2}= \sum_k W_k \frac{\epsilon_k+X_2}{E_k} \end{align} \end{subequations} in which $W_k =1/2 +1/(e^{\beta E_k}-1)$ and $$\sum_{n, \mathbf{k}}\equiv\sum_{n=-\infty}^{n=\infty}\int d\mathbf{k}/(2\pi)^3 $$. The stability condition in the equilibrium requires, $X_1 \geq 0$, $X_2 \geq 0$ as well as for the grand thermodynamic potential satisfies that \begin{eqnarray} \frac{\partial \Omega}{\partial X_1}=0, \quad \frac{\partial \Omega}{\partial X_2}= 0, \quad \frac{\partial \Omega}{\partial \rho_0}= 0. \label{eq: partial} \end{eqnarray} This condition leads to the following equations for $X_1$ $X_2$ and $\rho_0$ \begin{subequations} \begin{align} X_1&= 2U\rho + U\sigma -\mu - U\rho_0+\gamma +\frac{2\gamma'^2 D_1}{X_2^2} \label{eq:X1}\\ X_2&=2U\rho - U\sigma -\mu +U\rho_0-\gamma -\frac{2\gamma'^2 D_2}{X_2^2} \label{eq:x2}\\ \mu&=U (\rho_0 +2\rho_1)-U\sigma - \gamma -\frac{\gamma'}{\sqrt{\rho_0}} \label{eq:mu} \end{align} \end{subequations} where \begin{subequations} \begin{align} A_1'&= \frac{\partial A}{\partial X_1}= \frac{1}{8}\sum_{k}\frac{(E_k W_k' + 4W_k)}{E_k}\\ A_2'&= \frac{\partial A}{\partial X_2}=\frac{1}{8}\sum_{k}\frac{(\epsilon_k + X_1)^2(E_k W_k' - 4W_k)}{E_k^3}\\ B_1'&= \frac{\partial B}{\partial X_1} = \frac{1}{8}\sum_{k}\frac{(\epsilon_k + X_2)^2(E_k W_k' - 4W_k)}{E_k^3}\\ D_1&=\frac{A_1'}{\bar{D}}; \quad D_2=\frac{B_1'}{\bar{D}}; \quad \bar{D}=A_1'^2 - A_2'B_1'\\ W_k'& =\beta (1-4W_k^2)=\frac{-\beta}{\sinh^2(\beta E_k/2)}. \label{eq:parameter} \end{align} \end{subequations} Now, solving these (\ref{eq:X1}), (\ref{eq:x2}) and (\ref{eq:mu}) with respect to $X_1$, $X_2$ and $\rho_0$ one can find other physical quantities. Additionally, the crtical temperature $T_c$ is also found with these equations. However, application of these equations for $T>T_c$ and $T<T_c$ regions should be written separately \cite{ouraniz2part2}. For a homogeneous system the normal and anomalous densities are defined as $\rho_1 = \int \langle\tilde{\psi}^\dagger(r)\tilde{\psi}(r)\rangle d\mathbf{r}$ and $\sigma=\int \langle\tilde{\psi}(r)\tilde{\psi}(r)\rangle d\mathbf{r}$ respectively, and may be calculated using the Green functions given in Eqs.\,\eqref{eq:Green1} and \eqref{eq:Gab}. As a result, they take the following explicit form \begin{subequations} \begin{align} \rho_1 &= \frac{A+B}{2} = \sum_k\left[\frac{W_k(\epsilon_k+X_1/2 +X_2/2)}{E_k} -\frac{1}{2} \right] \equiv\sum_k \rho_{1k}, \label{eq:17a}\\ \sigma &= \frac{B-A}{2} =\frac{(X_2-X_1)}{2} \sum_k \frac{W_k}{E_k} \equiv\sum_k \sigma_{k}. \label{eq:17b} \end{align} \end{subequations} The total density of triplons per dimer is the sum of condensed and uncondensed fractions: \begin{eqnarray} \rho =\frac{N}{V}=\rho_0 +\rho_1. \end{eqnarray} Now, total and staggered magnetizations can be calculated by \begin{eqnarray} M=g\mu_B\rho, \quad M_{stag}=g\mu_B\sqrt{\rho_0}. \end{eqnarray}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{1. Introduction} The problem of the self-force, or radiation reaction, of a non-uniformly moving charged particle has a long history. The Abraham-Lorentz evaluation of the self-force \cite{lore}, dating to the early 1900's, assumed the particle's charge distribution to be spherically symmetric, of radius $a$, and obtained the self-force as an expansion in powers of $a$ (including the power $a^{-1}$). Dirac \cite{dir} was the first to obtain the radiation-reaction force assuming at the outset the particle to be point-like, in a relativistic calculation that used conservation of energy and momentum, and, unlike the Abraham-Lorentz evaluation, both the retarded and advanced self-fields of the charge. For a recent survey of the history of this topic see Ref. \cite{mcd}. In this paper, we present a novel calculation of the self-force of a point charge, using only its retarded self-field. As in the account of Jackson of the Abraham-Lorentz evaluation (\cite{jac}, Sec. 16.3), our starting points are (i) the conservation of momentum in the system consisting of a single charged particle and an external electromagnetic field, \begin{align}\label{ecc1} \frac{d \vec{p}_{\rm{mech}}}{dt}+ \frac{d \vec{G}}{dt}=0, \end{align} where $\vec{p}_{\rm{mech}}$ is the purely mechanical, or ``bare'', momentum of the particle, and $\vec{G}$ is the momentum of the total electromagnetic field (external plus the particle's self-field), and (ii) the relation \begin{align}\label{ecc2} \frac{d \vec{p}_{\rm{mech}}}{dt}= \vec{F}_{\rm{ext}}+ \vec{F}_{\rm{s}}. \end{align} Here, $\vec{F}_{\rm{ext}}$ and $\vec{F}_s$ are the Lorentz forces on the charged particle due to the external field and the particle's self-field, respectively (in the original calculations of Abraham and Lorentz, the particle's momentum was taken to be entirely electromagnetic, entailing that $\vec{p}_{\rm{mech}}=0$, and so the total force $\vec{F}_{\rm{ext}}+ \vec{F}_s$ was assumed to vanish). Equations (\ref{ecc1}) and (\ref{ecc2}) can be combined to write \begin{align}\label{ecc3} -\frac{d\vec{G}}{dt}= - \frac{d}{dt}(\vec{G}-\vec{G_s}) - \frac{d\vec{G}_s}{dt}= \vec{F}_{\rm{ext}}+ \vec{F}_{\rm{s}}, \end{align} where $\vec{G}_s$ is the electromagnetic momentum of the self-field only. Equation (\ref{ecc3}) suggests expressing the self-force $\vec{F}_{\rm{s}}$ and the external force $\vec{F}_{\rm{ext}}$ separately as \begin{align} &\vec{F}_{\rm{s}}= - \frac{d\vec{G}_{\rm{s}}}{dt},\\ &\vec{F}_{\rm{ext}}=- \frac{d}{dt}(\vec{G}-\vec{G}_{\rm{s}}). \end{align} We shall calculate the self-force $\vec{F}_{\rm{s}}$ using the {\it ansatz} (4), assuming for simplicity that the particle moves on a rectilinear trajectory. Remarkably, the calculation will turn out to yield the exact space part of the radiation-reaction 4-force of Dirac's equation \cite{dir} (now usually called the Lorentz-Abraham-Dirac (LAD) equation to reflect fully its origins), with no need of renormalization or any other explicit removal of infinities. \section*{2. Calculation of the self-force} \subsection*{2.1 Retarded self-fields} We consider a point particle of charge $e$, moving on a rectilinear trajectory $w(t)$, say along the $x$-axis. Its charge and current densities are then given by \begin{align}\label{ec1} \rho(\vec{r},t)=e \delta (\vec{r} -w(t) \hat{\vec{x}}), \; \vec{j}(\vec{r},t)= \rho(\vec{r},t) \dot{w}(t) \hat{\vec{x}}, \end{align} where overdot denotes differentiation with respect to time, and $\delta$ is the Dirac delta function. We assume that the charge density can be expanded as the Taylor series: \begin{align}\label{ec2} \rho(\vec{r},t)= e\sum_{n=0}^{\infty} \frac{(-1)^n}{n!} w^n(t) \partial_x^n \delta(\vec{r}), \end{align} where $ \partial_x^n$ denotes the partial differentiation with respect to $x$ of order $n$; $w^n(t) \equiv \left(w(t)\right)^n$. The Taylor-series expansion suggests that we consider only small oscillations about an equilibrium position, but our final results will not depend on any amplitude of $w(t)$. Using Eqs.\ (\ref{ec1}) and (\ref{ec2}), the retarded scalar and vector potentials of the moving charge are calculated to be: \begin{align}\label{ec3} \phi( \vec{r}, t)&= \int d\vec{r}\,' \; \frac{ \rho( \vec{r}\,', t- |\vec{r}- \vec{r}\,'|/c)}{ |\vec{r}- \vec{r}\,'|}\nonumber \\ &= e \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \partial_x^n \frac{w^n(t-r/c)}{r}, \end{align} \begin{align}\label{ec4} \vec{A}( \vec{r}, t)&= \frac{1}{c} \int d\vec{r}\,' \; \frac{ \vec{j}( \vec{r}\,', t- |\vec{r}- \vec{r}\,'|/c)}{ |\vec{r}- \vec{r}\,'|}\nonumber \\ &= \frac{e}{c} \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \partial_x^n \frac{ w^n(t-r/c)\dot{w}(t-r/c)}{r} \hat{\vec{x}}. \end{align} Here, we used the facts that $\partial_{x'}^n \delta(\vec{r}\,') = \delta^{(n)}(x') \delta(y')\delta(z')$ and \begin{equation} \int_{-\infty}^{\infty} dx' f(\vec{r}- \vec{r}\,') \delta^{(n)}(x') = (-1)^n \partial_{x'}^n f(\vec{r}- \vec{r}\,')|_{x'=0}= \partial_x^n f(\vec{r}- \vec{r}\,')|_{x'=0}. \end{equation} The $y$- and $z$-components of the retarded electric and magnetic self-fields of the charge, \begin{align} \vec{E}(\vec{r},t)= - \nabla \phi(\vec{r},t) - \frac{1}{c}\frac{\partial}{\partial t} \vec{A}(\vec{r},t), \; \; \vec{B}(\vec{r},t)= \nabla \times \vec{A}(\vec{r},t), \end{align} are obtained using Eqs.\ (\ref{ec3}) and (\ref{ec4}) to be \begin{align}\label{ec7} &E_{y,z}(\vec{r},t) = e \sum_{n=0}^{\infty} \frac{(-1)^{n+1}}{n!} \partial_{y,z} \partial_x^n \frac{w^n(t-r/c)}{r},\\ &B_{y,z}(\vec{r},t) = \frac{e}{c} \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \tilde{\partial}_{z,y}\partial_x^n \frac{w^n(t-r/c) \dot{w}(t-r/c)}{r}, \end{align} where $\tilde{\partial_z}=\partial_z$ and $\tilde{\partial_y}=-\partial_y$. \subsection*{2.2 Self-field momentum} The self-fields create an electromagnetic-self-field momentum \begin{align}\label{ec9} \vec{G}_s(t)= &\frac{1}{4\pi c} \int d \vec{r} \;\vec{E}(\vec{r},t) \times \vec{B}(\vec{r},t) \nonumber \\ =& \frac{1}{4\pi c} \int d \vec{r}\; \left[ E_y(\vec{r},t) B_z(\vec{r},t)- E_z(\vec{r},t) B_y(\vec{r},t) \right] \hat{\vec{x}}; \end{align} the $y$- and $z$-components of $\vec{G}_s(t) $ can be seen to vanish already on account of symmetry (the particle is moving only along the $x$-axis). We evaluate the integral in (\ref{ec9}) in the momentum space: \begin{align}\label{ec10} \vec{G}_s(t)= \frac{1}{4\pi c} \frac{1}{8\pi^3} \int d \vec{k} \left[ E_y(\vec{k},t) B_z(-\vec{k},t) - E_z(\vec{k},t) B_y(-\vec{k},t) \right] \hat{\vec{x}}, \end{align} where the Fourier transforms are defined by \begin{equation}\label{ec11} f(\vec{k},t) = \int d \vec{r} f(\vec{r},t) e^{i \vec{k} \cdot \vec{r}}. \end{equation} Using Eqs.\ (\ref{ec7}) and (\ref{ec11}), we express the Fourier transforms $E_{y,z}(\vec{k},t)$ as \begin{align}\label{ec12} E_{y,z}(\vec{k},t)&= e\sum_{n=0}^{\infty} \frac{1}{n!} \int d \vec{r} \left( \partial_{y,z} \partial_x^n e^{i \vec{k} \cdot \vec{r}} \right) \frac{w^n(t-r/c)}{r} \nonumber \\ &=e \sum_{n=0}^{\infty} \frac{i^{n+1}}{n!} k_{y,z} k_x^n \int_0^{\infty} dr \; r \; w^n(t-r/c) \int d \Omega \;e^{ikr\cos \theta} \nonumber \\ & =4\pi e \sum_{n=0}^{\infty} \frac{i^{n+1}}{n!} \frac{k_{y,z} k_x^n}{k} \int_0^{\infty} dr \sin (kr) w^n(t-r/c), \end{align} where, in the $1^{\rm{st}}$ line, we integrated by parts; $k_{x,y,z}$ are the Cartesian components of $\vec{k}$ in the momentum space and $k=| \vec{k}|$. Similar calculation yields, using Eq. (13): \begin{align}\label{ec13} B_{y,z}(-\vec{k},t)= \frac{4\pi e}{c} \sum_{n=0}^{\infty} \frac{(-i)^{n+1}}{n!} \frac{\tilde{k}_{z,y} k_x^n}{k} \int_0^{\infty} dr \sin(kr) w^n(t-r/c) \dot{w}(t-r/c), \end{align} where $\tilde{k}_z=-k_z$ and $\tilde{k}_y=k_y$. Using Eqs.\ (\ref{ec12}) and (\ref{ec13}), the self-field momentum (\ref{ec10}) is now calculated as follows: \begin{align}\label{ec14} \vec{G}_s(t) = &\frac{e^2}{2 \pi^2 c^2} \sum_{n=0}^{\infty} \sum_{p=0}^{\infty} \frac{(-1)^{p+1} i^{n+p+2}}{n!p!} \int_0^{\infty} dk \int d \Omega_{\vec{k}} \; (k_y^2 + k_z^2 )k_x^{n+p} \nonumber \\ & \times \int_0^{\infty} dr \sin(kr) w^n(t-r/c) \int_0^{\infty} dr' \sin(kr') w^p(t-r'/c) \dot{w}(t-r'/c) \hat{\vec{x}} \nonumber \\ = &\frac{4e^2}{\pi c^2} \sum_{\kappa =0}^{\infty} \sum_{n=0}^{2\kappa} \frac{(-1)^{n+\kappa}}{n! (2\kappa -n)! (2\kappa+1)(2\kappa+3)} \int_0^{\infty} dk \; k^{2\kappa+2} \nonumber \\ &\times \int_0^{\infty} dr \int_0^{\infty} dr' \sin(kr) \sin(kr') w^n(t-r/c) w^{2\kappa-n}(t-r'/c) \dot{w}(t-r'/c) \hat{\vec{x}} \nonumber \\ = &\frac{2e^2}{c} \sum_{\kappa =0}^{\infty} \sum_{n=0}^{2\kappa}\frac{(-1)^{n+1}}{n! (2\kappa-n+1)! (2\kappa+1)(2\kappa+3) c^{2\kappa+3}} \nonumber \\ &\times \int_0^{\infty} dr \; w^n(t-r/c) \frac{d^{2\kappa+3}}{dt^{2\kappa+3}} w^{2\kappa -n+1}(t-r/c) \hat{\vec{x}}. \end{align} Here, in the $2^{\rm{nd}}$ equality, we used the result \begin{align} \int d \Omega_{\vec{k}} k_{y,z}^2 k_x^m = \left\lbrace \begin{array}{l} 4 \pi k^{m+2}/(m+1)(m+3) , \; m \; \mbox{even}\\ 0 , \; m \; \mbox{odd} \end{array} \right. , \end{align} and changed the summation indices using $n+p=2\kappa$ in view of the fact that only the terms with $n+p$ even contribute to $\vec{G}_s(t)$; in the last equality, we used the generalized-function identity\footnote{Relation (21) can be obtained from $\int_0^{\infty} dk \;k^m \cos(kx)=\pi i^m \delta^{(m)}(x)$, $m$ even \big(see Ref.\ \cite{ligh}, p.\ 43, Table 1; note that the Fourier transform of $f(x)$ is there defined as $g(y)=\int_{-\infty}^{\infty} dx \; f(x)\;\mbox{exp}(-2\pi i y x)$, and $\sin(kx) \sin(ky) = \frac{1}{2}\cos(k(x-y))- \frac{1}{2}\cos(k(x+y))\big). $} \begin{align}\label{ec16} \int_0^{\infty}dk\; k^m \sin(kx) \sin(ky) =\frac{\pi}{2} i^m \left[ \delta^{(m)}(x-y) - \delta^{(m)}(x+y) \right],\; m \;\; \rm{even}, \end{align} where only the first term contributed in the radial integration, and then the identity \begin{align}\label{eq17} \frac{d^m f(t-r/c)}{dr^m} = \left( - \frac{1}{c} \right)^m \frac{d^m f(t-r/c)}{dt^m}. \end{align} Transforming the integration variable $r$ to $t'=t-r/c$, the self-field momentum (\ref{ec14}) can now be expressed as \begin{align}\label{ec18} \vec{G}_s(t) = - \int_{-\infty}^t dt' F_s(t') \hat{\vec{x}}, \end{align} where $F_s(t')$ (the subscript anticipates relation (4)) is a force given by \begin{equation}\label{aa} F_s(t)=2e^2 \sum_{k=0}^{\infty} \frac{(2k+2)c^{-(2k+3)}}{(2k+1)(2k+3)!} \sum_{n=0}^{2k} {{2k+1}\choose{n}} \left( -w(t) \right)^n \frac{d^{2k+3}w^{2k-n+1}(t)}{dt^{2k+3}}; \end{equation} $ {{2k+1}\choose{n}}=(2k+1)!/(2k+1-n)!n!$ is a binomial coefficient. \subsection*{2.3 Self-force} We now proceed to evaluate the force (\ref{aa}) in closed form. Using the identity \begin{equation} \frac{df(t)}{dt} =-c \frac{df(t-r/c)}{dr}\big{|}_{r=0}, \end{equation} we can write (\ref{aa}) as \begin{equation}\label{bb} F_s(t)=2e^2 \sum_{k=0}^{\infty} \frac{(-1)^{2k+3}(2k+2)}{(2k+1)(2k+3)!} \sum_{n=0}^{2k}{{2k+1}\choose{n}} \left( -w(t) \right)^n \frac{d^{2k+3}w^{2k-n+1}(t-r/c)}{dr^{2k+3}}\big{|}_{r=0}. \end{equation} But \begin{align}\label{cc} &\sum_{n=0}^{2k}{{2k+1}\choose{n}} \left( -w(t) \right)^n w^{2k-n+1}(t-r/c)\nonumber\\ & \; \;\;=w^{2k+1}(t-r/c) \sum_{n=0}^{2k}{{2k+1}\choose{n}} \left(- \frac{w(t)}{w(t-r/c)} \right)^n \nonumber\\ &\; \; \;=w^{2k+1}(t-r/c) \left[ \left( 1-\frac{w(t)}{w(t-r/c)} \right)^{2k+1}- \left( \frac{-w(t)}{w(t-r/c)} \right)^{2k+1} \right] \nonumber\\ &\; \;\;=\left(w(t-r/c)-w(t) \right)^{2k+1}+w^{2k+1}(t), \end{align} where we used in the $3^{\rm{rd}}$ line the binomial theorem. Using now (\ref{cc}) in (\ref{bb}), we get \begin{equation}\label{dd} F_s(t)=2e^2 \sum_{k=0}^{\infty}\frac{(-1)^{2k+3}(2k+2)}{(2k+1)(2k+3)!} \frac{d^{2k+3} \left( w(t-r/c)-w(t) \right)^{2k+1}}{dr^{2k+3}}\big{|}_{r=0}, \end{equation} since $d^{2k+3} w^{2k+1}(t)/dr^{2k+3}=0$ for all $k \geq 0$. We can use here for the higher-order derivatives the formula \begin{align} \frac{d^m}{dx^m} f^k(x)= \sum_{j_1+ \dots +j_k=m} {{m}\choose{j_1, \dots, j_k}} f^{(j_1)}(x) \dots f^{(j_k)}(x), \end{align} which is the differentiation analogue of the multinomial theorem. This transforms (\ref{dd}) into \begin{align}\label{ee} F_s(t)=&2e^2 \sum_{k=0}^{\infty} \frac{(-1)^{2k+3}(2k+2)}{(2k+1)(2k+3)! } \sum_{j_1+ \dots +j_{2k+1}=2k+3} {{2k+3}\choose{j_1, \dots, j_{2k+1}}} \nonumber \\ &\times \frac{d^{j_1} \left( w(t-r/c) - w(t) \right)}{dr^{j_1}}\big{|}_{r=0} \dots \frac{d^{j_{2k+1}} \left( w(t-r/c) - w(t) \right)}{dr^{j_{2k+1}}}\big{|}_{r=0}. \end{align} Utilizing that \begin{align} \frac{d^j \left( w(t-r/c) -w(t) \right)}{dr^j}\big{|}_{r=0} = \left(1- \delta_{j,0} \right) \left( - \frac{1}{c} \right)^j w^{(j)}(t), \end{align} Eq.\ (\ref{ee}) can be written more simply as \begin{equation}\label{ff} F_s(t) = 2e^2 \sum_{k=0}^{\infty} \frac{2k+2}{(2k+1) c^{2k+3}} \sum_{ \begin{array}{c} j_1+ \dots + j_{2k+1}=2k+3\\ j_1, \dots, j_{2k+1} > 0 \end{array}} \frac{w^{(j_1)}(t)}{j_1!} \dots \frac{w^{(j_{2k+1})}(t)}{j_{2k+1}!}. \end{equation} We note that, because of the summation constraints on the orders $j_i$ of the derivatives $w^{(j_i)}(t)$, only the factors $\dot{w}^{2k}(t) \dddot{w}(t)$ and $ \dot{w}^{2k-1}(t) \ddot{w}^2(t)$ are allowed in Eq. (\ref{ff}), their numbers being $ {{2k+1}\choose{1}} =2k+1$ and $ {{2k+1}\choose{2}} =(2k+1)k$, respectively. Equation (\ref{ff}) can thus be still more simplified to read \begin{align}\label{ec28} F_s(t)&= 2e^2 \sum_{k=0}^{\infty} \frac{2k+2}{(2k+1) c^{2k+3}} \left[ \frac{2k+1}{3!} \dot{w}^{2k}(t) \dddot{w}(t) + \frac{(2k+1)k}{2! 2!} \ddot{w}^2(t) \dot{w}^{2k-1}(t) \right] \nonumber \\ &= 2e^2 \sum_{k=0}^{\infty} \frac{k+1}{c^{2k+3}} \left( \frac{1}{3} \dot{w}^{2k}(t) \dddot{w}(t) + \frac{k}{2} \dot{w}^{2k-1}(t) \ddot{w}^2(t) \right). \end{align} Using now the results $\sum_{k=0}^{\infty} (k+1) x^k =1/(1-x)^2$ and $\sum_{k=0}^{\infty} (k+1)(k+2) x^{2k+1} = 2x/(1-x^2)^3$, $0 \le x <1$, the series in (\ref{ec28}) can be summed, yielding \begin{align}\label{ec29} F_s(t) = \frac{2e^2}{3c^3} \gamma^4 \ddot{v}(t) + \frac{2e^2}{c^5} \gamma^6 v(t) \dot{v}^2(t), \; \gamma=(1-v^2(t)/c^2)^{-1/2}, \end{align} where we now write the velocity $v(t)$ for $\dot{w}(t)$. This is exactly the space part of the relativistic LAD radiation-reaction force \cite{roh1} \begin{align} \vec{F}_{\rm{LAD}}= \frac{2e^2}{3c^3} \gamma^2\left[ \ddot{\vec{v}} + \frac{3\gamma^2}{c^2} (\vec{v} \cdot \dot{\vec{v} })\;\dot{\vec{v}} + \frac{\gamma^2}{c^2} (\vec{v} \cdot \ddot{\vec{v}}) \; \vec{v} + \frac{3\gamma^4}{c^4} (\vec{v} \cdot \dot{\vec{v}})^2 \vec{v} \right], \end{align} when that is adapted for a rectilinear motion. Returning to Eq.\ (\ref{ec18}), we now see that if the limits $t \rightarrow -\infty$ of $\dot{v}(t)$ and $\ddot{v}(t)$ vanish, i.e., if the particle moved uniformly in the remote past, then \begin{equation}\label{ec30} -\frac{\vec{G}_s(t)}{dt} = F_s(t) \hat{\vec{x}}. \end{equation} In words, the self-force calculated as the negative of the rate of change of the charge's self-field momentum is the LAD radiation-reaction force for the charge's assumed motion. \section*{3. Conclusions} We calculated the time rate of change of the retarded self-field momentum of a point charge moving on a rectilinear trajectory of a remote-past constant velocity, and found that its negative equals the charge's LAD radiation-reaction force. The requisite integration was performed in the momentum space, where, apart from the standard practice of doing the angular integrations first, no renormalization or any other removal of infinities was required (unlike in the traditional treatment, in which the charge has a finite extension $a$ and the self-force contains a term proportional to $1/a$, which diverges in the limit $a \rightarrow 0$). For simplicity, our calculation was performed assuming a rectilinear motion, but we believe that it should be possible to extend it to three-dimensional motion. There are good reasons, among them the existence of runaway, and pre- and post-acceleration solutions of the LAD equation, to regard the very concept of a point charge as unphysical in classical electrodynamics. The equations of motion that admit no unphysical solutions, like the Landau-Lifshitz, Eliezer and Ford-O'Connell equations, assume explicitly or implicitly the charge to have a finite spatial extension of the order of the classical radius corresponding to its mass \cite{ste}. Be it as it may, our result supports the commonly held notion of the LAD equation as the ``exact'' equation of motion of a point charge.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:intro} Machine learning systems depend on both statistical inference procedures and efficient implementations of these procedures. These issues are reflected clearly within a risk minimization framework, in which given a known loss $\loss(\ww;\zz)$ depending on data $\zz$ and parameters $\ww$, the ultimate objective is minimization of the risk $\risk(\ww) \defeq \exx\loss(\ww;\zz)$, where expectation is taken with respect to the data. Since $\risk$ is unknown, the learner seeks to determine a candidate $\wwhat$ based on a limited sample $\zz_{1},\ldots,\zz_{n}$ such that $\risk(\wwhat)$ is sufficiently small, with high probability over the random draw of the sample. Inference is important because $R$ is always unknown, and the implementation is important because the only $\wwhat$ we ever have in practice is one that can be computed given finite time, memory, and processing power. Our problem of interest is binary classification, where $\zz = (\xx,y)$ with inputs $\xx \in \RR^{d}$ and labels $y \in \{-1,1\}$. Parameter $\ww$ shall determine a scoring rule $h(\cdot;\ww)$, where $h(\xx) > 0$ implies a prediction of $y=+1$, and $h(\xx) \leq 0$ implies a prediction of $y=-1$. The classification \textit{margin} achieved by such a candidate is $y \, h(\xx)$, and the importance of the margin in terms of evaluating algorithm performance has been recognized for many years \citep{anthony1999NNTheory,langford2002a}. The work of \citet{koltchinskii2002a} provide risk bounds that depend on the empirical mean of $I\{y\,h(\xx) \leq \gamma\}$, providing useful generalization bounds for existing procedures whose on-sample margin error can be controlled. Intuitively, one might expect that having larger minimum margins on average would lead to better off-sample generalization. However, influential work by \citet{breiman1999a} showed that the problem is not so simple, demonstrating cases in which the margins achieved are higher, but generalization is worse. In response to this, \citet{reyzin2006a} make the important suggestion that it is not merely the location of the margins, but properties of the entire \textit{margin distribution} that are important to generalization. New algorithms based on trying to control the empirical margin distribution, albeit indirectly, were proposed early on by \citet{garg2003a}, who proposed a strategy of optimizing the random projection error, namely $\prr\{ h(\xx)\,\widetilde{h}(\widetilde{\xx}) < 0 \}$, where $\widetilde{h}$ and $\widetilde{\xx}$ are respectively random projections of $h$ and $\xx$ from $d$-dimensional Euclidean space to a $k$-dimensional subspace, where $k \ll d$. The bounds are lucid and are suggestive of practical objective functions, but their analysis requires that the inputs $\xx$ be bounded, namely that they are distributed on the unit ball, $\|\xx\|=1$. More recent work from \citet{zhang2016a} suggests an objective which simultaneously maximizes the mean while minimizing the variance of the empirical margin distribution. Their routines are computationally tractable, but hyperparameter settings are non-trivial, and their risk bounds (in expectation) depend on the expected outcome of a leave-one-out cross-validation procedure, which is not characterized using interpretable quantities, reducing the utility of the bounds. Another natural algorithmic strategy is to construct loss functions using more ``robust'' estimators of the \textit{true} expected margin $\exx y \, h(\xx)$, or related quantities such as the expected hinge loss $\exx \max\{1-y\,h(\xx),0\}$. In this regard the work of \citet{brownlees2015a} is highly relevant, in that sharp, descriptive risk bounds can be obtained for a wide class of learning algorithms, indeed any minimizer of such a loss. The practical downside is that computation is highly non-trivial and no procedures are proposed. The formal downside is that once again $\|\xx\|$ must be bounded for meaningful guarantees. \paragraph{Our contributions} To deal with the limitations of existing procedures highlighted above, the key idea here is to introduce a new convex loss that encourages the distribution of the margin to be tightly concentrated near a certain prescribed level. The procedure is easily implemented using gradient descent, admits formal performance guarantees reflecting both computational cost and optimization error, and aside from the usual cost of gradient computation there is virtually no computational overhead. Two key highlights are: \begin{itemize} \item The proposed algorithm enjoys high-probability risk bounds under moment bounds on $\xx$, and does not require $\|\xx\|$ to be bounded. \item Numerical experiments show how a simple data-dependent re-scaling procedure can reduce the need for trial-and-error tuning of regularization. \end{itemize} \section{Algorithm introduction} In this section we begin by introducing relevant algorithms from the literature, after which we introduce our proposed procedure. \subsection{Related work}\label{sec:related_work} Here we review the technical literature closely related to our work. Starting with the proposal of \citet{garg2003a}, their main theoretical results are a bound on the misclassification risk $\risk(h) \defeq \prr\{ y\,h(\xx) < 0 \}$ of $h(\xx) = \langle \ww, \xx \rangle + b$ for any $\ww \in \RR^{d}$ and $b \in \RR$. Assuming that $\|\xx\| = 1$, and given $2n$ observations, with probability no less than $1-4\delta$, we have \begin{align} \risk(h) \leq \widehat{\risk}(h) + \min_{d} \left( \mu_{d}(h) + 2 \sqrt{\frac{(d+2)\log(ne/(d+2))+\log(2\delta^{-1})}{2n}} \right) \end{align} where $\widehat{\risk}(h) = n^{-1}\sum_{i=1}^{n}I\{ y_{i}\,h(\xx_{i}) < 0 \}$, and the $\mu_{d}(h)$ term takes the form \begin{align*} \mu_{d}(h) \defeq \frac{2\delta^{-1}}{n} \sum_{i=1}^{2n} \min\left\{1, 3\exp\left(\frac{-h(\xx_{i})^{2}d}{2(2+|h(\xx_{i})|)^{2}}\right), \frac{2}{h(\xx_{i})^{2}d} \right\}. \end{align*} The projection error terms are derived from the fact that \begin{align*} \prr\{ h(\xx)\,\widetilde{h}(\widetilde{\xx}) < 0 \} \leq \min\left\{1, 3\exp\left(\frac{-h(\xx)^{2}d}{2(2+|h(\xx)|)^{2}}\right), \frac{2}{h(\xx)^{2}d} \right\} \end{align*} where $\widetilde{h}(\widetilde{\xx})=\langle P\ww, P\xx \rangle + b$, and $P$ is a $k \times d$ random matrix of independent Gaussian random variables, $N(0,1/d)$. Probability here is over the random draw of the matrix elements. Based on these guarantees, they construct a new loss, defined by \begin{align*} l(h;\zz) = \sum_{i \in \II_{+}} \exp\left(-\alpha h(\xx_{i})^{2}\right) + \sum_{i \in \II_{-}} \exp\left(-\beta y_{i}\,h(\xx_{i})\right), \end{align*} where $\II_{+}$ and $\II_{-}$ are respectively the indices of correctly and incorrectly classified observations. For correctly classified examples, they seek to minimize the projection error bound, whereas for incorrectly classified examples, then use a standard exponential surrogate loss. Depending on what $k \leq d$ minimizes their upper bound, the dependence on the number of parameters may be better than $O(\sqrt{d})$, but a price is paid in the form of $O(1/\delta)$ dependence on the confidence. On the computational side, proper settings of $\alpha$ and $\beta$ in practice is non-trivial. The work of \citet{zhang2016a} considers using first- and second-order moments of the margin distribution as relevant quantities to build an objective. Writing \begin{align*} \overbar{m}(h) & \defeq \frac{1}{n} \sum_{i=1}^{n} y_{i} \, h(\xx_{i})\\ \overbar{v}(h) & \defeq \frac{1}{n} \sum_{i=1}^{n} \left(y_{i} \, h(\xx_{i}) - \overbar{m}(h) \right)^{2}, \end{align*} in the case of $h(\xx) = \langle \ww,\xx \rangle$, they construct a loss \begin{align*} l(h;\zz) = \frac{\|\ww\|^{2}}{2} + \lambda_{1} \overbar{v}(h) - \lambda_{2} \overbar{m}(h) + \frac{\lambda_{3}}{n} \sum_{i=1}^{n} \max\{1-y_{i}\,h(\xx_{i}),0\}, \end{align*} where the $\lambda_{1},\lambda_{2},\lambda_{3}$ are parameters to be set manually. The authors show how the optimization can be readily cast into an $n$-dimensional dual program of the form \begin{align*} \min_{\mv{\alpha} \in \RR^{n}} & \enspace \frac{1}{2} \mv{\alpha}^{T}U\mv{\alpha} + \uu^{T}\mv{\alpha}\\ \text{ s.t. } & \enspace 0 \leq \alpha_{i} \leq a_{i}, \quad i = 1,\ldots,n \end{align*} for appropriate data-dependent matrix $U$, vector $\uu$, and weight bounds $a_{i}$, and they give some examples of practical implementations using dual coordinate descent and variance-reduced stochastic gradient descent. In all cases, parameter settings are left up to the user. Furthermore, statistical guarantees leave something to be desired; the authors prove that for any $\widehat{\mv{\alpha}}$ satisfying their dual objective, risk bounds hold as \begin{align*} \exx \risk(\widehat{\mv{\alpha}}) \leq \frac{1}{n} \exx\left(\sum_{i \in \II_{1}}\widehat{\alpha}_{i}U_{i,i} + |\II_{2}| \right), \end{align*} where expectation is taken with respect to the sample, $U_{i,i}$ are the diagonal elements of $U$, and the index sets are defined \begin{align*} \II_{1} & = \{i: 0 < \widehat{\alpha}_{i} < \lambda_{3}/n \}\\ \II_{2} & = \{i: \widehat{\alpha}_{i} = \lambda_{3}/n \}. \end{align*} These bounds provide limited insight into how and when the algorithm performs well, and in practice the algorithm requires substantial effort for model selection. Finally, we consider the path-breaking analysis of \citet{brownlees2015a}, which greatly extends foundational work done by \citet{catoni2012a}. Letting $\varphi(u) = \max\{1-u,0\}$ denote the hinge loss, the Catoni estimator of the true location of a margin-based loss at candidate $h$, namely $\exx \varphi(y\,h(\xx))$, is defined as \begin{align}\label{eqn:est_BJL} \text{ any } \est(h) \geq 0 \enspace \text{ s.t. } \sum_{i=1}^{n} \psi\left(\frac{\est(h)-\varphi(y_{i}\,h(\xx_{i}))}{s}\right) = 0 \end{align} where $s>0$ is a scaling parameter, and $\psi$ is a soft truncation function (see Figure \ref{fig:rho_psi_deriv}) defined by \begin{align}\label{eqn:influence_cat17} \psi(u) \defeq \begin{cases} u - u^{3}/6, & -\sqrt{2} \leq u \leq \sqrt{2}\\ 2\sqrt{2}/3, & u > \sqrt{2}\\ -2\sqrt{2}/3, & u < -\sqrt{2}. \end{cases} \end{align} The general analysis of \citet{brownlees2015a} provides a rich set of tools for obtaining risk bounds for any minimizer of this new robust objective function, namely bounds on $\risk(\widehat{h})$ where $\widehat{h}$ satisfies \begin{align*} \widehat{h} \in \argmin_{h \in \HH} \est(h), \end{align*} and $\HH$ denotes the hypothesis space our candidate lives in. Note that the $1$-Lipschitz continuity of the hinge loss gives us that for any candidates $g$ and $h$, \begin{align*} |\varphi(y\,g(\xx))-\varphi(y\,h(\xx))| \leq |y||g(\xx)-h(\xx)| = |g(\xx)-h(\xx)|, \end{align*} which means we can bound distances defined on the space $\{f(\xx)=\varphi(y\,h(\xx)): h \in \HH\}$ by distances on the space $\HH$. Going back to the linear model case of $h(\xx) = \langle \ww, \xx \rangle$, bounds in the $\LL_{2}$ distance $d_{2}$ can be constructed using \begin{align*} \exx |\varphi(y\,g(\xx))-\varphi(y\,h(\xx))|^{2} \leq \exx |g(\xx)-h(\xx)|^{2} \leq \|\ww_{g} - \ww_{h}\|^{2} \exx\|\xx\|^{2}, \end{align*} and bounds in the $\LL_{\infty}$ distance take the form \begin{align*} \sup_{\xx} |\varphi(y\,g(\xx))-\varphi(y\,h(\xx))| \leq \sup_{\xx} |g(\xx)-h(\xx)| \leq \|\ww_{g} - \ww_{h}\| \sup_{\xx}\|\xx\|. \end{align*} Now, using their results, for large enough $s$ and $n$, one can show that with probability no less than $1-\delta$, it holds that \begin{align*} \exx \varphi\left(y\,\widehat{h}(\xx)\right) - \inf_{h \in \HH} \exx \varphi(y\,h(\xx)) \leq O\left( \sqrt{\frac{\log(3\delta^{-1})}{n}} + \log(2\delta^{-1})\left(\frac{\eta_{2}(\HH)}{\sqrt{n}} + \frac{\eta_{\infty}(\HH)}{n} \right) \right), \end{align*} where $c$ is a universal constant, and $\eta_{2}(\HH)$ and $\eta_{\infty}(\HH)$ are complexity terms. When these terms can be bounded, we can use the fact that the hinge loss is ``classification calibrated,'' and using standard results from \citet{bartlett2006b}, can obtain bounds on the excess misclassification risk based on the above inequality. The problem naturally is how to control these complexity terms. Skipping over some technical details, these terms can be bounded using covering number integrals dependent on $\HH$. As a concrete example, we have \begin{align*} \eta_{\infty}(\HH) \leq c_{\infty} \int_{0}^{\diameter(\HH;d_{\infty})} \log N(\epsilon,\HH,d_{\infty}) \, d\epsilon, \end{align*} where $d_{\infty}(g,h) = \sup_{\xx}|g(\xx)-h(\xx)|$ is the $\LL_{\infty}$ metric on $\HH$, the covering number $N(\epsilon,\HH,d_{\infty})$ is the number of $\epsilon$-balls in the $d_{\infty}$ metric needed to cover $\HH$, and $\diameter(\HH;d_{\infty}) = \sup\{d_{\infty}(g,h): g,h \in \HH\}$. In the case of $h(\xx) = \langle \ww, \xx \rangle$, this means $\|\xx\|$ must be almost surely bounded in order for the $\LL_{\infty}$ distance to be finite and the upper bounds to be meaningful. Under such assumptions, say $\ww$ comes from the unit ball and $\|\xx\| \leq B_{X}$ almost surely. Then ignoring non-dominant terms, the high-probability bounds can be specified as \begin{align*} \exx \varphi\left(y\,\widehat{h}(\xx)\right) - \inf_{h \in \HH} \exx \varphi(y\,h(\xx)) \leq O\left( \sqrt{\frac{\log(3\delta^{-1})}{n}} + \frac{\log(2\delta^{-1})d B_{X}}{\sqrt{n}} \right). \end{align*} While extremely flexible and applicable to a wide variety of learning tasks and algorithms, for the classification task, getting around the bound on $\xx$ is impossible using the machinery of \citet{brownlees2015a}. Even more serious complications are introduced by the difficulty of computation: while simple fixed-point procedures can be used to accurately approximate the robust objective $\est(h)$, it cannot be expressed explicitly, and indeed need not be convex as a function defined on $\HH$, even in the linear model case. Approximation error is unavoidable due to early stopping, and in addition to this computational overhead, using non-linear solvers to minimize the function $\est(h)$ can be costly and unstable in high-dimensional tasks \citep{holland2017a}. A recent pre-print from \citet{lecue2018a} considers replacing the M-estimator of \citet{brownlees2015a} with a median-of-means risk estimate, which does not require bounded inputs to get strong guarantees, but which requires an expensive iterative sub-routine for every loss evaluation, leading to substantial overhead for even relatively small learning tasks. \subsection{Proposed algorithm}\label{sec:derivation} We would like to utilize the strong elements of the existing procedures cited, while addressing their chief weaknesses. To do so, we begin by integrating the Catoni influence function $\psi$ defined in (\ref{eqn:influence_cat17}), which results in a new function of the form \begin{align}\label{eqn:newloss_rho} \rho(u) \defeq \begin{cases} \frac{u^{2}}{2} - \frac{u^{4}}{24} & |u| \leq \sqrt{2},\\ |u| \frac{2\sqrt{2}}{3} - \frac{1}{2} & |u| > \sqrt{2}. \end{cases} \end{align} Note that $\rho^{\prime}(u) = \psi(u)$ for all $u \in \RR$. This function satisfies $\rho(u) \geq 0$, is symmetric about zero so $\rho(u)=\rho(-u)$, and since the absolute value of the slope is bounded by $|\rho^{\prime}(u)| \leq 2\sqrt{2}/3$, we have that $\rho$ is Lipschitz continuous, namely that for any $u,v \in \RR$, we have $|\rho(u)-\rho(v)| \leq (2\sqrt{2}/3)|u-v|$. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{{rho_psi_deriv}.pdf} \caption{Graphs of $\rho$, $\rho^{\prime}$ and $\rho^{\prime\prime}$ near the origin.} \label{fig:rho_psi_deriv} \end{figure} Recalling the Catoni estimator (\ref{eqn:est_BJL}) used by \citet{brownlees2015a}, we define a new objective which is closely related: \begin{align}\label{eqn:newloss_full} \objnew(h;\gamma) \defeq \frac{s^{2}}{n} \sum_{i=1}^{n} \rho\left( \frac{\gamma - y_{i}\,h(\xx_{i})}{s} \right). \end{align} Here $\gamma \in \RR$ is the desired margin level, and once again $s>0$ is a re-scaling parameter. Note that this loss penalizes not only incorrectly classified examples, but also examples which are correctly classified, but \textit{overconfident}. The intuition here is that by also penalizing overconfident correct examples to some degree, we seek to constrain the variance of the margin distribution. The nature of this penalization is controlled by $\gamma$: a larger value leads to less correct examples being penalized. It remains to set the scale $s$. To do so, first note that for any candidate $h$, we have \begin{align*} \est(h) \in \argmin_{\gamma} \objnew(h;\gamma), \end{align*} and that this estimator enjoys a pointwise error bound dependent on $s$ (see appendix \ref{sec:tech_proofs} for details), which says \begin{align}\label{eqn:error_bound_s} |\est(h) - \exx y\,h(\xx)| \leq \frac{\vaa y \, h(\xx)}{s} + \frac{2s\log(2\delta^{-1})}{n}, \end{align} with probability no less than $1-\delta$. Minimizing this bound in $s>0$ naturally leads to setting $s^{2} = n \vaa y \, h(\xx) / 2\log(2\delta^{-1})$, but in our case, a certain amount of bias is assuredly tolerable; say a certain fraction $1/k$ of the desired $\gamma$ setting, plus error that vanishes as $n \to \infty$. By setting $s \geq \vaa y \, h(\xx) \, k / \gamma$ then, we have \begin{align*} |\est(h) - \exx y\,h(\xx)| \leq \frac{\gamma}{k} + O\left(\frac{1}{n}\right). \end{align*} The exact setting of $s>0$ plays an important role both in theory and in practice; we shall look at this in more detail in sections \ref{sec:theory}--\ref{sec:empirical}. In practice, the true variance will of course be unknown, but we can replace the true variance with any valid upper bound on the variance; rough estimates are easily constructed using moments of the empirical distribution (see section \ref{sec:empirical}). With scaling taken care of, our proposed algorithm is simply to minimize the new loss (\ref{eqn:newloss_full}) using gradient descent, namely to run the iterative update \begin{align*} \hhat_{(t+1)} = \hhat_{(t)} - \alpha_{(t)} \nabla \objnew(\hhat_{(t)};\gamma), \end{align*} where $\alpha_{(t)}$ are step sizes. We summarize the key computations in Algorithm \ref{algo:mainGD} for the case of a linear model $h(\xx) = \langle \ww, \xx \rangle$ with fixed step sizes. \begin{algorithm} \caption{Margin pursuit by steepest descent.} \label{algo:mainGD} \begin{algorithmic} \State \textbf{input:} $(\xx_{1},y_{1}),\ldots,(\xx_{n},y_{n}) \in \RR^{d} \times \{-1,1\}$ \smallskip \State \textbf{parameters:} $\wwhat_{(0)} \in \RR^{d}$, $\gamma \in \RR$, $k > 0$, $\alpha > 0$ \smallskip \State \textbf{scaling:} $\displaystyle s \geq \vaa y \, h(\xx) \, k / \gamma$ \smallskip \For{$t = 0, 1, \ldots, T-1$} \smallskip \State $\displaystyle \wwhat_{(t+1)} \gets \wwhat_{(t)} + \frac{s\,\alpha}{n} \sum_{i=1}^{n} \psi\left( \frac{\gamma - y_{i} \, \langle \wwhat_{(t)}, \xx_{i} \rangle}{s} \right) y_{i}\xx_{i}$ \smallskip \EndFor \end{algorithmic} \end{algorithm} \begin{rmk}[Algorithm \ref{algo:mainGD} and distribution control]\label{rmk:unsatisfactory_learner} Intuitively, in running Algorithm \ref{algo:mainGD} (or any generalization of it), the expectation is that with enough iterations, the approximation $\est(\wwhat_{(t)}) \approx \gamma$ should be rather sharp, although arbitrary precision assuredly cannot be guaranteed. If the $\gamma$ level is set too high given a hypothesis class $\HH$ with low complexity, we cannot expect $\gamma$ to be near the location of the margin $y \, h(\xx)$, which is accurately approximated by $\est(h)$. This can be easily proven: there exists a set of classifiers $\HH$ and distribution $\mu$ under which even a perfect optimizer of the new risk has a Catoni-type estimate smaller than $\gamma$ (proof given in appendix \ref{sec:tech_proofs}). \end{rmk} If the approximation $\est(\wwhat_{(t)}) \approx \gamma$ actually is sharp, how does this relate to \textit{control} of the margin distribution? By design, the estimator $\est(\cdot)$ is resistant to errant observations and is located near the \textit{majority} of observations (see Proposition \ref{prop:empirical_gamma_control}), if it turns out that $\est(\wwhat_{(t)})$ is close to $\gamma$, then it is \textit{not possible} for the majority of margin points be much smaller (or much larger) than $\gamma$.\footnote{Note that we still cannot rule out the possibility that the margin distribution is spread out over a wide region; a simple example is the case where the margins are symmetrically distributed around $\gamma$.} Conceptually, the desired outcome is similar to that of the procedure of \citet{brownlees2015a} discussed in section \ref{sec:related_work}, but with an easy implementation and more straightforward statistical analysis. In section \ref{sec:theory}, we show that risk bounds are readily available for the proposed procedure, even without a bound on the inputs $\xx$. Empirical analysis in section \ref{sec:empirical} illustrates the basic mechanisms underlying the algorithm, using real-world benchmark data sets. \section{Theoretical analysis}\label{sec:theory} \paragraph{Notation} For positive integer $k$, write the set of all positive integers no greater than $k$ by $[k] \defeq \{1,\ldots,k\}$. The underlying distribution of interest is that of $(\xx,y)$, here taking values on $\RR^{d} \times \{-1,1\}$. The data sample refers to $n$ independent and identically distributed (``iid'') copies of $(\xx,y)$, denoted $(\xx_{i},y_{i})$ for $i \in [n]$. Let $\HH$ denote a generic class of functions $h:\RR^{d} \to \RR$. The running assumption will be that all $h \in \HH$ are measurable, and at the very least satisfy $\exx |h(\xx)|^{2} < \infty$. Denote the input variance by $v_{X} \defeq \exx\|\xx\|^{2}$. \paragraph{Scaling and location estimates} Our chief interest from a theoretical standpoint is in statistical properties of Algorithm \ref{algo:mainGD}, in particular we seek high-probability upper bounds on the excess risk of the procedure after $T$ iterations, given $n$ observations, that depend on $T$, $n$, and low-order moments of the underlying distribution. We begin with some statistical properties of the motivating estimator, and a look at how scale settings impact these properties. \begin{prop}[Scaling and location estimates]\label{prop:empirical_gamma_control} For any $h \in \HH$ and scale $s>0$, the estimate $\est(h)$ satisfies the following: \begin{enumerate} \item There exists $0 < s^{\prime} < \infty$ such that for all $0 < s \leq s^{\prime}$, we have $\est(h) = \med \{ y_{i} \, h(\xx_{i}) \}_{i \in [n]}$. \item There exists a constant $c>0$ such that for all $s>0$, \begin{align*} \left| \est(h) - \frac{1}{n} \sum_{i=1}^{n} y_{i} \, h(\xx_{i})\right| \leq \frac{c}{s^{2}}. \end{align*} \end{enumerate} \end{prop} \begin{rmk} The basic facts laid out in Proposition 1 illustrate how $s$ controls the ``bias'' of the Catoni estimator. A larger scale factor makes the estimator increasingly sensitive to errant data, and causes it to close in on the empirical mean. A sufficiently small value on the other hand causes the estimator to effectively ignore the distribution tails, closing in on the empirical median. \end{rmk} \begin{prop}[Scaling and stability]\label{prop:stability} Given any dataset $\zz_{1},\ldots,\zz_{n}$ and candidate $h \in \HH$, construct $\est(h)$ as usual. Then consider a modified dataset $\zz^{\prime}_{1},\ldots,\zz^{\prime}_{n}$, which is identical to the original except for one point, subject to arbitrary perturbation. Let $\est^{\prime}(h)$ denote the estimator under the modified data set. Defining a sub-index as \begin{align*} \II \defeq \left\{ i \in [n]: |\obj(h)-y_{i}\,h(\xx_{i})| \leq s\sqrt{2}/2 \right\}, \end{align*} it follows that whenever $n$ and $s$ are large enough that $|\II| \geq n/2 > 24$, we have \begin{align*} |\obj(h)-\obj^{\prime}(h)| \leq \frac{s}{\sqrt{n}}. \end{align*} \end{prop} \begin{rmk} The stability property highlighted in Proposition \ref{prop:stability} is appealing because the difference $\max \{|y_{i}\,h(\xx_{i})-y_{i}^{\prime}\,h(\xx_{i}^{\prime})| : i \in [n]\}$ could be arbitrarily large, while the estimator $\obj(h)$ in shifting to $\obj^{\prime}(h)$ remains close to the majority of the points, and cannot be drawn arbitrarily far away. For clarity, we have considered the case of just one modified point, but a brief glance at the proof (in the appendix) should demonstrate how analogous results can readily be obtained for the case of larger fractions of modified points. \end{rmk} \begin{lem}[Pointwise error bound]\label{lem:pointwise_accuracy} Fixing any $h \in \HH$, consider the estimate $\est(h)$ defined in (\ref{eqn:est_BJL}), equivalently characterized as a minimizer of $\objnew(h;\gamma)$ in $\gamma$, with scaling parameter $s$ set such that $s^{2} = nv/2\log(2\delta^{-1})$, where $v$ is any upper bound $\vaa y\,h(\xx) \leq v < \infty$. It follows that \begin{align*} \prr\left\{ |\est(h) - \exx y\,h(\xx)| > \sqrt{\frac{2v\log(2\delta^{-1})}{n}} \right\} \leq \delta. \end{align*} \end{lem} \begin{rmk} The confidence interval in Lemma \ref{lem:pointwise_accuracy} is called pointwise because it holds for a pre-fixed $h \in \HH$, in contrast with uniform bounds that hold independent of the choice of $h$. When considering our Algorithm \ref{algo:mainGD}, the candidate $h$ will be data-dependent and thus random, meaning that pointwise bounds will have to be extended to cover all possible contingencies; see the proof of Theorem \ref{thm:riskbd} for details. \end{rmk} \paragraph{Classification-calibrated loss} Proceeding with our analysis, the ultimate evaluation metric of interest here is the classification risk (expectation of the zero-one loss), denoted \begin{align}\label{eqn:risk_01} R(h) \defeq \prr\{ \sign(h(\xx)) \neq y \}, \quad R^{\ast} \defeq \inf_{h \in \HH} R(h). \end{align} Using empirical estimates of the zero-one loss is not conducive to efficient learning algorithms, and our Algorithm \ref{algo:mainGD} involves the minimization of a new loss $\objnew(\cdot;\gamma)$, defined in equation (\ref{eqn:newloss_full}). To ensure that good performance in this metric implies low classification risk, the first step is to ensure that the function is \emph{calibrated} for classification, in the sense of \citet{bartlett2006b}. To start, fixing any $\gamma > 0$, define $\varphi(u) \defeq s^{2} \, \rho((\gamma - u)/s)$. This furnishes the surrogate risk \begin{align}\label{eqn:risk_surrogate} R_{\varphi}(h) \defeq \exx \varphi\left(y \, h(\xx)\right), \quad R_{\varphi}^{\ast} \defeq \inf_{h \in \HH} R_{\varphi}(h). \end{align} The basic idea is that if this loss $\varphi$ is calibrated, then one can show that there exists a function $\Psi_{s,\gamma}$ depending on user-specified $\gamma$ and $s$ settings, which is non-decreasing on the positive real line and satisfies \begin{align*} \Psi_{s,\gamma}( R(h) - R^{\ast} ) \leq R_{\varphi}(h) - R_{\varphi}^{\ast}. \end{align*} Our loss function $\rho$ defined in \ref{eqn:newloss_rho} is congenial due to the fact that it is classification-calibrated, with a $\Psi$-transform $\Psi_{s,\gamma}(\cdot)$ that can be computed exactly, for arbitrary values of $\gamma > 0$ and $s > 0$. Details of this computation are not difficult, but are rather tedious, and thus we relegate them to appendix \ref{sec:tech_getroot}. Basic facts are summarized in the following lemma. \begin{lem}\label{lem:Psi_transform} The loss function $\varphi(u) \defeq s^{2} \, \rho((\gamma - u)/s)$ is classification calibrated such that for each $\gamma > 0$, the following statements hold. \begin{enumerate} \item $\Psi$-transform: there exists a function $\Psi_{s,\gamma}:[0,1] \to \RR_{+}$ for which $\Psi_{s,\gamma}(R(h)-R^{\ast}) \leq R_{\varphi}(h) - R_{\varphi}^{\ast}$, depending on $\rho$, $s$, $\gamma$, and a concave function $H_{s,\gamma}(\cdot)$ defined on $[0,1]$, specified in the proof (also see Figure \ref{fig:H_Psi_inverse}). This $\Psi$-transform function takes the form \begin{align*} \Psi_{s,\gamma}(u) = s^{2}\rho(\gamma/s) - H_{s,\gamma}\left(\frac{1+u}{2}\right). \end{align*} \item Risk convergence: given a sequence $(\hhat_{n})$ of sample-dependent $\{\zz_{1},\ldots,\zz_{n}\} \mapsto \hhat_{n}$, we have that convergence in our surrogate is sufficient for convergence in the zero-one risk, namely \begin{align*} \left\{ \lim\limits_{n \to \infty} R_{\varphi}(\hhat_{n}) = R_{\varphi}^{\ast} \right\} \subseteq \left\{ \lim\limits_{n \to \infty} R(\hhat_{n}) = R^{\ast} \right\}. \end{align*} \item Invertibility: $\Psi_{s,\gamma}(u)$ is invertible on $[0,1]$, and thus for small enough excess risk, we can bound as $R(h)-R^{\ast} \leq \Psi_{s,\gamma}^{-1}(R_{\varphi}(h) - R_{\varphi}^{\ast})$. \end{enumerate} \end{lem} \begin{rmk}[Generalization and $\gamma$ level setting] One would naturally expect that all else equal, if a classifier achieves the same excess $\varphi$-risk for a larger value of $\gamma$, then the resulting excess classification risk should be smaller, or at least no larger. More concretely, we should expect that \begin{align*} \gamma \leq \gamma^{\prime} \implies \Psi_{s,\gamma}^{-1}(a) \geq \Psi_{s,\gamma^{\prime}}^{-1}(a), \quad a \in [0,s^{2}\,\rho(\gamma/s)]. \end{align*} This range comes from the fact that $\Psi_{s,\gamma}(0) = 0$ and $\Psi_{s,\gamma}(1) = s^{2} \, \rho(\gamma/s)$. This monotonicity follows from the definition of $\rho$ and the convexity of the $\Psi$-transform (also see Figure \ref{fig:H_Psi_inverse} in the following section). \end{rmk} \paragraph{Assumptions and risk bounds, with discussion} With preparatory results in place, we can now pursue an excess risk bound for Algorithm \ref{algo:mainGD}. To make notation more transparent, we accordingly write $\risk(\ww)$ and $\rnew(\ww)$ to denote the respective risks under $\HH = \{h: h(\xx)=\langle \ww, \xx \rangle, \ww \in \WW \}$, where $\WW \subset \RR^{d}$. The core technical assumptions are as follows: \begin{itemize} \item[\namedlabel{asmp:model_compact}{A0}.] $\WW$ is a compact subset of $\RR^{d}$, with diameter $\Delta \defeq \sup\{\|\uu-\vv\|: \uu,\vv \in \WW\} < \infty$. \item[\namedlabel{asmp:risk_flat_min}{A1}.] There exists $\wwstar \in \WW$ at which $\rnew^{\prime}(\wwstar)=0$. \item[\namedlabel{asmp:risk_strong_convex}{A2}.] $\rnew(\ww)$ is $\kappa$-strongly convex on $\WW$, with minimum\footnote{Assuming we can take the derivative under the integral, the smoothness of $\rho$ implies differentiability of $R_{\varphi}$. Then using the compactness of $\WW$, it follows that $\wwstar \in \WW$.} denoted by $\wwstar$. \item[\namedlabel{asmp:sub_gaussian}{A3}.] The gradient distribution follows a standard form of high-dimensional sub-Gaussianity, characterized as follows. Writing $\mv{b}(\ww) \defeq -\rho^{\prime}(\gamma - y \langle \ww, \xx \rangle ) y \, \xx$ for the new loss gradient before scaling by $s$, and $\Sigma(\ww)$ for its covariance matrix, there exists some $c>0$ such that for all $\ww \in \WW$, $a \geq 0$, and $\|\uu\|=1$, we have \begin{align*} \exx\exp\left( a \langle \uu, \mv{b}(\ww) - \exx\mv{b}(\ww) \rangle \right) \leq \exp\left(c a^{2} \langle \uu, \Sigma(\ww)\uu \rangle \right). \end{align*} \end{itemize} \begin{rmk}[Feasibility of assumptions] The important assumptions here are \ref{asmp:risk_strong_convex} and \ref{asmp:sub_gaussian}. The latter can be satisfied with inputs $\xx$ that have sub-Gaussian tails; this does not include data with higher-order moments that are infinite, but requires no bound on $\|\xx\|$ at all. As for the former assumption \ref{asmp:risk_strong_convex}, first note that the $(i,j)$th element of the Hessian of the new loss function is \begin{align*} \frac{\partial^{2}}{\partial w_{i} \partial w_{j}} s^{2} \, \rho\left(\frac{\gamma - y \, \langle \ww, \xx \rangle}{s}\right) = \rho^{\prime\prime}\left(\frac{\gamma - y \, \langle \ww, \xx \rangle}{s}\right) x_{i}x_{j}, \quad i,j \in [d] \end{align*} where \begin{align*} \rho^{\prime\prime}(u) = \begin{cases} 1 - u^{2}/2, & \text{ if } |u| \leq \sqrt{2}\\ 0, & \text{ else.} \end{cases} \end{align*} Write $q = \uu^{T}(\xx\xx^{T})\uu$ for readability, and use $\exx_{+}$ and $\exx_{-}$ to denote integration over the positive and non-positive parts of $q$. First, observe that \begin{align*} \exx_{-} \rho^{\prime\prime}\left(\frac{\gamma - y \, \langle \ww, \xx \rangle}{s}\right) q & = \exx I\{q \leq 0\} \rho^{\prime\prime}\left(\frac{\gamma - y \, \langle \ww, \xx \rangle}{s}\right) q\\ & \geq \exx I\{q \leq 0\} q\\ & = \exx q - \exx_{+} q. \end{align*} Using this inequality, we have \begin{align*} \uu^{T} \rnew^{\prime\prime}(\ww) \uu & = \exx \rho^{\prime\prime}\left(\frac{\gamma - y \, \langle \ww, \xx \rangle}{s}\right) q\\ & = \exx_{+} \rho^{\prime\prime}\left(\frac{\gamma - y \, \langle \ww, \xx \rangle}{s}\right) q + \exx_{-} \rho^{\prime\prime}\left(\frac{\gamma - y \, \langle \ww, \xx \rangle}{s}\right) q\\ & \geq \exx_{+} \rho^{\prime\prime}\left(\frac{\gamma - y \, \langle \ww, \xx \rangle}{s}\right) q + \left( \exx q - \exx_{+} q \right)\\ & = \exx q + \exx_{+} \left(\rho^{\prime\prime}\left( \frac{\gamma - y \, \langle \ww, \xx \rangle}{s}\right) - 1 \right) q. \end{align*} The second term on the right-hand side is a negative value that can be taken near zero for any $\ww \in \WW$ by taking $s>0$ large enough. The first term is $\exx q = \uu^{T}\exx\xx\xx^{T}\uu$, and thus with large enough $s$, as long as the second moment matrix of the inputs is positive definite satisfying $\exx\xx\xx^{T} \succeq c I_{d}$ for some $c>0$ (a weak assumption), it follows that there exists a $\kappa>0$ such that $\rnew^{\prime\prime}(\ww) \succeq \kappa I_{d}$ holds. Since the risk is twice continuously differentiable, This implies $\kappa$-strong convexity \citep[Theorem 2.1.11]{nesterov2004ConvOpt}. \end{rmk} With these assumptions in place, finite-sample risk bounds can be obtained. \begin{thm}\label{thm:riskbd} Running Algorithm \ref{algo:mainGD} for $T$ iterations, the final output produced, written $\wwhat_{(T)}$, for constant $c>0$ and $\beta \defeq 2\kappa v_{X} / (\kappa + v_{X})$ satisfies \begin{align*} \risk(\wwhat_{(T)}) - \risk^{\ast} \leq \Psi_{s,\gamma}^{-1}\left( (1-\alpha\beta)^{T} v_{X} \|\wwhat_{(0)}-\wwstar\|^{2} + \frac{4v_{X}}{\beta^{2}n}\left((1+\delta)v_{X} + 2s\,\varepsilon^{\ast}\right)^{2} \right) \end{align*} with probability no less than $1-2\delta$ over the random draw of the sample, where the dominant term $\varepsilon^{\ast}$ is defined \begin{align*} \varepsilon^{\ast} \defeq \sqrt{c\rho^{\prime}(\sqrt{2})^{2}\exx\|\xx\xx^{T}\|(d\log(3\sqrt{n}(2\delta)^{-1}) + \log(\delta^{-1}))}. \end{align*} \end{thm} \begin{rmk}[Interpretation and tradeoffs] Excess risk bounds give in Theorem \ref{thm:riskbd} are composed of two key terms, one of a computational nature, and one of a statistical nature. The first term is optimization error, which decreases as $T$ grows, and depends on the initial estimate $\wwhat_{(0)}$, the step-size $\alpha$, and the convexity of the surrogate risk through $\beta$. The second term is statistical error, and depends on the sample size, scale $s$, the number of parameters, and second-order moments of the inputs $\xx$. Note that there is a clear tradeoff due to $s$: a sufficiently large scale factor is needed to ensure \ref{asmp:risk_strong_convex} holds (yielding large enough $\beta$), but setting $s$ too large impacts the statistical error in a negative way. Finally, we note the $d$ factor in $\varepsilon^{\ast}$ is due to a covering number argument used to obtain a bound on the empirical gradient error that holds uniformly over $\ww \in \WW$. Does there exist another computational procedure, with the \textit{same optimization error}, and without this seemingly superfluous $d$ factor in the statistical error? We pursue such analysis in future work. \end{rmk} \section{Empirical analysis}\label{sec:empirical} In our numerical experiments, we aim to complement the theoretical analysis carried out in the previous section. We look at how algorithm parameter settings impact generalization guarantees, and using real-world datasets, investigate how Algorithm \ref{algo:mainGD} performs, comparing its behavior with a benchmark procedure. \paragraph{Margin level, scale, and generalization} First, we look at the function $\Psi_{s,\gamma}$ introduced in the previous section, and its inverse, $\Psi_{s,\gamma}^{-1}$. In the two leftmost plots of Figure \ref{fig:H_Psi_inverse}, we plot the graph of $H_{s,\gamma}(\cdot)$ and $\Psi_{s,\gamma}(\cdot)$ over $[0,1]$, for $s=1$ and varying values of $\gamma$. Convexity of $\Psi_{s,\gamma}$ and its monotonic dependence on $\gamma$ can be clearly observed. In the second plot from the right, we fix $a$ and $s=1$ and plot the graph of $\Psi_{s,\gamma}^{-1}(a)$ over a range of $\gamma$ and $a$ values. We can clearly observe how achieving a better excess surrogate risk (corresponding to a smaller $a$ value) for a larger $\gamma$ value leads to smaller excess misclassification risk (corresponding to smaller values of the function plotted). In addition, the same excess surrogate risk $a$ clearly leads to better generalization in the misclassification risk if it is achieved with a larger $\gamma$ value, although this positive impact diminishes quickly as $\gamma$ gets large. Finally, in the rightmost plot of Figure \ref{fig:H_Psi_inverse}, we fix $\gamma$ and $\varepsilon = 0.5$, and plot $\Psi_{s,\gamma}^{-1}(\varepsilon/s)$ for a range of positive $s$ values. In the limit as $s$ gets large, we find that this quantity bottoms out quickly at a positive value. This has important implications in terms of scaling strategies, because it demonstrates where issues can arise with scaling $s \to \infty$ with $n \to \infty$, as would be implied by simply minimizing the pointwise error bound (as seen in (\ref{eqn:error_bound_s}) and Lemma \ref{lem:pointwise_accuracy}). Indeed, if any algorithm can achieve an excess surrogate risk of $O(n^{-1/2})$ (corresponding to $\varepsilon$), if $s$ is allowed to scale as $O(\sqrt{n})$, then even taking $n$ large will not imply a small misclassification risk. This is one important reason that Algorithm \ref{algo:mainGD} does not scale using the bound-minimizing $s$ value, but rather a value that allows for consistency in the limit as $n$ and $T$ grow large. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{Hfn_Linkfn_generalcase}\includegraphics[width=0.25\textwidth]{gamma_inversePsi}\includegraphics[width=0.25\textwidth]{inversePsi_over_s} \caption{Graphs of quantities related to the $\Psi$-transform of the proposed loss, namely $\Psi_{s,\gamma}$. In the leftmost two plots, from smallest to largest, the $\gamma$ values are $\gamma = \sqrt{2}/2, \sqrt{2}-0.4, \sqrt{2}-0.11, \sqrt{2}+0.11, 2\sqrt{2}$. Computation of the inverse is approximate, and done as follows. For any $(s,\gamma)$ pair, we compute $\Psi_{s,\gamma}(u)$ for $u \in [0,1]$ over a uniformly spaced grid $0 = u_{1} \leq u_{2} < \cdots < u_{K} = 1$, with $K=2500$. The approximate value is then given as $\Psi_{s,\gamma}^{-1}(a) = u_{k^{\ast}}$, where $k^{\ast} = \max\{k \in [K]: \Psi_{s,\gamma}(u_{k}) \leq a\}$.} \label{fig:H_Psi_inverse} \end{figure} \paragraph{Benchmark data tests: experimental setup} In all the experiments discussed here, we consider binary classification on real-world data sets, modified to control for unbalanced ratios of positive and negative labels. Training for each data set is done using pair $(\mv{X},\yy)$, where $\mv{X}$ is $n \times d$, and $\yy$ is $n \times 1$, and testing is done on a disjoint subset. The train-test sequence is repeated over 25 trials, and all numerical performance metrics displayed henceforth should be assumed to be averages taken over all trials. We use four data sets, denoted \textsc{cov}, \textsc{digit5}, \textsc{protein}, and \textsc{sido}, creating subsets under the following constraints: (1) Sample size $n$ is no more than ten times the nominal dimension $d$, and (2) both the training and testing data sets have balanced ratios of labels (as close as possible to $50\%$ each). Starting with \textsc{cov} ($n=540$, $d=54$, non-zero: $22\%$), this is the ``Forest CoverType dataset'' on the UC Irvine repository, converted into a binary task identifying class 1 against the rest. \textsc{digit5} ($n=5000$, $d=784$, non-zero: $19\%$) is the MNIST hand-written digit data, converted into a binary task for the digit 5. \textsc{protein} ($n=740$, $d=74$, non-zero: $99\%$) is the protein homology dataset (KDD Cup 2004). \textsc{sido} ($n=425$, $d=4932$, non-zero: $11\%$) is the molecular descriptor data set (NIPS 2008 causality challenge), with binary-valued features. In each trial, from the full original data set, we take a random sub-sample of the specified size, without replacement, for training, and for test data we use as much of the remaining data as possible, within the confines of constraint (2) above. As a well-known benchmark algorithm against which we can compare the behaviour and performance of the proposed Algorithm \ref{algo:mainGD}, we implement and run the well-known Pegasos algorithm of \citet{shalev2011a}. For both methods, the initial value $\wwhat_{(0)}$ is determined randomly in each trial. We explore multiple settings of Algorithm \ref{algo:mainGD} described further below, but in all cases we take the stochastic optimization approach: instead of using all $n$ training examples at each step, we randomly select one at a time for computing the update direction, and use a step size of . For direct comparison with Pegasos, we set the margin level to $\gamma = 1$, add a squared $\ell_{2}$-norm regularization term with coefficient $\lambda$, utilizing a step size of $\alpha = (s\sqrt{\lambda}(1+t))^{-1}$, and projecting to the $1/\sqrt{\lambda}$-radius ball. That is, we run a stochastic projected gradient descent version of Algorithm \ref{algo:mainGD}, and evaluate the impact of the proposed loss function. \paragraph{Benchmark data tests: generalization with naive scaling} We begin with the simplest setting of Algorithm \ref{algo:mainGD}, where $s=1$ is fixed throughout. In Figures \ref{fig:realtest_naive_scale_1}--\ref{fig:realtest_naive_scale_2}, we plot training error, test error, and numerous statistics of the empirical margin distribution, all as a function of cost incurred (equal to number of gradients computed). For each dataset, we experimented with $\lambda \in \{10^{0},10^{-6},10^{-6},\ldots,10^{-1}\}$ and display the results for the case of $\lambda$ that resulted in the best performance, as measured by the lowest test error achieved over all iterations. We see that our proposed procedure is highly competitive with the best setting of Pegasos, and results in a margin distribution very distinct from that of the competing procedure. On the whole, we see a much more symmetrical distribution, with smaller variance, that over iterations pushes the margin location up in a monotonic fashion, in stark contrast to that of Pegasos, whose empirical distribution peaks early and slowly settles down over time. The smaller variance and higher degree of symmetry is precisely what we would expect given the definition of $\rho$, which assigns a penalty for correctly classified examples that are overconfidently classified, as discussed in section \ref{sec:derivation}. \begin{figure}[h] \centering (\textsc{cov}) \hspace{6.5cm} (\textsc{digit5}) \smallskip \includegraphics[width=0.5\textwidth]{{bestperf_PegMSeek_lamreg1.00e-03_batch1_cov1_balanced}.pdf}\includegraphics[width=0.5\textwidth]{{bestperf_PegMSeek_lamreg1.00e-02_batch1_MNIST5_balanced}.pdf}\\ \includegraphics[width=0.5\textwidth]{{bestperf_PegOrig_lamreg1.00e-03_batch1_cov1_balanced}.pdf}\includegraphics[width=0.5\textwidth]{{bestperf_PegOrig_lamreg1.00e-02_batch1_MNIST5_balanced}.pdf} \caption{Top row: Algorithm \ref{algo:mainGD}. Bottom row: Pegasos.} \label{fig:realtest_naive_scale_1} \end{figure} \begin{figure}[h] \centering (\textsc{protein}) \hspace{6.5cm} (\textsc{sido}) \smallskip \includegraphics[width=0.5\textwidth]{{bestperf_PegMSeek_lamreg1.00e-02_batch1_protein_balanced}.pdf}\includegraphics[width=0.5\textwidth]{{bestperf_PegMSeek_lamreg1.00e-01_batch1_sido_balanced}.pdf}\\ \includegraphics[width=0.5\textwidth]{{bestperf_PegOrig_lamreg1.00e-03_batch1_protein_balanced}.pdf}\includegraphics[width=0.5\textwidth]{{bestperf_PegOrig_lamreg1.00e+00_batch1_sido_balanced}.pdf} \caption{Top row: Algorithm \ref{algo:mainGD}. Bottom row: Pegasos.} \label{fig:realtest_naive_scale_2} \end{figure} \paragraph{Benchmark data tests: scaling and regularization} Next, we look at the impact of a fixed scale, determined by observed data, as follows. Each run of Algorithm \ref{algo:mainGD} starts with $s=1$ fixed just as in the previous tests, but after a pre-fixed number of steps, updates the scale just once, to take a value of $s \geq \sqrt{nv_{X}/(2\lambda\log(\delta^{-1}))}$ (see Lemma \ref{lem:pointwise_accuracy}), where $v_{X}$ is approximated using the 75th quantile of the empirical distribution induced by $\{|y_{i}\,\langle \wwhat_{(t)}, \xx_{i} \rangle|: i \in [n]\}$. This time, we intentionally under-regularize, setting $\lambda$ at less than 1/100th of the best setting found in the previous tests. Representative results are given in Figure \ref{fig:realtest_scaled_once}. When highly under-regularized, \textit{and} without scaling, the learning algorithm just wanders about, overwhelmed by the variance of the per-iteration sub-sampling; when the procedure is left to run like this, a good solution can rarely be found before the step size grows small, highly inefficient. On the other hand, using the simple data-driven scaling procedure just described to fix a ``safe'' value of $s$, we find that the learning algorithm is almost immediately accelerated, and in less time essentially catches up with the performance achieved under the best regularization possible. This is extremely encouraging, as it suggests that a safe, inexpensive, automated scaling procedure can make up for our lack of knowledge about the ideal regularization parameter, allowing for potentially significant savings in hyper-parameter exploration. \begin{figure}[h] \centering \includegraphics[width=0.25\textwidth]{{scaleOncegoodbadSingle_RescaleSym_lamreg1.00e-05_batch1_q75_cov1_balanced}.pdf}\includegraphics[width=0.25\textwidth]{{scaleOncegoodbadSingle_RescaleSym_lamreg1.00e-05_batch1_q75_MNIST5_balanced}.pdf}\includegraphics[width=0.25\textwidth]{{scaleOncegoodbadSingle_RescaleSym_lamreg1.00e-05_batch1_q75_protein_balanced}.pdf}\includegraphics[width=0.25\textwidth]{{scaleOncegoodbadSingle_RescaleSym_lamreg1.00e-03_batch1_q75_sido_balanced}.pdf} \caption{Algorithm \ref{algo:mainGD} with data-based $s$ setting starting from the point marked by a black vertical line. From left to right, \textsc{cov}, \textsc{digit5}, \textsc{protein} (all $\lambda = 10^{-5}$), and \textsc{sido} ($\lambda = 10^{-3}$).} \label{fig:realtest_scaled_once} \end{figure} \section{Concluding remarks}\label{sec:conclusion} In this paper, we introduced and analyzed a new learning algorithm which, via a new convex loss with re-scaling, lets us pursue stronger guarantees for the resulting margin distribution (and classifier) than are possible with the traditional hinge loss. This allows us to bridge the gap between inference and computation, since strong learning guarantees are available for Algorithm \ref{algo:mainGD}, which is readily implemented in practice. Empirical tests confirmed that the algorithm basically behaves as we would expect, and that even with naive parameter settings, appropriate re-scaling on the back end allows our procedure to match or exceed the performance of well-known competitors. \newpage \bibliographystyle{refs/apalike}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \addtocounter{section}{1} \addcontentsline{toc}{section}{1. Introduction} \thispagestyle{empty} Resonance free regions near the essential spectrum have been extensively studied since the foundational work of Lax-Phillips and Vainberg. Their size is related to the dynamical structure of the set of trapped classical trajectories. More trapping typically results in a smaller region, and the largest resonance free regions exist when there is no trapping. \noindent \textbf{Example.} Let $\Hh^2$ be the hyperbolic upper half plane. Let $(X, g)$ be a nonpositively curved, compactly supported metric perturbation of the quotient space $\langle z \mapsto z+1 \rangle\backslash \Hh^2$. As we show in \S\ref{s:examples}, there are no trapped geodesics (that is, all geodesics are unbounded). Let $(X,g)$ be as above or as in \S \ref{s:assumptions}, with dimension $n+1$ and Laplacian $\Delta \ge 0 $. The resolvent $(\Delta - n^2/4 - \sigma^2)^{-1}$ is holomorphic for $\im \sigma > 0$, except at any $\sigma \in i \R$ such that $\sigma^2 + n^2/4$ is an eigenvalue, and has essential spectrum $\{\im \sigma = 0\}$: see Figure \ref{f:intro}. \begin{thm} For all $\chi \in C_0^\infty(X)$, there exists $M_0 > 0$ such that for all $M_1>0$ there exists $M_2 >0$ such that the cutoff resolvent $\chi (\Delta - n^2/4 - \sigma^2)^{-1} \chi$ continues holomorphically to $\{|\re \sigma| \ge M_2,\, \im \sigma \ge - M_1\}$, where it obeys the estimate \begin{equation}\label{logreg} \|\chi (\Delta - n^2/4 - \sigma^2)^{-1} \chi\|_{L^2(X) \to L^2(X)} \le M_2|\sigma|^{-1 +M_0|\im \sigma|}. \end{equation} \end{thm} \begin{figure}[htbp] \includegraphics[width=15cm]{figresreg.pdf} \caption We prove that the cutoff resolvent continues holomorphically to arbitrarily wide strips and obeys polynomial bounds. }\label{f:intro} \end{figure} In the example above, and in many of the examples in \S\ref{s:examples}, $\chi (\Delta - n^2/4 - \sigma^2)^{-1} \chi$ is meromorphic in $\C$. The poles of the meromorphic continuation are called \emph{resonances}. Logarithmically large resonance free regions go back to work of Regge \cite{regge} on potential scattering. In the setting of obstacle scattering they were found by Lax-Phillips \cite{lp} and Vainberg \cite{v}, and their results were generalized by Morawetz-Ralston-Strauss \cite{mrs} and Melrose-Sj\"ostrand \cite{ms}. When $X$ is Euclidean outside of a compact set, they have been established for very general nontrapping perturbations of the Laplacian by Sj\"ostrand-Zworski in \cite[Theorem 1]{sz}, which extends earlier work of Martinez \cite{m} and Sj\"ostrand \cite{s}. Most recently, Baskin-Wunsch \cite{bw} derive them for geometrically nontrapping manifolds with cone points. These works give a larger resonance free region and a stronger resolvent estimate than the Theorem above, but require asymptotically Euclidean geometry near infinity. The manifolds considered in this paper are nontrapping, but the cusp makes them not uniformly so: for a sufficiently large compact set $K \subset X$, we have \[\sup_{\gamma \in \Gamma} \textrm{ diam }\gamma^{-1}(K) = +\infty,\] where $\Gamma$ is the set of unit speed geodesics in $X$. This is because geodesics may travel arbitrarily far into the cusp before escaping down the funnel; this dynamical peculiarity makes it difficult to separate the analysis in the cusp from the analysis in the funnel and is the reason for the relatively involved resolvent estimate gluing procedure we use below. Resonance free strips also exist in some trapping situations, with width determined by dynamical properties of the trapped set. These go back to work of Ikawa \cite{ik}, with recent progress by Nonnenmacher-Zworski \cite{nz}, Petkov-Stoyanov \cite{ps}, Alexandrova-Tamura \cite{at}, and Wunsch-Zworski \cite{wz}. Resonance free regions and resolvent estimates have applications to evolution equations, and this is an active area: examples include resonant wave expansions and wave decay, local smoothing estimates, Strichartz estimates, geometric control, and wave damping \cite{bur:smoothing, bz, bh, msv, gn, chr:mrl, bgh, Dyatlov:Asymptotic, csvw}; see also \cite{wsur} for a recent survey and more references. In \S\ref{s:applications} we apply \eqref{logreg} to local smoothing and resonant wave expansions. If $(X,g)$ is evenly asymptotically hyperbolic (in the sense of Mazzeo-Melrose \cite{m} and Guillarmou \cite{g}) and nontrapping, then for any $M_1>0$ there is $M_2>0$ such that \begin{equation}\label{e:eucbetter} \|\chi(\Delta - n^2/4 - \sigma^2)^{-1}\chi\|_{L^2(X) \to L^2(X)} \le M_2|\sigma|^{-1}, \quad |\re \sigma| \ge M_2,\, \im \sigma \ge - M_1, \end{equation} by work of Vasy \cite[(1.1)]{Vasy} (see also the analogous estimate for asymptotically Euclidean spaces in Sj\"ostrand-Zworski \cite[Theorem $1'$]{sz}). The bound \eqref{logreg} is weaker due to the presence of a cusp. Indeed, by studying low angular frequencies (which correspond to geodesics which travel far into the cusp before escaping down the funnel) in Proposition \ref{p:bessel} we show that if $(X,g) = \langle z \mapsto z+1 \rangle\backslash \Hh^2$, then \begin{equation}\label{e:low} \|\chi(\Delta - n^2/4 - \sigma^2)^{-1}\chi\|_{L^2(X) \to L^2(X)} \ge e^{-C|\im\sigma|}|\sigma|^{-1+2|\im \sigma|}/C, \end{equation} for $\sigma$ in the lower half plane and bounded away from the real and imaginary axes. The lower bound \eqref{e:low} gives a sense in which \eqref{logreg} is optimal, but finding the maximal resonance free region remains an open problem. The only known explicit example of this type is $(X,g) = \langle z \mapsto z+1 \rangle \backslash \Hh^2$, for which Borthwick \cite[\S5.3]{b} expresses the resolvent in terms of Bessel functions and shows there is only one resonance and it is simple (see also Proposition \ref{p:bessel}). On the other hand, Guillop\'e-Zworski \cite{gz} study more general surfaces, and prove that if the $0$-volume is not zero, then there are infinitely many resonances and optimal lower and upper bounds hold on their number in disks. We apply their result to our setting in \S\ref{s:examples}, giving a family of surfaces with infinitely many resonances to which our Theorem applies, but it is not clear even in this case whether or not the resonance free region given by the Theorem is optimal. The model resolvent bound \eqref{e:modelbound2b} below suggests that, if $(X,g)$ is a surface of revolution, then the methods of \S\ref{s:modelcusp} and \S\ref{s:funnel}, suitably elaborated, will allow one to replace the region $\{|\re \sigma| \ge M_2,\, \im \sigma \ge - M_1\}$ in the Theorem by the more natural \ $\{|\re \sigma| \ge M_2,\, \im \sigma \ge - M_1 \log\log|\re\sigma|\}$. In \cite[Corollary 1.2]{cv}, Cardoso-Vodev, extending work of Burq \cite{bu0, bu}, prove resolvent estimates for very general infinite volume manifolds (including the ones studied here; note that the presence of a funnel implies that the volume is infinite) which imply an exponentially small resonance free region. Our Theorem gives the first large resonance free region for a family of manifolds with cusps. For $\im \sigma=0$, \eqref{logreg} is lossless; that is to say it agrees with the result for general nontrapping operators on asymptotically Euclidean or hyperbolic manifolds (see Cardoso-Popov-Vodev \cite[(1.6)]{cpv} and references therein). However, if $(X,g)$ is asymptotically Euclidean or hyperbolic in the sense of \cite[\S 4]{Datchev-Vasy:Gluing}, then the gluing methods of that paper show that such a lossless estimate for $\im \sigma=0$ implies \eqref{e:eucbetter} for some $M_1>0$; see \cite{d-exten}. In this sense it is due to the cusp that $\Oh(|\sigma|^{-1})$ bounds hold for $\im \sigma=0$ but not in any strip containing the real axis. The Theorem also provides a first step in support of the following \begin{conj}[Fractal Weyl upper bound] Let $\Gamma$ be a geometrically finite discrete group of isometries of $\Hh^{n+1}$ such that $X = \Gamma \backslash \Hh^{n+1}$ is a smooth noncompact manifold. Let $R(X)$ denote the set of eigenvalues and resonances of $X$ included according to multiplicity, let $K \subset T^*X$ be the set of maximally extended, unit speed geodesics which are precompact, and let $m$ be the Hausdorff dimension of $K$. Then for any $C_0>0$ there is $C_1 > 0$ such that \[\#\{\sigma \in R(X)\colon |\sigma - r| \le C_0\} \le C_1 r^{(m-1)/2}.\] \end{conj} This statement is a partial generalization to the case of resonances of the Weyl asymptotic for eigenvalues of a compact manifold; such results go back to work of Sj\"ostrand \cite{s}. If $\Gamma\backslash \Hh^{n+1}$ has funnels but no cusps, this is proved in joint work with Dyatlov \cite{dd} (generalizing earlier results of Zworski \cite{z} and Guillop\'e-Lin-Zworski \cite{glz}); if $X = \Gamma \backslash \Hh^2$ has cusps but no funnels, this follows from work of Selberg \cite{sel}. When $n=1$ the remaining case is $\Gamma\backslash \Hh^2$ having both cusps and funnels. The methods of the present paper, combined with those of \cite{sz, dd}, provide a possible approach to the conjecture in this case. When $n \ge 2$ cusps can have mixed rank, and in this case even meromorphic continuation of the resolvent was proved only recently by Guillarmou-Mazzeo \cite{gm}. In \S\ref{s:prelim} we give the general assumptions on $(X,g)$ under which the Theorem holds, and deduce consequences for the geodesic flow and for the spectrum of the Laplacian. We then give examples of manifolds which satisfy the assumptions, including examples with infinitely many resonances and examples with eigenvalue. In \S\ref{s:reduce} we use a resolvent gluing method, based on one developed in joint work with Vasy \cite{Datchev-Vasy:Gluing}, to reduce the Theorem to proving resolvent estimates and propagation of singularities results for three model operators. The first model operator is semiclassically elliptic outside of a compact set, and we analyze it in \S\ref{s:rk} following \cite{sz} and \cite{Datchev-Vasy:Gluing}. In \S\ref{s:modelcusp} we study the second model operator, the model in the cusp. We use a separation of variables, a semiclassically singular rescaling, and an elliptic variant of the gluing method of \S\ref{s:reduce} to reduce its study to that of a family of one-dimensional Schr\"odinger operators for which uniform resolvent estimates and propagation of singularities results hold. The rescaling causes losses for the resolvent estimate on the real axis, and we remove these by a non-compact variant of the method of propagation of singlarities through trapped sets developed in joint work with Vasy \cite{Datchev-Vasy:Propagation}. The lower bound \eqref{e:low} shows that these losses cannot be removed for the continued resolvent; see also Bony-Petkov \cite{bp} for related and more general lower bounds in Euclidean scattering. In \S\ref{s:funnel} we study the third model operator, the model in the funnel, and we again reduce to a family of one-dimensional Schr\"odinger operators. To obtain uniform estimates we use a variant of the method of complex scaling of Aguilar-Combes \cite{ac} and Simon \cite{sim}, following the geometric approach of Sj\"ostrand-Zworski \cite{sz2}. The method of complex scaling was first adapted to such families of operators by Zworski \cite{z}, but we use here the approach of \cite{Datchev:Thesis}, which is slightly simpler and is adapted to non-analytic manifolds. The analysis in this section could be replaced by that of \cite{Vasy}, which avoids separating variables; the advantage of our approach is that it gives an estimate in a logarithmically large neighborhood of the real axis. Although we do not exploit this here, as mentioned above this improvement can probably be used to show that a larger resonance free region exists, at least when $(X,g)$ is a surface of revolution. In \S\ref{s:applications} we apply \eqref{logreg} to local smoothing and resonant wave expansions. For the latter we need the additional assumption, satisfied in the example above and in many of the examples in \S\ref{s:examples}, that $\chi (\Delta - n^2/4 - \sigma^2)^{-1} \chi$ is meromorphic in $\C$. In \S\ref{s:low} we prove \eqref{e:low} using Bessel function asymptotics. I am indebted especially to Maciej Zworski for his generous guidance, advice, and unflagging encouragement throughout the course of this project. Thanks also to Andr\'as Vasy, Nicolas Burq, John Lott, David Borthwick, Colin Guillarmou, Hamid Hezari, Semyon Dyatlov, and Richard Melrose for their interest and for their many very helpful ideas, comments, and suggestions. I am also grateful for the hospitality of the Mathematical Sciences Research Institute and of the Universit\'e Paris 13. I was partially supported by the National Science Foundation under grant DMS-0654436 and under a postdoctoral fellowship. \section{Preliminaries}\label{s:prelim} Throughout the paper $C>0$ is a large constant which may change from line to line, and estimates are always uniform for $h \in (0,h_0]$, where $h_0>0$ may change from line to line. \subsection{Assumptions}\label{s:assumptions} Let $S$ be a compact $n$ dimensional boundaryless manifold, and let \[X = \R_r \times S.\] Let $R_g > 0$, and let $g$ be a Riemannian metric on $X$ such that \begin{equation}\label{e:metricinfinity} g|_{\{\pm r > R_g\}} = dr^2 + e^{2(r + \beta(r))}dS_\pm, \end{equation} where $dS_+$ and $dS_-$ are metrics on $S$, $R_g>0$ and $\beta \in C^\infty(\R)$. We call the region $\{r < -R_g\}$ the \textit{cusp}, and the region $\{r > R_g\}$ the \textit{funnel}. \begin{figure}[htbp] \includegraphics[width=140mm]{assumptions.pdf} \caption{The manifold $X$.}\label{f:mfld} \end{figure} Suppose there is $\theta_0 \in (0,\pi/4)$ such that $\beta$ is holomorphic and bounded in the sectors $|z| > R_g,\ \min\{|\arg z|,\, |\arg -z|\} < 2\theta_0$. By Cauchy estimates, for all $k \in \N$ there are $C, C_k >0$, such that if $|z| > R_g,\ \min\{|\arg z|,\, |\arg -z|\} \le \theta_0$, then \[ |\beta^{(k)}(z)| \le C_k |z|^{-k}, \ |\im \beta(z)| \le C |\im z|/|z|. \] In particular, after possibly redefining $R_g$ to be larger, we may assume without loss of generality that, for all $r \in \R$, \begin{equation}\label{e:betahalf} |\beta'(r)| + |\beta''(r)| \le 1/4. \end{equation} In the example at the beginning of the paper $\beta \equiv 0$. When the funnel end is an exact hyperbolic funnel, $\beta(r) =C + \log(1 + e^{-2r})$ for $r> R_g$. We make two dynamical assumptions: if $\gamma \colon \R \to X$ is a maximally extended geodesic, assume $\gamma(\R)$ is not bounded and $\gamma^{-1}(\{r < -R_g\})$ is connected. See \S\ref{s:examples} for examples. \subsection{Dynamics near infinity} Let $p+1$ be the geodesic Hamiltonian, that is \[ p = \rho^2 + e^{-2(r + \beta(r))}\sigma_\pm - 1, \] in the region $\{\pm r > R_g\}$, where $\rho$ is dual to $r$, and $\sigma_\pm$ is the geodesic hamiltonian of $(S,dS_\pm)$. From this we conclude that, along geodesic flowlines, we have \[ \dot r(t) = H_p\rho = 2\rho(t), \qquad \dot\rho(t) = -H_p r = 2 \left[1 + \beta'(r(t))\right] e^{-2(r + \beta(r))}\sigma_\pm, \] so long as the trajectory remains within $\{\pm r > R_g\}$. In particular, \begin{equation}\label{e:convexity} \ddot r(t) = 4\left[1 + \beta'(r(t))\right] e^{-2(r + \beta(r))} \sigma_\pm \ge 0. \end{equation} Dividing the equation for $\dot \rho$ by $p + 1 - \rho^2$, putting $\hat \rho = \rho/\sqrt{p+1}$, and integrating we find \begin{equation}\label{tanh}\begin{split} \tanh^{-1} \hat\rho(t) - \tanh^{-1}\hat \rho(0) &= 2 \sqrt {p+1} \left(t + \int_0^t\beta'(r(s))ds\right)\\ & \ge \frac 34\ \frac{r(t) - r(0)}{\max\{\hat \rho(s): s \in [0,t]\}} , \end{split}\end{equation} where the equality holds so long as the trajectory remains in $\{\pm r > R_g\}$, and the inequality (which follows from \eqref{e:betahalf} and the equation for $\dot r $) holds when additionally $t \ge 0$, $\rho(0) \ge 0$. \subsection{The essential spectrum.}\label{spectrum} The nonnegative Laplacian is given by \begin{align*} \Delta|_{\{\pm r >R_g\}} &= D_r^2 - i n(1 + \beta'(r))D_r + e^{-2(r + \beta(r))} \Delta_{S_\pm}, \end{align*} where $D_r = -i\D_r$, and $\Delta_{S_\pm}$ is the Laplacian on $(S,dS_{\pm})$. Fix $\varphi\in C^\infty(X)$ such that \begin{equation}\label{e:phi} \varphi|_{\{|r|>R_g\}} = n(r + \beta(r))/2.\end{equation} Then \begin{equation}\label{e:vdef} \begin{split} \left.\left(e^{\varphi} \Delta e^{-\varphi}\right)\right|_{\{\pm r>R_g\} &= D_r^2 + e^{-2(r + \beta(r))} \Delta _{S_\pm} + \frac {n^2} 4 + V(r), \end{split} \end{equation} where $V(r) = \varphi'' + {\varphi'}^2 - \frac{n^2}4 = \frac n 2 \beta'' + \frac{n^2}2 \beta' + \frac{n^2}4 {\beta'}^2 .$ This shows the essential spectrum of $\Delta$ is $[n^2/4 ,\infty)$ (see for example \cite[Theorem XIII.14, Corollary 3]{rs}); the potential perturbation $V$ is relatively compact since $\beta'$ and $\beta''$ tend to zero at infinity (see for example Rellich's criterion \cite[Theorem XII.65]{rs}). In this paper we study: \begin{equation}\label{e:pdef} P \Def h^2\left(e^{\varphi} \Delta e^{-\varphi} - \frac{n^2}4\right) - 1, \end{equation} as an unbounded operator on $L^2_\varphi(X) \Def \{e^\varphi u\colon u \in L^2(X)\}$ with domain \[H^2_\varphi(X) \Def \{u \in L_\varphi^2(X)\colon e^{\varphi} \Delta e^{-\varphi} u \in L_\varphi^2(X)\} = \{e^\varphi u\colon u \in H^2(X)\}.\] We will show that for every $\chi \in C_0^\infty(X)$, $E \in (0,1)$ there exists $C_0 > 0$ such that for every $\Gamma >0$ there exist $C,h_0>0$ such that the cutoff resolvent $\chi(P-\lambda)^{-1}\chi$ continues holomorphically from $\{\im \lambda >0\}$ to $[-E,E] - i [0,\Gamma h]$ and satisfies \begin{equation}\label{e:main} \|\chi(P - \lambda)^{-1}\chi\|_{L_\varphi^2(X) \to L_\varphi^2(X)} \le C h^{-1-C_0|\im\lambda|/h}, \end{equation} uniformly for $\lambda \in [-E,E] - i [0,\Gamma h]$ and $h \in (0,h_0]$. This implies the Theorem and \eqref{logreg}. \subsection{Examples}\label{s:examples} In this section we give a family of examples of manifolds satisfying the assumptions of \S\ref{s:assumptions}. I am very grateful to John Lott for suggesting this family of examples. In this section $d_g(p,q)$ denotes the distance between $p$ and $q$ with respect to the Riemannian metric $g$, and $L_g(c)$ denotes the length of a curve $c$ with respect to $g$. Let $(\Hh^{n+1},g_h)$ be hyperbolic space with coordinates \[ (r,y) \in \R \times \R^n, \qquad g_h = dr^2 + e^{2r} dy^2. \] Let $(X, g_h)$ be a parabolic cylinder obtained by quotienting the $y$ variables to a torus: \[ X = \R \times \left(\la y \mapsto y + c_1, \dots, y \mapsto y + c_n \ra \backslash \R^n\right), \] where the $c_j$ are linearly independent vectors in $\R^n $. Let $R_g> 0$, put $dS_+ = dS_- = dy^2$, and take $\beta \in C^\infty(\R)$ satisfying all assumptions of \S\ref{s:assumptions}, including \eqref{e:betahalf}. On $\{|r| > R_g\}$ define $g$ by \eqref{e:metricinfinity}, and on $\{|r| \le R_g\}$ let $g$ be any metric with all sectional curvatures nonpositive. The calculation in the Appendix shows that the sectional curvatures in $\{|r| > R_g\}$ are nonpositive so long as \eqref{e:betahalf} holds. The two dynamical assumptions in the last paragraph of \S\ref{s:assumptions} will follow from the following classical theorem (see for example \cite[Theorem III.H.1.7]{brha}). \begin{prop}[Stability of quasi-geodesics] Let $(\Hh^{n+1},g_h)$ be hyperbolic $n+1$-space, let $p,q \in \Hh^{n+1}$, and let $\gamma_h\colon[t_1,t_2] \to \Hh^{n+1}$ be the unit speed geodesic from $p$ to $q$. Suppose $c\colon[t_1,t_2]\to\Hh^{n+1}$ satisfies $c(t_1) = p$, $c(t_2) = q$, and there is $C_1>0$ such that \begin{equation}\label{quasi}\frac 1 {C_1} |t-t'|\le d_{g_h}(c(t),c(t')) \le C_1 |t-t'|,\end{equation} for all $t,t'\in[t_1,t_2]$. Then \begin{equation}\label{conquasi}\max_{t \in [t_1,t_2]}d_{g_h}(\gamma_h(t),c(t)) \le C_2,\end{equation} where $C_2$ depends only on $C_1$. \end{prop} To apply this theorem, observe first that just as $g_h$ descends to a metric on $X$, so $g$ lifts to a metric on $\Hh^{n+1}$; call the lifted metric $g$ as well. Observe there is $C_g$ such that \begin{equation}\label{methyp}\frac 1{C_g} g_h(u,u) \le g(u,u) \le C_g g_h(u,u), \qquad u \in T_x X, \ x \in X.\end{equation} Indeed for $x$ varying in a compact set this is true for any pair of metrics, and on $\{|r|>R_g\}$ it suffices if $C_g \ge e^{2\max|\beta|}$. We will show that if $c$ is a unit speed $g$-geodesic in $\Hh^n$, then \eqref{quasi} holds with a constant $C_1$ depending only on $C_g$. Since both $g$ and $g_h$ have nonnegative curvature and hence distance-minimizing geodesics, it is equivalent to show that \begin{equation}\label{quasi2}\frac 1 {C_1} d_g(p,q) \le d_{g_h}(p,q) \le {C_1} d_g(p,q),\end{equation} holds for all $p,q \in \Hh^{n+1}$, with a constant $C_1$ which depends only on $C_g$. For this last we compute as follows: let $\gamma$ be a unit speed $g$-geodesic from $p$ to $q$. Then \[\begin{split}d_{g_h}(p,q) \le L_{g_h}(\gamma) = \int_{t_1}^{t_2} \sqrt{g_h(\dot\gamma,\dot\gamma)}dt \le\int_{t_1}^{t_2} \sqrt{C_g g(\dot\gamma,\dot\gamma)}dt = \sqrt{C_g}L_g(\gamma)=\sqrt{C_g}d_g(p,q). \end{split}\] This proves the second inequality of \eqref{quasi2}, and the first follows from the same calculation since \eqref{methyp} is unchanged if we switch $g$ and $g_h$. Let $\gamma \colon \R \to X$ be a $g$-geodesic and $\gamma_h \colon \R \to X$ a $g_h$-geodesic. For any $x \in X$ we have \[\lim_{t\to\infty}d_{g_h}(\gamma_h(t),x) = \lim_{t\to\infty}d_{g}(\gamma_h(t),x) = \infty,\] and by \eqref{conquasi} the same holds if $\gamma_h$ is replaced by $\gamma$. In particular $\gamma(\R)$ is not bounded. We check finally that $\gamma^{-1}(\{r < -R_g\})$ is connected. It suffices to check that if instead $\gamma\colon \R \to \Hh^{n+1}$ is a $g$-geodesic, then $\gamma^{-1}(\{r < -N\})$ is connected for $N$ large enough. We then conclude by redefining $R_g$ to be larger than $N$. We argue by way of contradiction. From \eqref{e:convexity} we see that $\dot r(t)$ is nondecreasing along $\gamma$ in $\{r < -R_g\}$. Hence, if $\gamma^{-1}(\{r < - N\})$ is to contain at least two intervals for some $N> R_g$, there must exist times $t_1<t_2<t_3$ such that $r(\gamma(t_1)), r(\gamma(t_3)) < - N$, $r(\gamma(t_2)) = -R_g$. Now the $g_h$-geodesic $\gamma_h\colon[t_1,t_3] \to \Hh^n$ joining $\gamma(t_1)$ to $\gamma(t_3)$ has $r(\gamma_h(t)) < -N$ for all $t \in [t_1,t_3]$. It follows that $d_{g_h}(\gamma_h(t_2),\gamma(t_2)) \ge N -R_g$, and if $N$ is large enough this violates \eqref{conquasi}. \subsubsection{Examples with infinitely many resonances.}\label{infmany} In this subsection we specialize to the case $n=1$, $\beta(r) = 0$ for $r < -R_g$, $\beta(r) = \beta_0 + \log(1 + e^{-2r})$ for $r > R_g$ and for some $\beta_0 \in \R$. Then the cusp and funnel of $X$ are isometric to the standard cusp and funnel obtained by quotienting $\Hh^2$ by a nonelementary Fuchsian subgroup (see e.g. \cite[\S2.4]{b}). In particular there is $\ell >0$ such that \[X = \R_r \times (\R/\ell\Z)_t, \qquad g|_{\{r > R_g\}} = dr^2 + \cosh^2r dt^2.\] If $(X_0, g_0) = [0,\infty) \times (\R/\ell\Z), \ g_0 = dr^2 + \cosh^2r dt^2,$ then the $0$-volume of $X$ is \[0\,\textrm{-}\vol(X) \Def \vol_g(X \cap \{r < R_g\}) - \vol_{g_0}(X_0 \cap \{r < R_g\}).\] Let $R_\chi(\sigma)$ denote the meromorphic continuation of $\chi (\Delta - 1/4 - \sigma^2)^{-1} \chi$. In this case, $R_\chi(\sigma)$ is meromorphic in $\C$ (\cite{mm, gz}), and near each pole $\sigma_0$ we have \[R_\chi(\sigma) = \chi \left(\sum_{j=1}^k \frac {A_j}{(\sigma - \sigma_0)^j} + A(\sigma)\right) \chi,\] where the $A_j\colon L^2_{\textrm{comp}}(X) \to L^2_{\textrm{loc}}(X)$ are finite rank and $A(\sigma)$ is holomorphic near $\sigma_0$. The \emph{multiplicity} of a pole, $m(\sigma_0)$ is given by $m(\sigma) \Def \rank\left(\sum_{j=1}^k A_j\right).$ \begin{prop}\cite[Theorem 1.3]{gz} If $0$-$\vol(X) \ne 0$, then there exists a constant $C$ such that \[\lambda^2/C \le \sum_{|\sigma|\le \lambda}m(\sigma) \le C\lambda^2, \qquad \lambda > C.\] \end{prop} We can ensure that $0$-$\vol(X) \ne 0$ by adding, if necessary, a small compactly supported metric perturbation to $g$. Then, as $\lambda \to \infty$, the meromorphic continuation of $R_\chi$ will have $\sim \lambda^2$ many poles in a disk of radius $\lambda$, but none of them will be in the strips \eqref{logreg}. \subsubsection{Examples with eigenvalue}\label{exeigensec} In this subsection we consider examples of the form \begin{equation}\label{exampeigen}X = \R \times (\R^n \slash \Z^n) \qquad g = dr^2+ \exp \left(2r + 2\int_{-\infty}^r b\right)dy^2, \qquad b \in C_0^\infty(\R).\end{equation} By the Appendix, $(X,g)$ is nonpositively curved if $b' + (b + 1)^2 \ge 0$ everywhere, e.g. if $b \ge -1/2$ and $b' \ge -1/4$; then all the assumptions of \S \ref{s:assumptions} hold. We will give a sufficient condition on $b$ such that $X$ has at least one eigenvalue, and also infinitely many resonances. By the calculation in \S\ref{spectrum}, if $\varphi(r)=- \frac n 2\left( r + \int_{-\infty}^r b\right)$ for all $r \in \R$, then \[e^{-\varphi} \Delta e^{\varphi} = D_r^2 + e^{-2(r + \int^r b)} \Delta_{\R^n/\Z^n} + \frac{n^2} 4 + V(r), \quad V(r) \Def \frac n 2 b'(r) + \frac {n^2} 4 b(r)^2 + \frac {n^2} 2 b(r).\] Observe that $V \in C_0^\infty(\R)$, and consequently (see for example \cite[Theorem XIII.110]{rs}) for $D_r^2 + V(r)$ to have a negative eigenvalue it is sufficient to ensure that \[\int_{-\infty}^\infty V(r)dr < 0.\] But in \cite[Theorem 2]{z87} Zworski shows that if $V \not\equiv 0$, the operator $D_r^2 + V(r)$ has infinitely many resonances: indeed the number in a disk of radius $\lambda$ is given by \[\frac 2 \pi |\chsupp V| \lambda + o(\lambda),\qquad \lambda \to \infty,\] where $\chsupp$ denotes the convex hull of the support. This eigenvalue and these resonances correspond to an eigenvalue and resonances for $\Delta$: one multiplies the eigenfunction and resonant states by $e^{\varphi}$ and regards them as functions on $X$ which depend on $r$ only. In summary if $(X,g)$ is given by \eqref{exampeigen}, then the assumptions of \S\ref{s:assumptions} hold if $b \ge -1/2$, $b' \ge -1/4$. It has infinitely many resonances and at least one eigenvalue if $b \not\equiv 0$, $b \le 0$. \subsection{Pseudodifferential operators}\label{secpseudor} In this section we review some facts about semiclassical pseudodifferential operators, following \cite{ds} and \cite{ez}. \subsubsection{Pseudodifferential operators on $\R^n$} For $m \in \R$, $\delta \in [0,1/2)$ let $S_\delta^m(\R^n)$ be the symbol class of functions $a = a_h(x,\xi) \in C^\infty(T^*\R^n)$ satisfying \begin{equation}\label{symboldef} \left|\D^\alpha_x \D^\beta_\xi a\right| \le C_{\alpha,\beta} h^{-\delta(|\alpha| + |\beta|)} (1+|\xi|^2)^{(m-|\beta|)/2}, \end{equation} uniformly in $T^*\R^n$. The \textit{principal symbol} of $a$ is its equivalence class in $S_\delta^{m}(\R^n) / h S_\delta^{m-1}(\R^n)$. Let $S^m(\R^n) = S^m_0(\R^n)$. We quantize $a \in S_\delta^m(\R^n)$ to an operator $\Op(a)$ using the formula \begin{equation}\label{quantdef}(\Op(a) u)(x) = \frac 1 {(2\pi h)^n} \int\!\!\!\int e^{i(x-y)\cdot\xi/h}a\left(h,x,\xi\right)u(y)dyd\xi,\end{equation} and put $\Psi_\delta^m(\R^n) = \{\Op(a)| a \in S_\delta^m(\R^n)\}$, $\Psi^m(\R^n) = \Psi^m_0(\R^n)$. If $A = \Op(a)$ then $a$ is the \textit{full symbol} of $A$, and the principal symbol of $A$ is the principal symbol of $a$. If $A \in \Psi_\delta^m(\R^n)$, then for any $s \in \R$ we have $\|A\|_{H^{s+m}_h(\R^n) \to H^s_h(\R^n)} \le C$, where (if $\Delta \ge0$) \[\|u\|_{H^s_h(\R^n)} = \|(1 + h^2\Delta)^{s/2}u\|_{L^2(\R^n)}.\] If $A \in \Psi_\delta^m(\R^n)$ and $B \in \Psi_\delta^{m'}(\R^n)$, then $AB \in \Psi_\delta^{m+m'} (\R^n)$ and $[A,B] = AB - BA \in h^{1-2\delta}\Psi_\delta^{m+m'-1} (\R^n)$. If $a, b$ are the principal symbols of $A, B$, then the principal symbol of $h^{2\delta-1}[A,B]$ is $i H_ba$, where $H_b$ is the Hamiltonian vector field of $b$. If $K \subset T^*\R^n$ has either $K$ or $T^*\R^n \setminus K$ bounded in $\xi$, then $a \in S_\delta^m(\R^n)$ is \emph{elliptic} on~$K$~if \begin{equation}\label{ellipdef}|a| \ge (1+|\xi|^2)^{m/2}/C,\end{equation} uniformly for $(x,\xi) \in K$. We say that $A \in \Psi_\delta^m(\R^n)$ is elliptic on $K$ if its principal symbol is. For such $K$, we say $A$ is \textit{microsupported} in $K$ if the full symbol $a$ of $A$ obeys \begin{equation}\label{e:microsuppdef} |\D_x^\alpha \D^\beta_\xi a| = C_{\alpha,\beta,N}h^N (1 + |\xi|^2)^{-N} \end{equation} uniformly on $T^*\R^n\setminus K$, for any $\alpha, \beta, N$. If $A_1$ is microsupported in $K_1$ and $A_2$ is microsupported in $K_2$, then $A_1A_2$ is microsupported in $K_1 \cap K_2$. If $A \in \Psi^m_\delta(\R^n)$ is elliptic on $K$, then it is invertible there in the following sense: there exists $G \in \Psi^{-m}_\delta(\R^n)$ such that $AG - \Id$ and $GA - \Id$ are both microsupported in $T^*X \setminus K$. Hence if $B \in \Psi_\delta^{m'}(\R^n)$ is microsupported in $K$ and $A$ is elliptic in an $\eps$-neighborhood of $K$ for some $\eps > 0$, then, for any $s,N \in \R$. \begin{equation}\label{ellipestrn} \|Bu\|_{H^{s+m}_h(\R^n)} \le C \|ABu\|_{H^{s}_h(\R^n)} + \Oh(h^\infty)\|u\|_{H^{-N}_h(\R^n)}.\end{equation} The \textit{sharp G\aa rding inequality} says that if the principal symbol of $A \in \Psi_\delta^m(\R^n)$ is nonnegative near $K$ and $B \in \Psi_\delta^{m'}(\R^n)$ is microsupported in $K$, then \begin{equation}\label{gardingrn}\la A B u, B u \ra_{L^2(\R^n)} \ge -Ch^{1-2\delta} \|B u\|^2_{H^{(m-1)/2}(\R^n)} - \Oh(h^\infty)\|u\|_{H^{-N}_h(\R^n)}.\end{equation} \subsubsection{Pseudodifferential operators on a manifold}\label{secpseudoman} These results extend to the case of a noncompact manifold $X$, provided we require our estimates to be uniform only on compact subsets of $X$. We formulate our estimates for $L^2_\varphi(X)$ and its associated Sobolev spaces, but of course this choice of density is not essential. Write $S^m_\delta(X)$ for the symbol class of functions $a \in C^\infty( T^*X)$ satisfying \eqref{symboldef} on coordinate patches (note that this condition is invariant under change of coordinates). The principal symbol of $a$ is its equivalence class in $S_\delta^m(X) / hS_\delta^{m-1}(X)$, and let $S^m(X) = S^m_0(X)$. Let $h^\infty \Psi^{-\infty}(X)$ be the set of linear operators $R$ such that for any $\chi \in C_0^\infty(X)$, we have $\|\chi R\|_{H^{-N}_{\varphi,h}(X) \to H^N_{\varphi,h}(X)} + \|R \chi \|_{H^{-N}_{\varphi,h}(X) \to H^N_{\varphi,h}(X)} \le C h^N$ for any $N$, where \begin{equation}\label{e:hphidef} \|u\|_{H^s_{\varphi,h}(X)} \Def \|(2+P)^{s/2}u\|_{L_\varphi^2(X).} \end{equation} We quantize $a \in S_\delta^{m}(X)$ to an operator $\Op(a)$ by using a partition of unity and the formula \eqref{quantdef} in coordinate patches. Let $\Psi_\delta^{m}(X) = \{\Op(a) + R | a \in S_\delta^m(X), R \in h^\infty\Psi^{-\infty}(X)\}$. The quantization $\Op$ depends on the choices of coordinates and partition of unity, but the class $\Psi_\delta^{m}(X)$ does not. If $A \in \Psi_\delta^{m}(X)$ and $\chi \in C_0^\infty(X)$, then $\chi A$ and $A \chi$ are bounded $H^{s+m}_{\varphi,h}(X) \to H^{s}_{\varphi,h}(X)$. If $A \in \Psi_\delta^{m}(X)$ and $B \in \Psi_\delta^{m'}(X)$, then $AB \in \Psi_\delta^{m+m'} (X)$ and $h^{2\delta-1}[A,B] \in \Psi_\delta^{m+m'-1} (X)$. If $a, b$ are the principal symbols of $A$ and $B$ (the principal symbol is invariantly defined, although the total symbol is not), then the principal symbol of $h^{2\delta-1}[A,B]$ is $i H_ba$, where $H_b$ is the Hamiltonian vector field of $b$. Let $K \subset T^*X$ have either $K \cap T^*U$ bounded for every bounded $U \subset X$, or $T^*U \setminus K$ bounded for every bounded $U \subset X$. We say $a \in S_\delta^m(X)$ is \emph{elliptic} on $K$ if \eqref{ellipdef} holds uniformly on $T^*U \cap K$ for every bounded $U \subset X$. We say that $A \in \Psi_\delta^m(X)$ is elliptic on $K$ if its principal symbol is. We say $A$ is \textit{microsupported} in $K$ if a full symbol $a$ of $A$ obeys \eqref{e:microsuppdef} uniformly on $T^*U \setminus K$ for every bounded $U \subset X$ and for any $\alpha, \beta, N$ (note that if this holds for one full symbol of $A$, it also does for all the others). If $B \in \Psi^{m'}_\delta(X)$ is microsupported in $K$ and $A$ is elliptic in an $\eps$-neighborhood of $K$ for some $\eps > 0$, then, for any $s,N \in \R$ and $\chi \in C_0^\infty(X)$, \begin{equation}\label{ellipestx} \|B\chi u\|_{H^{s+m}_{\varphi,h}(X)} \le C \|AB \chi u\|_{H^{s}_{\varphi,h}(X)} + \Oh(h^\infty)\|\chi u\|_{H^{-N}_{\varphi,h}(X)}.\end{equation} The \textit{sharp G\aa rding inequality} says that if the principal symbol of $A \in \Psi_\delta^m(X)$ is nonnegative near $K$ and $B \in \Psi_\delta^{m'}(X)$ is microsupported in $K$, then for every $\chi \in C_0^\infty(X)$, $N \in \R$, \begin{equation}\label{gardingx}\la A B \chi u, B \chi u \ra_{L^2_{\varphi}(X)} \ge -Ch^{1-2\delta} \|B \chi u\|^2_{H^{(m-1)/2}_{\varphi,h}(X)} - \Oh(h^\infty)\|\chi u\|_{H^{-N}_{\varphi,h}(X)}.\end{equation} \subsubsection{Exponentiation of operators}\label{expop} For $q \in C_0^\infty(T^*X)$, $Q$ a quantization of $q$, and $\eps \in[0,C_0 h\log(1/h)]$, we will be interested in operators of the form $e^{\eps Q/h}$. We write \[e^{\eps Q/h} = \sum_{j=0}^\infty \frac {(\eps/h)^j}{j!} Q^j,\] with the sum converging in the $H^s_{\varphi,h}(X) \to H^s_{\varphi,h}(X)$ norm operator topology, but the convergence is not uniform as $h \to 0$. Beals's characterization \cite[Theorem 9.12]{ez} can be used to show that $e^{\eps Q/h} \in \Psi^{0}_\delta(X)$ for any $\delta>0$, but we will not need this. Let $s \in \R$. Then \begin{equation}\label{e:expest} \left\|e^{\eps Q/h}\right\| \le \sum_{j=0}^\infty \frac{(C_0\log(1/h))^j}{j!} \|Q\|^j = e^{C_0 \log(1/h)\|Q\|} = h^{-C_0 \|Q\|}, \end{equation} where all norms are $H^s_{\varphi,h}(X) \to H^s_{\varphi,h}(X)$. If $A \in \Psi_\delta^{m}(X)$ is bounded $H^{s+m}_{\varphi,h}(X) \to H^s_{\varphi,h}(X)$ (without needing to be multiplied by a cutoff), then, by \eqref{e:expest}, \begin{equation}\label{e:eepsfirst} \|e^{\eps Q/h}A e^{-\eps Q/h}\|_{H^{s+m}_{\varphi,h}(X) \to H^s_{\varphi,h}(X)} \le C h^{-N} \end{equation} for any $s \in \R$, where $N = C_0(\|Q\|_{H^{s+m}_{\varphi,h}(X) \to H^{s+m}_{\varphi,h}(X)} + \|Q\|_{H^s_{\varphi,h}(X) \to H^s_{\varphi,h}(X)})$. But, writing $\ad_Q A = [Q,A]$ and $e^{\eps Q/h}A e^{-\eps Q/h} = e^{\eps \ad_{Q}/h}A$, for any $J \in \N$ we have the Taylor expansion \begin{equation}\label{e:tayloradj} e^{\eps Q/h}A e^{-\eps Q/h} = \sum_{j=0}^J \frac {\eps^j}{j!} \left(\frac{\ad_{Q}}h\right)^j A + \frac {\eps^{J+1}}{J!} \int_0^1(1-t)^J e^{-\eps t\ad_{Q}/h} \left(\frac{\ad_{Q}}h\right)^{J+1} A dt. \end{equation} For any $M \in \N$, the integrand maps $H^{M}_{\varphi,h}(X) \to H^{-M}_{\varphi,h}(X)$ with norm $\Oh(h^{-2\delta(J+1)-N})$, $N = C_0(\|Q\|_{H^{M}_{\varphi,h}(X) \to H^{M}_{\varphi,h}(X)} + \|Q\|_{H^{-M}_{\varphi,h}(X) \to H^{-M}_{\varphi,h}(X)})$. Hence applying \eqref{e:tayloradj} with $J$ sufficiently large we see that \eqref{e:eepsfirst} can be improved to \[ \|e^{\eps Q/h}A e^{-\eps Q/h}\|_{H^{s+m}_{\varphi,h}(X) \to H^s_{\varphi,h}(X)} \le C, \] and the integrand in \eqref{e:tayloradj} maps $H^{M}_{\varphi,h}(X) \to H^{-M}_{\varphi,h}(X)$ with norm $\Oh(1)$. Applying \eqref{e:tayloradj} with $J \to \infty$ shows that $e^{\eps Q/h}A e^{-\eps Q/h} \in \Psi_\delta^m(X)$, and applying \eqref{e:tayloradj} with $J = 1$ we find \begin{equation}\label{expexp}e^{\eps Q/h}A e^{-\eps Q/h} = A - \eps [A,Q/h] + \eps^2 h^{-4\delta} R,\end{equation} where $R \in \Psi^{-\infty}_\delta(X)$. \section{Reduction to estimates for model operators}\label{s:reduce} \subsection{Resolvent gluing}\label{s:glue} We reduce \eqref{e:main} to a series of estimates for model operators using a variant of the gluing method of \cite{Datchev-Vasy:Gluing}, adapted to the dynamics on $X$. Let $P_C, P_K, P_F$ be \textit{model operators} for $P$ in the sense that they satisfy \[ P_C|_{\{r < - R_g\}} = P|_{\{r < - R_g \}}, \quad P_K|_{\{|r| < R_g + 3\}} = P|_{\{|r| < R_g + 3\}}, \quad P_F|_{\{r > R_g\}} = P|_{\{r > R_g\}}. \] So $P_C$ is a model in the cusp, $P_F$ is a model in the funnel, and $P_K$ is a model in a neighborhood of the remaining region (see Figure \ref{f:mfld}). We will construct the operators such that $i(P_j - P_j^*)= 2W_j$ for each $j \in \{C,K,F\}$, where $W_j \in C^\infty(X;[0,1])$ will be specified below. Note that $W_j \ge 0$ implies $ \la \im P_j u, u \ra_{L^2_\varphi(X)} \le 0$ and hence \[ \|u\|_{L^2_\varphi(X)} \le (\im \lambda)^{-1} \|(P_j - \lambda)u\|_{L^2_\varphi(X)}, \quad \im \lambda > 0. \] Combining this with \eqref{e:hphidef} gives, for any $\chi_j \in C^\infty(X)$ bounded with all derivatives and satisfying $\supp \chi_j \subset \{P_j = P\}$, \begin{equation}\label{e:modelboundj0} \max_{j \in \{C,K,F\}}\|\chi _j R_j(\lambda) \chi_j \|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} \le C(|\lambda| + (\im \lambda)^{-1}), \quad \im \lambda > 0. \end{equation} Moreover we will construct $P_C, P_K,P_F$ such that for every $\chi \in C_0^\infty(X)$, $E \in (0,1)$, there is $C_0 > 0$ such that for all $\Gamma > 0$ the cutoff resolvents $\chi R_j(\lambda) \chi$ continue holomorphically to $\lambda \in [-E,E] + i[-\Gamma h,\Gamma h]$, where they satisfy \begin{equation}\label{e:modelboundj} \max_{j \in \{C,K,F\}} \|\chi R_j(\lambda) \chi\|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} \le C h^{-1 - C_0 |\im \lambda|/h}. \end{equation} Here $\chi$, $E$, $C_0$, and $\Gamma$ are the same as in \eqref{e:main}, but as elsewhere in the paper the constant $C$ and the implicit constant $h_0$ may be different. We will also show that the $R_j(\lambda)$ propagate singularities forward along bicharacteristics, in the following limited sense. Let $\chi_1 \in C_0^\infty(X)$ and let $\chi_2,\chi_3 \in \Psi^1(X)$ be compactly supported differential operators. If $\supp \chi_1 \cup \supp \chi_3 \subset \{r < R_g+2\}$ and $\supp \chi_2 \subset \{r>R_g+2\}$, then, for any $N \in \N$, \begin{equation}\label{e:propsingfk} \|\chi_3 R_F(\lambda) \chi_2 R_K(\lambda) \chi_1\| _{L^2_\varphi(X) \to L^2_\varphi(X)}= \Oh(h^\infty), \end{equation} uniformly in $| \re \lambda| \le E$, $\im \lambda \in [-\Gamma h, h^{-N}]$. If $\supp \chi_1 \cup \supp \chi_3 \subset \{r < - R_g-2\}$ and $\supp \chi_2 \subset \{r>-R_g-2\}$, then, for any $N \in \N$, \begin{equation}\label{e:propsingkc} \|\chi_3 R_K(\lambda) \chi_2 R_C(\lambda) \chi_1\|_{L^2_\varphi(X) \to L^2_\varphi(X)} = \Oh(h^\infty) \end{equation} uniformly in$| \re \lambda| \le E$, $\im \lambda \in [-\Gamma h, h^{-N}]$. Note that in the first case \eqref{e:convexity} implies that no bicharacteristic passes through $T^*\supp \chi_1$, $T^*\supp\chi_2$, $T^*\supp\chi_3$ in that order, and in the second case this is implied by \eqref{e:convexity} together with the assumption that $\gamma^{-1}(\{r<-R_g\})$ is connected for any geodesic $\gamma \colon \R \to X$. We will use these facts in the proofs of \eqref{e:propsingfk} and \eqref{e:propsingkc} below. Suppose for the remainder of the subsection that $P_C,P_K,P_F$ have been constructed. Let $\chi_C,\chi_K,\chi_F \in C^\infty(\R)$ satisfy $\chi_C + \chi_K + \chi_F = 1$, $\supp \chi_F \subset (R_g + 1, \infty)$, $\supp (1-\chi_F) \subset (R_g+2,\infty)$, and $\chi_C(r) = \chi_F(-r)$ for all $r \in \R$. Then define a parametrix for $P-\lambda$ by \[ G = \chi_C(r - 1)R_C(\lambda) \chi_C(r) + \chi_K(|r - 1|)R_C(\lambda) \chi_K(|r|) + \chi_F(r + 1)R_F(\lambda) \chi_F(r). \] Then $G$ is defined for $\im \lambda > 0$ and $\chi G \chi$ continues holomorphically to $\lambda \in [-E,E] - i[0,\Gamma h]$. Define operators $A_C,A_K,A_F$ by \[\begin{split} (P - \lambda)G &= \Id + [\chi_C(r - 1),h^2D_r^2]R_C(\lambda) \chi_C(r) + [\chi_K(|r - 1|),h^2D_r^2] R_K(\lambda) \chi_K(|r|) \\ & \hspace{2.7in} + [\chi_F(r + 1),h^2D_r^2]R_F(\lambda) \chi_F(r) \\ &= \Id + A_C + A_K + A_F; \end{split}\] see Figure \ref{f:gluing}. \begin{figure}[htbp] \includegraphics[width=140mm]{gluing} \caption{The remainders $A_C$, $A_K$, and $A_F$ are localized on the right in the region to the back of the arrows, and on the left near the tips of the arrows ($A_C$ is localized on the right at the support of $\chi_C$ and on the left at the support of $\chi_C'(\cdot-1)$, and so on), and this implies \eqref{e:a2}. They are microlocalized on the left in the indicated directions, and this implies \eqref{e:remtriv} (since, by \eqref{e:convexity}, no geodesic can follow one of the $A_K$ arrows and then the $A_F$ arrow, and so on).}\label{f:gluing} \end{figure} The estimates \eqref{e:modelboundj0} and \eqref{e:modelboundj} only allow us to remove the remainders $A_C,A_K,A_F$ by Neumann series for a narrow range of $\lambda$. To obtain improved remainders, observe that the support properties of the $\chi_j$ imply that \begin{equation}\label{e:a2} A_C^2 = A_K^2 = A_F^2 = A_C A_F = A_FA_C = 0; \end{equation} so, solving away using $G$, we obtain \[ (P - \lambda) G(\Id - A_C - A_K - A_F) = \Id - A_KA_C - A_CA_K - A_FA_K - A_KA_F. \] Now the propagation of singularities estimates \eqref{e:propsingfk} and \eqref{e:propsingkc} impl \begin{equation}\label{e:remtriv} \|A_FA_K \|_{L^2_\varphi(X) \to L^2_\varphi(X)} + \| A_CA_KA_CA_K\|_{L^2_\varphi(X) \to L^2_\varphi(X)} = \Oh(h^\infty), \end{equation} In this sense the $A_FA_K$ remainder term is negligible. We again use \eqref{e:a2} to write \[\begin{split} (P - \lambda) &G(\Id - A_C - A_K - A_F +A_KA_C + A_C A_K + A_K A_F) = \\ &\Id - A_FA_K + A_CA_KA_C + A_FA_KA_C + A_KA_CA_K + A_CA_KA_F + A_KA_FA_K. \end{split}\] Now all remainders but $A_CA_KA_C$, $A_KA_CA_K $, and $A_CA_KA_F$ are negligible in the sense of \eqref{e:remtriv}. Solving away again gives \[\begin{split} (P - \lambda) G(\Id - A_C - &A_K - A_F + A_KA_C + A_C A_K + A_K A_F \\- A_C&A_KA_C - A_KA_CA_K - A_CA_KA_F) = \\ \Id &- A_FA_K + A_FA_KA_C + A_KA_FA_K \\&-A_KA_CA_KA_C - A_CA_KA_CA_K - A_FA_KA_CA_K - A_KA_CA_KA_F. \end{split}\] Now all remainders but $A_KA_CA_KA_C$ are negligible. Solving away one last time gives \[\begin{split} (P - \lambda) G&(\Id - A_C - A_K - A_F + A_KA_C + A_C A_K + A_K A_F \\- A_C&A_KA_C - A_KA_CA_K - A_CA_KA_F + A_KA_CA_KA_C) = \\ \Id &- A_FA_K + A_CA_KA_C+ A_FA_KA_C + A_KA_FA_K - A_CA_KA_CA_K \\& - A_FA_KA_CA_K - A_KA_CA_KA_F + A_CA_KA_CA_KA_C + A_FA_KA_CA_KA_C = \Id + R, \end{split}\] where $R$ is defined by the equation, and $\|R\|_{L^2_\varphi(X) \to L^2_\varphi(X)} = \Oh(h^\infty)$. So for $h$ small enough we may write \[\begin{split} (P-\lambda)^{-1} = G\Big(&\Id - A_C - A_K - A_F + A_KA_C + A_C A_K + A_K A_F\\& - A_CA_KA_C - A_KA_CA_K - A_CA_KA_F + A_KA_CA_KA_C\Big)\sum_{k=0}^\infty (-R)^k. \end{split}\] Combining this equation with \eqref{e:modelboundj}, we see that $\chi(P-\lambda)^{-1}\chi$ continues to holomorphically to $|\re \lambda| \le E$, $\im \lambda \ge -\Gamma h$ and obeys \[ \| \chi(P-\lambda)^{-1}\chi\|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} \le C h^{-1 - 5C_0|\im \lambda|/h}. \] In summary, to prove \eqref{e:main} (and hence \eqref{logreg}), it remains to construct $P_C,P_K,P_F$ which satisfy \eqref{e:modelboundj0}, \eqref{e:modelboundj}, \eqref{e:propsingfk} and \eqref{e:propsingkc}. We conclude this subsection by stating two Propositions which contain the estimates we will prove for $R_K(\lambda)$, after which we show how they reduce \eqref{e:propsingfk} and \eqref{e:propsingkc} to simpler propagation of singularities estimates for $R_F(\lambda)$ and $R_C(\lambda)$ respectively, namely \eqref{e:modelpropf} and \eqref{e:modelprop}. In the next subsection we construct $P_K$ and prove the two Propositions. \begin{prop}\label{p:rkbound} For any $E \in (0,1)$ there is $C_0>0$ such that for any $M>0$ there are $C,h_0>0$ such that \begin{equation}\label{e:rkbound} \|R_K(\lambda)\|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} \le C \begin{cases} h^{-1} + |\lambda|, \qquad & \im \lambda > 0, \\ h^{-1} e^{C_0 |\im \lambda|/h}, \qquad &\im \lambda \le 0, \end{cases} \end{equation} for $|\re \lambda| \le E$, $- Mh\log(1/h) \le \im \lambda $, $h \in (0,h_0]$. \end{prop} \begin{prop}\label{p:rkprop} Let $\Gamma \in \R$, $E \in (0,1)$. Let $A,B \in \Psi^0(X)$ have full symbols $a$ and $b$ with the projections to $X$ of $\supp a$ and $\supp b$ compact and suppose that \begin{equation}\label{e:rkpropdyn} \supp a \cap \left[\supp b \cup \bigcup_{t \ge 0} \exp(tH_p) \left[ p^{-1}([-E,E]) \cap \supp b\right]\right] = \varnothing, \end{equation} where $\exp(tH_p)$ is the bicharacteristic flow of $p$, then, for any $N \in \N$, \begin{equation}\label{e:rkprop} \|AR_K(\lambda)B\|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} = \Oh(h^\infty), \end{equation} for $|\re \lambda| \le E$, $ -\Gamma h \le \im \lambda \le h^{-N}$. \end{prop} Take $\varphi \in C^\infty(\R)$, bounded with all derivatives and supported in $(0,\infty)$, and take $\widetilde \chi_2, \ \widetilde \chi_3 \in C_0^\infty(X)$ such that $\supp \widetilde\chi_2 \subset \{r>R_g+2\}$ and $\widetilde\chi_3 \subset \{r < R_g+2\}$, and such that $\widetilde\chi_2 \chi_2 = \chi_2 \widetilde\chi_2 = \chi_2$ and $\widetilde\chi_3 \chi_3 = \chi_3 \widetilde\chi_3 = \chi_3$. Then \eqref{e:propsingfk} follows from \begin{equation}\label{e:propsingbreak1} \|\widetilde \chi_3 R_F \widetilde \chi_2 \varphi(h D_r) \| _{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} + \|\widetilde \chi_2 (\Id - \varphi(hD_r)) R_K \chi_1\| _{L^2_\varphi(X) \to H^2_{\varphi,h}(X)}= \Oh(h^\infty). \end{equation} The estimate on the first term follows from \eqref{e:modelpropf} below, while the estimate on the second term follows from \eqref{e:rkprop} if $\supp(1-\varphi)$ is contained in a sufficiently small neighborhood of $(-\infty,0]$; it suffices to take a neighborhood small enough that no bicharacteristic in $p^{-1}([-E,E])$ goes from $T^*\supp \chi_1$ to $(T^*\supp \widetilde \chi_2) \cap \supp(1-\varphi(\rho))$, where $\rho$ is the dual variable to $r$ in $T^*X$, and such a neighborhood exists by \eqref{tanh} because when a bicharacteristic leaves $T^*\supp \chi_1$ it has $\rho \ge 0$, and \eqref{tanh} gives a minimum amount by which $\rho$ must grow in the time it takes the bicharacteristic to reach $T^*\supp \widetilde \chi_2$. An analogous argument reduces \eqref{e:propsingkc} to \eqref{e:modelprop}: the analog of \eqref{e:propsingbreak1} is \[\|\widetilde \chi_3 R_K (\Id - \varphi(h D_r)) \widetilde \chi_2\| _{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} + \| \varphi(hD_r) \widetilde \chi_2 R_C \chi_1\| _{L^2_\varphi(X) \to H^2_{\varphi,h}(X)}= \Oh(h^\infty),\] where $\varphi \in C^\infty(\R)$ is bounded with all derivatives and supported in $(-\infty,0)$, and $\widetilde \chi_2, \ \widetilde \chi_3 \in C_0^\infty(X)$ have $\supp \widetilde\chi_2 \subset \{r>-R_g-2\}$ and $\widetilde\chi_3 \subset \{r < -R_g-2\}$, and such that $\widetilde\chi_2 \chi_2 = \chi_2 \widetilde\chi_2 = \chi_2$ and $\widetilde\chi_3 \chi_3 = \chi_3 \widetilde\chi_3 = \chi_3$. \subsection{Model operator in the nonsymmetric region}\label{s:rk} In this subsection we define $P_K$ and prove Propositions \ref{p:rkbound} and \ref{p:rkprop}. Although the techniques involved are all essentially well known, we go over them in some detail here because they are important in the more complicated analysis of $P_C$ and $P_F$ below. Let $W_K \in C^\infty(X;[0,1])$ be $0$ near $\{|r| \le R_g+3\}$, and $1$ near $\{|r| \ge R_g + 4\}$, and let \[ P_K = P - iW_K. \] We begin with the proof of Proposition 3.1, which follows \cite[\S4]{sz}. Fix \[ E_0 \in (E,1), \qquad \eps = 10Mh\log(1/h). \] We will use the assumption that the flow is nontrapping to construct an \textit{escape function} $q \in C_0^\infty(T^*X)$, that is to say a function such that \begin{equation}\label{e:escfunck}\begin{split} H_p q &\le -1 \textrm{ near } T^*\supp(1-W_K) \cap p^{-1}([-E_0,E_0]). \end{split}\end{equation} The construction will be given below. Then let $Q \in \Psi^{-\infty}(X)$ be a quantization of $q$, and \[ P_{K,\eps} = e^{\eps Q/h}P_K e^{-\eps Q/h} = P_K - \eps [P_K,Q/h] + \eps^2 R, \] where $R \in \Psi^{-\infty}(X)$ (see \eqref{expexp}). We will prove that \begin{equation}\label{e:pkepsest} \|(P_{K,\eps} - E')^{-1}\|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} \le 5 /\eps, \qquad E' \in [-E_0,E_0], \end{equation} from which it follows, using first the openness of the resolvent set and then \eqref{e:expest}, that \begin{equation}\label{e:pkest1} \|(P_K - \lambda)^{-1}\|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} \le \frac{h^{-N}}{M \log(1/h)}, \quad |\re \lambda| \le E_0, \ |\im \lambda| \le M h \log(1/h), \end{equation} where $N=10M(\|Q\|_{H^2_{\varphi,h}(X) \to H^2_{\varphi,h}(X)} + \|Q\|_{L^2_\varphi(X) \to L^2_{\varphi}(X)} )+1$. Then we will show how to use complex interpolation to improve \eqref{e:pkest1} to \eqref{e:rkbound}. \begin{proof}[Construction of $q \in C_0^\infty(T^*X)$ satisfying \eqref{e:escfunck}.] As in \cite[\S 4]{vz}, we take $q$ of the form \begin{equation}\label{e:qdefk} q = \sum_{j=1}^J q_j, \end{equation} where each $q_j$ is supported near a bicharacteristic in $T^*\supp(1-W_K) \cap p^{-1}([-E_0,E_0])$. First, for each $\wp \in T^*\supp(1-W_K) \cap p^{-1}([-E_0,E_0])$, define the following \textit{escape time}: \[ T_\wp = \inf\{T \in \R\colon |t| \ge T-1 \Rightarrow \exp(tH_p)\wp \not\in T^*\supp(1-W_K)\}. \] Then put \[ T = \max\{T_\wp\colon \wp \in T^*\supp(1-W_K) \cap p^{-1}([-E_0,E_0])\}. \] Note that the nontrapping assumption in \S\ref{s:assumptions} implies that $T < \infty$. Let $\mathcal{S}_\wp$ be a hypersurface through $\wp$, transversal to $H_p$ near $\wp$. If $U_\wp$ is a small enough neighborhood of $\wp$, then \[ V_\wp = \{\exp(tH_p)\wp' \colon \wp' \in U_\wp \cap \mathcal{S}_\wp, |t| <T+1\} \] is diffeomorphic to $\R^{2n-1} \times (-T-1,T+1)$ with $\wp$ mapped to $(0,0)$. Denote this diffeomorphism by $(y_\wp,t_\wp)$. Further shrinking $U_\wp$ if necessary, we may assume the inverse image of $\R^{2n-1} \times \{|t|\ge T\}$ is disjoint from $T^*\supp(1-W_K)$. Then take $\varphi \in C_0^\infty(\R^{2n-1};[0,1])$ identically $1$ near $0$, and $\chi \in C_0^\infty((-T-1,T+1))$ with $\chi' = -1$ near $[-T,T]$, and put \[ q_\wp = \varphi (y_\wp) \chi (t_\wp), \qquad H_p q_\wp = \varphi(y_\wp)\chi'(t_\wp). \] Note $H_pq_\wp \le0$ on $T^*\supp(1-W_K)$ because $\chi' = -1$ there. Let $V'_\wp$ be the interior of $\{H_p q_\wp = -1\}$, note that the $V'_\wp$ cover $T^*(1-W_K) \cap p^{-1}([-E_0,E_0])$, and extract a finite subcover $\{V'_{\wp_1}, \dots, V'_{\wp_J}\}$. Then put $q_j = q_{\wp_j}$ and define $q$ by \eqref{e:qdefk}, so that \[ H_pq = \sum_{j=1}^J \varphi (y_{\wp_j}) \chi_\wp' (t_{\wp_j}). \] Then $H_pq \le -1$ near $T^*(1-W_K) \cap p^{-1}([-E_0,E_0])$ because at each point at least one summand is, and the other summands are nonpositive. \end{proof} \begin{proof}[Proof of \eqref{e:pkepsest}.] Let $\chi_0 \in C_0^\infty(X;[0,1])$ be identically $1$ on a large enough set that $\chi_0 Q = Q \chi_0 = Q$. In particular we have $(1-\chi_0) W_K = 1-\chi_0$, allowing us to write \[ \|(1-\chi_0)u\|_{L^2_\varphi(X)}^2 = -\im \la( P_{K,\eps} - E')(1-\chi_0)u,(1-\chi_0)u\ra_{L^2_\varphi(X)}. \] \[ \|(1-\chi_0)u\|_{L^2_\varphi(X)} \le \|(P_{K,\eps} - E' )u\|_{L^2_\varphi(X)} + \|[P_{K,\eps},\chi_0]u\|_{L^2_\varphi(X)}. \] To estimate $\|\chi_0 u\|_{L^2_\varphi(X)} $ and the remainder term $\|[P_{K,\eps},\chi_0]u\|_{L^2_\varphi(X)} $ we introduce a microlocal cutoff $\phi \in C_0^\infty(T^*X)$ which is identically 1 near $T^*\supp(1-W_K) \cap p^{-1}([-E_0,E_0])$ and is supported in the interior of the set where $H_pq \le -1$. Since the principal symbol of $P_{K,\eps} - E'$ is \[ p_{K,\eps} - E' = p - i W_K -E' - i\eps \{p - iW_K,q\}, \] we have \[ |p_{K,\eps} - E'| \ge 1-E_0, \ \textrm{ near } \supp (1 - \phi), \] for $|E'| \le E_0$, provided $h$ (and hence $\eps$) is sufficiently small. Then if $\Phi \in \Psi^{-\infty}(X)$ is a quantization of $\phi$, we find using the semiclassical elliptic estimate \eqref{ellipestx} that \[ \|(\Id - \Phi) \chi_0 u\|_{H^2_{\varphi,h}(X)} \le C\left( \|(P_{K,\eps} - E')u\|_{L^2_\varphi(X)} + h\|u\|_{H^1_{\varphi,h}(X)}\right). \] Since $H_pq \le -1$ near $\supp \phi$ we see that \[ \im p_{K,\eps} - E' = -W_K - \eps \{p,q\} \le - \eps, \ \textrm{ near } \supp \phi. \] Then, using the sharp G\aa rding inequality \eqref{gardingx}, we find that \[\begin{split} \|(P_{K,\eps} - E')\Phi \chi_0 u\|_{L^2_\varphi(X)} \|\Phi \chi_0u\|_{L^2_\varphi(X)} &\ge - \la \im(P_{K,\eps} - E')\Phi \chi_0u, \Phi \chi_0u\ra_{L^2_\varphi(X)} \\ &\ge \eps \|\Phi\chi_0u\|_{L^2_\varphi(X)}^2 - Ch \|u\|_{H^{1/2}_{\varphi,h}(X)}^2. \end{split}\] This implies that \[\begin{split} \|u\|_{L^2_\varphi(X)}&\le \|(1-\chi_0)u\|_{L^2_\varphi(X)} + \|\Phi \chi_0 u\|_{L^2_\varphi(X)} + \|(\Id-\Phi)\chi_0 u\|_{L^2_\varphi(X)} \\ &\le C \|(P_{K,\eps} - E')u\|_{L^2_\varphi(X)} + \eps^{-1} \|(P_{K,\eps} - E')u\|_{L^2_\varphi(X)} + Ch^{1/2}\|u\|_{H^1_{\varphi,h}(X)}, \end{split}\] As in the proof of \eqref{e:modelboundj0}, combining this with \begin{equation}\label{e:pkupgrade}\begin{split} \|u\|_{H^2_{\varphi,h}(X)} &\le 3\|u\|_{L^2_\varphi(X)} + \|(P-E')u\|_{L^2_\varphi(X)} \\&\le 4\|u\|_{L^2_\varphi(X)} + \|(P_{K,\varepsilon}-E')u\|_{L^2_\varphi(X)} + C\eps \|u\|_{L^2_\varphi(X)}, \end{split}\end{equation} we obtain \eqref{e:pkepsest} for $h$ sufficiently small. \end{proof} \begin{proof}[Proof that \eqref{e:pkest1} implies \eqref{e:rkbound}.] We follow the approach of \cite{tz} as presented in \cite[Lemma 3.1]{nsz}. Observe first that \eqref{e:modelboundj0} implies \eqref{e:rkbound} for $\im \lambda \ge C_\Omega h$ for any $C_\Omega>0$. Let $f(\lambda,h)$ be holomorphic in $\lambda$ for $\lambda \in \Omega = [-E_0,E_0] + i [-Mh\log(1/h), C_\Omega h]$ and bounded uniformly in $h$ there. Suppose further that, for $\lambda \in \Omega$, \[ |\re \lambda| \le E \Rightarrow |f| \ge 1, \qquad |\re \lambda| \in [(E+E_0)/2, E_0] \Rightarrow |f| \le h^N. \] \begin{figure}[htbp] \includegraphics[width=150mm]{compint3} \caption{Bounds on $f$ used in the complex interpolation argument.} \end{figure} For example, we may take $f$ to be a characteristic function convolved with a gaussian: \[\begin{split} f(\lambda, h) &= \frac 2 {\sqrt \pi} \log(1/h) \int_{-\tilde E}^{\tilde E} \exp\left(-\log^2(1/h) (\lambda - y)^2\right)dy\\ & = \erfc(\log(1/h)(\lambda - \tilde E)) - \erfc(\log(1/h)(\lambda + \tilde E)), \end{split}\] where $\tilde E = (3E+E_0)/4$, $\erfc z = 2 \int_z^\infty e^{-t^2}dt/\sqrt\pi$. We bound $|f|$ using the identity $\erfc(z) + \erfc(-z)= 2$ and the fact that $\erfc z = \pi^{-1/2}z^{-1}e^{-z^2} (1 + \Oh(z^{-2}))$ for $|\arg z| < 3\pi/4$. Then the subharmonic function \[ g(\lambda,h) = \log \|(P_K -\lambda)^{-1}\|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} + \log |f(\lambda,h)| + \frac{N \im \lambda}{Mh} \] obeys $g \le C$ on $\D \Omega \cap (\{|\re \lambda| = E_0\} \cup \{\im \lambda = -Mh\log(1/h)\})$, and $g \le C + \log(1/h)$ on $\D \Omega \cap \{\im \lambda = C_\Omega h\}$. From the maximum principle and the lower bound on $|f|$ we obtain \[ \log \|(P_K -\lambda)^{-1}\|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} + \frac{N \im \lambda}{Mh} \le C + \log(1/h), \] for $\lambda \in \Omega$, $|\re \lambda| \le E$, from which \eqref{e:rkbound} follows for $\lambda \in \Omega$. \end{proof} \begin{proof}[Proof of Proposition \ref{p:rkprop}] This is similar to \cite[Lemma 5.1]{Datchev-Vasy:Gluing}. By \eqref{ellipestx}, without loss of generality we may assume that $a$ is supported in a neighborhood of $p^{-1}([-E,E]) \cap \supp (1-W_K)$ which is as small as we please (but independent of $h$). In particular we may assume $\supp a$ is compact. We will show that if $(P_K - \lambda)u = B f$ with $\|f\|_{L^2_\varphi(X)} = 1$, and if $\|A_0u\| \le C h^k$ for some $A_0 \in \Psi^0(X)$ with full symbol $a_0$ such that \[ a_0 = 1 \textrm{ near } \supp a \cap p^{-1}([-E,E]),\qquad \supp a_0 \cap \bigcup_{t \ge 0} \exp(tH_p)\supp b = \varnothing, \] then $\|A_1 u\| \le C h^{k+1/2}$ for each $A_1 \in \Psi^0(X)$ with full symbol $a_1$ satisfying $a_0 = 1$ near $\supp a_1$. Then the conclusion \eqref{e:rkprop} follows by induction: the base step is given by \eqref{e:rkbound}. Let $q \in C_0^\infty(T^*X;[0,\infty))$ such that: \begin{equation}\label{e:rkpropesc1} a_0 = 1 \textrm{ near }\supp q, \qquad H_p (q^2) \le - (2 \Gamma + 1) q^2 \textrm{ near } \supp a_1, \end{equation} \begin{equation}\label{e:rkpropesc2} H_pq \le 0 \textrm{ on } T^*\supp(1-W_K). \end{equation} The construction of $q$ is very similar to that of the function $q$ used in the proof of Proposition \ref{p:rkbound} above, and is also given in \cite[Lemma 5.1]{Datchev-Vasy:Gluing}. Write \[ H_p (q^2) = -\ell^2 + r, \] where $\ell,r \in C_0^\infty(T^*X)$ satisfy \begin{equation}\label{e:derk} \ell^2 \ge (2 \Gamma + 1)q^2, \qquad \supp r \subset \{W_K = 1\}. \end{equation} Let $Q,L,R \in \Psi^{-\infty}(X)$ have principal symbols $q,\ell,r$ respectively. Then \[ i[P,Q^*Q] = -hL^*L + hR +h^2F + R_\infty, \] where $F \in \Psi^{-\infty}(X)$ has full symbol supported in $\supp q$ and $R_\infty \in h^\infty \Psi^{-\infty}(X)$. From this we conclude that \begin{equation}\label{e:poscommrk}\begin{split} \|L u\|_{L^2_\varphi(X)}^2 = &-\frac 2 h \im \la Q^*Q P u,u \ra_{L^2_\varphi(X)}+ \la Ru,u\ra_{L^2_\varphi(X)} + h\la F u,u\ra_{L^2_\varphi(X)} + \Oh(h^\infty)\|u\|_{L^2_\varphi(X)}^2\\ =&-\frac2h \im \la Q^*Q(P_K - \lambda)u,u\ra_{L^2_\varphi(X)} - \re\la Q^*QW_K u,u\ra_{L^2_\varphi(X)} - \frac 2 h \im \lambda\|Qu\|_{L^2_\varphi(X)}^2\\ &+ \la Ru,u\ra_{L^2_\varphi(X)} + h\la F u,u\ra_{L^2_\varphi(X)} + \Oh(h^\infty)\|u\|_{L^2_\varphi(X)}^2. \end{split}\end{equation} We now estimate the right hand of \eqref{e:poscommrk} side term by term to prove that \begin{equation}\label{e:pklinduc} \|Lu\|_{L^2_\varphi(X)}^2 \le 2 \Gamma \|Qu\|_{L^2_\varphi(X)}^2 + C h \|A_0 u\|_{L^2_\varphi(X)}^2 + \Oh(h^\infty)\|u\|_{L^2_\varphi(X)}^2, \end{equation} Indeed, since $\supp q \cap \supp b = \varnothing$ and since $(P_K - \lambda)u = B f $ it follows that \[ \la Q^*Q(P_K - \lambda)u,u\ra_{L^2_\varphi(X)} = \Oh(h^\infty)\|u\|_{L^2_\varphi(X)}^2. \] Next, we write \[ - \re\la Q^*QW_K u,u\ra_{L^2_\varphi(X)} = - \re\la W_K Q u,Q u\ra_{L^2_\varphi(X)} +\la Q^*[W_K,Q]u,u\ra_{L^2_\varphi(X)}, \] and observe that the first term is nonpositive because $W_K \ge 0$, and the second term is bounded by $C h \|A_0 u\|_{L^2_\varphi(X)}^2$. Since $\im \lambda \ge -\Gamma h$ we have $- \frac 2 h \im \lambda\|Qu\|_{L^2_\varphi(X)}^2 \le 2 \Gamma \|Qu\|^2_{L^2_\varphi(X)}$, while since $W_K=1$ on $\supp r$ we have the elliptic estimate \[ \qquad \la Ru,u\ra_{L^2_\varphi(X)} = C \|R(P_K-\lambda)u\|_{L^2_\varphi(X)} \|u\|_{L^2_\varphi(X)} + C h\|A_0 u\|^2_{L^2_\varphi(X)}, \] and the first term is $\Oh(h^\infty)\|u\|^2_{L^2_\varphi(X)}$ since $\supp r \cap \supp b = \varnothing$. Finally $h\la F u,u\ra_{L^2_\varphi(X)} \le Ch\|A_0u\|^2$ by inductive hypothesis, giving \eqref{e:pklinduc}. But by \eqref{e:derk} and the sharp G\aa rding inequality we have \[ \la (D^*D - (2\Gamma + 1)Q^*Q)u,u\ra \ge - Ch\|A_0 u\|^2 - \Oh(h^\infty)\|u\|^2. \] Hence by inductive hypothesis we have \[ \|Q u\|^2\le Ch^{2k+1}\|u\|^2, \] completing the inductive step. \end{proof} \section{Model operator in the cusp}\label{s:modelcusp} Take $W_C \in C^\infty(\R;[0,1])$ with $W_C(r) = 0$ near $r \le -R_g$, $W_C(r) = 1$ near $r \ge 0$, and let \[ P_C = h^2D_r^2 + e^{-2(r+\beta(r))} \Delta_{S_-} + h^2V(r) - 1 - iW_C(r), \] with notation as in \S\ref{spectrum}. \begin{prop} \label{p:modelbound} For every $\chi \in C_0^\infty(X)$, $E \in (0,1)$, there is $C_0>0$ such that for any $M>0$, there are $h_0,C>0$ such that the cutoff resolvent $\chi R_C(\lambda) \chi$ continues holomorphically from $\{\im \lambda>0\}$ to $\{|\re \lambda| \le E$, $-Mh\log\log(1/h) \le \im \lambda \le M\}, \, h \in (0,h_0]$, and obeys \begin{equation}\label{e:modelbound}\left\|\chi R_C(\lambda) \chi\right\|_{L^2_\varphi(X) \to H^2_{\varphi,h} (X)} \le C \begin{cases} h^{-1} + |\lambda|, \qquad & \im \lambda > 0 \\ h^{-1-C_0 |\im \lambda|/h}, \qquad &\im \lambda \le 0, \end{cases}. \end{equation} \end{prop} \begin{prop}\label{p:modelprop} Let $r_0 <0$, $\chi_- \in C_0^\infty((-\infty,r_0))$, $\chi_+ \in C_0^\infty((r_0,\infty))$, $\varphi \in C^\infty(\R)$ supported in $(-\infty,0)$ and bounded with all derivatives, $E \in (0,1)$, $\Gamma>0$ be given. Then there exists $h_0>0$ such that \begin{equation}\label{e:modelprop} \left\|\varphi(hD_r)\chi_+(r)R_C(\lambda)\chi_-(r)\right\|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} = \Oh(h^\infty), \end{equation} for $|\re \lambda| \le E, \, - \Gamma h \le \im \lambda \le h^{-N}$, $h \in (0,h_0]$. \end{prop} To prove these propositions we separate variables over the eigenspaces of $\Delta_{S_-}$, writing $P_C = \bigoplus_{m=0}^\infty h^2D_r^2 + (h\lambda_m)^2 e^{-2(r + \beta(r))} + h^2 V(r) - 1 - iW_C(r),$ where $0 = \lambda_0 < \lambda_1 \le \cdots$ are square roots of the eigenvalues of $\Delta_{S_-}$. It suffices to prove \eqref{e:modelbound}, \eqref{e:modelprop} with $P_C$ replaced by $P(\alpha)$, with estimates uniform in $\alpha \in \{0\} \cup [h\lambda_1,\infty)$, where \[ P(\alpha) = h^2D_r^2 + \alpha^2e^{-2(r + \beta(r))} + h^2 V(r) - 1 - iW_C(r). \] \subsection{The case $\alpha = 0$} The analysis of $(P(0) - \lambda)^{-1}$ is very similar to that of $R_K$ in \S\ref{s:rk}. The only additional technical ingredient is the method of complex scaling, which for this operator works just as in \cite{sz2,sz}. \begin{lem}\label{l:p0} For every $\chi \in C_0^\infty(X)$, $E \in (0,1)$, there is $C_0>0$ such that for any $M>0$, there exist $h_0,C>0$ such that the cutoff resolvent $\chi(P(0) - \lambda)^{-1}\chi$ continues holomorphically from $\{\im \lambda>0\}$ to $\{|\re \lambda| \le E$, $-Mh\log(1/h) \le \im \lambda\}, \, h \in (0,h_0]$, and obeys \begin{equation}\label{e:modelbound0} \left\|\chi(P(0) - \lambda)^{-1}\chi\right\|_{L^2(\R) \to H^2_h(\R)} \le C h^{-1}e^{-C_0 |\im \lambda| /h}. \end{equation} Let $r_0 \in \R$, $\chi_- \in C_0^\infty((-\infty,r_0))$, $\chi_+ \in C_0^\infty((r_0,\infty))$,$\varphi \in C^\infty(\R)$ supported in $(-\infty,0)$ and bounded with all derivatives, $\Gamma>0$ be given. Then there exists $h_0>0$ such that \begin{equation}\label{e:modelprop0} \left\|\varphi(hD_r)\chi_+(r)(P(0) - \lambda)^{-1}\chi_-(r)\right\|_{L^2(\R) \to H^2_h(\R)} = \Oh(h^\infty), \end{equation} for $|\re \lambda| \le E, \, - \Gamma h \le \im \lambda \le h^{-N}$, $h \in (0,h_0]$. \end{lem} \begin{proof}[Proof of \eqref{e:modelbound0}] We use complex scaling to replace $P(0)$ by the complex scaled operator $P_\delta(0)$, defined below. As we will see, $P_\delta(0)$ is semiclassically elliptic for $|r|$ sufficiently large and obeys \eqref{e:modelbound0} without cutoffs. We have \[ P(0) = h^2 D_r^2 + h^2 V(r) -1 - iW_C(r). \] Fix $R>R_g$ sufficiently large that \begin{equation}\label{e:p0supp} \supp \chi \cup \supp \chi_+ \cup \supp \chi_- \subset (-R,\infty). \end{equation} Let $\gamma \in C^\infty(\R)$ be nondecreasing and obey $\gamma(r) = 0$ for $r \ge -R$, $\gamma'(r) = \tan \theta_0$ for $r \le -R-1$ (here $\theta_0$ is as in \S\ref{s:assumptions}), and impose further that $\beta(r)$ is holomorphic near $r + i \delta \gamma(r)$ for every $r < -R$, $\delta \in (0,1)$. Below we will take $\delta \ll 1$ independent of $h$. Now put \[ P_\delta(0) = \frac{h^2 D_r^2}{(1 + i \delta \gamma'(r))^2} - h \frac{\delta \gamma''(r)hD_r}{(1 + i \delta \gamma'(r))^3} + h^2V(r + i \delta \gamma(r)) - 1 - iW_C(r). \] If we define the differential operator with complex coefficients \[ \widetilde P(0) = h^2 D_z^2 + h^2 V(z) -1 - W_C(z), \] then we have \begin{equation}\label{e:p0restr} P(0) = \widetilde P(0)|_{\{z = r \colon r \in \R\}}, \qquad P_\delta(0) = \widetilde P(0)|_{\{z = r + i \delta \gamma(r) \colon r \in \R\}}. \end{equation} We will show that if $\chi_0 \in C^\infty(\R)$ has $\supp \chi_0 \cap \supp \gamma = \varnothing$, then \begin{equation}\label{e:p0agree} \chi_0(P(0) - \lambda)^{-1} \chi_0 = \chi_0(P_\delta(0) - \lambda)^{-1} \chi_0, \qquad \im \lambda >0. \end{equation} From this it follows that if one of these operators has a holomorphic continuation to any domain, then so does the other, and the continuations agree, so that it suffices to prove \eqref{e:modelbound0} and \eqref{e:modelprop0} with $P(0)$ replaced by $P_\delta(0)$. To prove \eqref{e:p0agree} we will prove that if \[(P(0)-\lambda)u = v, \qquad (P_\delta(0)-\lambda)u_\delta =v,\] for $v \in L^2(\R)$ with $\supp v \subset \{r \colon \gamma(r) = 0\}$, and $u,u_\delta\in L^2(\R)$, then \[ u|_{\{r \colon \gamma(r) = 0\}} = u_\delta |_{\{r \colon \gamma(r) = 0\}}.\] Thanks to \eqref{e:p0restr}, it suffices to show that if $\tilde u$ solves $(\widetilde P(0)- \lambda) \tilde u =v$ with $\tilde u|_{\{z = r, r \in \R\}} \in L^2(\R)$, then $\tilde u|_{\{z = r + i \delta \gamma(r), r \in \R\}}\in L^2(\R)$. For the proof of this statement we may take $\lambda$ fixed with $\re \lambda = 0$ since the general statement follows by holomorphic continuation. Observe that for $\re z < - R$, we have \begin{equation}\label{e:p0utildeeq}( \widetilde P(0)- \lambda)\tilde u(z) = 0.\end{equation} We will use the WKB method to construct solutions $u_\pm$ to \eqref{e:p0utildeeq} which are exponentially growing or decaying as $\re z \to -\infty$. Define \[f(z) = V(z) - (1+ \lambda)/h^2,\qquad \varphi(z) =(4f(z)f''(z) - 5f'(z)^2)(16f(z))^{-5/2}.\] Now (see e.g. \cite[Chapter 6, Theorem 11.1]{Olver:Asymptotics}) there exist two solutions to \eqref{e:p0utildeeq} given by \[ u_\pm(z) = f(z)^{-1/4}e^{\pm \int_{\gamma_{z,-R}} \sqrt{f(z')}dz'}(1 + b_\pm(z)), \qquad \re z < -R,\] taking principal branches of the roots and with the contour of integration $\gamma_{z,-R}$ taken from $z$ to $-R$ such that $\sqrt{\re z'}$ is monotonic along $\gamma_{z,-R}$. The functions $b_\pm$ obey \[|b_\pm(z)| \le \exp(\max (|\varphi(z')|\colon z' \in \gamma_\pm))-1 \le Ch, \] when $\re z> R$, where $\gamma_+$ (resp. $\gamma_-$) is a contour from $-\infty$ to $z$ (resp. $z$ to $-R$) such that $\sqrt{\re z'}$ is monotonic along the contour. It follows that, for fixed $h$ sufficiently small, \[|u_+(z)| \le C e^{\re z/C}, \qquad |u_-(z)| \ge C e^{-\re z/C},\] for $\re z < - R$. Hence $\tilde u|_{\{z = r, r \in \R\}} \in L^2(\R)$ implies that that $\tilde u$ is proportional to $u_+$. This implies that $\tilde u|_{\{z = r + i \delta \gamma(r), r \in \R\}}\in L^2(\R)$, completing the proof of \eqref{e:p0agree}. Fix \[ E_0 \in (E,1), \qquad \eps = 10 M h \log(1/h). \] The semiclassical principal symbol of $P_\delta(0)$ is \begin{equation}\label{e:p0symb} p_\delta(0) = \frac{\rho^2}{(1+ i\delta \gamma'(r))^2} - 1 = \rho^2(1 + \Oh(\delta)) - 1. \end{equation} In this case the escape function can be made more explicit: we take $q \in C_0^\infty(T^*\R)$ with \begin{equation}\label{e:hp0q0} q(r,\rho) = - 4r \rho(1-E_0)^{-2}, \qquad H_{p_\delta(0)} q = - 8\rho^2(1-E_0)^{-2}(1 + \Oh(\delta)), \end{equation} on $\{|r| \le R+1, \, |\rho| \le 2\}$. Let $Q \in \Psi^{-\infty}(\R)$ be a quantization of $q$ and put \[ P_{\delta,\eps}(0) = e^{\eps Q/h} P_\delta(0) e^{-\eps Q/h} = P_\delta(0) - \eps[P_\delta(0),Q/h] + \eps^2 R, \] where $R \in \Psi^{-\infty}(\R)$ (see \eqref{expexp}). We will prove \begin{equation}\label{e:modelbound0eps0} \|(P_{\delta,\eps}(0) -E')^{-1}\|_{L^2(\R) \to H^2_h(\R)}\le 5/\eps, \qquad E' \in [-E_0,E_0] \end{equation} from which it follows by \eqref{e:expest} that \begin{equation}\label{e:modelbound001} \|(P_\delta(0) -\lambda)^{-1}\|_{L^2(\R) \to H^2_h(\R)} \le \frac{h^{-N}}{M\log(1/h)}, \qquad |\re \lambda| \le E_0, \ |\im \lambda| \le M h \log(1/h), \end{equation} where $N = 10M(\|Q\|_{H^2_h(\R) \to H^2_h(\R)} + \|Q\|_{L^2(\R) \to L^2(\R)})+1$. As before we will use complex interpolation to improve \eqref{e:modelbound001} to \begin{equation}\label{e:modelbound0020} \|(P_\delta(0) -\lambda)^{-1}\|_{L^2(\R) \to H^2_h(\R)} \le Ch^{-1} e^{C|\im \lambda|/h}. \end{equation} for $- E \le \re \lambda \le E$, $- M h \log(1/h)$. Combining \eqref{e:p0agree} and \eqref{e:modelbound0020} gives \eqref{e:modelbound0}. Let $\phi \in C_0^\infty(\R;[0,1])$ have $\phi(\rho) = 1$ for $|\rho|$ near $[1-E_0,1+E_0]$ and $\supp \phi \subset\{(1-E_0)/2 < |\rho| < 2\}$. By \eqref{e:p0symb}, if $\delta$ is small enough and $h$ is small enough depending on $\delta$, then on $\supp (1-\phi(\rho))$ we have $|p_{\delta,\eps}(0) - E'| \ge \delta(1 + \rho^2)/C$, uniformly in $E' \in [-E_0,E_0]$ and in $h$, where $p_{\delta,\eps}(0)$ is the semiclassical principal symbol of $P_{\delta,\eps}(0)$. Hence, by the semiclassical elliptic estimate \eqref{ellipestrn}, \[ \|(\Id - \phi(hD_r)) u\|_{H^2_h(\R)} \le C \delta^{-1} \|(P_{\delta,\eps}(0) - E')(\Id - \phi(hD_r)) u\|_{L^2(\R)} + \Oh(h^\infty)\|u\|_{H^{-N}_h(\R)}. \] On $\supp\phi(\rho)$ we use the negativity of the imaginary part of the principal symbol of $P_{\delta,\eps}(0)$. Indeed, on $\{(r,\rho)\colon \rho \in \supp \phi, \, |r| \le R+1\}$ we have, using \eqref{e:hp0q0}, \[ \im p_{\delta,\eps}(0) = \im p_{\delta}(0) + \im i\eps H_{p_{\delta,\eps}(0)} q = \frac{-2\delta \gamma'(r)\rho^2}{|1+i\delta\gamma'(r)|^4} - \frac{8\eps \rho^2}{(1-E_0)^2}(1 + \Oh(\delta)) \le - \eps, \] provided $\delta$ is sufficiently small. Meanwhile, on $\{(r,\rho)\colon \rho \in \supp \phi, \, |r| \ge R+1\}$ we have \[ \im p_{\delta,\eps}(0) = \im p_{\delta}(0) + \im i\eps H_{p_{\delta,\eps}(0)} q = \frac{-2\delta \tan\theta_0 \rho^2 }{|1+i\delta \tan\theta_0|^4} + \Oh(\eps) \le -\delta/C, \] provided $h$ (and hence $\eps$) is sufficiently small. Then, using the sharp G\aa rding inequality \eqref{gardingrn}, we have, for $h$ sufficiently small, \[\begin{split} \|\varphi(h D_r)u\|_{L^2(\R)} \|(P_{\delta,\eps}(0) - E')\varphi(h D_r)u\|_{L^2(\R)} &\ge - \la\im (P_{\delta,\eps}(0) - E')\varphi(h D_r)u, \varphi(h D_r)u \ra_{L^2(\R)} \\ & \ge \eps \|\varphi(hD_r) u\|_{L^2(\R)}^2 - C h \|u\|^2_{H^{1/2}_h(\R)}. \end{split}\] We deduce \eqref{e:modelbound0eps0} from this just as we did \eqref{e:pkepsest} above. To improve \eqref{e:modelbound001} to \eqref{e:modelbound0020} we use almost the same complex interpolation argument as we did to improve \eqref{e:pkest1} to \eqref{e:rkbound}. The only difference is that in the first step we note that \[ \im p_\delta(0) = \frac {-2\delta\gamma'(r)}{|1+i\delta\gamma'(r)|^4} \le 0, \] so by the sharp G\aa rding inequality \eqref{gardingrn} we have, for some $C_\Omega>0$, $\la \im P_\delta(0) u, u \ra_{L^2(\R)} \ge -C_\Omega h \|u\|_{L^2(\R)}^2$, so that $\|(P_\delta(0) - \lambda)^{-1}\|_{L^2(\R)} \le1/ C_\Omega h$, when $\im \lambda \ge 2 C_\Omega h$. \end{proof} \begin{proof}[Proof of \eqref{e:modelprop0}] Let $(P_\delta(0) - \lambda)u = f,$ where $\|f\|_{L^2(\R)} =1$, $\supp f \subset \supp \chi_-$ and $P_\delta(0)$ is as in the proof of \eqref{e:modelbound0}. We must show that \begin{equation}\label{e:modelprop0conc} \|\varphi(hD_r) \chi_+(r) u\|_{H^{2}_h(\R)} = \Oh(h^\infty); \end{equation} recall that the replacement of $P(0)$ by $P_\delta(0)$ is justified by \eqref{e:p0agree}. To prove \eqref{e:modelprop0conc} we use an argument by induction based on a nested sequence of escape functions. More specifically, take \[ q = \varphi_r(r)\varphi_\rho(\rho), \qquad H_{p_\delta(0)} q = 2\rho\varphi'_r(r)\varphi_\rho(\rho) + \Oh(\delta), \] where $\varphi_r \in C_0^\infty(\R;[0,\infty))$ with $\supp \varphi_r \subset (r_0,\infty)$, $\varphi_r' \ge 0$ near $[r_0,R+1]$ (here $R$ is as in \eqref{e:p0supp}), $\varphi_r' > 0$ near $\supp \chi_+$. Take $\varphi_\rho \in C_0^\infty(\R;[0,\infty))$ with $\supp \varphi_\rho \subset (-\infty,0)$, $\varphi_\rho' \le 0$ near $[-2,0]$, $\varphi_\rho \ne 0$ near $\supp \varphi \cap [-2,0]$. Impose further that $\sqrt\varphi_r, \sqrt\varphi_\rho \in C_0^\infty(\R)$, and that $\varphi'_r \ge c \varphi_r$ for $r \le R+1$, where $c >0$ is chosen large enough that $H_{p_0(\delta)} q \le -(2\Gamma +1)q$ on $\{r \le R+1, \rho \ge -2\}$: see Figure \ref{f:p0prop}. \begin{figure}[htbp] \includegraphics{escfunc0.pdf} \caption{The escape function $q$ used to prove propagation of singularities \eqref{e:modelprop0} in the case $\alpha = 0$. The derivative along the flowlines $H_{p_\delta(0)}q$ is negative and provides ellipticity for our positive commutator argument near $\{r \in \supp \chi_+, \rho \in \supp \varphi\}$. We allow $H_{p_\delta(0)}q > 0$ (the unfavorable sign for us) only in $\{r > R+1\}$ and in $\{\rho < -2\}$, because in this region $p_\delta(0)$ is elliptic.} \label{f:p0prop} \end{figure} We will show that if $\|A_0u\|_{L^2(\R)} \le C h^k$ for $A_0 \in \Psi^0(\R)$ with full symbol supported sufficiently near $\supp q$ and for some $k \in \R$, then $\|A_1 u\|_{L^2(\R)} \le C h^{k + 1/2}$ for $A_1 \in \Psi^0(\R)$ with full symbol supported sufficiently near $\{ r \in \supp \chi_+, \rho \in \supp \varphi\}$. The conclusion \eqref{e:modelprop0conc} then follows by induction. (The base step of the induction follows from \eqref{e:modelbound0020} or even from \eqref{e:modelbound001}.) In the remainder of the proof all norms and inner products are in $L^2(\R)$ and we omit the subscript for brevity. We write \[ H_{p_\delta(0)} q^2 = -b^2 + e, \] where $b,e \in C_0^\infty(T^*\R)$, $b > 0$ near $ \{ r \in \supp \chi_+, \rho \in \supp \varphi, -2 \le \rho\}$, $b^2 \ge (2\Gamma+1)q^2$ everywhere, and $\supp e \cap (\{ r \le R+1, \rho \ge -2\} \cup \{r \le r_0\})= \varnothing$. Let $Q,B,E$ be quantizations of $q,b,e$ respectively. Then \[ i[P_\delta(0),Q^*Q] = - hB^*B + hE + h^2F, \] where $F \in \Psi^0(\R)$ has full symbol supported in $\supp q$. From this we conclude that \[ \|Bu\|^2 = - \frac2 h \im \la Q^*Q(P_\delta(0) - \lambda)u,u\ra - \frac 2 h \im \lambda\|Q u\|^2+ \la E u, u \ra + h \la Fu,u\ra + \Oh(h^\infty)\|u\|^2. \] From $(P_\delta(0) - \lambda)u = f$ and $\WF'_h Q \cap T^*\supp f = \varnothing$ it follows that the first term is $\Oh(h^\infty)\|u\|^2$. Similarly $\WF'_h E \cap (\supp f\cup p_\delta^{-1}(0)) = \varnothing$ implies by \eqref{ellipestrn} that the third term is $\Oh(h^\infty)\|u\|^2$. The fourth term is bounded by $C h^{2k+1}\|u\|^2$ by inductive hypothesis, giving \[ \|Bu\|^2 \le 2\Gamma \|Q u\|^2 + C h^{2k+1}\|u\|^2. \] By \eqref{gardingrn} we have \[ \la(B^*B - (2\Gamma +1)Q^*Q)u,u\ra \ge -Ch\|Ru\|^2, \] where $R \in \Psi^{0,0}_0(\R)$ is microsupported in an arbitrarily small neighborhood of $\WF'_hQ$. Hence $\|Ru\| \le Ch^k \|u\|$ and we have \[ \|Qu\|^2 \le C h^{2k+1}\|u\|^2, \] completing the inductive step and also the proof. \end{proof} \subsection{The case $\alpha \ge \lambda_1h$.} Propositions \ref{p:modelbound} and \ref{p:modelprop} follows from \eqref{e:modelbound0}, \eqref{e:modelprop0} and the following two Lemmas. \begin{lem}\label{l:modelbound} For any $E \in (0,1)$ there is $C_0>0$ such that for any $M, \lambda_1>0$ there are $h_0,C>0$ such that if $h \in (0,h_0], \alpha \ge \lambda_1 h$, $\lambda \in [-E,E] +i [-Mh\log\log(1/h),\infty)$, then \begin{equation}\label{e:modelbound2} \left\|(P(\alpha) - \lambda)^{-1}\right\|_{L^2(\R) \to H^2_h(\R)} \le C \log(1/h) h^{-1-C_0 |\im \lambda| /h}. \end{equation} If $\chi \in C^\infty(\R)$ has $\chi' \in C_0^\infty(\R)$ and $\chi(r) = 0$ for $r$ sufficiently negative, then \begin{equation}\label{e:modelbound2b} \left\|\chi(P(\alpha) - \lambda)^{-1}\chi\right\|_{L^2(\R) \to H^2_h(\R)} \le C h^{-1-2C_0 |\im \lambda| /h}\end{equation} in the same range of $h,\alpha,\lambda$, and with the same $C_0$ and $h_0$ (but with different $C$). \end{lem} \begin{lem} Let $r_0 <0$, $\chi_- \in C_0^\infty((-\infty,r_0))$, $\chi_+ \in C_0^\infty((r_0,\infty))$, $\varphi \in C_0^\infty((-\infty,0))$, $E \in (0,1)$, $\Gamma, \lambda_1, N>0$ be given. Then there exists $h_0>0$ such that \begin{equation}\label{e:modelpropa} \left\|\varphi(hD_r)\chi_+(r)(P(\alpha) - \lambda)^{-1}\chi_-(r)\right\|_{L^2(\R) \to H^2_h(\R)} = \Oh(h^\infty), \end{equation} uniformly for $\alpha \ge \lambda_1 h$, $\re \lambda \in [-E,E], \, -\Gamma h \le \im \lambda \le h^{-N}$, $h \in (0,h_0]$. \end{lem} Take $\alpha_0>0$ such that if $\alpha \ge \alpha_0$ and $r \le 0$ then $\alpha^2e^{-2(r+\beta(r))} \ge 3$. We consider the cases $\lambda_1 h \le \alpha \le \alpha_0$ and $\alpha_0 \le \alpha$ separately. \begin{proof}[Proof of \eqref{e:modelbound2}, \eqref{e:modelbound2b}, and \eqref{e:modelpropa} for $\alpha_0 \le \alpha$] In this case $P(\alpha)$ is `elliptic' (although not pseudodifferential in the usual sense because of the exponentially growing term $\alpha^2 e^{-2(r+\beta(r))}$) and better estimates hold. Use the fact that $W_C \ge 0$ and $\alpha^2 e^{-2(r+\beta(r))} \ge 3$ for $r \le 0$ to write \begin{align*} \int_{-\infty}^0 |u|^2 dr &\le \frac 13 \int_{-\infty}^\infty \alpha^2 e^{-2(r+\beta(r))} |u|^2 dr \le \frac 13 \re \langle P(\alpha)u,u\rangle_{L^2(\R)} + \left(\frac 13 + \Oh(h^2)\right) \|u\|_{L^2(\R)}^2, \\ \int_0^\infty |u|^2dr &= \int_0^\infty W_C |u|^2dr \le \int_{-\infty}^\infty W_C|u|^2 dr = -\im \langle P(\alpha) u,u\rangle_{L^2(\R)}. \end{align*} Adding the inequalities gives \[ \|u\|^2_{L^2(\R)} \le 2 \|(P(\alpha) - \lambda)u\|_{L^2(\R)} \|u\|_{L^2(\R)} +\left(\frac 13\re \lambda - \im \lambda+ \frac 13 + \Oh(h^2)\right)\|u\|^2_{L^2(\R)}, \] So long as $\im \lambda - (1/3) \re \lambda + 2/3 \ge \epsilon$ for some $\epsilon>0$, it follows that \begin{equation}\label{e:alphabig} \|u\|_{L^2(\R)} \le C\|(P(\alpha) - \lambda)u\|_{L^2(\R)}. \end{equation} To obtain \eqref{e:modelbound2} we observe that \[\begin{split} \|h^2D_r^2u\|_{L^2(\R)}^2= &\|(h^2D_r^2 + \alpha^2e^{-2(r+\beta(r))}) u\|_{L^2(\R)}^2 - \| \alpha^2e^{-2(r+\beta(r))}u\|_{L^2(\R)}^2 \\&- 2 \re \langle h^2 D_r^2 u, \alpha^2e^{-2(r+\beta(r))} u\rangle_{L^2(\R)}, \end{split}\] while \[\begin{split} - \re \langle & h^2 D_r^2 u, \alpha^2e^{-2(r+\beta(r))} u\rangle_{L^2(\R)} = \\& - \|\alpha e^{-(r+\beta(r))}hD_ru\|_{L^2(\R)}^2 +2 \im \langle h D_r u, (1+\beta'(r))h\alpha^2e^{-2(r+\beta(r))} u\rangle_{L^2(\R)}, \end{split}\] so that \[ \|h^2D_r^2u\|_{L^2(\R)} \le 2 \|(h^2D_r^2 + \alpha^2e^{-2(r+\beta(r))}) u\|_{L^2(\R)} \le 2\|(P(\alpha) - \lambda)u\|_{L^2(\R)} + C |\lambda| \|u\|_{L^2(\R)}. \] Together with \eqref{e:alphabig}, this implies \eqref{e:modelbound2} (and hence \eqref{e:modelbound2b}) with the right hand side replaced by $C(1+|\lambda|)$. The estimate \eqref{e:modelpropa} follows from the stronger Agmon estimate \[ \left\|\chi_+(r)(P(\alpha) - \lambda)^{-1}\chi_-(r)\right\|_{L^2(\R) \to H^2_h(\R)} = \Oh(e^{-1/(Ch)}), \] see for example \cite[Theorems 7.3 and 7.1]{ez}. \end{proof} \begin{proof}[Proof of \eqref{e:modelbound2} for $\lambda_1 h \le \alpha \le \alpha_0$] For this range of $\alpha$ we use the following rescaling (I'm very grateful to Nicolas Burq for suggesting this rescaling): \begin{equation}\label{e:tildevars} \tilde r = r/ \log(2\alpha_0/\alpha), \qquad \tilde h = h/\log(2\alpha_0/\alpha). \end{equation} In these variables we have \[ P(\alpha) = (\tilde hD_{\tilde r})^2 + 4\alpha_0^2 e^{-2\left[(1+\tilde r)\log(2\alpha_0/\alpha) + \tilde \beta(\tilde r)\right]}+ \tilde h^2 \widetilde V(\tilde r) - 1 - i\widetilde W_C(\tilde r), \] where \[ \tilde \beta(\tilde r) = \beta(r), \qquad \widetilde V(\tilde r) = \log(2\alpha_0/\alpha)^2 V(r), \qquad \widetilde W_C(\tilde r) =W_C(r). \] We will show that \begin{equation}\label{e:modelbound2tilde0} \|(P(\alpha) - \lambda)^{-1}\|_{L^2_{\tilde r} \to H^2_{h,\tilde r}} \le C \tilde h^{-1}e^{C_0 |\im \lambda| /\tilde h}, \end{equation} for $|\re \lambda| \le E, \ \im \lambda \ge - M \tilde h \log (1/\tilde h)$, from which \eqref{e:modelbound2} follows. We now use a variant of the gluing argument in \S\ref{s:glue} to replace the exponentially growing term $4\alpha_0^2 e^{-2\left[(1+\tilde r)\log(\alpha_0/\alpha) + \tilde \beta(\tilde r)\right]}$ with a bounded one. Fix $\widetilde R >0$ such that \[ \tilde r \le- \widetilde R,\ \alpha \le \alpha_0 \Longrightarrow \alpha_0^2 e^{-2\left[(1+\tilde r)\log(2\alpha_0/\alpha) + \tilde \beta(\tilde r)\right]} > 1. \] Take $\widetilde V_B, \widetilde V_E \in C^\infty(\R, [0,\infty))$ such that $\widetilde V_E(\tilde r) = 4\alpha_0^2 e^{-2\left[(1+\tilde r)\log(2\alpha_0/\alpha) + \tilde \beta(\tilde r)\right]} $ for $\tilde r \le - \widetilde R$ and $\widetilde V_E(\tilde r) \ge 4$ for all $\tilde r$, while $\widetilde V_B(\tilde r) = 4\alpha_0^2 e^{-2\left[(1+\tilde r)\log(2\alpha_0/\alpha) + \tilde \beta(\tilde r)\right]} $ for $\tilde r \ge- \widetilde R-3$ and is bounded, uniformly in $\alpha$, together with all derivatives (see figure \ref{f:vcve}). \begin{figure}[htbp] \includegraphics[width=12cm]{vcve} \caption{The model potentials $\widetilde V_E$ and $\widetilde V_B$, which agree with $4\alpha_0^2 e^{-2\left[(1+\tilde r)\log(2\alpha_0/\alpha) + \tilde \beta(\tilde r)\right]}$ for $\tilde r \le- \widetilde R$ and $\tilde r \ge - \widetilde R-3$ respectively.}\label{f:vcve} \end{figure} Let \[\begin{split} P_E(\alpha) &= (\tilde hD_{\tilde r})^2 + \widetilde V_E(\tilde r) + \tilde h^2 \widetilde V(\tilde r) - 1 - i\widetilde W_C(\tilde r), \\ P_B(\alpha) &= (\tilde hD_{\tilde r})^2 + \widetilde V_B(\tilde r) + \tilde h^2 \widetilde V(\tilde r) - 1 - i\widetilde W_C(\tilde r), \end{split}\] and let $R_E = (P_E(\alpha)-\lambda)^{-1}$, $R_B = (P_B(\alpha)-\lambda)^{-1}$. Note that \[ \|R_E\|_{L^2_{\tilde r} \to H^2_{h,\tilde r}} \le C \] by the same proof as that of \eqref{e:modelbound2} for $\alpha \ge \alpha_0$. We will show that \eqref{e:modelbound2tilde0} follows from \begin{equation}\label{e:modelbound2tilde} \|R_B\|_{L^2_{\tilde r} \to H^2_{h,\tilde r}} \le C \tilde h^{-1}e^{C_0 |\im \lambda| /\tilde h}, \end{equation} for $|\re \lambda| \le E, \im \lambda \ge - M \tilde h \log (1/\tilde h)$. Indeed, let $\chi_E \in C^\infty(\R;\R)$ have $\chi_E(\tilde r) = 1$ near $\tilde r \le- \widetilde R-2$ and $\chi_E(\tilde r) = 0$ near $\tilde r \ge- \widetilde R-1$, and let $\chi_B = 1 - \chi_E$. Let \[ G = \chi_E(\tilde r - 1) R_E \chi_E(\tilde r) + \chi_B(\tilde r + 1) R_B \chi_B(\tilde r). \] Then \[ (P(\alpha) - \lambda) G = \Id + [\tilde h^2D_{\tilde r}^2,\chi_E(\tilde r - 1)] R_E \chi_E(\tilde r) + [\tilde h^2D_{\tilde r}^2,\chi_B(\tilde r + 1)] R_B \chi_B(\tilde r) = \Id + A_E + A_B. \] As in \S\ref{s:glue} we have $A_E^2 = A_B^2 = 0$. We also have the Agmon estimate \[ \|A_E\|_{L^2_{\tilde r} \to L^2_{\tilde r}} \le e^{-1/(C\tilde h)}; \] see for example \cite[Theorems 7.3 and 7.1]{ez}. Solving away $A_B$ using $G$ we find that \begin{equation}\label{e:palphapar} (P(\alpha) - \lambda) G (\Id - A_B) = \Id + \Oh_{L^2_{\tilde r}\to L^2_{\tilde r}}(e^{-1/(C\tilde h)}), \end{equation} and since $\|G (\Id - A_B)\|_{L^2_{\tilde r} \to H^2_{\tilde h,\tilde r}} \le C \tilde h^{-1} e^{C|\im \lambda|/\tilde h}$, this implies \eqref{e:modelbound2tilde0}. The proof of \eqref{e:modelbound2tilde} follows that of \eqref{e:modelbound0} with these differences: the $-i\widetilde W_C(\tilde r)$ term removes the need for complex scaling, and the $\widetilde V_B(\tilde r)$ term puts $P_B$ in a mildly exotic operator class and leads to a slightly modified escape function $q$ and microlocal cutoff $\phi$. Fix \begin{equation}\label{e:e0cusp} E_0 \in (E,1), \qquad \eps = 10M\tilde h \log(1/\tilde h). \end{equation} The $\tilde h$-semiclassical principal symbol of $P_B$ (note that $P_B \in \Psi^2_\delta(\R)$ for any $\delta>0$) is \begin{equation}\label{e:pcsymb} p_B = \tilde \rho^2 + \widetilde V_B(\tilde r) - 1 - i \widetilde W_C(\tilde r), \end{equation} where $\tilde \rho$ is dual to $\tilde r$. Take $q \in C_0^\infty(T^*\R)$ such that on $\{-\widetilde R \le \tilde r \le 0, \ |\tilde \rho| \le 2\}$ we have \[ q(\tilde r, \tilde \rho) = - C_q (\tilde r + \widetilde R+1) \tilde \rho, \] \[ \re H_{p_B}q = -2C_q \tilde \rho^2 + C_q(\tilde r + \widetilde R+1) \widetilde V'_B(\tilde r) \le -C_q (\re p_B + 1) \] where $C_q>0$ is a large constant which will be specified below, and where for the inequality we used \eqref{e:betahalf}. Let $Q \in \Psi^{-\infty}(\R)$ be a quantization of $q$ with $\tilde h$ as semiclassical parameter and put \begin{equation}\label{e:expexp} P_{B,\eps} = e^{\eps Q/\tilde h} P_B e^{-\eps Q/ \tilde h} = P_B - \eps[P_B,Q/\tilde h] + \eps^2 \tilde h^{-4\delta} R, \end{equation} where $R \in \Psi_\delta^{-\infty}(\R)$ by \eqref{expexp}. The $\tilde h$-semiclassical principal symbol of $P_{B,\eps}$ is \[ p_{B,\eps} = \tilde \rho^2 + V_B(\tilde r) - 1 - i \widetilde W_C(\tilde r) + i \eps H_{p_B} q \] We will prove \begin{equation}\label{e:modelbouncd0eps} \|(P_{B,\eps} -E')^{-1}\|_{L^2_{\tilde r} \to H^2_{\tilde h, \tilde r} } \le 5/\eps, \qquad E' \in [-E_0,E_0], \end{equation} from which it follows by \eqref{e:expest} that \begin{equation}\label{e:modelbound00} \|(P_{B,\eps} -\lambda)^{-1}\|_{L^2_{\tilde r} \to H^2_{\tilde h, \tilde r} } \le \frac{\tilde h^{-N}}{M\log(1/\tilde h)}, \quad |\re \lambda| \le E_0, \ |\im \lambda| \le M \tilde h \log(1/\tilde h) \end{equation} where $N = 10M(\|Q\|_{H^2_{\tilde h, \tilde r} \to H^2_{\tilde h, \tilde r}} + \|Q\|_{L^2_{\tilde r} \to L^2_{\tilde r}}) + 1$. The proof that \eqref{e:modelbound00} implies \eqref{e:modelbound2tilde} is the same as the proof that \eqref{e:pkest1} implies \eqref{e:rkbound}. Let $\phi \in C_0^\infty(T^*\R)$ be identically $1$ near $\{(\tilde r, \tilde \rho): -\widetilde R \le \tilde r \le 0,\ |\tilde \rho|\le2,\ |\re p_B(\tilde r, \tilde \rho)| \le E_0\}$ and be supported such that $\re H_{p_B}q <0$ on $\supp \phi$. Let $\Phi$ be the quantization of $\phi$ with $\tilde h$ as semiclassical parameter. For $h$ (and hence $\tilde h$ and $\eps$) small enough, we have $|p_{B,\eps} - E'| \ge (1 + \tilde \rho^2)/C$ on $\supp (1-\phi)$, uniformly in $E' \in [-E_0,E_0]$, in $\alpha \le \alpha_0$ and in $h$. Hence, by the semiclassical elliptic estimate \eqref{ellipestrn}, \[ \|(\Id - \Phi) u\|_{H^2_{\tilde h, \tilde r}} \le C \|(P_{B,\eps} - E')(\Id - \Phi) u\|_{L^2_{\tilde r}} + \Oh(h^\infty)\|u\|_{H^{-N}_{\tilde h, \tilde r}}. \] Using the fact that $\re H_{p_B}q <0$ on $\supp\phi$, fix $C_q$ large enough that on $\supp\phi$ we have \[ \im p_{B,\eps} = - \widetilde W_C(\tilde r) + \eps \re H_{p_B} q \le - \eps. \] Then, using the sharp G\aa rding inequality \eqref{gardingrn}, we have, for $h$ sufficiently small, \[\begin{split} \|\Phi u\|_{L^2_{\tilde r}(\R)} \|(P_{B,\eps} - E')\Phi u\|_{L^2_{\tilde r}(\R)} &\ge - \la\im (P_{B,\eps} - E')\Phi u, \Phi u \ra_{L^2_{\tilde r}(\R)} \\ & \ge \eps \|\Phi u\|_{L^2_{\tilde r}(\R)}^2 - C \tilde h^{1-2\delta} \|u\|^2_{H^{1/2}_{\tilde h, \tilde r}(\R)}. \end{split}\] We deduce \eqref{e:modelbouncd0eps} from this just as we did \eqref{e:pkepsest} above. \end{proof} \begin{proof}[Proof of \eqref{e:modelbound2b} for $\lambda_1 h \le \alpha \le \alpha_0$.] It suffices to show that \begin{equation}\label{e:modelboundcomp} \left\|\chi R_B \chi\right\|_{L^2_r \to H^2_{h,r}} \le C/h, \end{equation} when $|\re \lambda| \le E_0, \,\im \lambda \ge 0$, with $R_B$ as in the proof of \eqref{e:modelbound2} for $\lambda_1 h \le \alpha \le \alpha_0$, $E_0$ as in \eqref{e:e0cusp}\footnote{Note that for this proof we do not use the variables $\tilde r$ and $\tilde h$.}. Then $\left\|\chi P(\alpha) - \lambda)^{-1} \chi\right\|_{L^2_r \to H^2_{h,r}} \le C/h$ (for the same range of parameters) follows by the same argument that reduced \eqref{e:modelbound2} to \eqref{e:modelbound2tilde} above. After this, \eqref{e:modelbound2b} follows by complex interpolation as in the proof that \eqref{e:pkest1} implies \eqref{e:rkbound} above. Indeed, take $f(\lambda,h)$ holomorphic in $\lambda$, bounded uniformly for $\lambda \in \Omega = [-E_0,E_0] + i [-Mh\log\log(1/h),0]$, and satisfying \[ |\re \lambda| \le E \Rightarrow |f| \ge 1, \qquad |\re \lambda| \le [(E+E_0)/2,E_0] \Rightarrow |f| \le h^2 \] for $\lambda \in \Omega$. Then define the subharmonic function \[ g(\lambda,h) = \log\|\chi (P(\alpha) - \lambda)^{-1}\chi\|_{L^2_r \to H^2_{h,r}} + \log|f(\lambda,h)| + 2C_0 \frac{\im \lambda} h \log(1/h), \] and apply the maximum principle to $g$ on $\Omega$, observing that $g \le C + \log(1/h)$ on $\D \Omega$. It now remains to prove \eqref{e:modelboundcomp}, which we do using a `non-compact' variant of the positive commutator method of \cite{Datchev-Vasy:Propagation}. Fix $-R_0 < \inf \supp \chi$ and take $f \in L^2_r$ with $\supp f \subset (-R_0,\infty)$. Let $u = R_B f$. We will show that $\|\chi u\|_{H^2_{h,r}} \le C\|f\|_{L^2_r}/h$. As an escape function take $q \in S^0(\R)$ with $q\ge 0$ everywhere and such that \[ q(r,\rho) = \begin{cases} 1 + 2 R_0e^{-1/R_0}, & -R_0 \ge r, \\ 1 + 2 R_0e^{-1/R_0} -\rho (r+R_0+1) e^{-1/(r+R_0)}, & -R_0 < r \le 0 \textrm{ and } |\rho|\le2. \end{cases} \] We do not prescribe additional conditions on $q$ outside of this range of $(r,\rho)$, as $P_B$ is semiclassically elliptic there. The $h$-semiclassical principal symbol of $P_B$ is (see \eqref{e:pcsymb}) \[ p_B = \rho^2 + V_B( r) - 1 - i W_C( r), \] where $V_B(r) = \widetilde V_B(\tilde r)$. Making $-\widetilde R$ more negative if necessary, we may suppose without loss of generality that \[ r \ge -R_0 \Longrightarrow V_B(r) = \alpha^2 e^{-2(r + \beta(r))}. \] For $r \le -R_0$ we have $H_{p_B} q = 0$, and for $-R_0 < r \le 0$, $|\rho| \le 2$ we have \[\begin{split} \re H_{p_B} q(r,\rho) &= \left[-2\rho^2 (1 + 1/(r+R_0) ) + V'_B(r)(r+R_0+1) \right]e^{-1/(r+R_0)} \\ &\le -(\re p_B +1)e^{-1/(r+R_0)}. \end{split}\] Consequently we may write \[ \re H_{p_B} (q^2) = -b^2 + a, \] where $a,b \in C^\infty_0(T^*\R)$ and $\supp a$ is disjoint from $\{r \le -R_0\}$ and from $\{-R_0<r\le 0\} \cap \{|\rho|\le2\}$. Note that \begin{equation}\label{e:bne0pc} b \ne 0 \textrm{ on } \{|p_B| \le E_0\} \cap T^*(-R_0,0). \end{equation} Let $Q = \Op(q)$ as in \eqref{quantdef}. Then \begin{equation}\label{e:pcposcom} i[P_B,Q^*Q] = -h B^*B + hA + [W_C,Q^*Q] + h^2Y, \end{equation} where $B,A,Y \in \Psi^{-\infty}(\R)$ and $B,A$ have semiclassical principal symbols $b,a$. Note that if $\chi_0 \in C_0^\infty((-R_0,\infty))$, then by \eqref{e:bne0pc} and \eqref{ellipestrn} we have \begin{equation}\label{e:chi0pc} \| \chi_0 u\|^2_{H^2_{h,r}} \le C (\|Bu\|^2_{L^2_r} + \log^2(1/h)\|f\|^2_{L^2_r}), \end{equation} so it suffices to show that \begin{equation}\label{e:bpc} \|Bu\|^2_{L^2_r} \le C h^{-2} \|f\|^2_{L^2_r}. \end{equation} Combining \eqref{e:pcposcom} with \[ \langle i[P_B,Q^*Q]u,u\rangle_{L^2_r} = -2\im\langle Q^*Qu,f\rangle_{L^2_r} +2\langle W_C Q^*Q u,u\rangle_{L^2_r} + 2 \im \lambda \|Q u\|^2_{L^2_r} \] gives \begin{equation}\label{e:bpcterms}\begin{split} \|Bu\|^2_{L^2_r} &= \langle Au,u\rangle_{L^2_r} + \frac 2 h\im\langle Q^*Qu,f\rangle_{L^2_r} - \frac 1 h\langle (W_C Q^*Q + Q^*Q W_C)u,u\rangle_{L^2_r} \\ &\qquad\qquad - \frac{2 \im \lambda}h \|Q u\|^2_{L^2_r} + h \langle Y u,u\rangle_{L^2_r}. \end{split}\end{equation} We now estimate the right hand side term by term to obtain \eqref{e:bpc}. Since $P_B - \lambda$ is semiclassically elliptic on $\supp a$, by \eqref{ellipestrn} followed by \eqref{e:modelbound2} we have \[ |\langle Au,u\rangle_{L^2_r}| \le C \|f\|_{L^2_r}^2 + Ch^2 \|u\|^2_{L^2_r} \le C\log^2(1/h) \|f\|_{L^2_r}^2. \] For any $\epsilon>0$ and $\chi_1 \in C_0^\infty(\R)$ with $\chi_1 = 1$ near $\supp f$ we have \[ \frac 2 h\im\langle Q^*Qu,f\rangle_{L^2_r} \le \epsilon \| \chi_1 u\|^2_{L^2_r} + \frac C{h^2\epsilon} \|f\|^2_{L^2_r}. \] By \eqref{e:bne0pc} and the elliptic estimate \eqref{ellipestrn}, if further $\inf \supp \chi_1 >-R_0$, then \eqref{e:chi0pc} gives \[ \frac 2 h\im\langle Q^*Qu,f\rangle_{L^2_r} \le C \epsilon \| B u\|^2_{L^2_r} + \frac C{h^2\epsilon} \|f\|^2_{L^2_r}. \] Next we have, using $W_C \ge 0$ and the fact that $h^{-1}[W_C, Q^*]Q$ has imaginary principal symbol, followed by \eqref{e:modelbound2}, \[\begin{split} - \frac 1 h\langle (W_C Q^*Q + Q^*Q W_C)u,u\rangle_{L^2_r} &= - \frac 2 h \langle W_C Qu,Qu\rangle_{L^2_r} + \frac 2 h \re \langle [W_C, Q^*]Q u,u\rangle_{L^2_r}\\ & \le C h\|u\|^2_{L^2_r} \le C \frac{\log^2(1/h)}h\|f\|_{L^2_r}^2. \end{split}\] Finally we observe that $- 2 \im \lambda \|Q u\|^2_{L^2_r}/h \le 0$ since $\im \lambda \ge 0$, while \eqref{e:modelbound2} implies \[ h \langle Y u,u\rangle_{L^2_r} \le C \frac{\log^2(1/h)}h\|f\|_{L^2_r}^2. \] This completes the estimation of \eqref{e:bpcterms} term by term, giving \eqref{e:bpc}. \end{proof} \begin{proof}[Proof of \eqref{e:modelpropa} for $\lambda_1 h \le \alpha \le \alpha_0$.] We begin this proof with the same rescaling to $\tilde r$ and $\tilde h$, and the same parametrix construction as for the proof of \eqref{e:modelbound2} for $\lambda_1 h \le \alpha \le \alpha_0$ above, but with the additional requirement that \[ -\widetilde R \le r_0/\log 2. \] Then if we put \[ \widetilde \chi_+(\tilde r) = \chi_+(r), \qquad \widetilde \chi_-(\tilde r) = \chi_-(r), \] we have \[ \supp \widetilde \chi_+ \subset (r_0/\log(2\alpha_0/\alpha),\infty) \subset (r_0/\log2,\infty), \quad \supp \chi_E \subset (-\infty,-\widetilde R -1) , \] and hence \begin{equation}\label{e:chipchie} \widetilde\chi_+(\tilde r) \chi_E(\tilde r -1 )=0. \end{equation} Then, noting that \eqref{e:palphapar} implies \[ (P(\alpha)-\lambda)^{-1} = G(\Id-A_B)(\Id + \Oh_{L^2_{\tilde r} \to L^2_{\tilde r} }(e^{-1/(C\tilde h)})), \] we use \eqref{e:chipchie} to write \[ \widetilde\chi_+(\tilde r)(P(\alpha)-\lambda)^{-1} \widetilde\chi_-(\tilde r) = \widetilde\chi_+(\tilde r) R_B \widetilde\chi_-(\tilde r) + \Oh_{L^2_{\tilde r} \to H^2_{\tilde h, \tilde r} }(e^{-1/(C\tilde h)}). \] Returning to the $r$ and $h$ variables, we see that it suffices to show that \begin{equation}\label{e:modelproppb} \|\varphi(h D_{ r}) \chi_+( r) R_B\chi_-( r)\|_{L^2_{ r} \to H^2_{ h, r}} = \Oh( h^\infty). \end{equation} The proof of \eqref{e:modelproppb} is almost the same as that of \eqref{e:modelprop0}. There are two differences. The first difference is that as an escape function we use \[ q = \varphi_r( r)\varphi_\rho( \rho), \qquad \re H_{p_B} q= 2 \rho\varphi'_r( r)\varphi_\rho( \rho) - V'_C( r) \varphi_r'( r)\varphi'_\rho( \rho), \] where $\varphi_r \in C_0^\infty(\R;[0,\infty))$ with $\supp \varphi_r \subset (r_0,\infty)$, $\varphi_r' \ge 0$ near $[r_0,0]$, $\varphi_r' > 0$ near $\supp \chi_+$. Take $\varphi_\rho \in C_0^\infty(\R;[0,\infty))$ with $\supp \varphi_\rho \subset (-\infty,0)$, $\varphi_\rho' \le 0$ near $[-2,0]$, $\varphi_\rho \ne 0$ near $\supp \varphi \cap [-2,0]$. Impose further that $\sqrt\varphi_r, \sqrt\varphi_\rho \in C_0^\infty(\R)$, and that $\varphi'_r \ge c \varphi_r$ for $r \le 0$, where $c >0$ is chosen large enough that $\re H_{p_B} q \le -(2\Gamma +1)q$ on $\{r \le 0, \rho \ge -2\}$. The second difference is that the complex absorbing barrier $W_C$ produces a remainder term in the positive commutator estimate, analogous to the one in the proof of \eqref{e:modelbound2b} for $\lambda_1 h \le \alpha \le \alpha_0$ above. The same argument removes the remainder term in this case. \end{proof} \section{Model operator in the funnel}\label{s:funnel} Take $W_F \in C^\infty(\R;[0,1])$ nonincreasing with $W_F(r) = 0$ near $r \ge R_g$, $W_F(r) = 1$ near $r \le 0$, and let \[ P_F = h^2D_r^2 + (1-W_F(r))e^{-2(r+\beta(r))} \Delta_{S_+} + h^2V(r) - 1 - iW_F(r), \] with notation as in \S\ref{spectrum}. \begin{prop} \label{p:modelboundf} For every $\chi \in C_0^\infty(X)$, $E \in (0,1)$, there is $C_0>0$ such that for any $M>0$, there are $h_0,C>0$ such that the cutoff resolvent $\chi R_F(\lambda) \chi$ continues holomorphically from $\{\im \lambda>0\}$ to $\{|\re \lambda| \le E$, $-Mh\log(1/h) \le \im \lambda\}, \, h \in (0,h_0]$, where it satisfies \begin{equation}\label{e:modelboundf}\left\|\chi R_F(\lambda) \chi\right\|_{L^2_\varphi(X) \to H^2_{\varphi,h} (X)} \le C \begin{cases} h^{-1} + |\lambda|, \qquad & \im \lambda > 0 \\ h^{-1}e^{C_0 |\im \lambda|/h}, \qquad &\im \lambda \le 0, \end{cases}. \end{equation} \end{prop} \begin{prop}\label{p:modelpropf} Let $r_0 > R_g$, $\chi_- \in C_0^\infty((-\infty,r_0))$, $\chi_+ \in C_0^\infty((r_0,\infty))$, $\varphi \in C^\infty(\R)$ supported in $(0,\infty)$ and bounded with all derivatives, $E \in (0,1)$, $\Gamma>0$ be given. Then there exists $h_0>0$ such that \begin{equation}\label{e:modelpropf} \left\|\chi_+(r)R_F(\lambda)\chi_-(r)\varphi(hD_r)\right\|_{L^2_\varphi(X) \to H^2_{\varphi,h}(X)} = \Oh(h^\infty), \end{equation} for $|\re \lambda| \le E, \, - \Gamma h \le \im \lambda \le h^{-N}$, $h \in (0,h_0]$. \end{prop} To prove these propositions we separate variables over the eigenspaces of $\Delta_{S_+}$, writing $P_F = \bigoplus_{m=0}^\infty h^2D_r^2 + (1-W_F(r))(h\lambda_m)^2 e^{-2(r + \beta(r))} + h^2 V(r) - 1 - iW_F(r),$ where $0 = \lambda_0 < \lambda_1 \le \cdots$ are square roots of the eigenvalues of $\Delta_{S_+}$. It suffices to prove \eqref{e:modelboundf}, \eqref{e:modelpropf} with $P_F$ replaced by $P(\alpha)$, with estimates uniform in $\alpha \ge 0$, where \[ P(\alpha) = h^2D_r^2 + (1-W_F(r))\alpha^2e^{-2(r + \beta(r))} + h^2 V(r) - 1 - iW_F(r). \] Next we use a variant of the method of complex scaling presented in the proof of Lemma \ref{l:p0}, but with contours $\gamma$ depending on $\alpha$ in such a way as to give estimates uniform in $\alpha$; the $\alpha$-dependence is needed because the term $\alpha^2(1-W_F(r))e^{-2(r + \beta(r))}$, although exponentially decaying, is not uniformly exponentially decaying as $\alpha \to \infty$. Such contours were first used in \cite[\S 4]{z}; here we present a simplified approach based on that in \cite[\S 5.2]{Datchev:Thesis}. Fix $R>R_g$ sufficiently large that \[ \supp \chi \cup \supp \chi_+ \cup \supp \chi_- \subset(-\infty,R). \] and that \begin{equation}\label{e:betanegf} \re z \ge R, \ 0 \le \arg z \le \theta_0 \Longrightarrow |\im \beta(z)| \le |\im z|/2, \end{equation} where $\theta_0$ is as in \S\ref{s:assumptions}. Let $\gamma = \gamma_\alpha(r)$ be real-valued, smooth in $r$ with $\gamma'(r)\ge 0$ for all $r$, and obey $\gamma(r) = 0$ for $r \le R$ (here and below $\gamma' = \partial_r \gamma$). Suppose $\gamma'' \in C_0^\infty(\R)$ for each $\alpha$, but not necessarily uniformly in $\alpha$. Now put \[\begin{split} P_\gamma(\alpha) = \frac{h^2D_r^2}{(1 + i\gamma'(r))^2} - h \frac{\gamma''(r) h D_r}{(1 + i\gamma'(r))^3}+ \alpha^2(1-W_F(r))e^{-2(r + i \gamma(r) + \beta(r + i \gamma(r)))} \\ + h^2 V(r + i \gamma(r)) - 1 - iW_F(r). \end{split}\] If we define the differential operator with complex coefficients \[\widetilde P(\alpha) = h^2D_z^2 + \alpha^2(1-W_F(z))e^{-2(z + \beta(z))} + h^2 V(z) - 1 - iW_F(z), \] then we have \[ P(\alpha) = \widetilde P(\alpha)|_{\{z=r: r \in \R\}}, \qquad P_\gamma(\alpha) = \widetilde P(\alpha)|_{\{z=r + i \gamma(r): r \in \R\}}. \] If $\chi_0 \in C^\infty(\R)$ has $\supp \chi_0 \cap \supp \gamma = \varnothing$, then \[ \chi_0(P(\alpha) - \lambda)^{-1} \chi_0 = \chi_0(P_\gamma(\alpha) - \lambda)^{-1} \chi_0, \qquad \im \lambda > 0, \] by an argument almost identical to that used to prove \eqref{e:p0agree}; the only difference is we construct WKB solutions which are exponentially growing and decaying as $\re z \to +\infty$ rather than $-\infty$, and we take $ f(z) = (\alpha^2 e^{-2(z + \beta(z))} + h^2 V(z) -1 - \lambda)/h^2.$ Consequently to prove \eqref{e:modelboundf} and \eqref{e:modelpropf}, it is enough to show that \begin{equation}\label{e:modelboundfg}\left\|(P_\gamma(\alpha) - \lambda)^{-1} \right\|_{L^2(\R) \to H^2_h (\R)} \le C e^{C_0 |\im \lambda| /h}, \end{equation} and \begin{equation}\label{e:modelpropfg} \left\|\chi_+(r)(P_\gamma(\alpha) - \lambda)^{-1}\chi_-(r) \varphi(hD_r)\right\|_{L^2(\R) \to H^2_h(\R)} = \Oh(h^\infty), \end{equation} for a suitably chosen $\gamma$, with estimates uniform in $\alpha \ge 0$. Fix $R_->R$ such that \begin{equation}\label{e:imbeta} |\im \beta(z)| \le \im z/2 \end{equation} for $\re z \ge R_-$, $0 \le \arg z \le \theta_0$, with $\theta_0$ as in \S\ref{s:assumptions}. Take $\alpha_0 > 0$ such that \begin{equation}\label{e:a0f} \alpha_0^2 e^{-2(R+1)} e^{-2\max|\re\beta|} = 8, \end{equation} where $\max |\re \beta|$ is taken over $\R \cup \{|z| >R_g, \ 0 \le \arg z \le \theta_0\}$. We consider the cases $\alpha \le \alpha_0$ and $\alpha \ge \alpha_0$ separately. \begin{proof}[Proof of \eqref{e:modelboundfg} for $0 \le \alpha \le \alpha_0$] Fix \[E_0 \in (E,1), \qquad \eps = 10Mh\log(1/h).\] We use the same complex scaling as in the proof of Lemma \ref{l:p0}. In this range $\gamma$ is independent of $\alpha$ and we put $\gamma = \delta \gamma_-$, where $0<\delta \ll 1$ will be specified later, and we require $\gamma_-(r) =0$ for $r \le R_-$, $\gamma_-'(r) \ge 0$ for all $r$, and $\gamma_-'(r) = \tan \theta_0$ for $r \ge R_-+1$. The semiclassical principal symbol of $P_\gamma(\alpha)$ is \[\begin{split} p_\gamma(\alpha) &= \frac{\rho^2}{(1 + i\gamma'(r))^2} + \alpha^2(1-W_F(r))e^{-2(r + i \gamma(r) + \beta(r + i \gamma(r)))} - 1 - iW_F(r) \\ &= \rho^2 + \alpha^2(1-W_F(r))e^{-2(r + \beta(r ))} - 1 - iW_F(r) + \Oh(\delta). \end{split},\] where the implicit constant in $\Oh$ is uniform in compact subsets of $T^*\R$. Moreover, \[ \re p_\gamma(\alpha) + 1 \ge \rho^2 - \Oh(\delta), \] and, using \eqref{e:imbeta}, \begin{equation}\label{e:ima0} \begin{split} \im p_\gamma(\alpha) & \le -\alpha^2(1-W_F(r))e^{-2(r + \re \beta(r+i\gamma(r))}\sin(2(\gamma(r) + \im \beta(r+ i \gamma(r)))\\ & \le -\alpha^2(1-W_F(r))e^{-2(r + \re \beta(r+i\gamma(r))}\sin\gamma(r)\\ &= -\alpha^2(1-W_F(r))e^{-2(r + \re \beta(r + i \gamma(r))}\gamma(r) (1 + \Oh(\delta^2)), \end{split} \end{equation} again uniformly on compact subsets of $T^*\R$. Take $q \in C_0^\infty(T^*\R)$ such that on $\{0 \le r \le R_-+1, \ |\rho| \le 2\}$ we have \[\begin{split} q &= - C_q (r+1) \rho,\\ \frac{\re H_{p_\gamma} q}{C_q} &= -2 \rho^2 -(W'_F(r) + 2(1 + \beta'(r))(r+1)\alpha^2e^{-2(r + \beta(r))} + \Oh(\delta \\&\le - (\re p_\gamma + 1) \le - \rho^2 + \Oh(\delta), \end{split}\] where $C_q>0$ will be specified later, and provided $\delta$ is sufficiently small. Let $Q = \Op(q)$ and put \[ P_{\gamma,\eps}(\alpha) = e^{\eps Q/h} P_\gamma(\alpha) e^{- \eps Q/h} = P_\gamma(\alpha) - \eps[P_\gamma(\alpha),Q/h] + \eps^2R, \] where $R \in \Psi^{-\infty}(\R)$ (see \eqref{expexp}). As in the proof of Lemma \ref{l:p0}, \eqref{e:modelboundfg} follows from \begin{equation}\label{e:modelboundfeps0} \left\|(P_{\gamma,\eps}(\alpha) - E')^{-1} \right\|_{L^2(\R) \to H^2_h (\R)} \le 5/\eps, \end{equation} for $E' \in [-E_0,E_0]$. The proof of \eqref{e:modelboundfeps0} combines elements of the proofs of \eqref{e:modelbound0eps0} and \eqref{e:modelbouncd0eps}. Let $\phi \in C_0^\infty(T^*\R)$ be identically $1$ near $\{0 \le r \le R_- + 1,\ | \rho|\le2,\ |\re p_\gamma| \le E_0\}$ and be supported such that $\re H_{p_\gamma}q <0$ on $\supp \phi$. Let $\Phi$ be the quantization of $\phi$. For $\delta$ small enough, and $h$ (and hence $\eps$) small enough depending on $\delta$, we have $|p_{\gamma,\eps} - E'| \ge \delta (1 + \rho^2)/C$ on $\supp (1-\phi)$, uniformly in $E' \in [-E_0,E_0]$, in $\alpha \le \alpha_0$ and in $h$, where $p_{\gamma,\eps}(\alpha)$ is the semiclassical principal symbol of $P_{\gamma,\eps}(\alpha)$. Hence, by the semiclassical elliptic estimate \eqref{ellipestrn}, \[ \|(\Id - \Phi) u\|_{H^2_{h}(\R)} \le C \delta^{-1}\|(P_{\gamma,\eps} - E')(\Id - \Phi) u\|_{L^2(\R)} + \Oh(h^\infty)\|u\|_{H^{-N}_{ h}(\R)}. \] Using \eqref{e:ima0} and $\supp\phi \subset \{\re H_{p_c}q <0\}$, fix $C_q$ large enough that on $\supp\phi$ we have \[ \im p_{\gamma,\eps} = \im p_\gamma + \eps \re H_{p_c} q \le -\alpha^2(1-W_F)e^{-2(r + \re \beta)} \gamma (1 + \Oh(\delta^2)) + \eps \re H_{p_c} q \le -\eps. \] Then, using the sharp G\aa rding inequality \eqref{gardingrn}, we have, for $h$ sufficiently small, \[\begin{split} \|\Phi u\|_{L^2(\R)} \|(P_{C,\eps} - E')\Phi u\|_{L^2(\R)} &\ge - \la\im (P_{C,\eps} - E')\Phi u, \Phi u \ra_{L^2(\R)} \\ & \ge \eps \|\Phi u\|_{L^2(\R)}^2 - C h \|u\|^2_{L^2(\R)} . \end{split}\] This implies \eqref{e:modelboundfeps0} just as in the proofs of \eqref{e:modelbound0eps0} and \eqref{e:modelbouncd0eps}. \end{proof} \begin{proof}[Proof of \eqref{e:modelboundfg} for $\alpha \ge \alpha_0$] Define contours $\gamma = \gamma_\alpha(r)$ as follows. Take $R_\alpha$ such that \begin{equation}\label{e:ralphadef} \alpha^2 e^{-2R_\alpha} e^{2 \max |\re \beta|} = \min\{1/4,(\tan \theta_0)/2\}, \end{equation} where $\max |\re \beta|$ is taken over $\R \cup \{|z| >R_g, \ 0 \le \arg z \le \theta_0\}$. Note that $R_\alpha > R+1$ by \eqref{e:a0f}. Take $\gamma$ smooth and supported in $(R,\infty)$, with $0 \le \gamma'(r) \le 1/2$, and such that \[\begin{split} \gamma(r) \le \pi/9, \quad &r \le R+1,\\ \pi/18 \le \gamma(r) \le \pi/6, \quad &R+1 \le r \le R_\alpha,\\ \gamma'(r) = \min\{1/2,\tan \theta_0\}, \quad & r \ge R_\alpha. \end{split}\] We prove that \begin{equation}\label{e:pgaellip} |p_\gamma(\alpha) - E'| \ge (1+\rho^2)/C, \end{equation} uniformly for $-E \le E' \le E$ and $\alpha \ge \alpha_0$, by considering each range of $r$ individually. By \eqref{ellipestrn} this implies \eqref{e:modelboundfg} for $\alpha \ge \alpha_0$. \begin{enumerate} \item For $r \le R+1$ we have \begin{equation}\label{e:repgar}\begin{split} \re p_\gamma(\alpha) + 1 &= \frac{\rho^2(1-\gamma'(r)^2)}{|1 + i\gamma'(r)|^4} + \alpha^2(1-W_F(r))\re e^{-2(r + i \gamma(r) + \beta(r + i \gamma(r)))} \\ &\ge\frac 13 \rho^2 + \alpha^2(1-W_F(r))e^{-2(r+\re \beta(r+i\gamma(r)))} \cos (3\gamma(r)) \\ &\ge \frac 13 \rho^2 + 4(1-W_F(r)) , \end{split}\end{equation} where for the first inequality we used $\gamma'\le 1/2$ and \eqref{e:imbeta}, and for the second \eqref{e:a0f} and $\gamma \le \pi/9$. Since $\im p_\gamma = -W_F$ whenever $W_F \ne 0$, this gives \eqref{e:pgaellip} for $r \le R+1$. \item For $R+1 \le r \le R_\alpha$ we have $\re p_\gamma(\alpha) \ge \frac 13 \rho^2 - 1$ by the same argument as in \eqref{e:repgar}. This gives \eqref{e:pgaellip} for $R+1 \le r \le R_\alpha$ once we note that \eqref{e:imbeta} and \eqref{e:ralphadef} imply \[\begin{split} -\im p_\gamma(\alpha) &= \frac{2\rho^2\gamma'(r)}{|1 + i\gamma'(r)|^4} - \alpha^2\im e^{-2(r + i \gamma(r) + \beta(r + i \gamma(r)))} \\&\ge e^{-2\max|\re\beta|}\sin(\pi/18)\min\{1/2,(\tan \theta_0)/2\}. \end{split}\] \item For $r \ge R_\alpha$, note that $\alpha^2 |e^{-2(r + i \gamma(r) + \beta(r + i \gamma(r)))}| \le \gamma'(r)$. We again deduce \eqref{e:pgaellip} by considering two ranges of $\rho$ individually. When $\rho^2/|1 +i\gamma'(r)|^4 \le 1/2$ we have \[\begin{split} \re p_\gamma(\alpha) &= \frac{\rho^2(1-\gamma'(r)^2)}{|1 + i\gamma'(r)|^4} + \alpha^2\re e^{-2(r + i \gamma(r) + \beta(r + i \gamma(r)))} - 1 \\ &\le 1/2 + 1/4- 1 = -1/4. \end{split}\] When $\rho^2/|1 +i\gamma'(r)|^4 \ge 1/2$ we have \[\begin{split} \im p_\gamma(\alpha) &= \frac{-2\rho^2\gamma'(r)}{|1 + i\gamma'(r)|^4} + \alpha^2\im e^{-2(r + i \gamma(r) + \beta(r + i \gamma(r)))}\\ &\le \frac{-2\rho^2\gamma'(r)}{|1 + i\gamma'(r)|^4} + \frac{\gamma'(r)}2 \le -3\gamma'(r)/2= -\min\{3/4,3(\tan \theta_0)/2\}. \end{split}\] \end{enumerate} \end{proof} For $\alpha \ge \alpha_0$, \eqref{e:modelpropfg} follows from an Agmon estimate just as in the proof of \eqref{e:modelpropa} for $\alpha \ge \alpha_0$ above. For $\alpha \le \alpha_0$, \eqref{e:modelpropfg} follows from the same positive commutator argument as was used for the proof of \eqref{e:modelproppb}. \section{Applications}\label{s:applications} In this section we use the notation \[ \|u\|_s = \|(1 + \Delta)^{s/2} u\|_{L^2(X)},\ \|A\|_{s \to s'} = \sup_{\|u\|_s = 1} \|A u\|_{s'}, \qquad s,s' \in \R. \] We begin by using \eqref{logreg} to deduce polynomial bounds on the resolvent between Sobolev spaces. If $\chi, \widetilde \chi \in C_0^\infty(X)$ have $\widetilde \chi \chi = \chi$, then for any $s \in \R$, we have \[\begin{split} \|\Delta \chi u\|_s &\le C(\|\widetilde \chi u\|_s + \|\widetilde\chi \Delta u\|_s). \end{split}\] Hence, for any $s,s' \in \R$, we have, if $R_\chi(\sigma) = \chi (\Delta - n^2/4 - \sigma^2)^{-1} \chi$, \[\begin{split} \|R_\chi(\sigma)\|_{s \to s} &\le C \|R_{\widetilde \chi}(\sigma)\|_{s' \to s'},\\ \|R_\chi(\sigma)\|_{s \to s'+2} &\le C(1 + |\sigma|^2) \left(\|R_{\widetilde \chi}(\sigma)\|_{s \to s} + \|R_{\widetilde \chi}(\sigma)\|_{s \to s'}\right), \\ \|R_\chi(\sigma)\|_{s \to s'} &\le C(1 + |\sigma|^2)^{-1} \left(\|R_{\widetilde \chi}(\sigma)\|_{s \to s'+2} + \|R_{\widetilde \chi}(\sigma)\|_{s \to s'}\right). \end{split}\] Consequently, for any $\chi \in C_0^\infty(X)$, there is $M_0>0$ such that for any $M_1>0$, $s\in \R$, $s' \le s+2$ there is $M_2>0$ such that \begin{equation}\label{e:ressob} \|R_\chi(\sigma)\|_{s \to s'} \le M_2 |\sigma|^{M_0|\im \sigma| + s' - s-1}, \end{equation} when $|\re \sigma| \ge M_2$, $\im \sigma \ge -M_1$. \subsection{Local smoothing} By the self-adjoint functional calculus of $\Delta$, the Schr\"odinger propagator is unitary on all Sobolev spaces: for any $s,t \in \R$, if $u \in H^s(X)$, \[ \|e^{-it\Delta}u\|_s = \|u\|_s. \] The Kato local smoothing effect says that if we localize in space and average in time, then Sobolev regularity improves by half a derivative: for any $\chi\in C_0^\infty(X)$, $T>0$, $s \in \R$ there is $C>0$ such that if $u \in H^s(X)$, \begin{equation}\label{e:locsmo} \int_0^T \left\|\chi e^{-it\Delta} u\right\|^2_{s+1/2}dt \le C \|u\|^2_s. \end{equation} This follows by a $TT^*$ argument from \eqref{e:ressob} applied with $\im \sigma = s = 0$, $s' = 1$ (see e.g. \cite[p 424]{bur:smoothing}); note that in this case the bound is uniform as $\sigma \to \pm \infty$. \subsection{Resonant wave expansions}\label{s:wave} Suppose $\chi (\Delta - n^2/4 - \sigma^2)^{-1} \chi$ is meromorphic for $\sigma \in\C$. For example we may take $(X,g)$ as in \S\ref{infmany}. More generally, if the funnel end is evenly asymptotically hyperbolic as in \cite[Definition 1.2]{g} then this follows as in the proof of Theorem 1.1 in \cite[p 747]{sz2}, but in the interest of brevity we do not pursue this here. Then \eqref{e:ressob} implies that, when the initial data is compactly supported, solutions to the wave equation $(\partial_t^2 + \Delta - n^2/4)u = 0$ can be expanded into a superposition of eigenstates and resonant states, with a remainder which decays exponentially on compact sets: Let $s \in \R$, $\chi \in C_0^\infty(X)$, $f \in H^{s+1}(X)$, $g \in H^{s}(X)$, $\chi f = f$, $\chi g = g$. For any $M_1>0$, \begin{equation}\label{e:sobthresh} s' < s -M_0M_1, \end{equation} there are $C,T>0$ such that if $t \ge T$, $H= \sqrt{\Delta - n^2/4}$, then \[ \left\|\chi \left(\cos(tH) f + \frac{\sin (tH)}{H} g- \sum_{\im \sigma_j > -M_1} \sum_{m=1}^{M(\sigma_j)} e^{-i\sigma_j t} t^{m-1} w_{j,m} \right) \right\|_{s'} \le C e^{-M_1 t}, \] where the sum is taken over poles of $R_\chi(\sigma)$ (and is finite by the Theorem), $M(\sigma_j)$ is the rank of the residue of the pole at $\sigma_j$, and each $w_{j,m}$ is a linear combination of the projections of $f$ and $g$ onto the $m$-th eigenstate or resonant state at $\sigma_j$. This follows from \eqref{e:ressob} by an argument of \cite{lp,v}; see also \cite[Theorem 3.3]{tz2} or \cite[Corollary 6.1]{Datchev-Vasy:Gluing}. \textbf{Remark}. The local smoothing estimate \eqref{e:locsmo} is lossless in the sense that the result is the same if $(X,g)$ is nontrapping and asymptotically Euclidean or hyperbolic (see \cite[(1.6)]{cpv} for a general result). This is because the resolvent estimates \eqref{logreg} and \eqref{e:eucbetter} agree when $\im \sigma = 0$. The resonant wave expansion exhibits a loss in the Sobolev spaces in which the remainder is controlled: the improvement from \eqref{logreg} to \eqref{e:eucbetter} for $\im \sigma < 0$ means that, when \eqref{e:eucbetter} holds, we can replace \eqref{e:sobthresh} with $s' < s$. \section{Lower bounds}\label{s:low} In this section we prove that, in the setting of an exact quotient, the holomorphic continuation of the resolvent grows polynomially. As in \cite[\S 5.3]{b}, we use the fact that in this case integral kernel of the resolvent can be written in terms of modified Bessel functions. \begin{prop}\label{p:bessel} Let $(X,g)$ be given by \[ X = \R \times S, \qquad g = dr^2 + e^{2r}dS, \] where $(S,dS)$ is a compact Riemannian manifold without boundary of dimension $n$. Then for any $\chi \in C_0^\infty(X)$ which is not identically $0$, the cutoff resolvent $\chi(\Delta - n^2/4 - \sigma^2)^{-1}\chi $ continues holomorphically from $\{\im \sigma > 0\}$ to $\C \setminus 0$, with a simple pole of rank $1$ at $\sigma = 0$. Moreover, if $\chi \ne 0$ in a neighborhood of $0$, for any $\eps > 0$ there exists $ C>0$ such that \[ \|\chi(\Delta - n^2/4 - \sigma^2)^{-1}\chi\| \ge e^{-C|\im\sigma|} |\sigma|^{2|\im\sigma|-1}/C, \] when $\im \sigma \le -\eps$, $|\re \sigma| \ge C$, $|\im\sigma|\le |\re\sigma|/\eps$. \end{prop} \begin{proof} As in \S\ref{spectrum} a conjugation and separation of variables reduce this to the study of the following family of ordinary differential operators \[ P_m = D_r^2 + \lambda_m^2 e^{-2r}, \] where $0=\lambda_0 < \lambda_1 \le \lambda_2 \le \cdots$ are square roots of the eigenvalues of $\Delta$. We will show that $\chi(P_m - \sigma^2)^{-1}\chi$ is entire in $\sigma$ for $m > 0$, and that it is holomorphic in $\C \setminus 0$ with a simple pole of rank $1$ at $\sigma = 0$ for $m = 0$. We will further show that \[ \|\chi (P_1 - \sigma^2)^{-1} \chi\| \ge e^{-C|\im\sigma|} |\sigma|^{2|\im\sigma|-1}/C, \] when $\im \sigma \le -\eps$, $|\re \sigma| \ge C$, $|\im\sigma|\le |\re\sigma|/\eps$. We write the integral kernel of the resolvent of each $P_m$ using the following formula (see for example \cite[(1.25)]{tz}): \begin{equation}\label{e:resolventformula} R_m(r,r') = -\psi_1(\max\{r,r'\})\psi_2(\min\{r,r'\})/W(\psi_1,\psi_2), \end{equation} where $\psi_1$ and $\psi_2$ are linearly independent solutions to $(P_m -\sigma^2)u=0$ and $W(\psi_1,\psi_2)$ is their Wronskian. If $m = 0$ we take $\psi_1(r)=e^{i r \sigma}$ and $\psi_2(r) = e^{-i r \sigma}$ (this is the only choice for which the resolvent maps $L^2 \to L^2$ for $\im \sigma > 0$), so that $W(\psi_1,\psi_2)=2i\sigma$. Now the asserted continuation is immediate from the formula \eqref{e:resolventformula}. To study $m > 0$ we use, as in \cite[\S5.3]{b}, the Bessel functions \begin{equation}\label{e:besselres}\psi_1(r) =I_\nu\left(\lambda_m e^{-r} \right), \qquad \psi_2(r) = K_\nu\left(\lambda_m e^{-r} \right), \qquad \nu = -i\sigma.\end{equation} We recall the definitions: \begin{equation}\label{e:idef}I_\nu(z) = \frac{z^\nu}{2^\nu} \sum_{k=0}^\infty \frac{(z/2)^{2k}}{k!\Gamma(\nu+k+1)},\end{equation} \begin{equation}\label{e:kdef}K_\nu(z) = \frac \pi{2\sin(\pi\nu)} \left(I_{-\nu}(z) - I_\nu(z)\right).\end{equation} This pair solves the desired equation (see for example \cite[Chapter 7, (8.01)]{Olver:Asymptotics}) and has $W=1$ (see for example \cite[Chapter 7, (8.07)]{Olver:Asymptotics}). When $\im \sigma > 0$, we have $\re \nu > 0$ and this resolvent maps $L^2 \to L^2$ thanks to the asymptotic \begin{equation}\label{e:iseries} I_\nu(z) = \frac{z^\nu}{2^\nu \Gamma(\nu+1)}\left(1 + \Oh\left(\frac{z^2}{\nu}\right)\right),\end{equation} which is a consequence of \eqref{e:idef}, and thanks to the fact that $K_\nu(z) \sim e^{-z}\sqrt{\pi/2z}$ as $z \to \infty$ (see for example \cite[Chapter 7, (8.04)]{Olver:Asymptotics}). Because $I$ and $K$ are entire in $\nu$, we have the desired homolorphic continuation of the resolvent for all $m > 0$. To estimate the resolvent we use \eqref{e:kdef} and \eqref{e:iseries} to write \[\begin{split} I_\nu(z')K_\nu(z) &= \frac \pi {2\sin(\pi\nu)}I_\nu(z')(I_{-\nu}(z) - I_\nu(z)) \\ & =\frac \pi{\sin(\pi\nu)\Gamma(\nu+1)} \frac {{z'}^\nu}{2^{\nu+1}}\left( \frac{z^{-\nu}}{2^{-\nu} \Gamma(-\nu+1)} - \frac{z^\nu}{2^\nu \Gamma(\nu+1)}\right)\left(1 + \Oh\left(\frac{z^2 + {z'}^2}{\nu}\right)\right). \end{split}\] Using Euler's reflection formula for the Gamma function (see for example \cite[Chapter 2, (1.07)]{Olver:Asymptotics}), \[\frac \pi{\sin(\pi\nu)\Gamma(\nu+1)} = -\Gamma(-\nu) = \frac{\Gamma(-\nu+1)}\nu,\] it follows that \begin{equation}\label{e:ikprod}\begin{split} I_\nu(z')K_\nu(z) &= \frac {{z'}^\nu}{2^{\nu+1}\nu}\left( \frac{z^{-\nu}}{2^{-\nu}} - \frac{z^\nu\Gamma(-\nu+1)}{2^\nu \Gamma(\nu+1)}\right)\left(1 + \Oh\left(\frac{z^2 + {z'}^2}{\nu}\right)\right) \\ &= \frac {{z'}^\nu}{2^{\nu+1}\nu}\left( \frac{z^{-\nu}}{2^{-\nu}} + \frac{\nu z^\nu\sin(\pi\nu)\Gamma(-\nu)^2}{2^\nu \pi}\right)\left(1 + \Oh\left(\frac{z^2 + {z'}^2}{\nu}\right)\right). \end{split}\end{equation} Using Stirling's formula (see for example \cite[Chapter 8, (4.04)]{Olver:Asymptotics}) \[ \Gamma(-\nu) = e^\nu (-\nu)^{-\nu}\sqrt{-2\pi/\nu} (1 + \Oh(\nu^{-1})), \] for $\arg (- \nu)$ varying in a compact subset of $(-\pi,\pi)$ and with the branch of $ (-\nu)^{-\nu}$ taken to be real and positive when $-\nu$ is, we write \[\begin{split} |\nu\sin(\pi\nu)\Gamma(-\nu)^2| & = \pi e^{\pi|\im\nu|} e^{2\re\nu} |\nu|^{-2\re \nu} e^{2\im\nu \arg(-\nu)} (1+ \Oh(|\im \nu|^{-1})),\\ &= \pi e^{2\re\nu} |\nu|^{-2\re \nu} e^{-2\im\nu \arctan\frac{\re\nu}{\im\nu}}(1+ \Oh(|\im \nu|^{-1}))\\ &= \pi |\nu|^{-2\re \nu} e^{-\frac 23 (\re \nu)^3/(\im \nu)^2}(1+ \Oh(|\re\nu|^5|\im\nu|^{-4} + |\im \nu|^{-1})), \end{split}\] for $\arg \nu$ varying in a compact subset of $(0,2\pi)$. To bound the resolvent from below we apply it to the characteristic function of an interval: let $ a > 0 $ and put \[\begin{split} u(r) &= -\int_0^a R_1(r,r')dr' = K_\nu(\lambda_1 e^{-r}) \int_0^a I_\nu(\lambda_1 e^{-r'}) dr', \end{split}\] where the last equality holds only for $r \le 0$. Then if $\chi \in C^\infty(\R)$ is identically $1$ on $[-a,a]$ we have \[\begin{split} \|\chi (P_1- \sigma^2)^{-1} \chi\|^2 &\ge \frac 1 a\int_{-a}^{a} |u(r)|^2 dr \ge \frac 1 a\int_{-a}^0 \left |K_\nu(\lambda_1 e^{-r}) \int_0^a I_\nu(\lambda_1 e^{-r'}) dr'\right|^2 dr\\ &= \frac 1 {a} \left| \int_0^a I_\nu(\lambda_1 e^{-r'}) dr'\right|^2\int_{-a}^0 \left |K_\nu(\lambda_1 e^{-r})\right|^2 dr. \end{split}\] Using \eqref{e:ikprod} we obtain \[ \|\chi (P_1- \sigma^2)^{-1} \chi\|^2 \ge \frac 1 {4a} \left| \int_{-a}^a \frac {(\lambda_1 e^{-r'})^\nu}{2^{\nu}\nu} dr'\right|^2\int_{-2a}^{-a} \left |\frac{(\lambda_1e^{-r})^{-\nu}}{2^{-\nu}} + \frac{\nu (\lambda_1e^{-r})^\nu\sin(\pi\nu)\Gamma(-\nu)^2}{2^\nu \pi}\right|^2 dr, \] provided $ |\nu| ^{-1} \le \lambda^{-2}_1 e^{-2a}/c_0$ for a suitably large absolute constant $c_0$. However, \[\begin{split} &\left| \int_{-a}^a \frac {(\lambda_1 e^{-r'})^\nu}{2^{\nu+1}\nu} dr'\right| = \frac{\lambda_1^{\re \nu}}{2^{\re\nu + 1}|\nu|^2}|e^{a\nu}-e^{-a\nu}|\ge \\&\frac{\lambda_1^{\re \nu}}{2^{\re\nu + 1}|\nu|^2}\left(e^{a |\re \nu|} - e^{-a|\re\nu|}\right) \ge e^{-C|\re \nu|}/(C|\nu|^2). \end{split}\] Then define $f(\nu)$ and $g(\nu)$ by \[\begin{split} \left |\frac{(\lambda_1e^{-r})^{-\nu}}{2^{-\nu}} + \frac{\nu (\lambda_1e^{-r})^\nu\sin(\pi\nu)\Gamma(-\nu)^2}{2^\nu \pi}\right| &\ge \frac 12 |\nu|^{-2\re \nu} e^{-\frac 23 \frac{(\re \nu)^3}{(\im \nu)^2}} \frac{(\lambda_1e^{-r})^{\re\nu}}{2^{\re\nu}} - \frac{2^{\re\nu}}{(\lambda_1e^{-r})^{\re\nu}} \\ &= f(\nu) g(\nu) e^{-\re \nu r} - e^{\re \nu r}/g(\nu). \end{split}\] So, provided $\re \nu \le0$, \[\begin{split} \int_{-2a}^a \left |\frac{(\lambda_1e^{-r})^{-\nu}}{2^{-\nu}} + \frac{\nu (\lambda_1e^{-r})^\nu\sin(\pi\nu)\Gamma(-\nu)^2}{2^\nu \pi}\right|^2dr &\ge \int_{-2a}^{-a} \left(f^2g^2 e^{-2\re \nu r} - 2f\right) dr \\ &\ge a(f^2g^2e^{-4|\re \nu| a} -2f). \end{split}\] Then if additionally $2 \le fg^2e^{-4|\re \nu| a}/2$ (it suffices to require $\re \nu \le - \eps$ and then $|\nu|$ sufficiently large depending on $\eps$), we have \[ \int_{-2a}^a \left |\frac{(\lambda_1e^{-r})^{-\nu}}{2^{-\nu}} + \frac{\nu (\lambda_1e^{-r})^\nu\sin(\pi\nu)\Gamma(-\nu)^2}{2^\nu \pi}\right|^2dr \ge a f^2g^2e^{-4|\re \nu| a}/2, \] so that \[ \|\chi (P_1- \sigma^2)^{-1} \chi\|^2 \ge \frac {e^{-C|\re \nu|}}{C|\nu|^2}|\nu|^{4|\re \nu|}. \] \end{proof} \section*{Appendix. The curvature of a warped product} The result of this calculation is used in the examples in \S\ref{s:examples}, and although it is well known, we include the details for the convenience of the reader. For this section only, let $(S,\tilde g)$ be a compact Riemannian manifold, and let $X = \R \times S$ have the metric \[g = dr^2 + f(r)^2 \tilde g,\] where $f \in C^\infty(\R;(0,\infty))$. Let $p \in X$, let $P$ be a two-dimensional subspace of $T_pX$, and let $K(P)$ be the sectional curvature of $P$ with respect to $g$. We will show that if $\D_r \in P$, then \[K(P) = - f''(r)/f(r),\] while if $P \subset T_pS$ and $\widetilde K(P)$ is the sectional curvature of $P$ with respect to $\tilde g$, then \[K(P) = (\widetilde K(P) - f'(r)^2)/f(r)^2.\] We work in coordinates $(x^0,\dots,x^n)=(r,x^1,\dots,x^n)$, and write \[g = g_{\alpha\beta}dx^\alpha dx^\beta = dr^2 + g_{ij}dx^idx^j = dr^2 + f(r)^2 \tilde g_{ij}dx^idx^j,\] using the Einstein summation convention. We use Greek letters for indices which include $0$, that is indices which include $r$, and Latin letters for indices which do not. Then \[\D_\alpha g_{r\alpha} = 0,\qquad \D_rg_{jk} = 2f^{-1}f'g_{jk}, \qquad \D_ig_{jk} = f^2\D_i\tilde g_{jk}.\] We write $\Gamma$ for the Christoffel symbols of $g$, and $\widetilde\Gamma$ for those of $\tilde g$. These are given by \[{\Gamma^r}_{r\alpha} = {\Gamma^\alpha}_{rr} = 0, \qquad {\Gamma^r}_{jk} = -f^{-1}f'g_{jk}, \qquad {\Gamma^i}_{jr} = f^{-1}f' \delta^i_j, \qquad {\Gamma^i}_{jk} = {\widetilde \Gamma^i}_{jk}.\] Let $R$ be the Riemann curvature tensor of $g$: \[{R_{\alpha\beta\gamma}}^\delta = \D_\alpha {\Gamma^\delta}_{\beta\gamma} + {\Gamma^\eps}_{\beta\gamma}{\Gamma^\delta}_{\alpha\eps} - \D_\beta {\Gamma^\delta}_{\alpha\gamma} - {\Gamma^\eps}_{\alpha\gamma}{\Gamma^\delta}_{\beta\eps}.\] Now if $P \subset T_pX$ is spanned by a pair of orthogonal unit vectors $V^\alpha\D_\alpha$ and $W^\alpha\D_\alpha$, then $K(P) = R_{\alpha\beta\gamma\delta}V^\alpha W^\beta W^\gamma V^\delta$, and similarly for $\widetilde R$ and $\widetilde K$. Then \[{R_{ijk}}^\ell = {\widetilde{R}_{ijk}}^{\phantom{ijk}\ell} + {\Gamma^r}_{jk}{\Gamma^\ell}_{ir} - {\Gamma^r}_{ik}{\Gamma^\ell}_{jr} = {\widetilde R_{ijk}}^{\phantom{ijk}\ell} + (f^{-1})^2 (f')^2(-\delta^\ell_i g_{jk} + \delta^\ell_j g_{ik}),\] \[{R_{rjk}}^r = \D_r{\Gamma^r}_{jk} - {\Gamma^m}_{rk}{\Gamma^r}_{jm} = - (f^{-1}f'g_{jk})' + (f^{-1}f')^2g_{jk} = -f^{-1}f''g_{jk}.\] If $\D_r \in P$ we take $V = \D_r$ and $W=W^j\D_j$ any unit vector in $T_pX$ orthogonal to $V$. Then \[K(P) = R_{rjkr}W^jW^k = -f^{-1}f''g_{jk}W^jW^k = -f^{-1}f''.\] Meanwhile if $\D_r \perp P$ we may write $V=V^j\D_j$ and $W=W^j\D_j$. Then \begin{align*}K(P) &= \left(f^2\tilde R_{ijk\ell}+ (f^{-1})^2 (f')^2(-g_{\ell i} g_{jk} + g_{\ell j} g_{ik})\right)V^iW^jW^kV^\ell. \intertext{using the fact that $fV$ and $fW$ are orthogonal unit vectors for $\tilde g$, we see that} K(P) &= f^{-2}\tilde K(P) - (f^{-1})^2 (f')^2.\end{align*}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \vspace{0.4em} Quantum computing \cite{Preskill2018, vijay2, google2018} has been considered attractive of late for implementing Machine Learning (ML) algorithms for several applications related to Artificial Intelligence \cite{lecun2015,biamonte2017_review, woss2018, havlivcek2019}. Variational algorithms form a special class of such Quantum Machine Learning (QML) algorithms \cite{havlivcek2019, schuld2019, benedetti2019, CQC2019, adhikary2020, salinas2020, mari2019}. Noisy Intermediate-Scale Quantum (NISQ) devices are considered suitable candidates for implementing such algorithms \cite{Preskill2018, ibmq, Grollier_QNeuro2020, havlivcek2019}. \vspace{0.4em} However, there are various challenges associated with the implementation of most variational QML algorithms on NISQ hardware: \begin{enumerate} \item Due to physical constraints, NISQ hardware mostly allows either single-qubit or two-qubit gates \cite{google2018, ibmq}. But most variational QML algorithms need multi-qubit gates in the variational/ parametrized quantum circuits that they use \cite{havlivcek2019, benedetti2019}. So any gate-operation involving more than two qubits needs to be decomposed into a series of single-qubit and two-qubit gates. Such design increases the gate-count as well as the circuit-depth while implementing such QML algorithms in hardware. An increase in the gate-count leads to an increased accumulation of noise in the form of gate-errors \cite{NASAreview}. A larger circuit-depth implies a longer execution time and, hence, higher decoherence \cite{ibmq}. The gate-count and the circuit-depth further increase due to the constrained architecture of the NISQ devices. Limited connectivity between physical qubits implies that not all two-qubit gates are implementable directly. To realise such ``forbidden'' gates, one needs to remap logical qubits to physical qubits through SWAP insertion \cite{saber2019,codar2020,swap2020,algo_opt_2019}. \item The existing QML algorithms mostly follow the amplitude encoding scheme or the qubit encoding scheme to encode the features of each input-sample (in the ML data-sets) as qubits for the variational/ parametrized quantum circuit \cite{havlivcek2019,schuld2019,Grollier_QNeuro2020,Schwab_QubitEncoding}. As a consequence of these schemes, the number of qubits in the quantum circuit depends on the dimension of the input-samples in a given data-set; this dimension is typically high \cite{havlivcek2019,schuld2019,Grollier_QNeuro2020,Schwab_QubitEncoding}. This makes the physical implementation of such QML algorithms difficult since numerous qubits are needed \cite{NASAreview}. \end{enumerate} \begin{figure*}[!t] \includegraphics[width=0.9\textwidth]{Figure1_LR.jpeg} \caption{The structure of the dressed quantum network we design here is shown in this schematic. The classical encoding layer converts the input $(x_{1},x_{2},...x_d)$ to {($\Tilde{x}_1$,$\Tilde{x}_2$,...$\Tilde{x}_N$)}, where $d$ is the dimension of the input-sample and $N$ is the number of possible output classes/ labels of the input-sample. Parameters of the classical encoding layer ($w_i^j$; $i = 1, 2, \cdots, d$ ; $j = 1, 2, \cdots, N$) are updated after every epoch during the training of the dressed quantum network, following our proposed algorithm. In the variational quantum circuit, the superscripts in the labels of the gates indicate the qubit on which it operates. $Z^j = e^{i \sigma_3 \Tilde{x}_j}$, $U^j = (e^{i \sigma_3 \alpha_1^j} e^{i \sigma_2 \alpha_2^j} e^{i \sigma_3 \alpha_3^j})$. $U^j$ corresponds to the SU(2) operations on the $j$-th qubit. $\alpha_1^j$,$\alpha_2^j$, and $\alpha_3^j$ are the SU(2) parameters/ rotation parameters for the $j$-th qubit. For every qubit from $j$=1 to $j=N$, the SU(2) parameters are also updated after every epoch during the training of the dressed quantum network. Projective measurement is carried out on each qubit for the outcome $\sigma_3=+1$. The set of probabilities for all the qubits is given by ${\bf P}= (P^{1},P^{2},...P^{N})$.} \label{fig:scematic} \end{figure*} \vspace{0.4em} In this paper, we propose a variational QML algorithm using a dressed quantum network \cite{adhikary2020, salinas2020, mari2019} (Fig. ~\ref{fig:scematic}). We use an aggressive encoding scheme in our network, which we call ``super compressed encoding.'' We use this scheme instead of the amplitude encoding scheme or the qubit encoding scheme mentioned earlier \cite{havlivcek2019,schuld2019,Grollier_QNeuro2020,Schwab_QubitEncoding}. Following our scheme, the classical encoding layer in our dressed network scales down the input-dimensions drastically. Independent of the number of features/ dimensions that each input-sample in an ML data-set has, the input-sample is encoded into a one-dimensional scalar for every possible output-class (see the details in Section II). The parameters used for this encoding are adjustable (Fig. ~\ref{fig:scematic}). \vspace{0.4em} For every output-class, corresponding to that one-dimensional scalar, a qubit-state is prepared. Several parametrized (these parameters are also adjustable) single-qubit gates are applied on that qubit in the variational quantum circuit (Fig. ~\ref{fig:scematic}). Then a measurement is performed to generate the loss based on the output-class that the input sample belongs to; we follow a supervised learning scheme here \cite{lecun2015}. Our algorithm is a classical-quantum hybrid variational algorithm \cite{mari2019,havlivcek2019,adhikary2020}. So all the adjustable parameters in our network are updated classically and iteratively over several epochs such that the loss decreases after every epoch (until the loss doesn't decrease any further/ we achieve convergence). Thus the network gets trained (find the details in Section II and in the block titled `Algorithm'). \vspace{0.4em} As a result of using this ``super compressed encoding'' scheme, our proposed algorithm addresses the issues related to the existing variational QML algorithms mentioned above. The number of qubits does not depend on the input-dimensions. It only depends on the number of output-classes, which is typically much lower than the number of input dimensions. Hence, we need fewer qubits for our implementation (find a quantitative comparison in Section V). Also, since we eliminate the need for multi-qubit gates, our algorithm is much more robust against noise \cite{ibmq}. \vspace{0.4em} In Section III of the paper, we implement our proposed algorithm on three separate platforms: a classical computer where the steps in our algorithm are carried out through Python-based programming, a quantum computation software called Qiskit, and real NISQ hardware (IBM-Q). On each of these platforms, we show that our classifier, despite using our aggressive encoding scheme (`super compressed encoding'), can indeed classify samples from popular ML data-sets such as Fisher's Iris, Wisconsin Breast Cancer (WBC) (diagnosis), and Abalone with high accuracy (Fig. ~\ref{fig:loss_min}, Table ~\ref{tab:accuracy}). From our IBM-Q-based implementation, we also show that our algorithm is robust against noise (Fig. ~\ref{ibmq_graph}, Table ~\ref{tab:ct}). \vspace{0.4em} In Section IV, we intuitively explain how the classification occurs in our network by representing the qubit-state, which we use, on the Bloch sphere as it evolves during the implementation of our proposed algorithm. Following this method, we show the clustering of qubit-states, which correspond to the input-samples belonging to different output-classes, on the Bloch sphere. This clustering happens as a result of the training process followed in our algorithm. Using the Bloch-sphere-based representation for binary classification (2 output-classes) on the WBC data-set and the MNIST data-set of handwritten digits (Table ~\ref{table_mnist}), we explain the training process intuitively. We also show the distinct roles played, during the training process, by the adjustable parameters in the classical encoding layer and that in the quantum circuit of our designed dressed quantum network (Fig. ~\ref{fig:visual}, Fig. ~\ref{fig:visual_MNIST}). \vspace{0.4em} To the best of our knowledge, such a Bloch-sphere-based approach has not been used before to show the evolution of quantum states (corresponding to the input-samples) as they are acted upon by the quantum gates and thus explain the working of other existing QML algorithms. But extensive research has been carried out recently to explain the internal mechanism behind classical ML algorithms' working \cite{ExplainableAI1,ExplainableAI2}. So, in that context, our Bloch-sphere-based explanation of the working of our QML algorithm may be considered very relevant for research on QML algorithms in general. \vspace{0.4em} In Section V, we compare the data-sets used by us with that used by other existing QML algorithms (Table ~\ref{table_datasetcomp}). We show that our algorithm can handle ML data-sets as complex or more complex than those handled so far by the different existing QML algorithms. Then we explore, in more detail, the advantages of our algorithm (robustness against noise, low number of qubits, etc., as mentioned above) compared to other existing QML algorithms through quantitative estimates. In Section VI, we conclude the paper. \vspace{0.4em} Though the same encoding scheme as the one used here has been used in \cite{adhikary2020}, the QML algorithm there uses a multi-level quantum system (qu-N-it) for multi-class classification. But for implementation on a practical NISQ hardware like IBM-Q, only a 2-level quantum system or qubit can be used. Hence, the algorithm in \cite{adhikary2020} can only be used for binary classification. But in this paper, we extend such qubit-based algorithm to the case of an arbitrary number of possible output-classes (Fig. ~\ref{fig:loss_min}, Table ~\ref{tab:accuracy}). Also, here we show the implementation of our algorithm on Qiskit and IBM-Q, unlike in \cite{adhikary2020}. Moreover, unlike in \cite{adhikary2020}, here we explain the working of our algorithm through Bloch-sphere-based representation. \section{The Proposed Algorithm} \subsection{Classical Encoding Layer} \vspace{0.4em} The dressed quantum network that we design corresponding to our proposed algorithm is shown in Fig. ~\ref{fig:scematic}. In our network, much like in \cite{mari2019}, the classical encoding layer, acting on the input, is used to prepare the qubits for the variational quantum circuit. \vspace{0.4em} Let us consider a data-set $\mathcal{S} = \{ ({\bf x}, f({\bf x}))\}$. Each entry in $\mathcal{S}$ is an ordered pair. It consists of a sample, represented as a vector ${\bf x}= (x_{1},x_{2},...x_d) \in \mathbb{R}^d$ and its associated label $f({\bf x})$. The label corresponds to the output-class that the sample belongs to. Thus, $f$ maps each sample to one of the $N$ labels in the set: $\mathcal{L} = \{ l_1, l_2, \cdots, l_N\}$; $f: {\bf x} \rightarrow \mathcal{L}$. The mappings that we consider here are many-to-one. \vspace{0.4em} In our ``super compressed encoding'' scheme, our classical encoding layer has the input layer with $d$ nodes connected with an output layer of $N$ nodes; there is no hidden layer (Fig.~\ref{fig:scematic}) \cite{lecun2015,adhikary2020} . $d$ is the dimension of each input-vector/ input-sample; $N$ represents the total number of possible output-classes. We denote this classical transformation as ${\cal N}_{d \rightarrow N}$. It corresponds to a simple Vector Matrix Multiplication (VMM) operation ${\cal N}: {\bf x} \rightarrow {\bf \Tilde{x}}$; $\Tilde{x}_j = \sum_i x_i w_i^j$; $i = 1, 2, \cdots, d$ ; $j = 1, 2, \cdots, N$. Thus, $w_i^j$ represents an element of a $N \times d$ dimensional weight matrix $W$. The transformed vector ${\bf \Tilde{x}}$ is $N$-dimensional. $N$ is typically much smaller than the dimension of the original input vector ($N < < d$). \vspace{0.4em} Thus, we observe here that following our ``super compressed encoding'' scheme, each input-sample, independent of its original dimensions ($d$), is drastically reduced to a one-dimensional scalar for each of the $N$ output-classes. In Section I, we already highlight this point. \subsection{Variational Quantum Circuit} \vspace{0.4em} In the variational quantum circuit connected to the classical encoding layer (Fig. ~\ref{fig:scematic}), the compressed data $\Tilde{x}_j$ ($j = 1, 2, \cdots, N$) is encoded into a $N$-qubit quantum state $\vert \psi({\bf x})\big>$ through the following gate operations: \begin{equation} \label{eq:state_prep} \ket{ \psi(\bf x)} = \otimes_{j = 1}^{N} e^{i \sigma_3 \Tilde{x}_j} H^j \ket{0}. \end{equation} Here, $H$ and $\sigma_3$ are the Hadamard gate and the third Pauli matrix, respectively, defined in a two-dimensional Hilbert space. The index $j$ labels individual qubits. \vspace{0.4em} Following the state-preparation step, we add a layer of parametrized SU(2) operations on each qubit \cite{adhikary2020,Debanjan2009}. In \cite{adhikary2020}, if a 2-level quantum system is used, a similar SU(2) operation is applied. But then, the overall network in \cite{adhikary2020} has only one qubit. In that case, the algorithm in \cite{adhikary2020} can only be used only for the particular case of binary classification. But in this paper, the number of output-classes can be greater than 2 ($N \geq 2$), as shown in Section III. \vspace{0.4em} Here, in this paper, the SU(2) operations (Fig. ~\ref{fig:scematic}) lead to the transformation: \begin{equation} \label{eq:su2trans} \ket{\bar \psi(\bf x)} = (\otimes_{j = 1}^{N} e^{i \sigma_3 \alpha_1^{j}} e^{i \sigma_2 \alpha_2^{j}} e^{i \sigma_3 \alpha_3^{j}}) \ket{\psi(\bf x)} \end{equation} where $\alpha_1^{j}$, $\alpha_2^{j}$ and $\alpha_3^{j}$ are the rotation parameters of the SU(2) operation on the $j$-th qubit. It is to be noted that all the gates used in equation ~\ref{eq:state_prep} and equation ~\ref{eq:su2trans} are single-qubit gates, as mentioned in Section I (Fig.~\ref{fig:scematic}). \vspace{0.4em} The final layer added to the circuit performs a projective measurement (Fig. ~\ref{fig:scematic}). This step is the read-out step; we use the information obtained from this step directly to classify the data. The measurement of our choice is the projection operator for the outcome: $\sigma_3 = +1$. We record the probability for the outcome $\sigma_3 = +1$ for each qubit and store them as a $N$-dimensional probability vector ${\bf P}= (P^{1},P^{2},...P^{N})$. It is to be noted that we do not have a classical layer (hence no adjustable parameters) on the network's output-side, unlike the dressed quantum network in \cite{mari2019}. \vspace{0.4em} We subsequently use the probability vector ${\bf P}$ to compute the loss function, as we discuss next \cite{bishop1995}. \subsection{The Loss-Computation and the Training Process} \vspace{0.4em} Since we follow the supervised learning scheme \cite{lecun2015}, we already know the class that each sample (\textbf{x}) belongs to ($f: {\bf x} \rightarrow \mathcal{L}$) and hence the target probability vector for that sample. We set the target probability vector for any sample of the $k$-th class (${\bf P}_{target}^{(k)}$) as: \begin{equation} \label{target} P_{target}^{(k) s} = \begin{cases} 1 & \text{if $s = k$}\\ 0 & \text{otherwise} \end{cases} \end{equation} \vspace{0.4em} Here, $ P_{target}^{(k) s}$ is the $s$-th element of the vector ${\bf P}_{target}^{(k)}$. To be classified correctly, the probability vector ${\bf P}$ (consequence of the measurement as described above) for every input belonging to the $k$-th class must evolve to ${\bf P}_{target}^{(k)}$, given by equation ~\ref{target}. Minimization of a properly constructed loss function leads to that. \vspace{0.4em} Comparing the two probability vectors above, the cross-entropy loss for a training sample belonging to the $k$-th class follows straightaway \cite{bishop1995}: \begin{equation} \label{crossentropy} {\cal E}_{CrossEntropy} = - \sum_{s = 1}^{N} P_{target}^{(k)s} \log_{e} \Sigma^s({\bf P}). \end{equation} $\Sigma(\cdot)$ in equation~\ref{crossentropy} is the SoftMax function; $\Sigma^s({\bf P}) = e^{P^s} / (\sum_{s'} e^{P^{s'}}) $. \vspace{0.4em} In every epoch, the loss for each training sample is calculated once. The total loss for an epoch is the sum of the individual losses for each training sample calculated in that epoch. \vspace{0.4em} Our dressed quantum network is trained for data-classification by minimizing the loss function, after every epoch, with respect to the weights of the classical encoding layer ($w_i^j$; $i = 1, 2, \cdots, d$ ; $j = 1, 2, \cdots, N$) and the parameters in the SU(2) operation ($\alpha_1^{j}$, $\alpha_2^{j}$ and $\alpha_3^{j}$; $j = 1, 2, \cdots, N$). These adjustable parameters are initialized to random values at the beginning of the training. Then during the training process, they are adjusted once every epoch (as mentioned in Section I) such that the loss decreases after every epoch (until the loss doesn't decrease any further/ we achieve convergence) and the probability vector for a sample belonging to the $k$-th class evolves, over many epochs, into the target probability vector for that class. Thus all these parameters are adjusted iteratively to train our dressed network. \vspace{0.4em} It is to be noted here that our algorithm is also different from \cite{mari2019} in the following way. In \cite{mari2019}, most parameters of the classical network, connected to the input, are pre-tuned and fixed. They do not change when the parameters of the quantum circuit are updated. But in our algorithm, we iteratively adjust all the parameters of our classical encoding layer during the training process along with adjusting the SU(2) parameters in our variational quantum circuit. Also, the number of parameters used in the classical network of \cite{mari2019} is much higher than the number of parameters we use in our classical encoding layer, here. \vspace{0.4em} \textit{We summarize our proposed algorithm for training our dressed quantum network in the block titled ``Algorithm'' for easy reference.} \vspace{0.4em} To determine the classification accuracy, we use a simple classification metric. A sample ${\bf x}$ belonging to the $k$-th class is said to be correctly classified if the $k$-th element of the probability vector ${\bf P}$ is greater than all other elements in that vector. To sharpen the criterion, we further impose the condition that the value of the $k$-th element in ${\bf P}$ must exceed a certain threshold $c_t$. Greater the value of $c_t$, more stringent is the classification criterion. Thus, $c_t$ is the classification-metric in our algorithm. \vspace{0.4em} In the next section (Section III), we implement our proposed algorithm on different platforms and show our classification results, using this algorithm, on different data-sets. The distinct roles played, during the training process, by the adjustable weights of the classical encoding layer ($w_i^j$; $i = 1, 2, \cdots, d$ ; $j = 1, 2, \cdots, N$) and the adjustable SU(2) parameters ($\alpha_1^{j}$, $\alpha_2^{j}$ and $\alpha_3^{j}$; $j = 1, 2, \cdots, N$) in the variational quantum circuit are explained in Section IV. \begin{algorithm}[H] \caption{Our Proposed QML Algorithm} \label{Algorithm} \begin{algorithmic}[1] \State \textbf{Input encoding (data compression):} For each input vector/ sample ${\bf x}= (x_{1},x_{2},...x_d)$, use the classical encoding layer (${\cal N}: {\bf x} \rightarrow {\bf \Tilde{x}}$), connected to the input, to generate $\Tilde{x}_j = \sum_i x_i w_i^j$; $i = 1, 2, \cdots, d$ ; $j = 1, 2, \cdots, N$. $N$ is the number of possible output classes/ labels of the input (Fig. ~\ref{fig:scematic}). \State \textbf{Input encoding (qubit preparation):} $\Tilde{x}_j$ ($j = 1,2, \cdots, N$) is encoded in $N$-qubit state: $|\psi(\mathbf{x})\rangle=$ $\otimes_{j=1}^{N} e^{i \sigma_{3} \tilde{x}_{j}} H^{j}|0\rangle^{j}$( equation ~\ref{eq:state_prep}, Fig. ~\ref{fig:scematic}). \State \textbf{SU(2) operation:} $ \ket{\bar \psi (\bf x)} = (\otimes_{j = 1}^{N} e^{i \sigma_3 \alpha_1^{j}} e^{i \sigma_2 \alpha_2^{j}} e^{i \sigma_3 \alpha_3^{j}}) \ket{ \psi (\bf x)}$ (equation ~\ref{eq:su2trans}, Fig. ~\ref{fig:scematic}). 1) \State \textbf{Measurement:} Projective measurement is carried out on each qubit for the outcome: $\sigma_3 = +1$. The set of probabilities for all the qubits in \(\ket{\bar \psi(\bf x)}\) is given by ${\bf P}= (P^{1},P^{2},...P^{N})$ (Fig. ~\ref{fig:scematic}). \State \textbf{Computation of loss:} Loss is calculated for each sample after comparing the probability vector ${\bf P}= (P^{1},P^{2},...P^{N})$, obtained from the measurement, with the target probability vector for the class that the sample belongs to (equation ~\ref{target}, ~\ref{crossentropy}). \State \textbf{Optimization of Loss} Steps 1–5 are repeated for all input-samples in the training data-set. The loss for all the samples is added to generate the total loss for that epoch. Then the loss is minimized classically by updating the adjustable parameters: the weight parameters of the classical encoding layer ($w_i^j$; $i = 1, 2, \cdots, d$ ; $j = 1, 2, \cdots, N$) and the the SU(2) parameters ($\alpha_1^{j}$, $\alpha_2^{j}$ and $\alpha_3^{j}$; $j = 1, 2, \cdots, N$). \State The above process is repeated over several epochs (loss for each sample in the training data-set is calculated once per epoch) until we converge to the minimum loss for all the samples in the training data-set. \end{algorithmic} \end{algorithm} \section{Implementation of the Algorithm and Data Classification Using It} \subsection{Implementation on Three Separate Platforms and Classification-Results} \begin{figure*}[!t] \centering \includegraphics[width= 0.9 \textwidth]{Figure2_LR.jpeg} \caption{\label{fig:loss_min} Plots of the loss (Eq.~\ref{crossentropy}) as a function of epochs during training for (a) the Fisher's IRIS data-set, (c) the WBC data-set, and (e) the Abalone data-set. Accuracy on the training data-set (training accuracy) has been plotted as a function of epochs in (b) for the Fisher's Iris data-set, (d) for the WBC data-set, and (f) for the Abalone data-set. For this purpose, the algorithm has been implemented on three different platforms: 1. programming on a classical computer (Python code) 2. Qiskit \cite{Qiskit} 3. quantum hardware (IBM-Q) \cite{ibmq}. The value of the classification-metric $c_t$ ($c_t$ is defined in Section II) is chosen to be 0.5, for the accuracy-calculation.} \end{figure*} \begin{table*}[!t] \begin{center} \begin{tabular}{| l | l | l | l | l |} \hline {\bf Dataset} & {\bf Classification type} &{\bf Classical computer} & {\bf Qiskit} & {\bf IBM-Q} \\ {} & {} & {\bf (Python code)} & {} & {} \\ \hline Fisher's Iris & 3 classes & 90$\%$ & 94$\%$ & 82$\%$ \\ \hline WBC & 2 classes &92.37$\%$ & 96.45$\%$ & 91.71$\%$ \\ \hline Abalone & 6 classes & 67.70$\%$ & 67.44$\%$ & 67.22$\%$ \\ \hline \end{tabular} \end{center} \caption{\label{tab:accuracy} Classification accuracy numbers on the test data-sets (test accuracy) as obtained by implementing our algorithm on three different platforms: 1. programming on a classical computer (Python code) 2. Qiskit \cite{Qiskit} 3. quantum hardware (IBM-Q) \cite{ibmq}. The value of the classification-metric $c_t$ ($c_t$ is defined in Section II) is chosen to be 0.5, for the accuracy-calculation.} \end{table*} \vspace{0.4em} To assess our classifier, we have executed our algorithm for multi-class ($N \geq 2$) classification, discussed above, for three benchmark data-sets: \begin{enumerate} \item Fisher's Iris data-set: 4-dimensional input ($d=4$), 3 output-classes/ labels ($N=3$) \cite{dua2019uci} \item Wisconsin Breast Cancer (WBC) data-set: 30-dimensional input ($d=30$), 2 output-classes/ labels ($N=2$) \cite{dua2019uci} \item Abalone data-set: 8-dimensional input ($d=8$), 6 output-classes/ labels ($N=6$) \cite{dua2019uci}. \end{enumerate} More details on these data-sets and how we have used them here can be found in Appendix A below. \vspace{0.4em} We have implemented our algorithm, using the above data-sets, on three separate platforms: \begin{enumerate} \item A classical computer, using Python programming language: The analytic expressions in equation ~\ref{eq:state_prep}–\ref{crossentropy} are explicitly used in our codes. \item Qiskit, a quantum computing software: We have performed a noiseless simulation of the variational quantum circuit in our algorithm on this platform (Fig. ~\ref{fig:scematic}) \cite{Qiskit}. The classical encoding layer, the calculation of the loss, and the loss minimization are all implemented on a classical computer since our algorithm is a classical-quantum hybrid variational algorithm \cite{mari2019,havlivcek2019,adhikary2020}. However, for the classical optimization, a package provided by the Qiskit (Penny Lane) is used in this case \cite{pennylane2018}, unlike in platform 1, where we wrote our own Python code for the optimization. \item Quantum hardware (IBM-Q) \cite{ibmq}: We have used three separate systems to implement the variational quantum circuit in our algorithm: IBM-Q Rome, IBM-Q Armonk, and IBM-Q Melbourne. A different system is used for a different data-set (see Appendix B below for our reason behind such choices). For this kind of implementation, first, noisy simulation is performed on Qiskit. Noise-models, typical to IBM-Q Rome, IBM-Q Armonk, and IBM-Q Melbourne, are used. After every training epoch, the model-parameters are recorded. Then, these parameter-values are fixed in the variational quantum circuit, both in the noisy Qiskit simulator as well as in the real IMB-Q hardware. The outcomes of the model (probabilities) from both the platforms are compared, for randomly sampled training data. An excellent agreement between the two platforms' results is observed, thus establishing an equivalence between the two platforms. More details about the implementation can be found in Appendix B below. Here also, the classical encoding layer, the calculation of the loss, and the loss minimization are all implemented on a classical computer. \end{enumerate} \vspace{0.4em} The loss (as determined by equation ~\ref{crossentropy}) and the classification accuracy on the training data-set (training accuracy) are plotted as functions of epochs in Fig. \ref{fig:loss_min} for the different data-sets and the different platforms. The plots show convergence in training for all the cases. The value of the classification-metric $c_t$ ($c_t$ is defined in Section II) is chosen to be 0.5, for the accuracy-calculation. \vspace{0.4em} Our classification accuracy results on the test data-set (test accuracy) are summarized in Table \ref{tab:accuracy} for classification-metric $c_t = 0.5$ ($c_t$ is defined in Section II). More information on how we split each data-set into a training data-set and a test data-set can be found in Appendix A below. The accuracy numbers obtained in Table \ref{tab:accuracy} are close to the accuracy numbers obtained from classical ML and deep learning algorithms for the same data-sets \cite{salama2012experimental,pinto2018iris,sahin2018abalone,khalifa2019single}. This shows that our ``super compressed encoding'' is effective despite aggressively reducing the input-dimensions. \vspace{0.4em} The classification accuracy (test) for the same data-set varies across the different platforms in Table \ref{tab:accuracy}. This can be attributed to the fact that the probability vector ${\bf P}$, based on which the classification occurs, is not identical when evaluated on different platforms. Two key factors are responsible for this behavior: \begin{enumerate} \item The analytic expression of ${\bf P}$, which is explicitly used in platform 1, assumes that we have access to an infinitely large number of identically prepared states $\ket{\bar \psi (\bf x)}$. A projective measurement is performed on each of these states, and ${\bf P}$ is calculated from the resultant statistics. In both Qiskit and IBM-Q, the projective measurement is performed only for a finite number of times. This leads to a mismatch between the probability values obtained from platform 1 and those that are obtained from the other two platforms. This affects the loss function and hence the training. So it also affects the overall accuracy calculation. Interestingly, Qiskit (platform 2) offers the highest accuracy among the three platforms for most data-sets. This may be due to the high efficiency of the inbuilt optimizers (for the loss function) that the Penny Lane package (of Qiskit) uses. The Python code that we have written ourselves for optimization on platform 1 may have lower efficiency than that. Noise from actual quantum hardware affects the accuracy number for platform 3 adversely, as we discuss in the next point. \item The IBM-Q devices, used in our implementation (platform 3), suffer from noise in forms of gate-error, decoherence noise, etc. This results in imperfect preparation of $\ket{\bar \psi (\bf x)}$ and imperfect projective measurement. The effect of noise is not accounted for in the other two platforms; it leads to different ${\bf P}$ values for platform 1 and 2 compared to platform 3. The effect of noise, inherent in the real quantum hardware, can be seen in Table \ref{tab:accuracy}. The noise inevitably reduces the classification accuracy for the real quantum hardware compared to the other two platforms for a given data-set in most cases. Nonetheless, our accuracy numbers are still high for the real quantum hardware. Thus, we show that our algorithm is quite robust against noise. \end{enumerate} \subsection{Robustness against noise} \begin{figure} \centering \includegraphics[width = 0.5 \textwidth]{Figure3.jpeg} \caption{\label{ibmq_graph} The probabilities of the outcome of a projective measurement on a qubit being $\sigma_3 = +1$ ($P_+$) and the outcome being $\sigma_3 = -1$ ($1-P_+$) are obtained from a trained classifier (modified algorithm) for a sample labelled as ``malignant'' (a,c) and a sample labelled as ``benign'' (b,d) in the WBC dataset. The results in (a,b) are for the Qiskit-based implementation (platform 2) while those in (c,d) are for the IBM-Q-based implementation (platform 3). } \end{figure} \vspace{0.4em} To further elaborate on the robustness of our implementation to noise, we show the results for the implementation of our algorithm on Qiskit (platform 2) and IBM-Q hardware (platform 3) for the WBC data-set in more detail (Fig.~\ref{ibmq_graph}). Since it is a 2-class-dataset, two qubits need to be used following our algorithm. But the information from one qubit is enough for classification if the network is properly trained. This is because after training, when the sample belongs to the ``malignant'' category, the first qubit is expected to evolve to $\ket{0}$, and the second qubit is expected to evolve to $\ket{1}$ (equation ~\ref{target}). Similarly, when the sample belongs to the ``benign'' category, the first qubit is expected to evolve to $\ket{1}$, and the second qubit is expected to evolve to $\ket{0}$. Thus after correct training, as we can see from here, the second qubit offers redundant information. \vspace{0.4em} So the loss function in equation ~\ref{crossentropy} can be reformulated in this specific case of binary classification to account for projective measurement on only one of the two qubits. Let the probability that the outcome of a projective measurement ($\sigma_3 = +1$) on the first qubit be denoted by $P_+$; probability of the outcome being ($\sigma_3 = -1$) on the same qubit $= 1-P_+$. In that case, loss function turns out to be: \begin{equation} \label{error_lin_bin} {\cal E} = \sum_{p = 1}^{m_1} (1 - P_+^p) + \sum_{q = 1}^{m_2} (1 - (1 - P_+^q)) \end{equation} where there are $m_1$ training samples labelled as ``malignant'' and $m_2$ samples labelled as ``benign''. See Appendix C for more details about obtaining the loss function in equation~\ref{error_lin_bin} from the cross-entropy loss function in equation~\ref{crossentropy} above. \vspace{0.4em} Fig.~\ref{ibmq_graph} (a) and (b) show the values of $P_+$ and $(1-P_+)$, which are generated by a trained variational quantum circuit for a particular ``malignant'' sample and a particular ``benign'' sample respectively, in the case of noiseless simulation on Qiskit (platform 2 above). Fig.~\ref{ibmq_graph} (c) and (d) show the values of $P_+$ and $(1-P_+)$ for the same ``malignant'' sample and the same ``benign'' sample respectively in the case of implementation on IBM-Q (platform 3 above). The contribution of noise in IBM-Q can be understood by comparing the plots for IBM-Q-based implementation with that for noiseless Qiskit simulation. As expected, in the case of the Qiskit simulation, we note that the probability values $P_+$ and $(1-P_+)$ are closer to the ideal values ($P_+ \sim 1$ for ``malignant'' and $(1-P_+) \sim 1$ for ``benign'') compared to the IBM-Q-based implementation. \vspace{0.4em} For both Qiskit and IBM-Q implementation, the sample is classified as ``malignant'' if $P_+ > c_t$ and ``benign'' if $1-P_+ > c_t$. Noise in the IBM-Q system does not lead to wrong classification of data if $c_t$ has a reasonably low value. Fig.~\ref{ibmq_graph} (c) and (d) (results on one particular sample of each type) indicate that choosing $c_t$ within the range 0.5—0.6 may lead to successful classification of many such samples if similar probability-numbers are obtained for those other samples as in Fig.~\ref{ibmq_graph} (c) and (d). \begin{table}[!t] \begin{center} \begin{tabular}{| l | l | l |} \hline {$\mathbf{c_t}$} & {\bf Qiskit} & {\bf IBM-Q} \\ \hline 0.5 & 96.45$\%$ & 91.71$\%$ \\ \hline 0.6 & 94.08$\%$ & 87.57$\%$ \\ \hline 0.7 & 89.34$\%$ & 76.33$\%$ \\ \hline 0.8 & 80.47$\%$ & 62.72$\%$ \\ \hline 0.9 & 62.72$\%$ & 31.95$\%$ \\ \hline \end{tabular} \end{center} \caption{\label{tab:ct} Classification accuracy numbers on the test data-set for WBC (test accuracy) for different values of classification-metric $c_t$ (defined in Section II) are tabulated here. We obtain the numbers after implementing the algorithm on Qiskit (platform 2) and IBM-Q (platform 3).} \end{table} \vspace{0.4em} Table ~\ref{tab:ct} agrees with that observation since it shows reasonably high classification accuracy on the entire test data-set for the IBM-Q platform for the range of $c_t$ mentioned above. Again, this shows that our model is robust against noise within a reasonable range of values for the classification-metric ($c_t$). When the value of $c_t$ is very high, $P_+$ and $(1-P_+)$ deviate from their ideal values to a large extent for the IBM-Q platform (since it has noise). So the classification accuracy drops drastically in Table ~\ref{tab:ct} for the IBM-Q-based implementation compared to the noiseless Qiskit-based implementation. \vspace{0.4em} Even for the noiseless Qiskit-based implementation, the classification accuracy drops to a degree with increase in $c_t$ (Table ~\ref{tab:ct}). This is because the loss in equation \ref{error_lin_bin} cannot be minimized to the lowest possible value for every sample in the data-set. This leads to some input-samples being classified wrongly when the classification criterion is stringent (high value of $c_t$), just like in any classical ML algorithm. We identify these input samples on the Bloch sphere, corresponding to our trained network, in Section IV next. \section{Explanation of the Working of the Algorithm Using a Bloch Sphere} \begin{figure*}[!t] \centering \includegraphics[width=0.8 \textwidth]{Figure4_LR.jpeg} \caption{Training using the WBC data-set: (a), (d), and (g) show the output of the classical encoding layer (${\cal N}_{30 \rightarrow 1}$): $\Tilde{x}_j = \sum_i x_i w_i^j$. (b), (e), and (h) show the states $\psi({\bf x})$ (equation ~\ref{eq:state_prep}) on the Bloch sphere, for all training samples ${\bf x}$. (c), (f), and (i) show the states $\vert \bar{\psi}({\bf x})$ \big> (equation ~\ref{eq:su2trans}) on the Bloch sphere, for all training samples. $Q_1$, $Q_2$, and $Q_3$ are the three components of the polarization vector, corresponding to the Bloch sphere. (a), (b), and (c) correspond to the end of the 1st epoch; (d), (e), and (f) correspond to the end of the 20th epoch; (g), (h), and (i) correspond to the end of the 200th epoch. A blue dot corresponds to a sample belonging to the class ``malignant''. A red dot corresponds to a sample that belongs to the class ``benign''. } \label{fig:visual} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=0.8 \textwidth]{Figure5_LR.jpeg} \caption{Training using the MNIST data-set (for binary classification: digit ``0'' vs. all other digits): (a), (d), and (g) show the output of the classical encoding layer (${\cal N}_{784 \rightarrow 1}$): $\Tilde{x}_j = \sum_i x_i w_i^j$. (b), (e), and (h) show the states $\psi({\bf x})$ (equation ~\ref{eq:state_prep}) on the Bloch sphere, for all training samples ${\bf x}$. (c), (f), and (i) show the states $\vert \bar{\psi}({\bf x})$ \big> (equation ~\ref{eq:su2trans}) on the Bloch sphere, for all training samples. $Q_1$, $Q_2$, and $Q_3$ are the three components of the polarization vector, corresponding to the Bloch sphere. (a), (b), and (c) correspond to the end of the 1st epoch. (d), (e), and (f) correspond to the end of the 20th epoch. (g), (h), and (i) correspond to the end of the 200th epoch. A blue dot corresponds to a sample belonging to the class: digit ``0''. A red dot corresponds to a sample that belongs to the class: all other digits.} \label{fig:visual_MNIST} \end{figure*} \subsection{Bloch-Sphere-Based Representation for the Binary Classification Problem} \vspace{0.4em} To gain better insight into the working of our training algorithm and understand the precise role of the different parameters that we adjust iteratively during the training process, we represent the states in equation ~\ref{eq:state_prep} and equation \ref{eq:su2trans} on a Bloch sphere \cite{nielsen2002, blum2012}. We visualize their evolution during the training process over multiple epochs. Here, we have chosen the case of binary classification for the sake of simplicity. A binary classification problem requires only one qubit, as explained in Section III; this makes the Bloch-sphere-based visualization easy. \vspace{0.4em} Fig.~\ref{fig:visual} and Fig.~\ref{fig:visual_MNIST} show how our proposed dressed quantum network gets trained for the binary classification problem with the WBC dataset (2-class-data-set) and MNIST data-set of handwritten digits \cite{lecun-mnisthandwrittendigit-2010}. The MNIST data-set originally contains data for ten classes, corresponding to the digits: 0–9. But we have reformulated the ten-class-classification problem to ten separate binary-classification problems of the type: digit ``$i$'' vs. all other digits ($i \in \{0, \cdots, 9\}$). Fig.~\ref{fig:visual_MNIST} shows the Bloch-sphere-based representation of the qubit during the training process for the problem: digit ``0'' vs. all other digits (in the MNIST data-set) \cite{lecun-mnisthandwrittendigit-2010}. We carefully choose the training data from the original MNIST data-set to avoid imbalance between training data corresponding to digit ``$i$'' and that corresponding to all other digits \cite{longadge2013class,abd2013review}. For more details about how we use the MNIST data-set in our paper, see Appendix A below. The results shown here have been obtained by executing the classifier-algorithm on a classical computer using a Python-code (Platform 1, as listed in Section III). \vspace{0.4em} The variational quantum circuit now contains a single qubit. The sequence of parametrized gates is still the same as before (Section II). Following our ``super compressed encoding'' scheme, the input vector is first transformed into a single number; ${\cal N}: {\bf x} \rightarrow \Tilde{x} = \sum_i x_i w_i$, $i = 1, 2, \cdots, d$. For the WBC data-set, $d=30$; for the MNIST data-set, $d=784$. The scalar $\Tilde{x}$ is then encoded into a single qubit state $\ket{ \psi (\bf x)} = e^{i \sigma_3 \Tilde{x}} H \ket{0}$. A parametrized SU(2) operation ($e^{i \sigma_3 \alpha_1} e^{i \sigma_2 \alpha_2} e^{i \sigma_3 \alpha_3}$) is subsequently applied on $\vert \psi({\bf x})\big>$ to produce $\ket{\bar \psi (\bf x)}$. A projective measurement is then performed on the state and the probability for the outcome $\sigma_3 = +1$ is recorded as $P_+$. \vspace{0.4em} The classification metric $c_t$ is chosen to be 0.5 here. So, for the WBC dataset, a sample is classified as ``malignant'' if $P_+ > 0.5$; it is classified as ``benign'' if $(1-P_+) > 0.5$. Similarly, for the ``0'' vs. all other digits problem (MNIST), a sample is classified as digit ``0'' if $P_+ > 0.5$; it is classified as ``other digit'' if $(1-P_+) > 0.5$. The model is trained such that $P_+ \rightarrow 1$ for the samples labelled as ``malignant''/ digit ``0'' and $(1-P_+) \rightarrow 1$ for the samples that are ``benign''/ ``other digit''. Intuitively, this means that, as a part of the training process, the weights in the classical encoding layer and the rotation angles in the SU(2) operation are adjusted to ensure that $\ket{\bar \psi (\bf x)}$ evolves to $\ket{0}$ if ${\bf x}$ belongs to the class ``malignant''/ digit ``0'' while $\ket{\bar \psi (\bf x)}$ evolves to $\ket{1}$ if ${\bf x}$ belongs to the class ``benign''/ ``other digit'' (as mentioned earlier in Section III). \vspace{0.4em} Fig.~\ref{fig:visual} and Fig.~\ref{fig:visual_MNIST} depict the evolution of the following quantities, over several epochs, during the training process: \begin{enumerate} \item The output of the classical encoding layer for all training samples: Fig.~\ref{fig:visual} (a), (d), and (g) show the quantity ${\Tilde{x}}$ at the end of the 1st, 20th, and the 200th epoch respectively, for the WBC data-set. Fig.~\ref{fig:visual_MNIST} (a), (d), and (g) show the quantity ${\Tilde{x}}$ at the end of the 1st, 20th, and the 200th epoch respectively, for the MNIST data-set (digit ``0'' vs. all other digits). \item The state $\ket{\psi (\bf x)}$ on the surface of a Bloch sphere for all training samples. Fig.~\ref{fig:visual} (b), (e), and (h) show the state $\ket{\psi (\bf x)}$ at the end of the 1st, the 20th, and the 200th epoch respectively, for the WBC data-set. Fig.~\ref{fig:visual_MNIST} (b), (e), and (h) show the state $\ket{\psi (\bf x)}$ at the end of the 1st, the 20th, and the 200th epoch respectively, for the MNIST data-set (digit ``0'' vs. all other digits). \item The state $\ket{\bar \psi (\bf x)}$ on the surface of a Bloch sphere for all training samples. Fig.~\ref{fig:visual} (c), (f), and (i) show the state $\ket{\bar \psi (\bf x)}$ at the end of the 1st, the 20th, and the 200th epoch respectively, for the WBC data-set. Fig.~\ref{fig:visual_MNIST} (c), (f), and (i) show the state $\ket{\bar \psi (\bf x)}$ at the end of the 1st, the 20th, and the 200th epoch respectively, for the MNIST data-set (digit ``0'' vs. all other digits). \end{enumerate} \vspace{0.4em} Any single-qubit state \(\ket{\psi}\) can be written as a density matrix of the form $\rho = \ket{\psi}\bra{\psi} = \frac{1}{2} ({\bf 1} + {\bf Q}\cdot {\bf \sigma})$; with ${\bf Q}$ being the polarization vector in three dimensions with $\vert \vert {\bf Q} \vert \vert_{L_{2}} \leq 1$ \cite{nielsen2002,blum2012}. This polarization vector defines a single qubit state uniquely. The Bloch sphere is a unit sphere in the three-dimensional space, defined by the components of the polarization vector \textbf{Q} which are: $Q_1$, $Q_2$, and $Q_3$. Each point on the surface of the Bloch sphere represents a unique ${\bf Q}$, which corresponds to a unique pure state \(\ket{\psi}\). The probability of the outcome $\sigma_3 = +1$, for the state $\rho$ is given by the overlap of the density matrix with the projection operator $\pi_+ = \frac{1}{2} ({\bf 1} + \sigma_3)$: \begin{equation} \label{Bloch1} P_+ = Tr(\rho \pi_+) = \frac{1}{2}(1 + Q_3) \end{equation} Any state $\ket{\bar\psi(\bf x)}=e^{i \sigma_3 \alpha_1} e^{i \sigma_2 \alpha_2} e^{i \sigma_3 \alpha_3} e^{i \sigma_3 \Tilde{x}} H(\ket{0})$ (where $\bf x$ = $\{x_1,x_2,x_3,...x_d\}$ and $\Tilde{x} = \sum_i x_i w_i$, $i = 1, 2, \cdots, d$) can be represented on the surface of the Bloch sphere as ${\bf Q}$= $\{Q_1, Q_2, Q_3\}$ through the following mapping: \begin{eqnarray} \label{Bloch2} &Q_1& = \cos^2\alpha_2 \cos 2 (\alpha_1+\alpha_3+\Tilde{x})-\sin^2 \alpha_2 \cos 2 (\alpha_1-\alpha_3-\Tilde{x}) \nonumber \\ &Q_2& = \sin^2\alpha_2 \sin 2 (\alpha_1-\alpha_3- \Tilde{x})-\cos^2\alpha_2 \sin 2 (\alpha_1+\alpha_3+ \Tilde{x}) \nonumber \\ &Q_3& = \sin 2 \alpha_2 \cos 2(\alpha_3 + \Tilde{x}) \end{eqnarray} \vspace{0.4em} The blue dots on the Bloch sphere in Fig.~\ref{fig:visual} correspond to the ``malignant'' samples while the red dots represent the samples belonging to the ``benign'' class. The blue dots in Fig.~\ref{fig:visual_MNIST} correspond to the digit ``0'' samples while the red dots represent the samples belonging to the class: all other digits/ ``other digits''. Indeed, just as we desired, Fig.~\ref{fig:visual} (c), (f) and (i) and Fig.~\ref{fig:visual_MNIST} (c), (f) and (i) show that after training the circuit over a suitable number of epochs, the network learns how to differentiate between the samples that are labelled differently. For ``malignant'' samples/ digit ``0'' samples (blue dots), the corresponding states \(\ket{\bar \psi(\bf x)}\) are adjusted such that \(\ket{\bar \psi(\bf x)}\) evolve towards $\ket{0}$. So the third component of the polarization vector (${Q}_3$) is positive. Thus, all such training samples lie on the upper hemisphere of the Bloch sphere thereby ensuring the condition $P_+ > 0.5$ (from equation ~\ref{Bloch1}) . Similarly, for the ``benign'' samples/ ``other digit'' samples, \(\ket{\bar \psi(\bf x)}\) is adjusted such that \(\ket{\bar \psi(\bf x)}\) evolves towards $\ket{1}$. These points lie on the lower hemisphere of the Bloch sphere (${Q}_3 < 0$, and hence from equation ~\ref{Bloch1}, $(1 - P_+) > 0.5$). \vspace{0.4em} It is to be noted that while we do try to train our dressed network such that ${Q}_3$ for all ``malignant''/ digit ``0'' samples attains the highest possible positive value ($\to 1$) and ${Q}_3$ for all ``benign''/ other digit samples attains the lowest possible negative value ($\to -1$), this is not possible for all the samples. This is because the loss-term in equation ~\ref{error_lin_bin} cannot be minimized to the lowest possible value for each and every sample. We observe this phenomenon in Fig.~\ref{fig:visual} (i) and Fig.~\ref{fig:visual_MNIST} (i). For both the clusters, corresponding to the two output-classes, there are many points on the Bloch sphere further away from the ideal points: ${Q}_3 = 1$ (for ``malignant''/ digit ``0'' samples) and ${Q}_3 = -1$ (for ``benign''/ other digit samples). These samples here are taken from the training data-set. When \(\ket{\bar \psi( \bf x)}\) corresponding to these test-samples is plotted on the Bloch sphere with parameters in the classical encoding layer and rotation parameters in SU(2) operations being the ones obtained after training, a similar trend will be observed. As the classification-metric $c_t$ (in Section II) assumes a higher value, the samples corresponding to these points tend to get wrongly classified, leading to a drop in the test accuracy (Table ~\ref{tab:ct}), as discussed before in Section III. \vspace{0.4em} Additionally, in Table ~\ref{table_mnist}, we report the test classification accuracy on the test data-set for this binary classification problem with MNIST (digit ``0'' vs. all other digits, digit ``1'' vs. other digits, etc.). We observe that in all these cases, the classification accuracy is fairly high. This shows that our ``super-compressed encoding'' scheme is still efficient even for high-dimensional input (784 for MNIST). \begin{table}[!t] \begin{center} \begin{tabular}{| l | l |} \hline {\bf Dataset} & {\bf Accuracy} \\ \hline MNIST (0 vs All) & 97.80 $\%$ \\ \hline MNIST (1 vs All) & 97.22 $\%$ \\ \hline MNIST (2 vs All) & 91.61 $\%$ \\ \hline MNIST (3 vs All) & 91.58 $\%$ \\ \hline MNIST (4 vs All) & 92.97 $\%$ \\ \hline MNIST (5 vs All) & 90.91 $\%$ \\ \hline MNIST (6 vs All) & 95.45 $\%$ \\ \hline MNIST (7 vs All) & 94.01 $\%$ \\ \hline MNIST (8 vs All) & 88.60 $\%$ \\ \hline MNIST (9 vs All) & 90.08 $\%$ \\ \hline \end{tabular} \end{center} \caption{\label{table_mnist} Classification accuracy numbers for the binary classification problem of MNIST as obtained by running our algorithm on a classical computer using Python code (platform 1). Value of the classification metric $c_t$ is chosen to be 0.5 here.} \end{table} \subsection{Distinct Roles Played in Training by the Classical Encoding Layer and the SU(2) Operations} \vspace{0.4em} Fig. ~\ref{fig:visual} and Fig. ~\ref{fig:visual_MNIST} reveal the distinct role, with respect to the binary classification above, that has been played by the two sets of iteratively adjustable parameters: the weights in the classical encoding layer ($w_i$; $i=1,2,3,...d$) and the rotation/ SU(2) parameters ($\alpha_1,\alpha_2,\alpha_3$) in the variational quantum circuit. The weights in the classical encoding layer are updated, after every epoch, during training such that after the training (say the 200th epoch), for input-samples ($\bf x$) belonging to ``malignant'' type/ digit ``0'', the corresponding $\ket{\psi(\bf x)}$-s cluster on one part of the circle on the Bloch sphere with $Q_3=0$; for input-samples belonging to ``benign'' type/ all other digits, $\ket{\psi(\bf x)}$-s cluster on the opposite part of the circle on the Bloch sphere with $Q_3=0$. This can be seen from Fig. \ref{fig:visual}(h) and Fig. \ref{fig:visual_MNIST}(h). \vspace{0.4em} But we also observe from Fig. ~\ref{fig:visual}(h) and Fig. ~\ref{fig:visual_MNIST}(h) that though $\ket{\psi(\bf x)}$-s corresponding to the samples belonging to the two output-classes are well separated on the circle ($Q_3=0$), the axis that separates these two clusters cannot be determined with certainty. For example, the location of the two clusters formed corresponding to the two output-classes is different for the WBC data-set (Fig. ~\ref{fig:visual}(h)) and the MNIST data-set (Fig. ~\ref{fig:visual_MNIST}(h)). Thus adjusting the weights in the classical encoding layer is not enough to identify, with certainty, which sample belongs to which output-class. The SU(2) parameters in the variational quantum circuit also need to play an important role in this identification, as we explain next. \vspace{0.4em} These SU(2) parameters ($\alpha_1,\alpha_2,\alpha_3$) are learned during training such that after training (after the 200th epoch say), for one sample-type (``malignant''/digit ``0''), $\ket{\psi(\bf x)}$, after the SU(2) operations, transforms to $\ket{\bar \psi(\bf x)}$, which evolves towards $\ket{0}$ (Fig.~\ref{fig:visual}(i), Fig.~\ref{fig:visual_MNIST}(i)). Thus all the $\ket{\bar \psi(\bf x)}$-s for such samples cluster towards the point on the Bloch sphere with $Q_3=1$. Hence, for such samples, ${Q}_3$ is positive and $P_+ > 0.5$ as mentioned earlier. Similarly, for the other sample-type (``benign''/other digits), $\ket{\psi(\bf x)}$, after the SU(2) operations, transforms to $\ket{\bar \psi(\bf x)}$, which evolves towards $\ket{1}$ (Fig.~\ref{fig:visual}(i), Fig.~\ref{fig:visual_MNIST}(i)). Thus all the $\ket{\bar\psi(\bf x)}$-s for such samples cluster towards the point on the Bloch sphere with $Q_3=-1$. Hence, for such samples, ${Q}_3$ is negative and $(1 - P_+) > 0.5$ as mentioned earlier. The minimization of the loss function in equation ~\ref{error_lin_bin} enables such SU(2)-parameter adjusting. \vspace{0.4em} Overall, from Fig.~\ref{fig:visual} and Fig.~\ref{fig:visual_MNIST}, we observe that updating the weight parameters in the classical encoding layer enables the clustering of the samples based on their output-classes. But the clusters are formed on different randomly located parts of the circle on the Bloch sphere with ${Q}_3=0$. The iterative adjustment of the rotation parameters in the SU(2) operation enables driving these two randomly formed clusters towards specific points on the Bloch sphere: $Q_3=1$ and $Q_3=-1$; this allows identifying each sample as being of one output-class or another and leads to high classification accuracy. \vspace{0.4em} It is to be noted that $P_+$ (the outcome of a measurement on the quantum state) in equation ~\ref{error_lin_bin} not only depends on the SU(2) parameters in the quantum circuit but also the weight parameters in the classical encoding layer. These weight parameters are also iteratively adjusted to minimize the loss function in equation ~\ref{error_lin_bin}. So the training of the classical encoding layer also indirectly depends upon the SU(2) operations in the quantum circuit, the measurement process, and the iterative adjustment of the SU(2) parameters during the training of the quantum circuit. Thus, training of the classical encoding layer and training of the quantum circuit depend on each other. Together, they lead to the training of the overall dressed quantum network. \section{Advantages of Our Algorithm} \vspace{0.4em} We now compare the data-sets that we have classified, with high accuracy, using our proposed QML algorithm (Table ~\ref{tab:accuracy}, ~\ref{table_mnist}) with that used in existing QML algorithms. The precise data-sets classified by these algorithms indicate the kind of ML-tasks they are capable of solving. Once we show that our algorithm can classify the data in data-sets as complex or more complex than the ones used by existing QML algorithms, we will show the explicit advantages that our algorithm offers. \vspace{0.4em} Table ~\ref{table_datasetcomp} shows that for many existing QML algorithms, classification has only been reported only for toy data-sets, constructed for this purpose in the reports on these algorithms themselves \cite{salinas2020,havlivcek2019,Tachino}. On the contrary, in this paper, we report high classification accuracy on four standard ML data-sets: Fisher's Iris \cite{dua2019uci}, WBC \cite{dua2019uci}, Abalone \cite{dua2019uci}, and MNIST \cite{lecun-mnisthandwrittendigit-2010}. While \cite{OxfordXORIris} and \cite{MariaIris} also show classification using Fisher's Iris data-set, we also show classification, as mentioned here, using Abalone and MNIST data-set. If we judge the complexity of a data-set by the number of input-dimensions, MNIST and WBC data-set have much higher input-dimensions (784 and 30 respectively) compared to Fisher's Iris data-set (4) and most toy data-sets used in \cite{salinas2020,havlivcek2019,Tachino}. If we judge the complexity of a data-set by the number of output-classes, Abalone data-set has more output-classes (6) than Iris (3) and most toy data-sets used in \cite{salinas2020,havlivcek2019,Tachino}. \vspace{0.4em} Thus we show through Table ~\ref{table_datasetcomp} that our proposed QML algorithm can classify ML data-sets as complex as or more complex than the ones used by different existing QML algorithms. We can achieve this result despite using a ``super compressed encoding'' scheme and drastically scaling down the input dimensions to just one scalar per input-sample. To the best of our knowledge, only in \cite{mari2019}, the data-sets used are more complex than ours. However, they use a ResNET-block to extract the essential features from the input-images \cite{ResNet}. Since ResNET is a very complex neural network classically pre-trained on the ImageNet data-set, it learns the complex structure of images. It classically produces only those features that are essential for the final classification-task \cite{ImageNet}. On the contrary, in this paper, we initialize all the parameters of our classical encoding layer to random values. Then we iteratively adjust them to final values during the training process, which also involves iteratively adjusting the SU(2) parameters of the variational quantum circuit. \begin{table*}[!t] \begin{center} \begin{tabular}{| l | l |} \hline {\bf Reference of the QML algorithm} & {\bf Data-sets used} \\ \hline A. Perez-Salinas \textit{et al.}, Quantum \textbf{4}, 226 (2020) \cite{salinas2020} & Toy data-set constructed in \cite{salinas2020} to classify \\ & if different points lie inside or outside specific geometric regions \\ \hline V. Havlickek \textit{et al.}, Nature \textbf{567}, 209 (2019) \cite{havlivcek2019} & Toy data-set constructed in \cite{havlivcek2019} to classify different kinds of points \\ \hline F. Tacchino \textit{et al.},npj Quantum Information \textbf{5}, 26 (2019) \cite{Tachino} & Toy data-set constructed in \cite{Tachino} consisting of black-and-white patterns \\ \hline D. Zhu \textit{et al.}, Science Advances \textbf{5}, eaaw9918 (2019) \cite{CQC2019} & Bars-and-Stripes data-set \cite{BarsStripes} \\ \hline M. Benedetti \textit{et al.}, npj Quantum Information \textbf{5}, 1 (2019) \cite{benedetti2019} & Bars-and-Stripes data-set \cite{BarsStripes} \\ \hline M. Schuld \textit{et al.}, Europhysics Letters \textbf{119}, 6 (2017) \cite{MariaIris} & Fisher's Iris data-set \cite{dua2019uci} \\ \hline S. Cao \textit{et al.}, Physical Review A \textbf{101}, 052309 (2020) \cite{OxfordXORIris} & classical XOR gate, Fisher's Iris data-set \cite{dua2019uci} \\ \hline A. Mari \textit{et al.}, arxiv: 1912.08278 (2019) \cite{mari2019} & ImageNet, CIFAR-10 \cite{mari2019} \\ \hline This paper & Fisher's Iris \cite{dua2019uci}, WBC \cite{dua2019uci}, Abalone \cite{dua2019uci}, MNIST \cite{lecun-mnisthandwrittendigit-2010} \\ \hline \end{tabular} \end{center} \caption{\label{table_datasetcomp} Data-sets used by the different existing QML algorithms for classification} \end{table*} \vspace{0.4em} Having shown that our proposed QML algorithm can solve ML-tasks as complex or more complex than most existing QML algorithms, we now discuss the different advantages of our proposed algorithm compared to the existing algorithms. We had already mentioned a couple of advantages in Section I: robustness against noise, and a low number of qubits. Here, we use quantitative estimates to compare our proposed algorithm with other existing QML algorithms for these two metrics. We also discuss some other advantages of our proposed algorithm below. \begin{enumerate} \item \textbf{Robustness against noise:} As mentioned in Section I, multi-qubit gates are absent in our algorithm. We only use single-qubit gates; this leads to low noise in our implementation (Table ~\ref{tab:ct}, Fig. ~\ref{ibmq_graph}). All multi-qubit gates, when expressed in the basis-gate-set of IBM-Q devices, require the implementation of CNOT gates. On IBM-Q devices, the error-rate of CNOT gates is 1–2 orders of magnitude higher than that of the single-qubit gates. For example, on IBM-Q Melbourne, the CNOT error-rate ranges from $1.33\%$ to $6.18\%$. But for the single-qubit U2 gate, the error-rate ranges from $0.038\%$ to $0.412\%$\cite{ibmq}. Although the exact numbers change frequently, the approximate ratio of noise in the multi-qubit gate to the single-qubit gate almost remains the same, as seen here. Thus using multi-qubit gates makes implementing the algorithm more error-prone on current NISQ quantum devices. \item \textbf{Low number of qubits and quantum gates:} As mentioned in Section I, the ``super compressed encoding'' scheme used in this paper enables us to reduce the number of qubits in the variational quantum circuit compared to existing QML algorithms \cite{havlivcek2019}. The number of qubits in our implementation is independent of the input-dimensions and only depends on the number of output-classes. Hence, for the WBC data-set used above (2 output-classes), we use can only two qubits to encode each input-sample. Since this is a binary classification problem, we can further modify our algorithm to use only one qubit for that purpose (Fig. \ref{fig:visual}). On the contrary, in \cite{havlivcek2019}, 30 qubits will be needed to encode the same 30-dimensional input-sample in the WBC data-set. \item \textbf{Implementation of the non-linear activation function:} Another advantage of our algorithm lies in how easily we can apply a non-linearity in our classifier. Most classifiers, classical or quantum, require applying a non-linear activation function after a linear layer. For deep neural networks, these non-linearities are generally ``sigmoid'' or ``tan-sigmoid'' functions \cite{lecun2015}. Evaluating such a function requires evaluating an exponential function as an intermediate step, which is an expensive operation on a digital classical computer. In classical neuromorphic computing (implementation of ML algorithms through unconventional, but classical, architectures and devices), such non-linear activation function is often implemented through transistor-based analog circuits (not digital CMOS circuits) or different other emerging devices \cite{Bhowmik_JMMM,TransistorSynapseBioCAS,PCMReview_AbuSebastian,GeffBurr_Nature_TransistorPCM,Kaushik_IEEEReview}. Here, in our implementation of an ML algorithm on quantum hardware (it can be called quantum neuromorphic computing \cite{Grollier_QNeuro2020}), our quantum operations in the variational quantum circuit provide us with that non-linear activation function \cite{Tachino}. For example, the quantum state preparation (\(e^{i \sigma_3 \Tilde{x}}\)) is non-linear with respect to the input that the classical encoding layer feeds to the system (\(\Tilde{x}\)). Similarly, the measurement of the qubit provides another non-linearity. \item \textbf{Adaptability of the activation function:} The iteratively adjustable rotation parameters in the SU(2) operations on the qubits introduce adaptability in the activation function. From that perspective, our proposed algorithm can be compared with an algorithm recently used in classical neuromorphic computing where the input-data have been clustered through a linear network and oscillator-functions (their properties are adaptable) are used to learn the boundaries between the clusters \cite{Grollier_Nature2018}. Similarly in our algorithm, as explained in details in Section IV through the Bloch-sphere-based representation, iterative adjustment of the weights in the classical encoding layer enables separating the data into different clusters on the circle of the Bloch sphere with $Q_3 = 0$ (Fig. ~\ref{fig:visual} (h), Fig. ~\ref{fig:visual_MNIST} (h)). Iterative adjustment of the SU(2) parameters in the variational quantum circuit maps these clusters, formed at random positions on that circle on the Bloch sphere, to specific parts of the Bloch sphere as shown in Fig. ~\ref{fig:visual} (i) and Fig. ~\ref{fig:visual_MNIST} (i). Such adaptive property of activation functions has been found to be very useful for data-classification \cite{Grollier_Nature2018,AdaptiveNeuron1,AdaptiveNeuron2}. \item \textbf{Explainability:} As mentioned in Section I, to the best of our knowledge, such Bloch-sphere-based approach has not been used earlier to explain the working of other existing QML algorithms. The fact that we can explain the internal mechanism based on which our proposed QML algorithm works here (Section IV) is very helpful, given that so much research has been carried out recently to explain the internal mechanism of ML algorithms \cite{ExplainableAI1,ExplainableAI2}. \end{enumerate} \section{Conclusion} \vspace{0.4em} In this paper, we have proposed and implemented a QML algorithm that uses a dressed quantum network. The classical encoding layer in our dressed network uses the ``super compressed encoding'' scheme to drastically scale down the input-dimensions. We use the Bloch-sphere-based representation to explain the working of our algorithm. We implement our algorithm on a classical computer, using Python code, as well as on Qiskit and real NISQ hardware (IBM-Q). We report high classification accuracy numbers for our implementation on different ML data-sets. We show that our algorithm can handle ML data-sets of the complexity of the ones that other existing QML algorithms typically deal with. We also argue that our algorithm has several advantages compared to various existing QML algorithms: a low number of qubits, robustness against noise, implementation of adaptable non-linear activation functions, etc. \begin{acknowledgments} Debanjan Bhowmik thanks Department of Science and Technology (DST), India, for the INSPIRE Faculty Award, and Science and Engineering Research Board (SERB), India, for the Early Career Research (ECR) Award. Debanjan Bhowmik also thanks Industrial Research and Development Unit, Indian Institute of Technology Delhi and Nokia Networks, India, for the Discover and Learn 1-2-3-4 project. These awards and projects funded this research. The authors also thank Soumik Adhikary (from Department of Physics, Indian Institute of Technology Delhi) and Rajamani Vijayaraghavan (from Tata Institute of Fundamental Research, Mumbai, India) for their insights about our proposed algorithm. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Theoretical considerations} As discussed in the manuscript, the Hamiltonian of a molecule coupled to a single cavity mode has the form \begin{equation} \hat{H}_\textrm{cm} = \hat{H}_0 + \hbar \omega_\textrm{c} \hat{a}^\dag \hat{a} - g \hat{\vec{\mu}} \vec{e} (\hat{a}^\dag + \hat{a}) \label{eq:Hcm_SI} \end{equation} which, assuming two electronic states, can be recast as \begin{equation} \resizebox{0.9\textwidth}{!}{$\hat{H}_\textrm{cm} = \begin{bmatrix} \hat{T} + V_\textrm{X} & 0 & 0 & W_1 & 0 & 0 & \dots \\ 0 & \hat{T} + V_\textrm{A} & W_1 & 0 & 0 & 0 & \dots \\ 0 & W_1 & \hat{T} + V_\textrm{X} + \hbar\omega_\textrm{c} & 0 & 0 & W_2 & \dots \\ W_1 & 0 & 0 &\hat{T} + V_\textrm{A} + \hbar\omega_\textrm{c} & W_2 & 0 & \dots \\ 0 & 0 & 0 & W_2 &\hat{T} + V_\textrm{X} + 2\hbar\omega_\textrm{c} & 0 & \dots \\ 0 & 0 & W_2 & 0 & 0 &\hat{T} + V_\textrm{A} + 2\hbar\omega_\textrm{c} & \dots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}$.} \label{eq:cavity_H_SI} \end{equation} We refer to the manuscript regarding the notations used in Eqs. \eqref{eq:Hcm_SI} and \eqref{eq:cavity_H_SI}. The interaction of the cavity mode with a laser pulse is described by the Hamiltonian \begin{equation} \hat{H}_\textrm{L}= - \mu_\textrm{c} E(t) (\hat{a}^\dag+\hat{a}) \end{equation} which gives rise to the total Hamiltonian \begin{equation} \hat{H} = \hat{H}_\textrm{cm} + \hat{H}_\textrm{L}. \end{equation} All previous equations correspond to the diabatic representation. The adiabatic representation is defined by diagonalizing the potential energy part ($V$) of the Hamiltonian in Eq. \eqref{eq:cavity_H_SI}, \begin{equation} V^\textrm{ad} = U^\textrm{T} V U \end{equation} where $V^\textrm{ad}$ contains the polaritonic PESs on its diagonal. Accordingly, the Hamiltonian in the adiabatic representation equals \begin{equation} \hat{H}^\textrm{ad} = U^\textrm{T} \hat{H} U = U^\textrm{T} \hat{T} U + V^\textrm{ad} + U^\textrm{T} \hat{H}_\textrm{L} U. \end{equation} The adiabatic approximation is defined by neglecting the kinetic coupling terms in $\hat{H}^\textrm{ad}$ (in other words, the approximation $U^\textrm{T} \hat{T} U \approx \hat{T}$ is made), that is, \begin{equation} \hat{H}^\textrm{BO} = \hat{T} + V^\textrm{ad} + U^\textrm{T} \hat{H}_\textrm{L} U. \label{eq:hadapprox_SI} \end{equation} As a next step, geometric phase (GP) effects are incorporated by taking the similarity-transformed Hamiltonian \begin{equation} \hat{H}^\textrm{BO}_\textrm{GP} = \exp(\textrm{i}\theta) \hat{H}^\textrm{BO} \exp(-\textrm{i}\theta) \label{eq:hadapproxGP0_SI} \end{equation} where $\exp(-\textrm{i}\theta)$ is a position-dependent phase factor which will enable us to work with single-valued nuclear wave functions.\cite{Mead19792284,Ryabinkin2013,Ryabinkin20171785,Henshaw_2017,Izmaylov20165278} As discussed in the manuscript, the coupled cavity-molecule system is pumped to the singly-excited subspace by a laser pulse. Therefore, in our particular case, $\theta$ is chosen as the angle which parameterizes the two-by-two orthogonal transformation matrix \begin{equation} U = \begin{bmatrix} \cos{\theta} & \sin{\theta} \\ -\sin{\theta} & \cos{\theta} \end{bmatrix} \end{equation} which diagonalizes the potential energy matrix corresponding to the singly-excited subspace. Thus, the matrix \begin{equation} U^\textrm{T} \begin{bmatrix} V_\textrm{A} & W_1 \\ W_1 & V_\textrm{X} + \hbar\omega_\textrm{c} \end{bmatrix} U \end{equation} is diagonal if \begin{equation} \theta = \frac{1}{2} \arctan \left( \frac{2W_1}{V_\textrm{X}+\hbar\omega_\textrm{c}-V_\textrm{A}} \right). \end{equation} Eq. \eqref{eq:hadapproxGP0_SI} can be rearranged by evaluating the action of the kinetic energy operator on $\exp(-\textrm{i}\theta)$, which yields \begin{equation} \hat{H}^\textrm{BO}_\textrm{GP} = \hat{H}^\textrm{BO} + \textrm{i} (\nabla \theta) \nabla + \frac{\textrm{i}}{2} (\nabla^2 \theta) + \frac{1}{2} (\nabla \theta)^2. \label{eq:hadapproxGP1_SI} \end{equation} In the 2D($\nu_2$,$\nu_4$) model (see the next section for further discussion) used in numerical computations, $\hat{T} = -\frac{1}{2} \left( \frac{\partial^2}{\partial Q_2^2} + \frac{\partial^2}{\partial Q_4^2} \right)$ and $\nabla = \left( \frac{\partial}{\partial Q_2} , \frac{\partial}{\partial Q_4} \right)$. By substituting the commutator \begin{equation} [ \nabla, \nabla \theta ] = \nabla (\nabla \theta) - (\nabla \theta) \nabla = \nabla^2 \theta \end{equation} into the second GP term ($\frac{\textrm{i}}{2} (\nabla^2 \theta)$) one can show that the sum of the first two GP terms becomes \begin{equation} \textrm{i} (\nabla \theta) \nabla + \frac{\textrm{i}}{2} (\nabla^2 \theta) = \frac{\textrm{i}}{2} ((\nabla \theta) \nabla + \nabla (\nabla \theta)). \end{equation} This way, $\hat{H}^\textrm{BO}_\textrm{GP}$ can be transformed to a more symmetric form \begin{equation} \hat{H}^\textrm{BO}_\textrm{GP} = \hat{H}^\textrm{BO} + \frac{\textrm{i}}{2}((\nabla \theta)\nabla+\nabla(\nabla \theta)) + \frac{1}{2} (\nabla \theta)^2 \label{eq:hadapproxGP2_SI} \end{equation} which was employed in numerical computations carried out in this study. \section{Computational model and technical details} As already described in previous work,\cite{Fabri2020234302,Fabri2021a,Fbri2022} the four-atomic formaldehyde (H$_{2}$CO) molecule has a planar equilibrium structure ($C_{2v}$ point-group symmetry) in the ground electronic state ($\tilde{\textrm{X}} ~ ^1\textrm{A}_1$) and two symmetry-equivalent nonplanar equilibrium structures ($C_{s}$ point-group symmetry) which are connected by a planar transition state structure ($C_{2v}$ point-group symmetry) in the excited electronic state ($\tilde{\textrm{A}} ~ ^1\textrm{A}_2$). The ground-state equilibrium structure and definition of the body-fixed coordinate axes are depicted in Figure \ref{fig:structure}. Out of the six vibrational normal modes of H$_{2}$CO the $\nu_2$ (C=O stretch, $\textrm{A}_1$ symmetry) and $\nu_4$ (out-of-plane bend, $\textrm{B}_1$ symmetry) vibrational modes are included in the computational model called the 2D($\nu_2$,$\nu_4$) model. The corresponding anharmonic fundamentals in the ground electronic state (obtained by six-dimensional variational computations) are $1738.1 ~ \textrm{cm}^{-1}$ ($\nu_2$ mode) and $1147.0 ~ \textrm{cm}^{-1}$ ($\nu_4$ mode). \begin{figure}[hbt!] \begin{center} \includegraphics[scale=0.4]{structure.pdf} \end{center} \caption{\label{fig:structure} Equilibrium structure of the H$_{2}$CO molecule in the ground electronic state and definition of the body-fixed coordinate axes (the equilibrium structure is placed in the $yz$ plane).} \end{figure} In order to set up the 2D($\nu_2$,$\nu_4$) model normal coordinates corresponding to the planar transition state structure of the excited electronic state were evaluated and the four inactive normal coordinates ($Q_1$, $Q_3$, $Q_5$, $Q_6$) were set to zero. Then, the 2D($\nu_2$,$\nu_4$) potential energy surfaces (PESs) ($V_\textrm{X}$ and $V_\textrm{A}$) and the transition dipole moment (TDM) surface were computed as a function of the $Q_2$ and $Q_4$ normal coordinates at the CAM-B3LYP/6-31G* level of theory. Finally, two-dimensional PES and TDM functions were generated by interpolating the ab initio PES and TDM points. Due to symmetry, the TDM vanishes at any geometry of $C_{2v}$ symmetry. Moreover, in the 2D($\nu_2$,$\nu_4$) model, only the body-fixed $y$ component of the TDM can be nonzero and the TDM is always perpendicular to the permanent dipole moment of both electronic states. This observation motivates the choice that the cavity field is polarized along the body-fixed $y$ axis in all computations. Since H$_{2}$CO does not have any first-order nonadiabatic coupling between the X and A electronic states around its equilibrium geometry, light-induced nonadiabatic effects can be unambiguously distinguished from natural ones. The time-dependent Schr\"odinger and Lindblad equations were solved numerically in the diabatic representation using the direct product of two-dimensional discrete variable representation basis functions and Fock states of the cavity mode $| n \rangle$ with $n=0,1,2$. In addition to the numerically-exact diabatic computations, approximate adiabatic computations were carried out without (BO model) or with the GP terms (BOGP model). In both cases the potential energy part of the diabatic Hamiltonian was diagonalized at each two-dimensional grid point to obtain polaritonic PESs. The Schr\"odinger and Lindblad equations were then transformed to the adiabatic representation, nonadiabatic coupling terms were omitted and the resulting equations were solved numerically using the same two-dimensional discrete variable representation basis for each polaritonic PES. \section{Supplemental population, emission and probability density figures} Figures \ref{fig:gphelps_SI} and \ref{fig:gpdoesnothelp_SI} show population and emission figures (exact, approximate adiabatic (BO) and approximate adiabatic with geometric phase (BOGP)) for the cavity parameters $\omega_{c} =29957.2 ~ \textrm{cm}^{-1}$ and $g = 0.1 ~ \textrm{au}$. The cavity mode is pumped with the following laser pulses: $\omega_\textrm{L} = 29200 ~ \textrm{cm}^{-1}$ (Figure \ref{fig:gphelps_SI}) and $\omega_\textrm{L} = 30300 ~ \textrm{cm}^{-1}$ (Figure \ref{fig:gpdoesnothelp_SI}), both with $T = 200 ~ \textrm{fs}$ and $E_0 = 0.001 ~ \textrm{au}$ (corresponding to a field intensity of $I = 3.51 \cdot 10^{10} ~ \textrm{W} / \textrm{cm}^{2}$). Figures \ref{fig:pdmgphelps} and \ref{fig:pdmgpdoesnothelp} provide probability density figures for selected eigenstates with $\omega_\textrm{c} = 30245.5 ~ \textrm{cm}^{-1}$ and $g = 0.1 ~ \textrm{au}$ (see the text and Table 1 of the manuscript for more information on eigenstate labels). \begin{figure*} \includegraphics[scale=0.54]{fig-13a.pdf} \includegraphics[scale=0.54]{fig-13b.pdf} \includegraphics[scale=0.54]{fig-13c.pdf} \includegraphics[scale=0.54]{fig-13d.pdf} \caption{\label{fig:gphelps_SI} (a-b) Population of the lower polaritonic (LP) state for the three different models investigated (exact, approximate adiabatic (BO) and approximate adiabatic with geometric phase (BOGP)) during and after excitation with a $200 ~ \textrm{fs}$ laser pulse (carrier wavenumber: $\omega_\textrm{L} = 29200 ~ \textrm{cm}^{-1}$). The cavity wavenumber and coupling strength are $\omega_\textrm{c} = 29957.2 ~ \textrm{cm}^{-1}$ and $g = 0.1 ~ \textrm{au}$. Populations of polaritonic states higher than LP are negligible (see dashed lines with empty markers). (c-d) Ultrafast emission signals for the three different models. The emission is proportional to the expectation value of the photon number operator $\hat{N}$. The cavity and laser parameters are the same as for panels a and b. Results obtained with the time-dependent Schr\"odinger (TDSE) and Lindblad equations are explicitly labeled in each panel. Note that the exact emission is significantly overestimated by the BO model, while the BOGP model shows an excellent agreement with the exact results.} \end{figure*} \begin{figure*} \includegraphics[scale=0.54]{fig-14a.pdf} \includegraphics[scale=0.54]{fig-14b.pdf} \includegraphics[scale=0.54]{fig-14c.pdf} \includegraphics[scale=0.54]{fig-14d.pdf} \caption{\label{fig:gpdoesnothelp_SI} (a-b) Population of the lower polaritonic (LP) state for the three different models investigated (exact, approximate adiabatic (BO) and approximate adiabatic with geometric phase (BOGP)) during and after excitation with a $200 ~ \textrm{fs}$ laser pulse (carrier wavenumber: $\omega_\textrm{L} = 30300 ~ \textrm{cm}^{-1}$). The cavity wavenumber and coupling strength are $\omega_\textrm{c} = 29957.2 ~ \textrm{cm}^{-1}$ and $g = 0.1 ~ \textrm{au}$. Populations of polaritonic states higher than LP are negligible (see dashed lines with empty markers). (c-d) Ultrafast emission signals for the three different models. The emission is proportional to the expectation value of the photon number operator $\hat{N}$. The cavity and laser parameters are the same as for panels a and b. Results obtained with the time-dependent Schr\"odinger (TDSE) and Lindblad equations are explicitly labeled in each panel. Note that in this case the exact emission is underestimated by the BO model and inclusion of the geometric phase does not improve the BO model.} \end{figure*} \begin{figure*} \includegraphics[scale=0.52]{1_exact.pdf} \includegraphics[scale=0.52]{1A_BO.pdf} \includegraphics[scale=0.52]{1a_BOGP.pdf} \includegraphics[scale=0.52]{1b_BOGP.pdf} \caption{\label{fig:pdmgphelps} Probability density figures for selected eigenstates (exact: $1$, approximate adiabatic (BO): $1\textrm{A}$, approximate adiabatic with geometric phase (BOGP): $1\textrm{a}$ and $1\textrm{b}$, see the text and Table 1 of the manuscript for more information). $Q_2$ and $Q_4$ are dimensionless normal coordinates of the modes $\nu_2$ and $\nu_4$. The cavity wavenumber and coupling strength are $\omega_\textrm{c} = 30245.5 ~ \textrm{cm}^{-1}$ and $g = 0.1 ~ \textrm{au}$, respectively. The red dot indicates the position of the LICI at $Q_2 = 10.05$ and $Q_4 = 0$.} \end{figure*} \begin{figure*} \includegraphics[scale=0.35]{3_exact.pdf} \includegraphics[scale=0.35]{3B_BO.pdf} \includegraphics[scale=0.35]{3b_BOGP.pdf} \caption{\label{fig:pdmgpdoesnothelp} Probability density figures for selected eigenstates (exact: $3$, approximate adiabatic (BO): $3\textrm{B}$, approximate adiabatic with geometric phase (BOGP): $3\textrm{b}$, see the text and Table 1 of the manuscript for more information). $Q_2$ and $Q_4$ are dimensionless normal coordinates of the modes $\nu_2$ and $\nu_4$. The cavity wavenumber and coupling strength are $\omega_\textrm{c} = 30245.5 ~ \textrm{cm}^{-1}$ and $g = 0.1 ~ \textrm{au}$, respectively. The red dot indicates the position of the LICI at $Q_2 = 10.05$ and $Q_4 = 0$.} \end{figure*} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction\label{Intro}} The introduction of foreign particles in an amorphous matrix has for long been used for the strength reinforcement of disordered materials~\cite{Torquato-book02}. The most classical strategy consists in adding rigid particles or fibers in order to enhance the elastic properties of the composite material. An additional or alternative strategy consists in the modification of the plastic properties. Here, the effect on the overall strength is more delicate. The introduction of hard particles in a very ductile matrix tends to increase the effective yield stress, hence the strength. A good illustration of this approach can be found in the development of materials for road pavements~\cite{Branthaver-SHRP93,Anderson-SHRP94,Chen-JMCE98,You-CBM12}: mineral micrometer scale fillers are introduced in a viscous bitumen to make it viscoplastic; the obtained mastic asphalt is then reinforced through the addition of millimetric to centimetric aggregates. More recently the introduction of a ductile phase has been used to reinforce metallic glasses~\cite{Hofmann-Nat08,Ferry-MRS13}. In this case, the ductility of the second phase enables one to control the development of shear-bands, thus preventing the nucleation of cracks. A reinforcement effect is obtained despite the fact that the effective yield stress of the amorphous composite is lowered with respect to that of the matrix. The understanding of the plastic behavior of amorphous composites thus appears to be crucial in the design of modern materials. Efforts in theoretical and numerical modelling have been recently performed to study the effects of micro-alloying in metallic glasses~\cite{Samwer-APL13,Samwer-ActaMat14} and of the addition of aggregates in asphalt mixtures~\cite{Al-Rub09,Al-Rub11,Al-Rub12}. From the theoretical mechanics point of view, the determination of effective mechanical properties is a matter of homogenization. While this field has been intensively explored in case of elastic properties~\cite{Torquato-book02}, results are much more scarce for non-linear behaviors like fracture~\cite{RVH-EJMA03,PVR-PRL13} or plasticity~\cite{Ponte-Castaneda-PRSA92,Debotton-IJSS95,Willis-JMPS04,Turgeman-MRC11,Suquet-IUTAM14}. In particular, standard homogenization approaches fail to account for size effects~\cite{Ponte-Castaneda-PRSA92,Debotton-IJSS95}. Only the development of strain-gradient theories (relying on the introduction of an ad-hoc internal length scale) has so far succeeded to reproduce size dependence~\cite{Willis-JMPS04}. Still, these approaches only predict the mean behavior and cannot cope with sample-to-sample fluctuations. Here we develop an alternative approach, based on the recent development of depinning-like mesoscopic models of amorphous plasticity~\cite{BulatovArgon94a,BVR-PRL02,Picard-PRE02,Schuh-ActaMat09,TPVR-CRM12,Nicolas-SM14}. The modelling of amorphous plasticity and rheology of complex fluids has seen much progress in recent years~\cite{RTV-MSMSE11} and a family of mesoscopic models~\cite{BVR-PRL02,TPVR-CRM12} has emerged that rely on two main ingredients: local plastic thresholds (amorphous plasticity results from series of local rearrangements of the amorphous structure~\cite{Argon-ActaMet79,FalkLanger-PRE98}) and account of elastic interactions (local plastic events occur in a surrounding elastic matrix and induce internal stresses~\cite{Eshelby57}). These models show scaling properties close to the effective yield stress (here seen as a critical threshold) and thus exhibit statistical size effects. Another nice feature of these models is their ability to reproduce localization and shear-banding behaviors~\cite{VR-PRB11,Homer-ActaMat14}. The effect of crystalline inclusions in an amorphous matrix has recently been discussed along such lines in Ref.~\cite{Homer-ActaMat15} with a particular emphasis on the localization behavior. Here we specialize the model recently presented in~\cite{TPVR-PRE11,TPVR-CRM12} to the case of amorphous composites by considering a binomial distribution of local plastic stress thresholds in order to reproduce the introduction of a fraction of hard particles in an amorphous matrix. The simplistic model presented in the following will obviously not be able to give a realistic account of the whole richness of the mechanical behavior of amorphous composites. However results are expected to be generic for this class of materials. In section~\ref{Model} we introduce the model, in section~\ref{Size-effects} we present the complex size dependence of the yield strength measured on the amorphous matrix and amorphous composites with a growing fraction of particles. In section~\ref{Hardening}, we discuss the hardening mechanisms at play in amorphous composites and the localization behavior. We emphasize in particular the interplay between the gradual localization of the plastic deformation and the building of a strongly correlated internal stress field. Elaborating on the numerical observations, we present in section~\ref{Analytical-Model} an analytical model that accounts quantitatively for the size effects of the effective yield strength of amorphous composites. Mathematical details of the model are provided in a separated appendix. Our main findings are finally summarized in section~\ref{Conclusion}. \section{Modelling amorphous plasticity: from glasses to amorphous composites \label{Model}} The modelling of amorphous plasticity has recently given rise to an increasing interest~\cite{RTV-MSMSE11}. Unlike crystalline plasticity that results from the motion of dislocations of the ordered lattice, amorphous plasticity results from series of localized rearrangements of the disordered structure~\cite{Spaepen-ActaMet77,Argon-ActaMet79,FalkLanger-PRE98}. Such local plastic events induce internal stresses within the surrounding material~\cite{Maloney-PRL04b,Tanguy-EPJE06}. The latter can be seen as an elastic matrix around a plastic inclusion and the stress associated to the rearrangement computed in the spirit of the problem of the eigen strain early introduced by Eshelby~\cite{Eshelby57}. \begin{figure} \includegraphics[width=0.48\columnwidth]{fig1a.png} \includegraphics[width=0.48\columnwidth]{fig1b.png} \includegraphics[width=0.48\columnwidth]{fig1c.png} \includegraphics[width=0.48\columnwidth]{fig1d.png} \includegraphics[width=0.48\columnwidth]{fig1e.png} \includegraphics[width=0.48\columnwidth]{fig1f.png} \includegraphics[width=0.04\columnwidth,angle=-90]{fig1g.png} \caption{Effect of a plastic inclusion in an elastic matrix submitted to a pure shear biaxial loading $\Sigma=\Sigma_{yy}=-\Sigma_{xx}$. Only the spherical inclusion experiences plasticity. The first row represents the plastic strain $\varepsilon^{pl}$, the second row the elastic strain $\varepsilon^{el}$ and the third row the total strain $\varepsilon=\varepsilon^{el}+\varepsilon^{pl}$. The strain fields are represented by a color scale on the reference mesh on the left column and on a deformed mesh on the right column. One recognizes the traditional quadrupolar symmetry associated to the Eshelby inclusion.} \label{fig:Eshelby} \end{figure} In Fig.~\ref{fig:Eshelby} we show the effect of such a plastic inclusion on the surrounding matrix. The inclusion has experienced a pure shear $\varepsilon^{pl}_{xx}=-\varepsilon^{\mathrm{pl}}_{yy}=\varepsilon^{\mathrm{pl}}_0$. The amplitude of the shear strain $\varepsilon=\varepsilon_{xx} - \varepsilon_{yy}$ is represented by a color scale on a reference undeformed grid (left column) and on a grid deformed according to the total displacement (right column). The first row shows the plastic strain $\varepsilon^{\mathrm{pl}}$, non-zero only within the inclusion. The second row shows the elastic strain field $\varepsilon^{\mathrm{el}}$. The latter is negative within the inclusion. Outside the inclusion it exhibits a quadrupolar symmetry: negative along the axes at $0$ and $90$ degrees, and positive along the directions at $\pm45$ degrees. The third row shows the total strain $\varepsilon=\varepsilon^{\mathrm{pl}}+\varepsilon^{\mathrm{el}}$. The precise expression of the internal stress field $\sigma^{\mathrm{el}}$ depends on the details of the plastic strain field and the geometry of the rearranging region but in the far field, the dominant term obeys the universal form: \begin{equation} \sigma^{\mathrm{el}}=\mu \varepsilon^{\mathrm{pl}}_0 \mathcal{A} \frac{\cos(4\theta)}{r^2}\;, \label{quadrupolar-stress} \end{equation} where $\varepsilon^{\mathrm{pl}}_0$ and $\mathcal{A}$ are the mean plastic strain experienced by the inclusion and the area of the inclusion, respectively. The quadrupolar elastic interaction associated with localized plastic events is the essential ingredient of the recent models of amorphous plasticity and/or rheology of complex fluids. Indeed its anisotropic nature is responsible for non-trivial behaviors~\cite{TPVR-CRM12,Wyart-PNAS14}. In particular the presence of multiple soft modes of the elastic interaction, corresponding to the existence of extended modes of plastic strain which do not induce internal stresses, e.g., shear bands, is responsible for a complex localization behavior. \subsection{A mesoscopic model of amorphous plasticity} Following the model introduced in Refs.~\cite{BVR-PRL02, TPVR-CRM12}, the system is discretized on a two-dimensional square lattice with a lattice constant that is larger than the typical size of the plastic reorganizations. Each site $(i, j)$ has an internal stress $\sigma^{\mathrm{el}}_{ij}$, a local plastic threshold $\sigma^{\mathrm{c}}_{ij}$, and a local plastic strain $\varepsilon^{\mathrm{pl}}_{ij}$. A pure shear external loading is considered: $\Sigma_{xx}^{\mathrm{ext}}=-\Sigma_{yy}^{\mathrm{ext}}=\Sigma^{\mathrm{ext}}$. It is assumed that the reorganizations at a microscopic scale obey the same symmetry as the external loading, i.e., a site $(i, j)$ undergoes a plastic deformation in pure shear: $\varepsilon_{xx}^{\mathrm{pl}}=-\varepsilon_{yy}^{\mathrm{pl}}=\varepsilon^{\mathrm{pl}}_{ij}$. A local criterion of plasticity is considered, the limit of elasticity is thus defined for a site $(i, j)$ by: \begin{equation} \Sigma^{\mathrm{ext}} + \sigma^{\mathrm{el}}_{ij} \le \sigma^{\mathrm{c}}_{ij}\;. \label{criterion} \end{equation} Values of $\sigma^{\mathrm{c}}$ are drawn from a random distribution. No spatial correlations are considered. Whenever the criterion is locally satisfied, say on site $(i_0,j_0)$ the site undergoes an incremental plastic strain $\delta \varepsilon_0^{\mathrm{pl}}$. This value is drawn from a uniform distribution in $[0,d_0]$. To account for the structural change experienced by the rearranging zone, the local plastic threshold is updated to a new value. As discussed above, the local plastic event also induces an incremental internal stress on every lattice site $(i, j)$: \begin{equation} \delta\sigma^{\mathrm{el}}_{ij} = G^{\mathrm{el}} * \delta \varepsilon_0^{\mathrm{pl}} \;, \label{internal-stress} \end{equation} where the star denotes the convolution operation and $G^{\mathrm{el}}$ is a quadrupolar kernel accounting for the elastic reaction of the matrix to a unit plastic event. Here we consider bi-periodic boundary conditions and $G^{\mathrm{el}}$ is computed from Fourier space~\cite{TPVR-CRM12,Budrikis-PRE13,TPRV-preprint15}. The system is driven with an extremal dynamics: only one site is deformed per iteration step. An iteration step corresponds to $(i)$ identify the weakest site $(i_0,j_0)$ for a given configuration, $(ii)$ update the plastic strain $\varepsilon^{\mathrm{pl}}_{i_0,j_0}$ and the plastic threshold $\sigma^{\mathrm{c}}_{i_0,j_0}$ at this particular site and $(iii)$ update the internal stress $\sigma^{\mathrm{el}}$ all over the system. A new configuration is thus obtained and the next iteration can be performed. Extremal dynamics~\cite{BVR-PRL02} is a way of driving the system at a vanishing strain rate, very close in spirit to the athermal quasi-static driving used in some atomistic simulations~\cite{Maloney-PRL04a,Maloney-PRE06}. Note that the same model can be driven with other kinds of dynamics, e.g., constant stress, kinetic Monte Carlo. A direct outcome of a simulation is the evolution of the external stress $\Sigma^{\mathrm{ext}}$ versus the average plastic strain $\langle \varepsilon^{\mathrm{pl}} \rangle$, where the average $\langle \cdot \rangle$ represents the average over the different sites at a particular iteration step. The average plastic strain $\langle \varepsilon^{\mathrm{pl}} \rangle$ is directly proportional to the number of iteration steps so that $\langle \varepsilon^{\mathrm{pl}} \rangle$ can be seen as a fictitious time. In Fig.~\ref{fig:stressStrainTypical} we give a sketch of a simple plastic behavior. A typical stress-strain curve obtained upon monotonous loading is shown. A (reversible) elastic behavior is first observed up to the yield stress value $\Sigma^{\mathrm{Y}}$. Above this value, plasticity sets in (a residual plastic strain is obtained upon unloading). The following curvature of the stress-strain curve is characteristic of a hardening behavior: if a unloading/loading cycle is performed, a new (larger) value of the elastic limit is obtained. A stress plateau is eventually reached that defines the ultimate flow stress $\Sigma^{\mathrm{F}}$. In the present framework, the external loading is not monotonous. Rather, the external stress $\Sigma^{\mathrm{ext}}$ is a fluctuating quantity which is adapted at each iteration step so that the criterion of the weakest site is satisfied. The macroscopic flow stress $\Sigma^{\mathrm{F}}$ of a given configuration is thus obtained as the maximum value of the external stress over the simulation: \begin{equation} \label{FlowStress} \Sigma^{\mathrm{F}} = \max_t \Sigma^{\mathrm{ext}}(t)\;, \end{equation} where $t$ is an iteration step. For an external loading $\Sigma^{\mathrm{ext}} < \Sigma^{\mathrm{F}}$, plastic deformation will eventually stop while any loading $\Sigma^{\mathrm{ext}} < \Sigma^{\mathrm{F}}$ will allow it to develop indefinitely. \begin{figure} \begin{center} \includegraphics[width=0.95\columnwidth]{fig2.png} \caption{\label{fig:stressStrainTypical}Sketch of a simple plastic behavior. Plasticity sets in at yield stress $\Sigma^{\mathrm{Y}}$, a hardening stage follows until a stress plateau is reached. The latter stress value defines the flow stress $\Sigma^{\mathrm{F}}$. The plastic strain $\varepsilon^{\mathrm{pl}}$ is defined as the total strain $\varepsilon$ minus the elastic strain $\varepsilon^{\mathrm{el}}$.} \end{center} \end{figure} \subsection{Application to amorphous composites} The model presented above can be easily applied to the case of amorphous composites. A major hypothesis (already performed in the bare model) consists in assuming the homogeneity of the elastic properties. Only the effect of a plastic disorder will be considered in the following. To represent the amorphous composite we consider a fraction $\phi$ of inclusions randomly distributed in an amorphous matrix. Here the size of the inclusions is assumed to be given by the mesh size so that no correlation is considered in the spatial distribution of inclusions. The fraction of inclusions is defined by $\phi=N_{\mathrm{inc}}/N^2$ where $N_{\mathrm{inc}}$ is the number of hard inclusions and $N$ the linear size of the lattice. A bimodal distribution is used to account for the respective plastic thresholds of the matrix and the inclusions. For the amorphous matrix, the plastic threshold is drawn from a uniform distribution $[\overline{\sigma^{\mathrm{c}}}-\delta\sigma^{\mathrm{c}}, \overline{\sigma^{\mathrm{c}}}+\delta\sigma^{\mathrm{c}}]$. Here we choose $\overline{\sigma^{\mathrm{c}}}=1$ and $\delta\sigma^{\mathrm{c}}=0.5$. The inclusions can be either less or more ductile than the amorphous matrix. In the cases of interest presented above, their nature is often crystalline. We thus assume low fluctuations of the plastic properties of the inclusions and we consider that they are characterized by a constant plastic threshold, $\sigma^{\mathrm{c}}=\Sigma^{\mathrm{H}}$: all inclusions get the same yield stress and this value does not change after an inclusion has experienced plastic deformation. Here we restrict the scope to the case of hard particles: $\Sigma^H > \overline{\sigma^{\mathrm{c}}}$. In order to reduce the space of parameters we also consider that the typical plastic strain undergone by the inclusions is the same as in the amorphous matrix. \subsection{Overview of the simulations} Simulations were performed with sizes ranging from $N=16$ up to $N=256$, and a number $M=40$ of independent realizations of the disorder. The fraction of inclusions was varied between, $\phi=2.5 \times 10^{-4}$ and $\phi=0.99$. Different values of the contrast between inclusions and matrix were used : $\Sigma^{\mathrm{H}}=4, 10, 40$ and the value $\Sigma^{\mathrm{H}}=10^{8}$ was used to mimic infinitely hard particles. Most of the following discussion will focus on the case $\Sigma^H = 10$. \section{A size dependent effective yield stress\label{Size-effects}} \subsection{Amorphous matrix} We first discuss size effects in the case of a mere amorphous matrix, i.e., in the absence of hard particles. The ultimate yield strength or flow stress $\Sigma^{\mathrm{F}}$ of the material is defined as the maximum stress experienced by the material for a given simulation. In Fig.~\ref{fig:size-effect-matrix} we show the evolution of the ultimate yield strength with the system size. A slight decrease is observed. In the inset, we show that the evolution is consistent with a simple power law dependence: \begin{equation} \Sigma^{\mathrm{F}} = \Sigma^* + \frac{A}{N}\;, \end{equation} where $\Sigma^*$ is the flow stress in the limit of an infinitely large system and $A$ is a constant. Such a power-law dependence is consistent with the depinning-like nature of the model. In this context~\cite{BVR-PRL02,TPVR-PRE11,Wyart-PNAS14}, the plastic flow stress can be viewed as a critical threshold between a static phase (no plasticity) and a dynamic phase (plastic flow). The fluctuations of the depinning threshold measured over a finite length scale here simply reflect the divergence of the correlation length in the vicinity of a critical threshold $\xi \approx |f-f^*|^{-\nu}$. The present results are consistent with the rough estimate $\nu\approx 1$ obtained in previous works~\cite{TPVR-PRE11,Wyart-PNAS14}. Fig.~\ref{fig:sigmaVsNc0} gives another illustration of this critical-like behavior. This figure shows the variation of the standard deviation $\delta \Sigma^{\mathrm{F}}$ with the average flow stress $\Sigma^{\mathrm{F}}$. The variation is reasonably reproduced by an affine relationship $\delta \Sigma^{\mathrm{F}} = a(\Sigma^{\mathrm{F}} - \Sigma^*)$. This is consistent with the expected critical behavior $(\Sigma^* - \Sigma^{\mathrm{F}} ) \propto \delta \Sigma^{\mathrm{F}} \propto L^{-1/\nu}$. The intercept value $\Sigma^*$ can be seen here as the extrapolated value of the effective flow stress at infinite size. \begin{figure} \begin{center} \includegraphics[width=0.98\columnwidth]{fig3.jpg} \end{center} \caption{\label{fig:size-effect-matrix} (color online) Variation of the ultimate yield strength $\Sigma^{\mathrm{F}}$ with the system size $N$ for a mere amorphous matrix ($\phi=0$) with a yield stress $\sigma^{\mathrm{c}} \in [0.5; 1.5]$. The line corresponds to the power law expression $\Sigma^{\mathrm{F}} = \Sigma^* + \frac{A}{N}$. As shown in the inset this evolution is consistent with the numerical data.} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.98\columnwidth]{fig4.jpg} \end{center} \caption{\label{fig:sigmaVsNc0} (color online) Variation of the standard deviation $\delta\Sigma^{\mathrm{F}}$ of the ultimate yield strength $\Sigma^{\mathrm{F}}$ with $\Sigma^{\mathrm{F}}$ for the amorphous matrix with a yield stress $\sigma^{\mathrm{c}} \in [0.5; 1.5]$, for system sizes $N=16,\;32,\;64,\;128,\;256$. The standard deviation is obtained for $40$ realizations. As expected for a critical transition, a linear behavior is obtained. An extrapolation at zero standard deviation gives an estimate of the critical threshold, here the yield stress $\Sigma^*$ at infinite size.} \end{figure} Note that, independently of the system size, the values of the effective flow stress lie significantly above the simple average of the microscopic thresholds $\overline{\sigma^{\mathrm{c}}}=1$. \subsection{Amorphous composites} Second, we discuss the dependence of the ultimate yield strength on the fraction of inclusions and on the size of the system. {\em Size dependence \---} In Fig.~\ref{fig:sigmaVsN}, we show the size dependence observed for amorphous composites with volume fractions of inclusions ranging from $\phi=0$ to $\phi=0.16$. \begin{figure} \begin{center} \includegraphics[width=0.98\columnwidth]{fig5.jpg} \end{center} \caption{\label{fig:sigmaVsN} (color online) Variation of the ultimate yield strength $\Sigma^{\mathrm{F}}$ with the system size $N$ for the amorphous matrix ($\phi=0$) and amorphous composites with various fractions of hard inclusions. The yield stress of the amorphous matrix is $\sigma^{\mathrm{c}} \in [0.5; 1.5]$, the yield stress of the inclusions is $\Sigma^{\mathrm{H}} = 10$. Depending on $\phi$ the yield strength shows either a decreasing or an increasing trend with increasing system size.} \end{figure} For low fractions of hard inclusions, the behavior is similar to that obtained for the amorphous matrix. The yield strength decreases with increasing system size and converges towards a finite value for large system sizes. Surprisingly, the behavior is markedly different for large fractions of inclusions: the ultimate yield strength increases with increasing system size. At intermediate values of the fraction of inclusions, the evolution of the yield strength even appears to be non-monotonic. {\em Mixing law \---}In Fig.~\ref{fig:sigmaVsConc} we show the evolution of $\Sigma^{\mathrm{F}}$ with the fraction $\phi$ of inclusions of yield stress $\Sigma^{\mathrm{H}}=10$ for system sizes ranging from $N=16$ to $N=256$. The error bars indicate the standard deviation computed on the different realizations performed for a given pair of parameters $(\phi,N)$. A clear size-effect is observed. The curves obtained for different values of $N$ do not superimpose. The larger the system, the larger the reinforcement effect induced by the hard inclusions and the closer the effective yield strength to the value obtained from a simple linear mixing law: \begin{equation} \label{MixingLaw} \Sigma^{\mathrm{M}}(\phi, N) = (1-\phi)\Sigma^{\mathrm{A}}(N) + \phi \Sigma^{\mathrm{H}}\;, \end{equation} where $\Sigma^{\mathrm{A}}$ is the ultimate yield strength of the sole amorphous matrix and $\Sigma^{\mathrm{H}}$ the yield stress of the hard sites. Note that the value $\Sigma^{\mathrm{M}}$ obtained from a linear mixing law, known as the Voigt average in the context of homogenization is usually expected to be an upper bound~\cite{Torquato-book02}. While this statement is true for homogenization of linear properties such as conductivity or elasticity, it does not necessarily hold for non-linear properties such as fracture or plasticity. In the latter case, out-of-equilibrium mechanisms may allow the effective property to reach values above the Voigt bound~\cite{RVH-EJMA03,PVR-PRL13}. Although it often fails to reproduce quantitatively the experimental data, the simple linear mixing law~\cite{Leidner-JAPS74} remains widely used in material science to account for the effects of plastic reinforcement~\cite{Turcsanyi-JMSL88,Chen-JMCE98,Chen-JMS05}. Another feature, here emphasized in the inset of Fig.~\ref{fig:sigmaVsConc}, can be pointed out: for a given system size $N$, no reinforcement is observed below a threshold value $\phi_c(N)$ of the volume fraction of hard inclusions. The larger the system size $N$, the smaller the threshold value $\phi_c(N)$. Despite its simplicity (scalar model, perfect plasticity), the present model is thus characterized by a complex behavior. In particular it exhibits a clear size effect that can usually only be reproduced in the framework of more complex descriptions of plasticity such as strain-gradient based theories~\cite{Willis-JMPS04}. A key ingredient is here the account of the elastic interaction induced by the local plastic events. \begin{figure} \begin{center} \includegraphics[width=0.98\columnwidth]{fig6.jpg} \end{center} \caption{\label{fig:sigmaVsConc} (color online) Variation of the ultimate yield strength $\Sigma^{\mathrm{F}}$ with the fraction $\phi$ of hard inclusions for a yield stress $\sigma^{\mathrm{c}} \in [0.5; 1.5]$ of the matrix, a yield stress $\Sigma^{\mathrm{H}} = 10$ of the hard inclusions and for five different system sizes $N=16$, $N=32$, $N=64$, $N=128$, and $N = 256$. The same data are shown in the inset in semi-logarithmic scale.} \end{figure} \section{Hardening and localization\label{Hardening}} We now discuss in more details the plastic behavior of the model amorphous composites. In the following, we try to unveil the mechanisms at play in the hardening regime. We shall discriminate between two different effects, respectively associated to a structural evolution of the amorphous matrix and a concentration of the stresses on the hard particles. We then show a gradual localization of the plastic deformation on the weakest band of the material. \subsection{Stress-Strain curves} Figure~\ref{fig:stressStrain} displays the stress-strain curves obtained for four different values of the inclusion yield stress $\Sigma^{\mathrm{H}}=4,\;10,\;40,\;10^8$ (the latter case being meant to mimic infinitely hard inclusions) and for different volume fractions $\phi$ ranging from $0$ to $0.25$. Note that in order to emphasize the hardening regime the variation of the stress is represented versus the sole plastic strain. Two successive hardening regimes can be distinguished before the stress plateau corresponding to the flow stress is reached. The first one is related to the hardening of the amorphous matrix. The second one is directly induced by the presence of hard particles. \begin{figure} {\includegraphics[scale=0.25]{fig7a.jpg}} \vspace{-4pt} {\includegraphics[scale=0.25]{fig7b.jpg}} \vspace{-4pt} {\includegraphics[scale=0.25]{fig7c.jpg}} \vspace{-4pt} {\includegraphics[scale=0.25]{fig7d.jpg}} \caption{\label{fig:stressStrain} (color online) Stress-strain curves for a yield stress of the matrix $\sigma^{\mathrm{c}} \in [0.5; 1.5]$, volume fractions of hard sites $\phi = \{ 0, ... ,0.25\}$ and for a yield stress of the hard inclusions of $\Sigma^{\mathrm{H}} = 4$ (a), $\Sigma^{\mathrm{H}} = 10$ (b), $\Sigma^{\mathrm{H}} = 40$ (c), and $\Sigma^{\mathrm{H}} = 10^8$ (d).} \end{figure} \subsection{Statistical hardening of the amorphous matrix} In this subsection, hardening in the pure amorphous matrix is considered. At low plastic strain, a gradual hardening of the amorphous matrix takes place. This phenomenon which has been discussed in Refs.~\cite{BVR-PRL02,TPVR-CRM12} results from the progressive exhaustion of the weakest sites of the matrix. We show in Fig.~\ref{fig:trap-matrix-exhaustion} the gradual evolution of the distribution of the local plastic thresholds $P(\sigma^{\mathrm{c}})$ upon deformation in the case of the sole matrix. The larger the deformation, the narrower the distribution and the closer the mean to the upper bound value. This structural evolution can be understood in the following way. After plastic rearrangements, the sites are given a new plastic threshold drawn from the same random distribution as the initial ones. The systematic bias between the weak thresholds of the failing sites and the ``normal'' thresholds that replace them after deformation induces an evolutionary-like transient increase, reminiscent of self-organized criticality models~\cite{BakSneppen-PRL93}. It can be seen in Fig.~\ref{fig:stressStrain} that at low fractions of inclusions, this exhaustion mechanism is the only one to hold and hard particles do not contribute to the reinforcement. Indeed the stress-strain curve at low volume fractions of hard sites is identical to that of the pure amorphous matrix. \begin{figure} \begin{center} \includegraphics[width=0.98\columnwidth]{fig8.jpg} \end{center} \caption{\label{fig:trap-matrix-exhaustion} (color online) Evolution of the distribution of local plastic thresholds $P(\sigma^{\mathrm{c}})$ upon plastic deformation. A gradual exhaustion effect is observed until a stationary distribution is reached.} \end{figure} A complementary view of this statistical hardening is given in Fig.~\ref{fig:crit-matrix-exhaustion}. Here, instead of the local plastic thresholds, we show the evolution of the distribution of the effective thresholds $P(\sigma_c^{\mathrm{eff}})$ where $\sigma_c^{\mathrm{eff}} =\sigma^{\mathrm{c}} - \sigma^{\mathrm{el}}$. Indeed, following Eq.~\ref{criterion}, the local criterion for a given site $(i,j)$ can be rewritten as $\Sigma^{\mathrm{ext}} \leq \sigma_{ij}^{\mathrm{c}} - \sigma_{ij}^{\mathrm{el}}$. In other terms, the local thresholds are dressed by the internal stress. Following the evolution of the distribution upon deformation, we recover the hardening effect. Interestingly, even in the transient stage, one can identify a sharp front associated to the lower bound of the distribution. This directly corresponds to the emergence of a yield stress. The disordered system has self-organized and in the transient stage one can unambiguously define a yield stress that depends on an internal variable, the cumulated plastic strain. This also shows the dependence of the macroscopic plastic properties on the past mechanical history. \begin{figure} \begin{center} \includegraphics[width=0.98\columnwidth]{fig9.jpg} \end{center} \caption{\label{fig:crit-matrix-exhaustion} (color online) Evolution of the distribution of {\it effective} plastic thresholds $P(\sigma_c^{\mathrm{eff}})$ where $\sigma_c^{\mathrm{eff}} =\sigma^{\mathrm{c}} - \sigma^{\mathrm{el}} $ upon plastic deformation. The sharp lower front is associated to the emergence of a global yield stress. The latter gradually increases upon plastic deformation (hardening) until it reaches its final value (the stress plateau of the stress-strain curve).} \end{figure} \subsection{Inclusion hardening} When hard particles are present in the amorphous matrix, an additional hardening stage is observed. As seen in in Fig.~\ref{fig:stressStrain}, this second stage takes place at higher plastic strains than the matrix hardening stage. We observe that the higher the fraction of hard particles, the sooner the onset of the second hardening regime. Note also that below a certain fraction of hard inclusions, no second stage of hardening is observed. In comparison to the pure amorphous matrix, the initial distribution of plastic thresholds is bimodal in a composite. In the initial stage of the deformation, due to the high contrast of plastic thresholds, only sites of the amorphous matrix can experience plasticity. The plastic events induce internal stresses. Hard particles can sustain a level of internal stress much higher than that of the amorphous matrix and act here as a kind of internal skeleton bearing most of the stress exerted on the structure. Again it is of interest to follow the distribution of effective thresholds. In Fig.~\ref{fig:crit-exhaustion-composite16} we show the evolution observed for an amorphous composite with 16\% of hard particles of yield stress $\Sigma^{\mathrm{H}}=10$. We see that upon plastic deformation, the build-up of internal stress on hard particles has a clear effect: it tends to smear out the peak around $\Sigma^{\mathrm{H}}=10$. In the mean time the lower part of the distribution, in particular the sharp front that corresponds to the global yield stress keeps on increasing. This second hardening stage is much longer than the statistical hardening of the amorphous matrix. Stationarity is eventually obtained when the second peak has entirely disappeared. \begin{figure} \begin{center} \includegraphics[width=0.98\columnwidth]{fig10.jpg} \end{center} \caption{\label{fig:crit-exhaustion-composite16} (color online) Evolution of the distribution of {\it effective} plastic thresholds $P(\sigma_c^{\mathrm{eff}})$ where $\sigma_c^{\mathrm{eff}} =\sigma^{\mathrm{c}} - \sigma^{\mathrm{el}} $ upon plastic deformation in an amorphous composite with 16\% of hard particles of yield stress $\Sigma^{\mathrm{H}}=10$. The building of internal stresses gradually smears out the peak associated with hard particles. Conversely, the sharp front corresponding to the global yield stress increases upon plastic deformation longer than in the case of the sole amorphous matrix.} \end{figure} \subsection{Localization: no-slip bands} \label{sec:localization} In order to reveal the hardening mechanisms induced by the hard particles, we give a closer look at the spatial organization of the plastic strain field. In Fig.~\ref{fig:trap_rel_strain_maps} we show in the top row maps of the relative plastic strain $\varepsilon^{\mathrm{pl}}_{i, j}/\langle\varepsilon^{\mathrm{pl}}\rangle$ obtained after a long simulation for three concentrations of particles ($\phi=10^{-3}$, $10^{-2}$, $10^{-1}$) and in the bottom row the associated maps of plastic thresholds $\sigma^{\mathrm{c}}$ (in the final configuration of a long simulation) indicating the position of the hard sites. The low concentration case (panels (a) and (d) of Fig.~\ref{fig:trap_rel_strain_maps}, $\phi=10^{-3}$) gives a good illustration of the effect of adding hard sites on plastic deformation. We see that the plastic strain field is not homogeneous. In this example where only $3$ hard particles are present, we observe, as expected, that the hard particles are barely deformed. Interestingly, plastic deformation is also small along the bands at $\pm 45^\circ$ that intercept the hard sites. Plasticity is inhibited along a set of ``no-slip'' bands induced by the presence of hard particles. These bands orientated at $\pm45^\circ$ obviously reflect the symmetry of the quadrupolar elastic interaction discussed above. While the low fraction of hard inclusions shown in this example is not sufficient to induce any reinforcement, it gives a simple clue on the strengthening mechanism: hard particles inhibit the natural slip systems associated to the elastic kernel~\cite{TPRV-preprint15}. In the medium concentration case (panels (b) and (e), $\phi=10^{-2}$), the (relative) plastic strain field is more heterogeneous. One recovers patterns orientated at $\pm 45^\circ$ and it is possible to distinguish between two kinds of bands: bands containing hard sites are much less deformed than those not containing hard sites. In other words, the lattice of no-slip bands is much denser and only the sites not intercepted by these bands can easily undergo plastic deformation. In the high concentration case (panels (c) and (f), $\phi=10^{-1}$), the (relative) plastic strain field is highly heterogeneous and actually highly localized. Most of the plastic deformation concentrates onto one single band. This evolution is more clearly shown in Fig.~\ref{fig:strain_maps_Phi01} where we represented maps of the incremental plastic strain $\Delta \varepsilon^{\mathrm{pl}}_{i,j} = \varepsilon^{\mathrm{pl}}_{i,j}(t + \delta t) - \varepsilon^{\mathrm{pl}}_{i,j}(t) $ where $\delta t$ represents a few iteration steps such that $\langle \varepsilon^{\mathrm{pl}}\rangle (\delta t) = 2$ and $\langle \varepsilon^{\mathrm{pl}}\rangle (t) = 10$, $20$, $30$, $40$, $50$ and $60$. Upon deformation, plastic activity appears to become more and more localized. \begin{widetext} \begin{figure}[t] \begin{center} \includegraphics[height=0.3\textwidth]{fig11a.png}\hspace{-0.75cm} \includegraphics[height=0.3\textwidth]{fig11b.png}\hspace{-0.75cm} \includegraphics[height=0.3\textwidth]{fig11c.png} \end{center} \vspace{-0.5cm} \hspace{-0.9cm}(a)\hspace{4.375cm}(b)\hspace{4.375cm}(c)\\ \vspace{-0.25cm} \begin{center} \includegraphics[height=0.3\textwidth]{fig11d.png}\hspace{-0.75cm} \includegraphics[height=0.3\textwidth]{fig11e.png}\hspace{-0.75cm} \includegraphics[height=0.3\textwidth]{fig11f.png} \end{center} \vspace{-0.5cm} \hspace{-0.9cm}(d)\hspace{4.375cm}(e)\hspace{4.375cm}(f)\\ \caption{\label{fig:trap_rel_strain_maps} (color online) (a),(b),(c): maps of the relative plastic strain $\varepsilon^{\mathrm{pl}}_{i, j}/\langle \varepsilon^{\mathrm{pl}}\rangle$ for a system size $N=64$, a yield stress of hard sites $\Sigma^{\mathrm{H}} = 10$, and volume fractions of hard inclusions $\phi = 10^{-3}$, $10^{-2}$, $10^{-1}$ of hard sites,respectively. (d),(e),(f): maps of the associated final configurations of plastic thresholds $\sigma^{\mathrm{c}}$ . The positions of hard sites are visible in dark red. } \end{figure} \begin{figure}[t] \begin{center} \includegraphics[height=0.3\textwidth]{fig12a.png}\hspace{-0.75cm} \includegraphics[height=0.3\textwidth]{fig12b.png}\hspace{-0.75cm} \includegraphics[height=0.3\textwidth]{fig12c.png} \end{center} \vspace{-0.5cm} \hspace{-0.9cm}$\langle \varepsilon^{pl}\rangle=10$ \hspace{3.375cm}$\langle \varepsilon^{pl}\rangle=20$ \hspace{3.375cm}$\langle \varepsilon^{pl}\rangle=30$\\ \vspace{-0.25cm} \begin{center} \includegraphics[height=0.3\textwidth]{fig12c.png}\hspace{-0.75cm} \includegraphics[height=0.3\textwidth]{fig12d.png}\hspace{-0.75cm} \includegraphics[height=0.3\textwidth]{fig12e.png} \end{center} \vspace{-0.5cm} \hspace{-0.9cm}$\langle \varepsilon^{pl}\rangle=40$ \hspace{3.375cm}$\langle \varepsilon^{pl}\rangle=50$ \hspace{3.375cm}$\langle \varepsilon^{pl}\rangle=60$\\ \caption{\label{fig:strain_maps_Phi01} (color online) maps of incremental plastic strain $\Delta\varepsilon^{\mathrm{pl}}$ for a volume fraction of hard inclusions $\phi = 10^{-1}$, a system size $N= 64$, a yield stress of hard sites $\Sigma^{\mathrm{H}} = 10$, and different values of the average plastic strain $\langle \varepsilon^{pl}\rangle$. } \end{figure} \end{widetext} \subsection{Localization: the weakest band} We now try to correlate the plastic activity with the underlying structure, here represented by the landscape of plastic thresholds. As discussed above, plastic deformation tends to localize along bands orientated at $\pm 45^\circ$ that reflect the symmetry of the Eshelby quadrupolar elastic interaction. Due to statistical fluctuations, not all possible slip systems encounter the same number of hard particles. We define the weakest band $SB_{\mathrm{min}}$ and the strongest bands $SB_{\mathrm{max}}$ as the bands containing respectively the smallest and the largest amount of hard particles among the $2N$ possible slip systems. Here we take into account the two possible orientations. Note again that we consider periodic boundary conditions so that all slip systems are a priori \textit{equivalent}. We can now compute the fraction of plastic activity occurring in the various bands. In order to highlight the gradual development of the localization, we proceed as in Sec.~\ref{sec:localization}: we consider the evolution of the incremental plastic strain field $\Delta \varepsilon^{\mathrm{pl}}_{i,j} = \varepsilon^{\mathrm{pl}}_{i,j}(t + \delta t) - \varepsilon^{\mathrm{pl}}_{i,j}(t) $ with $\delta t$ a few iteration steps such that $\langle \varepsilon^{\mathrm{pl}}\rangle (\delta t) = 2$. In Fig.~\ref{fig:fracSB10} we show the evolution with the cumulated plastic strain $\langle \varepsilon^{\mathrm{pl}}\rangle$ of the fractions $f_{\mathrm{min}} = \Delta \varepsilon^{\mathrm{pl}} (SB_{\mathrm{min}})/ \langle \Delta \varepsilon^{\mathrm{pl}}\rangle$ and $f_{\mathrm{max}} = \Delta \varepsilon^{\mathrm{pl}} (SB_{\mathrm{max}})/ \langle \Delta \varepsilon^{\mathrm{pl}}\rangle$ of the incremental plastic strain borne by the weakest and the strongest bands respectively for different concentrations of hard particles. If the plastic strain field was uniformly spread on all bands, one would expect $f_{\mathrm{min}} = f_{\mathrm{max}} =1/N$ (and not $1/2N$ because of the redundancy between the two possible orientations at $+45^\circ$ and $-45^\circ$). In the case of the sole amorphous matrix, the weakest band deforms about twice as more than the strongest band: $f_{\mathrm{min}} \approx 2 f_{\mathrm{max}}$. For a fraction $\phi=10^{-2}$, the effect is a bit more pronounced but not spectacular yet, we observe $f_{\mathrm{min}} \approx 4 f_{\mathrm{max}}$. This ratio remains reasonably constant upon deformation. This is consistent with the typical heterogeneity observed in Fig.~\ref{fig:trap_rel_strain_maps}. Note that for such a concentration the number of particles falls strictly below the number of slip systems so that deformation can always find a band free of particles to develop. No significant reinforcement is expected in this case. Above some threshold, all slip systems are virtually blocked by hard particles. This is the case for the two concentrations $\phi=0.01$ and $\phi=0.16$ shown in Fig.~\ref{fig:fracSB10}. Here we see a dramatic effect: upon deformation, the weakest band bears a higher and higher fraction of the plastic activity. Eventually most of the plastic strain occurs within this weakest band. We thus observe a strong correlation between structure and plastic behavior: plastic deformation gradually concentrate onto the weakest slip system, characterized by the smallest amount of hard particles. \begin{figure}[t] \begin{center} \includegraphics[width=0.98\columnwidth]{fig13.jpg}\hspace{-0.75cm} \end{center} \caption{\label{fig:fracSB10} Fractions $f_{\mathrm{min}}$ and $f_{\mathrm{max}}$ of incremental plastic strain $\Delta \varepsilon^{\mathrm{pl}}$ borne by the weakest and the strongest slip systems $SB_{\mathrm{min}}$ and $SB_{\mathrm{max}}$ containing the smallest and the largest amount of particles, respectively.} \end{figure} \section{A simple analytical model\label{Analytical-Model}} The mechanism of reinforcement can thus be understood in the following way. Hard particles inhibit slip systems. No reinforcement occur until all slip systems are blocked. Above the associated threshold concentration, all slip systems are hindered by hard particles and plastic strain gradually localizes onto the weakest one, i.e., the one that contains the fewest hard particles. The macroscopic plastic behavior is thus controlled by the properties of this weakest band. In the following, we discuss these two aspects, elaborate a simple analytical model and compare its prediction with our simulations. Mathematical details are presented in a separated appendix. \subsection{Percolation} As discussed above, no reinforcement is expected until all slip systems are blocked by at least one particle. Here the two families of slip systems associated to the two directions at $\pm45^\circ$ should a priori be considered. For the sake of simplicity, we consider in the following only one of the two orientations. This approximation allows to recover a simple one-dimensional percolation problem. We assume here that the distribution of particles is not spatially correlated and take the volume fraction $\phi$ as the probability for one inclusion to be hard. The probability to have exactly $n$ hard inclusions in one randomly chosen diagonal is then \begin{equation} P(N_d = n) = \binom{N}{n} \phi^n (1-\phi)^{N-n}\;, \end{equation} where $N_d$ is the random variable counting the number of hard sites on a diagonal, $N_d = n$ is the event ``$n$ hard sites on a diagonal", $N$ is the number of sites in a diagonal, which is exactly the system size in the square lattice considered here. We recognize a binomial distribution. The probability of having at least $1$ hard inclusion on a diagonal is: \begin{equation} P(N_d \ge 1) = 1- (1-\phi)^{N}\;. \end{equation} There are $N$ diagonals with the same orientation. They are independent. Consequently, the probability to have at least $1$ hard inclusion on each diagonal is \begin{equation} \label{eq:probabBlocked} P(B) = \Bigl (1- (1-\phi)^{N}\Bigr )^N\;, \end{equation} where the letter $B$ stands for ``blocked''. This probability is the equivalent of the probability of percolation. It is plotted for different system sizes versus the volume fraction of hard inclusions in Fig.~\ref{fig:probaBlocked}. The probability of having at least one hard inclusion per diagonal increases with the volume fraction of hard inclusions until it reaches $1$. \begin{figure} \includegraphics[width=0.95\columnwidth]{fig14.jpg} \caption{\label{fig:probaBlocked} (color online) Variation of the probability $P(B)$ of having at least one hard inclusion on each diagonal, defined in Eq.~\ref{eq:probabBlocked}, with the volume fraction $\phi$ of hard inclusions for different system sizes $N=16$, $N=32$, $N=64$, $N=128$, $N=1024$. Inset: Variation of the critical fraction $\phi_c$ defined in Eq.~\ref{eq:phic} with the system size $N$. } \end{figure} The threshold for percolation, or here for all diagonals to be blocked, is the volume fraction $\phi$ for which the probability $P(B)$ is the steepest. In other words, the threshold for the transition corresponds to the volume fraction of hard inclusions for which the second derivative of $P(B)$ vanishes. This volume fraction is called critical and denoted $\phi_c$. It is equal to \begin{equation} \label{eq:phic} \phi_c(N) = 1-\frac{1}{(N+1)^{1/N}}\;. \end{equation} The inset of Fig.~\ref{fig:probaBlocked} shows the variation of the critical fraction $\phi_c$ with the system size $N$. The thresholding effect is nicely illustrated in Figure~\ref{fig:rescale} where we show the rescaled ultimate yield strength $(\Sigma^{\mathrm{F}}(\phi, N)-\Sigma^{\mathrm{A}})/ (\Sigma^{\mathrm{H}}-\Sigma^{\mathrm{A}})/\phi_c(N)$ versus the rescaled volume fraction $\phi/\phi_c(N)$, for different system sizes. This plot is to be compared with the inset of Fig.~\ref{fig:rescale} showing the same quantities without the rescaling by $\phi_c(N)$. In the main plot, the curves corresponding to different system sizes collapse onto a single master curve, showing that our interpretation of the transition is valid. \begin{figure} \includegraphics[width=0.95\columnwidth]{fig15.jpg} \caption{\label{fig:rescale} (color online) Variation of the rescaled yield strength $(\Sigma^{\mathrm{F}}(\phi, N)-\Sigma^{\mathrm{A}})/ (\Sigma^{\mathrm{H}}-\Sigma^{\mathrm{A}})/\phi_c(N)$ with the rescaled volume fraction $\phi/\phi_c(N)$ for a yield stress $\Sigma^{\mathrm{H}} = 10$ of hard inclusions and different system sizes $N=16$, $32$, $64$, $128$, $256$. Inset: Variation of $(\Sigma^{\mathrm{F}}(\phi, N)-\Sigma^{\mathrm{A}})/ (\Sigma^{\mathrm{H}}-\Sigma^{\mathrm{A}})$ with $\phi$ for the same systems.} \end{figure} \subsection{Yield stress of the weakest band} \subsubsection{Restriction to elastic line depinning} In section~\ref{Hardening}, plastic deformation was shown to concentrate onto one single band, the one containing the smallest amount of hard particles. It is thus natural to use the ultimate yield strength of that weakest band as an estimate for the ultimate yield strength of the whole amorphous composite. Ignoring the residual plastic strain undergone outside the band, the problem is thus reduced to a one-dimensional elastic depinning problem very similar to the propagation of a crack front in a random landscape~\cite{Schmittbuhl-PRL95,PVR-PRL13}. Indeed, if we denote $\varepsilon_i^{\mathrm{W}}=\varepsilon^{\mathrm{W}}(z_i)$ the plastic strain in the weakest band at location $z_i$ where $z$ is the distance along the band, any local plastic strain increment $\delta\varepsilon_i^{\mathrm{W}}$ induces along the band an internal stress proportional to an elastic kernel which is nothing but the restriction on a diagonal of the Eshelby quadrupolar stress defined in Eq.~\ref{quadrupolar-stress}. More specifically the internal stress at location $z_j$ induced by the plastic strain increment at location $z_i$ amounts to: \begin{eqnarray}\nonumber \delta\sigma_{ii}^{\mathrm{W}} &= & -A_0 \varepsilon_i^{\mathrm{W}}\;, \\ \delta\sigma_{ij}^{\mathrm{W}} &=&\frac{A_1}{(z_i-z_j)^2} \varepsilon_i^{\mathrm{W}}\;, \quad \mathrm{if} \quad i \ne j \;, \end{eqnarray} where $A_0$ and $A_1$ are positive constants. One recognizes here the elastic interaction associated to the trapping of an interfacial crack front~\cite{GaoRice-JAM89}. The determination of the effective toughness of an interfacial crack propagating in a random landscape (which also amounts to the critical threshold of a long-range elastic line) has recently been discussed in Ref~\cite{PVR-PRL13}. While the effective toughness can significantly exceed the simple arithmetic average of the microscopic properties when the disorder is highly fluctuating in the direction of propagation (strong pinning), a simple mixing law is recovered when the microscopic toughness is only slowly varying along the direction of propagation (weak pinning). In the present case, the hard sites are persistent, i.e., the value of their yield stress does not change upon deformation. Besides, the fluctuations of the local thresholds that characterize the amorphous matrix are weak compared with the yield stress of the hard sites. Weak pinning conditions can thus be considered and a simple mixing law used to compute the effective yield stress of the band. \subsubsection{How weak is the weakest band?} The effective yield stress of the weakest band $ \sigma_Y^W$ is thus simply written: \begin{equation} \Sigma^{\mathrm{W}} = \frac{1-m}{N} \Sigma^{\mathrm{A}} + \frac{m}{N} \Sigma^{\mathrm{H}} \;, \label{sigma_weakest_band} \end{equation} where $ \Sigma^{\mathrm{A}}$ is the yield stress of the amorphous matrix and $ \Sigma^{\mathrm{H}}$ that of the hard sites and $m$ is the number of hard sites in the band. The estimate of the ultimate yield strength $\Sigma^{\mathrm{F}}$ of the material is given by the ensemble average: \begin{equation} \Sigma^{\mathrm{F}} = \langle \Sigma^{\mathrm{W}} \rangle= \frac{1-\langle m\rangle}{N} \Sigma^{\mathrm{A}} + \frac{\langle m \rangle}{N} \Sigma^{\mathrm{H}} \;. \label{sigma_weakest_band_avg} \end{equation} where $\langle m \rangle$ is the average minimum number of hard sites on a diagonal of size $N$ for a fraction $\phi$ of hard sites. In the following we define $f=\langle m \rangle/N$, the effective fraction of hard sites in the weakest band. As it immediately appears from Eq.~\ref{sigma_weakest_band_avg}, within the weakest band approximation, the difference between the effective flow stress $\Sigma^{\mathrm{F}}$ and the mixing law estimate $\Sigma^{\mathrm{M}}$ directly stems from the difference between $f$ and $\phi$. The distribution of the number $m$ of hard sites is given by the binomial distribution of parameters $\phi$ and $N$. An exact formula for the average $\langle m\rangle$ of the minimal number of hard sites on a diagonal when $N$ diagonals are considered is given in the appendix. However, this formula contains an infinite sum and is not easy to handle. In order to estimate this minimal value we shall resort to an argument of extremal statistics. Depending on the value of $\phi$, the binomial converges at large $N$ either to a Gaussian or to a Poisson distribution. In the present case we are interested in the large deviations of the binomial distribution~\cite{Arratia-BMB89}. We use recent results on the general approximation of the binomial distribution~\cite{Blondeau-PhD11,Blondeau-DCC11} obtained in the context of cryptology studies: \begin{equation} \label{eq:blondeauf} P(m \leq f N) = \frac{\phi \sqrt{1-f}}{(\phi - f)\sqrt{2\pi N f}} e^{-N D(f||\phi)}\;, \end{equation} for $N \rightarrow \infty$ where $D(f||\phi)$ is the Kullback-Leibler divergence defined as: \begin{equation} D(f||\phi) = f \ln{\frac{f}{\phi}} + (1-f)\ln{\frac{1-f}{1-\phi}}\;. \end{equation} Here, the fraction $f$ of hard sites in the weakest band is estimated via a simple extremal statistics argument: \begin{equation} \label{eq:extremal-stat} P(m \leq f N) \approx \frac{1}{N}\;. \end{equation} Detailed calculations based on the asymptotic expansions given in Ref.~\cite{Blondeau-DCC11} are presented in the appendix. They allow us to obtain an estimate of the distance between the fraction $f$ (the fraction of hard sites in the weakest band) and the parameter $\phi$ of the binomial distribution (the mean fraction of hard sites): \begin{equation} \label{eq:resultf-main} f = \phi - \sqrt{\frac{2\phi (1-\phi)}{N} \log \frac{N}{\sqrt{2\pi}}} (1+ r_N)\;, \end{equation} where \begin{eqnarray} \nonumber r_N &= &- \frac{1}{2} \frac{\log (2h_N)}{2 h_N +1}\;,\\ h_N &= &\log \frac{N}{\sqrt{2\pi}} \;. \end{eqnarray} This immediately sets the distance of the flow stress $\Sigma^{\mathrm{F}}$ to the mixing law value $\Sigma^{\mathrm{M}}(\phi, N)$ obtained by Eq.~\ref{MixingLaw}: \begin{equation} \label{eq:result-vs-mixing-law} \Sigma^{\mathrm{F}}(\phi, N) = \Sigma^{\mathrm{M}}(\phi, N) - \left(\phi - f \right) \left[\Sigma^{\mathrm{H}} - \Sigma^{\mathrm{A}}(N) \right]\;. \end{equation} In particular we obtain a clear size effect: the convergence to the mixing law scales as $(\log N/N)^{1/2}$. This result is illustrated in Fig.~\ref{size-scaling} where we display the variation of the rescaled flow stress $\sigma_R(\phi,N)$ with $(\log N/N)^{1/2}$ for various values of the fraction $\phi$ of hard sites. The rescaled flow stress $\sigma_R(\phi,N)$ is defined as the reinforcement factor with respect to the flow stress of the amorphous matrix: \begin{equation} \label{eq:rescaled-reinforcement} \sigma_R(\phi,N) = \frac{\Sigma^{\mathrm{F}}(\phi,N) -\Sigma^{\mathrm{A}}(N) }{\Sigma^{\mathrm{H}} - \Sigma^{\mathrm{A}}(N)}\;. \end{equation} In the framework of our approximation, we expect $\sigma_R(\phi,N) = f((\phi,N)$. In particular, following Eq.~\ref{eq:resultf-main}, we should recover $\phi - \sigma_R(\phi,N) \propto (\log N/N)^{1/2}$. As shown in Fig.~\ref{size-scaling}, this scaling is nicely obeyed for moderate values of $\phi$. Departures from the predicted scaling behavior become significant at low values of $\phi$ and $N$, because the analytical estimation holds only in the limit of large $N$ and intermediate values of $\phi$. A numerical estimation of the average number $\langle m \rangle$ of hard sites in the weakest band is discussed in the appendix and shows that the approximation holds surprisingly well even for low values of $\phi$ and small system sizes. Beyond the prediction of the scaling behavior, the logarithmic corrections accounted for in Eq.~\ref{eq:resultf-main} allow us to test quantitatively our predictions for the reinforcement effect of hard inclusions in an amorphous matrix. In Fig.~\ref{ana-vs-num} we compare analytical predictions and simulation results for the rescaled flow stress $\sigma_R(\phi,N)$ with respect to the fraction of hard sites $\phi$. Again, our prediction of effective flow stress happens to be very precise for moderate values of $\phi$ and large system sizes. \begin{figure} \includegraphics[width=0.95\columnwidth]{fig16.png} \caption{\label{size-scaling} (color online) Size scaling of the rescaled flow stress $\sigma_R(\phi,N)$, defined in Eq.~\ref{eq:rescaled-reinforcement}, of amorphous composites with concentration of hard particles ranging from $\phi=0.04$ to $\phi=0.5$. Filled symbols on the vertical axis correspond to the infinite size limit, i.e., the result of the mixing law. Indicative dashed lines show the expected scaling behavior in $(\log N /N)^{1/2}$.} \end{figure} \begin{figure} \includegraphics[width=0.95\columnwidth]{fig17.png} \caption{\label{ana-vs-num} (color online) Effect of the concentration of hard particles on the rescaled flow stress of amorphous composites for different system sizes $N=16$, $32$, $64$, $128$, $256$. The straight line corresponds to the mixing law expected at infinite size. The dashed lines are the analytical predictions of Eq.~\ref{eq:rescaled-reinforcement} accounting for logarithmic corrections.} \end{figure} \section{Conclusion\label{Conclusion}} The plastic behavior as described in the mesoscopic simulations shows two types of system size dependence. The first type corresponds to an effect of the amorphous matrix only and results from the critical character of the yielding transition. In this case, the ultimate yield strength decreases with an increasing system size, as $1/N$. This system size dependence has already been addressed in Refs.~\cite{TPVR-PRE11,Wyart-PNAS14}. A similar critical behavior has recently been advocated in the related framework of compressive strength of brittle heterogeneous materials~\cite{WGGAV-PNAS14} The second type of size effect is specific to the composite material. Below a critical volume fraction of hard inclusions depending on the system size, no hardening behavior of the second type is observed. Above this critical volume fraction, the hardening behavior depends on the system size: the ultimate yield strength increases with an increasing system size, as $-(\log{N}/N)^{1/2}$. We showed that the thresholding effect observed in the simulations is close to a percolation transition. We also showed that during this second hardening regime, most of the plastic strain is concentrated onto the weakest band. Therefore, we proposed a simple model to describe the dependence of the ultimate yield strength $\Sigma^{\mathrm{F}}$ on the system size and the volume fraction. This model is based on the assumption that the weakest band bears all the plastic strain and governs the ultimate yield strength $\Sigma^{\mathrm{F}}$ of the entire system. The ultimate yield strength $\Sigma^{\mathrm{F}}$ is then given by a combination of the yield strength of the pure matrix and of the hard inclusions, weighted respectively by the fraction of matrix sites and of hard inclusions in the weakest band. Using extremal statistics arguments, we proposed an analytical estimate of the average number of hard inclusions in the weakest band in the limit of large system sizes. The comparison of the analytical estimate with the simulation results is satisfactory and our model consequently makes a direct link between the structure, represented by the plastic threshold on each site, and the mechanical behavior. \begin{acknowledgments} DV acknowledges Anne Canteaut for the indication of Refs.~\cite{Blondeau-PhD11,Blondeau-DCC11}. CL acknowledges Eric Lu\c con for the help in deriving the exact formula of the average minimum of hard sites on a diagonal. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \input{sections/1_introduction} \section{Related Work} \label{related_work} \input{sections/2_related_work} \section{Methodology} \input{sections/3_methodology} \label{methdology} \section{Experimental Evaluations} \input{sections/5_experimental_setup} \section{Results \& Discussions} \input{sections/6_result_discussion} \section{Conclusions} \input{sections/7_conclusion_future_work} { \bibliographystyle{ieee} \subsection{Inductive Transfer Learning} Given the fact that BreakHis~\cite{spanhol2016dataset} is a small-scale and class-imbalanced dataset, this work hypothesizes a constraint case of inductive transfer for representation learning by initializing encoder ImageNet~\cite{deng2009imagenet} pre-trained weights. In this work, the inductive transfer (i) helps to obtain improved performance on the downstream task of malignancy classification and (ii) enables self-supervised pre-training using the proposed method on the small-scale dataset \subsection{Self-supervised Method - Magnification Prior Contrastive Similarity} \label{ssl} \begin{comment} Recent advances in Contrastive Joint Embedding Embedding Architecture \& Method (JEAM) SimCLR~\cite{chen2020simple}-\cite{chen2020big}, and other methods are robust for representation learning and obtaining state-of-the-art performance. This work hypothesizes that inducted humans prior suppress the potential of networks to learn in a self-supervised manner more independently towards imitating biological intelligence. The limitation imposed due to the needs of strong human prior is more recognizable considering beyond the natural visual concept based domains like medical imaging, microscopic images of pathology, and others. Human knowledge and perception of such domains are relatively limited and fail to compile domain-specific characteristics to create effective transformations for distorted views. Specifically, BreakHis dataset~\cite{spanhol2016dataset} contains Whole Slide Images (WSI) of four magnifications and affected regions present only on some parts of WSI. Further, the affected region is not necessarily located in the center of WSI. Applying transformations from natural visual concept domains to BreakHis does not tend to learn optimal representations because visual and textural properties are very different, including location, size, shape, background-foreground, and concrete definition of objects. Thus network needs to learn representations by self-attention on affected regions across magnifications that are invariant to location, global context, \& several other geometric characteristics. So stronger human prior prevents networks from learning and focusing on self-attention. Knowing the fact from the literature that human-prior is required for the current state of self-supervised methods, this work focuses on i) exploiting self-supervision signal \& prior from data, i.e., magnification factor to reduce the human prior, and ii) enabling representation learning on small-scale dataset~\cite{spanhol2016dataset} by utilizing transfer learning and using augmentation based transformation in a specific arrangement. \end{comment} Magnification Prior Contrastive Similarity (MPCS) method formulates self-supervised pre-training to learn representations on microscopic Histopathology WSI without labels on small-scale data. The main objective of MPCS is to lower the amount of labeled data needed for the downstream task to address challenges in supervised learning. MPCS construct pairs of distinct views considering characteristics of microscopic histopathology WSI (H-WSI) for contrastive similarity-based pre-training. Microscopic H-WSI structural properties are different from natural visual macroscopic images~\cite{deng2009imagenet} (vehicles, cats, or dogs) in terms of location, size, shape, background-foreground, and concrete definition of objects. Unlike SimCLR~\cite{chen2020simple} where pairs of distinct views from the input image is constructed by human-centered augmentations, MPCS constructs pair of distinct views using pair sampling methods based on the signal from data itself i.e. magnification factor in BreakHis~\cite{spanhol2016dataset}. Two H-WSI from different magnification factors of the same sample makes a pair. Utilizing prior from data~(magnification factor) enables meaningful contrastive learning on histopathology H-WSI and reduces dependency over human inducted prior. Further, tumor-affected regions in H-WSI are characterized by format and highly abnormal amounts of nuclei. Such affected regions are promising in all the H-WSIs of different magnification for the same sample. Thus, affected regions being common and size invariant in positive pair of a sample allow learning contrastive similarity by region attentions. \begin{figure}[t] \centering \includegraphics[width =0.6\linewidth]{sections/figures/ssl_method_mpcs.png} \vspace{-0.2cm} \caption{Magnification Prior Contrastive Similarity method explained} \label{fig:ssl_mpcs} \vspace{-4mm} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width =0.6\linewidth]{sections/figures/human_prior_strategy.png} \vspace{-0.2cm} \caption{Strategies for pair sampling based on inducted Human Prior(HP). Added measure to prevent mode collapse in pre-training by $1^{st} view \neq 2^{nd} view$ in all strategies.} \label{fig:human_prior} \vspace{-5mm} \end{figure} Current work also hypothesizes that reduced human-prior in the pre-training method provides an enhanced degree of freedom to the method that can increase the potential of the network to learn efficient representations in a self-supervised approach. To investigate, three strategies for pair sampling are formulated based on inducted human prior. The number of human decisions defines the level of inducted human-prior (HP) during pair sampling. As explained in Figure~\ref{fig:human_prior}, in Fixed Pair, decisions to choose magnification factor for both views are by human, thus making strong human-prior. In Ordered Pair, only the second view of the pair is chosen by a human using a look-up table, making weaker human prior. In Random Pair, no human prior inducted and magnification factor for both views are sampled randomly. Further, Figure~\ref{fig:degrre_of_freedom} demonstrates the Degree of Freedom (DoF) for the method where the Fixed Pair strategy provides no DoF, Order Pair provides one DoF, and Random Pair provides 2 DoF to the method. In MPCS, to formulate a batch of $2N$ views, randomly sampled batch of $N$ sets of input $X=\{X^{(1)}, X^{(2)}, ..., X^{(N)}\}$ are considered where each set of input $X^{(i)}=\{x^{(i)}_{40}, x^{(i)}_{100}, x^{(i)}_{200}, x^{(i)}_{400}\}$ contains the images corresponding to four magnification factors. Positive pair of views is constructed based on a selection strategy of pair sampling which contains two views from the same example of different magnification. Further, similarity maximizes (loss minimizes) by an objective defined by the contrastive loss in Eq.~\ref{eq:loss}). MPCS is demonstrated in Figure~\ref{fig:ssl_mpcs} and components are explained below. \begin{figure}[!ht] \centering \includegraphics[width =0.8\linewidth]{sections/figures/magnification_prior_contrastive_similarity_degree_of_freedom_v2.png} \vspace{-0.2cm} \caption{Relation between inducted Human Prior (HP) of magnification and Degree of Freedom (DoF) for method} \label{fig:degrre_of_freedom} \vspace{-4mm} \end{figure} \begin{itemize} \item A \textit{domain specific human prior} module $P_h(X \to X_\mathrm{MF})$: $X=\{x_{40}, x_{100}, x_{200}, x_{400}\}$ that exploits supervision signal from data i.e. magnification and samples the two views $X_\mathrm{MF}=$(\textbf{$x_\mathrm{MF1}$}, \textbf{$x_\mathrm{MF2}$}) of different magnifications to construct pair based on employed strategy of pair sampling shown in step 1 of Figure~\ref{fig:ssl_mpcs}. \vspace{-0.2cm} \item A \textit{uniform stochastic transformation} based module $\displaystyle \mathcal{T_U}(X_\mathrm{MF} \to \tilde{X}_\mathrm{MF})$ that uniformly transforms both views from $X_\mathrm{MF}=$(\textbf{$x_\mathrm{MF1}$}, \textbf{$x_\mathrm{MF2}$}) to $\tilde{X}_\mathrm{MF}=$ (\textbf{$\tilde{x}_\mathrm{MF1}$}, \textbf{$\tilde{x}_\mathrm{MF2}$}) of positive pair by sampled augmentation transformations scheme shown in step 2 of Figure~\ref{fig:ssl_mpcs}.\vspace{-0.2cm} \item A neural network \textit{base encoder} \textit{f}(·) which yields representations from transformed views of pair. It obtains $h_\mathrm{MF1}$ = \textit{f}($\tilde{x}_\mathrm{MF1}$) = encoder-network($\tilde{x}_\mathrm{MF1}$) and $h_\mathrm{MF2}$ = \textit{f}($\tilde{x}_\mathrm{MF2}$) = encoder-network($\tilde{x}_\mathrm{MF2}$) where $h_\mathrm{MF1}$, $h_\mathrm{MF2} \in \mathbb{R}^d$ are the output after the respective average pooling layers, shown in step 3 of Figure~\ref{fig:ssl_mpcs}.\vspace{-0.2cm} \item A small-scale \textit{MLP projection head} \textit{g}(·) that maps representations to the latent space where contrastive loss is applied, shown in step 4 of Figure~\ref{fig:ssl_mpcs}. A multi-layer perceptron with single hidden layer to obtain $z_\mathrm{MF1}$ = \textit{g}($h_\mathrm{MF1}$) = $W^{(2)}\sigma(W^{(1)}{h_\mathrm{MF1}})$ and $z_\mathrm{MF2}$ = \textit{g}($h_\mathrm{MF2}$) = $W^{(2)}\sigma(W^{(1)}{h_\mathrm{MF2}})$ where $\sigma$ is ReLU.\vspace{-0.2cm} \item A \textit{contrastive loss function}, normalized temperature-scaled cross entropy loss (NT-Xent) from SimCLR is defined for a contrastive prediction, shown in step 5 of Figure~\ref{fig:ssl_mpcs}. For given a set \textbf{$\tilde{x}_{k}$} including a positive pair examples \textbf{$\tilde{x}_\mathrm{MF1}$} and \textbf{$\tilde{x}_\mathrm{MF2}$}, the contrastive prediction task tends to find \textbf{$\tilde{x}_\mathrm{MF2}$} in \{${\mathbf{\tilde{x}_{k}}}\}_{k\neq MF1}$ for the given \textbf{$\tilde{x}_\mathrm{MF1}$}. \end{itemize} The loss function for a positive pair of examples (MF1, MF2) is defined as \begin{equation} L_\mathrm{MF1,MF2} = -\log \dfrac{\exp(sim(\boldsymbol{z}_\mathrm{MF1},\boldsymbol{z}_\mathrm{MF2})/\tau)} {\sum_{k =1}^{2N} 1_{[k \neq MF1]} exp(sim(\boldsymbol{z}_\mathrm{MF1},\boldsymbol{z}_{k})/\tau)} \label{eq:loss} \end{equation} \begin{comment} \begin{equation} sim(\boldsymbol{z}_\mathrm{MF1},\boldsymbol{z}_\mathrm{MF2}) = \dfrac{\boldsymbol{z}^T_\mathrm{MF1} \boldsymbol{z}_\mathrm{MF2}} {\parallel sim(\boldsymbol{z}_\mathrm{MF1}) \parallel \parallel sim(\boldsymbol{z}_\mathrm{MF2}) \parallel} \vspace{-1em} \end{equation} \begin{equation} sim(\boldsymbol{z}_\mathrm{MF1},\boldsymbol{z}_{k}) = \dfrac{\boldsymbol{z}^T_\mathrm{MF1} \boldsymbol{z}_{k}} {\parallel sim(\boldsymbol{z}_\mathrm{MF1}) \parallel \parallel sim(\boldsymbol{z}_{k}) \parallel} \end{equation} \end{comment} In Eq.~(\ref{eq:loss}), where $1_{[k \neq MF1]} \in {0, 1}$ is an indicator evaluating to 1 if $k \neq i$. \subsection{Datasets} \subsubsection{BreakHis} BreakHis~\cite{spanhol2016dataset} dataset consists of 2,480 benign and 5,429 malignant histopathological microscopic images from 82 patients at four magnification levels (40×, 100×, 200×, 400×). Each image in the BreakHis dataset is of size 700×460, stained with hematoxylin and eosin (HE). Following the previous works, two evaluation metrics are used, image-level accuracy(ILA) and patient-level accuracy (PLA). PLA shows patient-wise classification performance, calculated as the mean over total no. of patients using patient-score. Patient-score is correctly classified images of the patient over a total number of images of that patient. ILA disregards patient-level details and thus serves as standard image classification accuracy. \subsubsection{BACH} The second dataset, Breast Cancer Histology Images (BACH)~\cite{aresta2019bach} is from the ICIAR2018 Grand Challenge and contains 400 histopathology slides. The BACH dataset has four classes, normal, benign, in-situ, and invasive. The slide size is relatively large, 2048 × 1536 pixels; thus, patches of size 512x512. Two evaluation metrics, patch-wise accuracy and image-wise accuracy, are used, whereas image-wise accuracy is calculated based on majority voting over the patches of the respective image. \subsubsection{Breast Cancer Cell Dataset} The third dataset, Breast Cancer Cell Dataset~\cite{gelasca2008evaluation} is from the University of California, Santa Barbara Biosegmentation Benchmark. This dataset contains 58 HE-stained histopathology 896x768 size images of breast tissue, of which 26 are malignant, and 32 are benign. Patches of size 224x224 were created, and image-wise accuracy was calculated using majority voting over patches of the respective image. \subsection{Encoder Architectures} In this current work, the proposed method MPCS investigated two different CNN encoder architectures. ResNet-50~\cite{yu2017dilated} and Efficient-net b2~\cite{tan2019efficientnet} are used for pre-training and fine-tuning. SSL-specific MLP projection head used for Efficient-net b2 is three layers network of 2048-1204-128 units, whereas ResNet-50 is the most common backbone encoder, used projection head adapted from SimCLR, having 1024-128 units. encoder and projection heads are demonstrated in Fig.~\ref{fig:ssl_mpcs}. \subsection{Training Protocol} This section shares parameter configurations used in pre-training and fine-tuning. \subsubsection{SSL pre-training} Self-supervised pre-trainings of both encoders take place on the BreakHis dataset for 1000 epochs with temperature parameter $0.01$, learning rate 1e-05, and a set of augmentation methods such as color-jitter, flip, and rotation. Efficient-net b2 encoder pre-trained using Adam optimizer with a batch size of 128 and image input of $(341, 341)$. However, ResNet-50 adapted standard configurations of self-supervised practices and pre-trained using the LARS optimizer with a batch size of 1024 and input image size of 224x224. \subsubsection{Fine-tuning} The common training configurations across datasets for both encoders are learning rate of 2e-05, batch size of 32, image input of 224x224, augmentations methods such as random crop, flip, affine, rotation, and color-jitter, and using adam as optimizer. A dropout of 0.3 is used in the fully connected layer. \subsection{Experimentation Details} \begin{comment} As also mentioned in Section~\ref{related_work} careful analysis shows the following weakness in mentioned data-split strategy: \begin{enumerate} \item The stated data-split strategy does not have any mechanism to ensure the correctness of mean performance evaluation where contribution made by each data-point is ensured in equal or at least in one trial of experiment out of 5. The same is applicable to learning capacity. \item The stated strategy does not have any mechanism to reflect the data distribution of imbalanced classes during data-split in each trial out of 5. \end{enumerate} The reasons behind the weakness of 1) and 2) are the following: \begin{itemize} \item Repeated yet any independent random trial of data-splits does not endorse theoretical guarantee for selection of mutually exclusive data-points in the test set across the trials because any two consecutive trials are conditionally independent. \item Specifically chosen ratio of 70:30 for train-test data splits repeated over 5-times forces some data-points to occur in the test set with more frequency than others. \end{itemize} To cater above-stated issues in the experimentation strategy of dataset division, an improved strategy needs to be formulated which can guarantee the following two aspects: \begin{enumerate} \item Mean performance calculation must be contributed by every data-point of the dataset in an equal amount of participation. \item Skewed data distribution due to class imbalances must reflect in every test set from each trial to get real-world agnostic performance (more malignant cases than benign in the dataset). \end{enumerate} \end{comment} To ensure the reliability and consistency of the models, this work follows 5-cross validation data split scheme. This is applied to all three datasets in which each fold contained 20\% data, following class distribution from whole data. Four out of five folds are used for training \& validation, and the remaining one for testing. Thus all the results reported are in terms of mean value with standard deviation. In the above-stated 5-cross validation settings, both backbone encoders, ResNet-50 and Efficient-net b2, are being pre-trained on the first dataset BreakHis with all three variants (ordered pair, random pair, and fixed pair) of the proposed SSL method MPCS for learning domain-specific representations. Further, downstream-task-specific fine-tuning experiments are carried out to investigate the impact of learned representations for all three datasets e.g., BreakHis, BACH, and Breast Cancer Cell. Following are the details of experimentation for each dataset. Table~\ref{tab:experiments_details} describes fine-tuning experiments for the first dataset BreakHis. All the mentioned experiments for malignancy classification are conducted for both encoders, Efficient-net b2 and ResNet-50 for all four magnifications (40X, 100X, 200X, and 400X). Experiments (Exp-1 to Exp-4) in \textit{Limited Labeled Data Setting} evaluate model performance while using only 20\% labels, whereas experiments (Exp-5 to Exp-8) in \textit{Fully Supervised Setting} use all labels. The sole objective is to compare the performance of models pre-trained on proposed MPCS methods against ImageNet~\cite{deng2009imagenet} pre-trained model to analyze the effect of learned representations. Preferred names in discussion used as MPCS-X for ImageNet $\to$ MPCS-X. Experiments(Exp-9 to Exp-12) for the second dataset BACH~\cite{aresta2019bach} are described in Table~\ref{tab:exp_bach} based on the ResNet-50 encoder. All the images are divided into 512X512 size patches; thus, performance is measured patch-wise and image-wise (using majority voting suggested in~\cite{vesal2018classification}). The major objectives are 1) evaluating pre-trained models from the proposed method against the ImageNet-based transfer learning approach~\cite{aresta2019bach} 2) evaluating the ability to learn downstream tasks with limited labels ranging from 5\% to 100\% of labels from the train data portion. Finally, to evaluate the effect of learned domain-specific representations on small-scale data, a series of fine-tuning (all layers trained) and linear evaluation experiments (only fully-connected layers trainable) Exp-13 to Exp-16 are conducted Breast Cancer Cell dataset and described in Table~\ref{tab:exp_bisque}. Similar to BACH dataset, images for this dataset were also divided into sizes 224X224 for training and test, and performance was measured likewise. \begin{table}[t] \caption{Experiment details for BreakHis dataset} \vspace{-0.3cm} \label{tab:experiments_details} \resizebox{\columnwidth}{!}{% \begin{tabular}{ccccc} \hline \multicolumn{1}{c|}{\multirow{2}{*}{No.}} & \multicolumn{1}{c|}{\multirow{2}{*}{Pre-training Method}} & \multicolumn{1}{c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}BreakHis data\\ (\%)for SSL\end{tabular}}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Finetuning on \\ BreakHis Dataset\end{tabular}} \\ \cline{4-5} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Train (\%)} & Test (\%) \\ \hline \multicolumn{5}{c}{Limited Labeled Data Setting} \\ \hline \multicolumn{1}{c|}{Exp-1} & \multicolumn{1}{c|}{ImageNet} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{20\%} & 20\% \\ \multicolumn{1}{c|}{Exp-2} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Fixed Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{20\%} & 20\% \\ \multicolumn{1}{c|}{Exp-3} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Ordered Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{20\%} & 20\% \\ \multicolumn{1}{c|}{Exp-4} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Random Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{20\%} & 20\% \\ \hline \multicolumn{5}{c}{Fully Supervised Data Setting} \\ \hline \multicolumn{1}{c|}{Exp-5} & \multicolumn{1}{c|}{ImageNet} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{80\%} & 20\% \\ \multicolumn{1}{c|}{Exp-6} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Fixed Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{80\%} & 20\% \\ \multicolumn{1}{c|}{Exp-7} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Ordered Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{80\%} & 20\% \\ \multicolumn{1}{c|}{Exp-8} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Random Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{80\%} & 20\% \\ \hline \end{tabular}% } \vspace{-2mm} \end{table} \begin{table}[t] \caption{Experiment details for BACH dataset} \label{tab:exp_bach} \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|cc} \hline & & \multicolumn{2}{c}{Fine-tuning on BACH dataset} \\ \cline{3-4} \multirow{-2}{*}{No.} & \multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}Pre-training \\ method/weights\end{tabular}} & \multicolumn{1}{c|}{Labels(\%) from Train Data} & Test data (\%) \\ \hline Exp-9 & ImageNet~\cite{vesal2018classification}-re-implement & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 100\% (~80\% train data)}} & {\color[HTML]{1E1E1E} 20\%} \\ Exp-10 & MPCS-Fixed Pair (BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} {[}5\%, 10\%, 20\%, 40\%, 60\%, 80\%, 100\%{]}}} & {\color[HTML]{1E1E1E} 20\%} \\ Exp-11 & MPCS-Ordered Pair(BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} {[}5\%, 10\%, 20\%, 40\%, 60\%, 80\%, 100\%{]}}} & 20\% \\ Exp-12 & MPCS-Random Pair(BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} {[}5\%, 10\%, 20\%, 40\%, 60\%, 80\%, 100\%{]}}} & {\color[HTML]{1E1E1E} 20\%} \\ \hline \end{tabular}% } \vspace{-3mm} \end{table} \begin{table}[t] \caption{Experiment details for Breast Cancer Cell Dataset} \label{tab:exp_bisque} \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|cc|cc} \hline & & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Fine-tuning on \\ Breast Cancer Cell Dataset\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Linear-evaluation on \\ Breast Cancer Cell Dataset\end{tabular}} \\ \cline{3-6} \multirow{-2}{*}{No.} & \multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}Pre-training \\ method/weights\end{tabular}} & \multicolumn{1}{c|}{Train data(\%)} & Test data (\%) & \multicolumn{1}{c|}{Train data(\%)} & \multicolumn{1}{l}{Test data(\%)} \\ \hline Exp-13 & MPCS-Fixed Pair (BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & {\color[HTML]{1E1E1E} 20\%} & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & {\color[HTML]{1E1E1E} 20\%} \\ Exp-14 & MPCS-Ordered Pair(BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & 20\% & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & 20\% \\ Exp-15 & MPCS-Random Pair(BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & {\color[HTML]{1E1E1E} 20\%} & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & {\color[HTML]{1E1E1E} 20\%} \\ \hline \end{tabular}% } \vspace{-3mm} \end{table} \subsection{Self-supervised method MPCS demonstrates label efficiency} \begin{figure}[t] \centering \includegraphics[width =0.6\linewidth]{sections/figures/SSL_label_wise_compare_bach.png} \caption{Comparison with DPCL~\cite{ciga2022self} for label efficiency on classification task on BACH dataset} \label{fig:ssl_dpcl} \vspace{-7mm} \end{figure} All three variants of the proposed method MPCS demonstrate label efficiency on downstream tasks. Results in Table~\ref{ft_20} On BreakHis dataset, MPCS fine-tuned models obtain (significant improvement p \textless 0.01) proportionally bigger margins of improvement of (2.52±0.02)\% over ImageNet pre-trained model while only 20 \% labels are used for all magnification scales. These results match the performance of other state-of-the-art methods mentioned in Table~\ref{ft_80_sota} which have been trained on 100\% labels. Following the trend, MPCS pre-trained models consistently outperform on the BACH dataset compared with the recent contrastive learning-based method DPCL~\cite{ciga2022self} in a complete range of labels from 5\% to 100\%, shown in Figure~\ref{fig:ssl_dpcl}. \subsection{Data prior enables self-supervision on small-scale datasets} The proposed MPCS method enables self-supervised representation learning on the small-scale dataset by extensively using a data prior (supervision signal from data) e.g. magnification factors (40X, 100X, 200X, and 400x). It decreases the dependence on human-curated priors e.g. augmentation method choices during self-supervised pre-training. \subsection{MPCS learns robust self-supervised representations} \begin{figure}[!ht] \centering \includegraphics[width =0.8\linewidth]{sections/figures/tsne_all_pair.png} \caption{tsne-visualization of the features from MPCS model (on BreakHis) without fine-tuning. Blue(benign); red(malignant). } \label{fig:tsne} \vspace{-4mm} \end{figure} Explicitly focusing on the robustness of learned representations, Figure~\ref{fig:tsne} strongly supports the fact that MPCS pre-trained models can capture and learn discriminative features across the classes during the self-supervised pre-training phase itself without knowing about actual human-provided labels. It is worth mentioning that data points of different classes are easily separable by either linear or non-linear boundaries for all four magnifications for all variants of the MPCS method. The CAM of fine-tuned models depicted in Figures~\ref{fig:cam_breakhis} and ~\ref{fig:cam_bach} also indicate that MPCS pre-trained models activate regions of interest (dark red colors show strong activation) more efficiently than the ImageNet pre-trained model for BreakHis and BACH datasets. \subsection{Preliminary support for the hypothesis about reducing human inducted priors} The MPCS-Ordered Pair inducts weaker human-prior in pair sampling. Thus, the MPCS method obtains one DoF for randomly choosing the first input view. In comparison, the MPCS-Fixed Pair, which inducts stronger human prior by choosing both views, 200x and 400x by human-prior, gives zero DoF to the MPCS method. The MPCS-Random Pair, in which the MPCS method obtains the highest degree of freedom since the human-prior is absent. \begin{figure}[!ht] \centering \includegraphics[width =0.6\linewidth]{sections/figures/degree_of_freedom_ILA_PLA_extended.png} \caption{Comparison for human priors - Indicates weaker human-prior(moderate DoF for method) outperforms stronger and no human-prior during limited (20\% labels) data\\} \label{fig:degree_of_freedom_ILA_PLA} \vspace{-8mm} \end{figure} Figure~\ref{fig:degree_of_freedom_ILA_PLA} explains the trends when fewer labels are used, that both encoders that weaker human-prior based pair sampling tends to outperform over extreme cases of stronger human prior or the absence of it. However, it requires detailed explorations of different datasets and tasks. Similar patterns were also observed on the BACH dataset that the ordered pair outperforms over other two variants. \section{Supplementary Material} \subsection{Self-supervised methods learn magnification invariant representations} The MPCS methods not only outperform for magnification-specific tasks stated in Tables~\ref{ft_20} and~\ref{ft_80_sota} but representations learned through the proposed method also demonstrate a consistent edge in classification performance in cross-magnification evaluation over the ImageNet model. These experiments were conducted on the Efficient-net b2 encoder only to prevent additional computation usage. However, the ResNet-50 encoder can be benchmarked, if needed. \input{sections/tables/trained_cross_magnification} \input{sections/tables/eval_cross_magnification} Table~\ref{x_mag_type1} shows (type-1 mean cross magnification evaluation) mean cross-magnification accuracy where the model is evaluated on other magnifications except the magnification on which the model was trained. The MPCS-Order Pair methods outperform the ImageNet model and other methods with a mean cross-magnification ILA of 80.99\% and PLA of 81.49\% when the model was trained on 40x magnification and evaluated on 100x, 200x, and 400x. Whereas MPCS-Random Pair outperforms with mean cross-magnification ILA 84.84\% and PLA 82.97\% when trained on 200x and ILA 84.83\% and PLA 83.76\% when trained on 400x. For 100x, MPCS-Ordered Pair obtains ILA 80.99\%, and MPCS-Random Pair obtains PLA 85.05\%. Further, Table~\ref{x_mag_type2} evaluates the mean performance of models trained on other magnifications except on which evaluation is performed (type-2 mean cross magnification evaluation). Interestingly, type-2 cross magnification evaluation also shows similar trends except in 400x, in which the ImageNet model obtained high PLA performance. Empirical analysis on type-1 and type-2 cross magnification suggests that MPCS self-supervised pre-trained models perform better than the ImageNet model by learning magnification invariant representations. \subsection{Additional ablation results on using 80\%, 60\%, and 40\% of labels of the training set on BreakHis dataset} This section describes the extended ablation study on using labels in an incremental manner. The main section of the results describes and compares all three variants of the MPCS method with ImageNet pre-trained models when fine-tuned on 20\% and 100\% labels of the training set are used. \input{sections/tables/mpcs_breakhis_on_80_labels_of_trainset} \input{sections/tables/mpcs_breakhis_on_60_labels_of_trainset} \input{sections/tables/mpcs_breakhis_on_40_labels_of_trainset} To continue the trend for completeness of analysis, this section adds the results for the same setting considering 40\%, 60\%, and 80\% label utilization in fine-tuning, results described in Tables~\ref{tab:breakhis_80_label_mpcs}, ~\ref{tab:breakhis_60_label_mpcs}, and~\ref{tab:breakhis_40_label_mpcs}, respectively. The most important observation is that MPCS methods consistently outperform the ImageNet model over the range of labels provided and specifically, the ordered pair method remains best performing in largely. Figure~\ref{fig:effnet_ila_compare} and \ref{fig:effnet_pla_compare} shows the comparisons for Efficient-net b2 encoder for ILA and PLA accuracy. Similarly, Figure~\ref{fig:resnet_ila_compare} and \ref{fig:resnet_pla_compare} shows the comparisons for ResNet-50 encoder for ILA and PLA accuracy. A common trend is evident that MPCS methods based models consistently performs better than ImageNet based model for entire range of labels. It clearly shows that self-supervised learned representations improve fine-tuning task performance overall range of available labels, similar to the trend observed on the BACH dataset. Besides being able to obtain relatively higher accuracy on limited label settings, more label additions are largely beneficial to self-supervised pre-trained models than ImageNet pre-trained models. \begin{figure}[] \centering \includegraphics[width =0.9\linewidth]{sections/figures/effnet_ila_labelwise.jpg} \caption{Performance comparison (ILA accuracy, Efficient-net b2 model) for MPCS pre-trained models with ImageNet pre-trained model over range of labels used.} \label{fig:effnet_ila_compare} \end{figure} \begin{figure}[] \centering \includegraphics[width =0.9\linewidth]{sections/figures/effnet_pla_labelwise.jpg} \caption{Performance comparison (PLA accuracy, Efficient-net b2 model) for MPCS pre-trained models with ImageNet pre-trained model over range of labels used.} \label{fig:effnet_pla_compare} \end{figure} \begin{figure}[t] \centering \includegraphics[width =0.9\linewidth]{sections/figures/resnet_ila_labelwise.jpg} \caption{Performance comparison (ILA accuracy, ResNet-50 model) for MPCS pre-trained models with ImageNet pre-trained model over range of labels used.} \label{fig:resnet_ila_compare} \end{figure} \begin{figure}[t] \centering \includegraphics[width =0.9\linewidth]{sections/figures/resnet_pla_labelwise.jpg} \caption{Performance comparison (PLA accuracy, ResNet-50 model) for MPCS pre-trained models with ImageNet pre-trained model over range of labels used.} \label{fig:resnet_pla_compare} \end{figure} \subsection{Experimentation statistics} \input{sections/tables/exp_stats} An extensive experimentation strategy was designed, and experiments were performed To evaluate all the variants of the proposed self-supervised pre-training method MPCS. Specifically, 15 pre-training experiments for Efficient-net b2 and 18 pre-training experiments for ResNet-50 were performed on the BreakHis dataset. Further learned representations from pre-training models are evaluated by 800 downstream task training experiments on the BreakHis dataset covering all four magnifications (40x, 100x, 200x, and 400x), 5-cross folds, and on a wide range of labels (5\% to 100\% train set labels). One hundred forty downstream task training experiments were performed on the BACH dataset using BreakHis MPCS pre-trained ResNet-50 models. Finally, 30 downstream tasks experiments were performed for the Breast Cell Cancer Dataset using ResNet-50 pre-trained models covering fine-tuning and linear evaluation. In this way, 1003 experiments are performed in the current work. Details are mentioned in Table~\ref{tab:exp_stats}. \section{Introduction} \input{sections/1_introduction} \section{Related Work} \label{related_work} \input{sections/2_related_work} \section{Methodology} \input{sections/3_methodology} \label{methdology} \section{Experimental Evaluations} \input{sections/5_experimental_setup} \section{Results \& Discussions} \input{sections/6_result_discussion} \section{Conclusions} \input{sections/7_conclusion_future_work} { \bibliographystyle{ieee} \subsection{Inductive Transfer Learning} Given the fact that BreakHis~\cite{spanhol2016dataset} is a small-scale and class-imbalanced dataset, this work hypothesizes a constraint case of inductive transfer for representation learning by initializing encoder ImageNet~\cite{deng2009imagenet} pre-trained weights. In this work, the inductive transfer (i) helps to obtain improved performance on the downstream task of malignancy classification and (ii) enables self-supervised pre-training using the proposed method on the small-scale dataset \subsection{Self-supervised Method - Magnification Prior Contrastive Similarity} \label{ssl} \begin{comment} Recent advances in Contrastive Joint Embedding Embedding Architecture \& Method (JEAM) SimCLR~\cite{chen2020simple}-\cite{chen2020big}, and other methods are robust for representation learning and obtaining state-of-the-art performance. This work hypothesizes that inducted humans prior suppress the potential of networks to learn in a self-supervised manner more independently towards imitating biological intelligence. The limitation imposed due to the needs of strong human prior is more recognizable considering beyond the natural visual concept based domains like medical imaging, microscopic images of pathology, and others. Human knowledge and perception of such domains are relatively limited and fail to compile domain-specific characteristics to create effective transformations for distorted views. Specifically, BreakHis dataset~\cite{spanhol2016dataset} contains Whole Slide Images (WSI) of four magnifications and affected regions present only on some parts of WSI. Further, the affected region is not necessarily located in the center of WSI. Applying transformations from natural visual concept domains to BreakHis does not tend to learn optimal representations because visual and textural properties are very different, including location, size, shape, background-foreground, and concrete definition of objects. Thus network needs to learn representations by self-attention on affected regions across magnifications that are invariant to location, global context, \& several other geometric characteristics. So stronger human prior prevents networks from learning and focusing on self-attention. Knowing the fact from the literature that human-prior is required for the current state of self-supervised methods, this work focuses on i) exploiting self-supervision signal \& prior from data, i.e., magnification factor to reduce the human prior, and ii) enabling representation learning on small-scale dataset~\cite{spanhol2016dataset} by utilizing transfer learning and using augmentation based transformation in a specific arrangement. \end{comment} Magnification Prior Contrastive Similarity (MPCS) method formulates self-supervised pre-training to learn representations on microscopic Histopathology WSI without labels on small-scale data. The main objective of MPCS is to lower the amount of labeled data needed for the downstream task to address challenges in supervised learning. MPCS construct pairs of distinct views considering characteristics of microscopic histopathology WSI (H-WSI) for contrastive similarity-based pre-training. Microscopic H-WSI structural properties are different from natural visual macroscopic images~\cite{deng2009imagenet} (vehicles, cats, or dogs) in terms of location, size, shape, background-foreground, and concrete definition of objects. Unlike SimCLR~\cite{chen2020simple} where pairs of distinct views from the input image is constructed by human-centered augmentations, MPCS constructs pair of distinct views using pair sampling methods based on the signal from data itself i.e. magnification factor in BreakHis~\cite{spanhol2016dataset}. Two H-WSI from different magnification factors of the same sample makes a pair. Utilizing prior from data~(magnification factor) enables meaningful contrastive learning on histopathology H-WSI and reduces dependency over human inducted prior. Further, tumor-affected regions in H-WSI are characterized by format and highly abnormal amounts of nuclei. Such affected regions are promising in all the H-WSIs of different magnification for the same sample. Thus, affected regions being common and size invariant in positive pair of a sample allow learning contrastive similarity by region attentions. \begin{figure}[t] \centering \includegraphics[width =0.6\linewidth]{sections/figures/ssl_method_mpcs.png} \vspace{-0.2cm} \caption{Magnification Prior Contrastive Similarity method explained} \label{fig:ssl_mpcs} \vspace{-4mm} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width =0.6\linewidth]{sections/figures/human_prior_strategy.png} \vspace{-0.2cm} \caption{Strategies for pair sampling based on inducted Human Prior(HP). Added measure to prevent mode collapse in pre-training by $1^{st} view \neq 2^{nd} view$ in all strategies.} \label{fig:human_prior} \vspace{-5mm} \end{figure} Current work also hypothesizes that reduced human-prior in the pre-training method provides an enhanced degree of freedom to the method that can increase the potential of the network to learn efficient representations in a self-supervised approach. To investigate, three strategies for pair sampling are formulated based on inducted human prior. The number of human decisions defines the level of inducted human-prior (HP) during pair sampling. As explained in Figure~\ref{fig:human_prior}, in Fixed Pair, decisions to choose magnification factor for both views are by human, thus making strong human-prior. In Ordered Pair, only the second view of the pair is chosen by a human using a look-up table, making weaker human prior. In Random Pair, no human prior inducted and magnification factor for both views are sampled randomly. Further, Figure~\ref{fig:degrre_of_freedom} demonstrates the Degree of Freedom (DoF) for the method where the Fixed Pair strategy provides no DoF, Order Pair provides one DoF, and Random Pair provides 2 DoF to the method. In MPCS, to formulate a batch of $2N$ views, randomly sampled batch of $N$ sets of input $X=\{X^{(1)}, X^{(2)}, ..., X^{(N)}\}$ are considered where each set of input $X^{(i)}=\{x^{(i)}_{40}, x^{(i)}_{100}, x^{(i)}_{200}, x^{(i)}_{400}\}$ contains the images corresponding to four magnification factors. Positive pair of views is constructed based on a selection strategy of pair sampling which contains two views from the same example of different magnification. Further, similarity maximizes (loss minimizes) by an objective defined by the contrastive loss in Eq.~\ref{eq:loss}). MPCS is demonstrated in Figure~\ref{fig:ssl_mpcs} and components are explained below. \begin{figure}[!ht] \centering \includegraphics[width =0.8\linewidth]{sections/figures/magnification_prior_contrastive_similarity_degree_of_freedom_v2.png} \vspace{-0.2cm} \caption{Relation between inducted Human Prior (HP) of magnification and Degree of Freedom (DoF) for method} \label{fig:degrre_of_freedom} \vspace{-4mm} \end{figure} \begin{itemize} \item A \textit{domain specific human prior} module $P_h(X \to X_\mathrm{MF})$: $X=\{x_{40}, x_{100}, x_{200}, x_{400}\}$ that exploits supervision signal from data i.e. magnification and samples the two views $X_\mathrm{MF}=$(\textbf{$x_\mathrm{MF1}$}, \textbf{$x_\mathrm{MF2}$}) of different magnifications to construct pair based on employed strategy of pair sampling shown in step 1 of Figure~\ref{fig:ssl_mpcs}. \vspace{-0.2cm} \item A \textit{uniform stochastic transformation} based module $\displaystyle \mathcal{T_U}(X_\mathrm{MF} \to \tilde{X}_\mathrm{MF})$ that uniformly transforms both views from $X_\mathrm{MF}=$(\textbf{$x_\mathrm{MF1}$}, \textbf{$x_\mathrm{MF2}$}) to $\tilde{X}_\mathrm{MF}=$ (\textbf{$\tilde{x}_\mathrm{MF1}$}, \textbf{$\tilde{x}_\mathrm{MF2}$}) of positive pair by sampled augmentation transformations scheme shown in step 2 of Figure~\ref{fig:ssl_mpcs}.\vspace{-0.2cm} \item A neural network \textit{base encoder} \textit{f}(·) which yields representations from transformed views of pair. It obtains $h_\mathrm{MF1}$ = \textit{f}($\tilde{x}_\mathrm{MF1}$) = encoder-network($\tilde{x}_\mathrm{MF1}$) and $h_\mathrm{MF2}$ = \textit{f}($\tilde{x}_\mathrm{MF2}$) = encoder-network($\tilde{x}_\mathrm{MF2}$) where $h_\mathrm{MF1}$, $h_\mathrm{MF2} \in \mathbb{R}^d$ are the output after the respective average pooling layers, shown in step 3 of Figure~\ref{fig:ssl_mpcs}.\vspace{-0.2cm} \item A small-scale \textit{MLP projection head} \textit{g}(·) that maps representations to the latent space where contrastive loss is applied, shown in step 4 of Figure~\ref{fig:ssl_mpcs}. A multi-layer perceptron with single hidden layer to obtain $z_\mathrm{MF1}$ = \textit{g}($h_\mathrm{MF1}$) = $W^{(2)}\sigma(W^{(1)}{h_\mathrm{MF1}})$ and $z_\mathrm{MF2}$ = \textit{g}($h_\mathrm{MF2}$) = $W^{(2)}\sigma(W^{(1)}{h_\mathrm{MF2}})$ where $\sigma$ is ReLU.\vspace{-0.2cm} \item A \textit{contrastive loss function}, normalized temperature-scaled cross entropy loss (NT-Xent) from SimCLR is defined for a contrastive prediction, shown in step 5 of Figure~\ref{fig:ssl_mpcs}. For given a set \textbf{$\tilde{x}_{k}$} including a positive pair examples \textbf{$\tilde{x}_\mathrm{MF1}$} and \textbf{$\tilde{x}_\mathrm{MF2}$}, the contrastive prediction task tends to find \textbf{$\tilde{x}_\mathrm{MF2}$} in \{${\mathbf{\tilde{x}_{k}}}\}_{k\neq MF1}$ for the given \textbf{$\tilde{x}_\mathrm{MF1}$}. \end{itemize} The loss function for a positive pair of examples (MF1, MF2) is defined as \begin{equation} L_\mathrm{MF1,MF2} = -\log \dfrac{\exp(sim(\boldsymbol{z}_\mathrm{MF1},\boldsymbol{z}_\mathrm{MF2})/\tau)} {\sum_{k =1}^{2N} 1_{[k \neq MF1]} exp(sim(\boldsymbol{z}_\mathrm{MF1},\boldsymbol{z}_{k})/\tau)} \label{eq:loss} \end{equation} \begin{comment} \begin{equation} sim(\boldsymbol{z}_\mathrm{MF1},\boldsymbol{z}_\mathrm{MF2}) = \dfrac{\boldsymbol{z}^T_\mathrm{MF1} \boldsymbol{z}_\mathrm{MF2}} {\parallel sim(\boldsymbol{z}_\mathrm{MF1}) \parallel \parallel sim(\boldsymbol{z}_\mathrm{MF2}) \parallel} \vspace{-1em} \end{equation} \begin{equation} sim(\boldsymbol{z}_\mathrm{MF1},\boldsymbol{z}_{k}) = \dfrac{\boldsymbol{z}^T_\mathrm{MF1} \boldsymbol{z}_{k}} {\parallel sim(\boldsymbol{z}_\mathrm{MF1}) \parallel \parallel sim(\boldsymbol{z}_{k}) \parallel} \end{equation} \end{comment} In Eq.~(\ref{eq:loss}), where $1_{[k \neq MF1]} \in {0, 1}$ is an indicator evaluating to 1 if $k \neq i$. \subsection{Datasets} \subsubsection{BreakHis} BreakHis~\cite{spanhol2016dataset} dataset consists of 2,480 benign and 5,429 malignant histopathological microscopic images from 82 patients at four magnification levels (40×, 100×, 200×, 400×). Each image in the BreakHis dataset is of size 700×460, stained with hematoxylin and eosin (HE). Following the previous works, two evaluation metrics are used, image-level accuracy(ILA) and patient-level accuracy (PLA). PLA shows patient-wise classification performance, calculated as the mean over total no. of patients using patient-score. Patient-score is correctly classified images of the patient over a total number of images of that patient. ILA disregards patient-level details and thus serves as standard image classification accuracy. \subsubsection{BACH} The second dataset, Breast Cancer Histology Images (BACH)~\cite{aresta2019bach} is from the ICIAR2018 Grand Challenge and contains 400 histopathology slides. The BACH dataset has four classes, normal, benign, in-situ, and invasive. The slide size is relatively large, 2048 × 1536 pixels; thus, patches of size 512x512. Two evaluation metrics, patch-wise accuracy and image-wise accuracy, are used, whereas image-wise accuracy is calculated based on majority voting over the patches of the respective image. \subsubsection{Breast Cancer Cell Dataset} The third dataset, Breast Cancer Cell Dataset~\cite{gelasca2008evaluation} is from the University of California, Santa Barbara Biosegmentation Benchmark. This dataset contains 58 HE-stained histopathology 896x768 size images of breast tissue, of which 26 are malignant, and 32 are benign. Patches of size 224x224 were created, and image-wise accuracy was calculated using majority voting over patches of the respective image. \subsection{Encoder Architectures} In this current work, the proposed method MPCS investigated two different CNN encoder architectures. ResNet-50~\cite{yu2017dilated} and Efficient-net b2~\cite{tan2019efficientnet} are used for pre-training and fine-tuning. SSL-specific MLP projection head used for Efficient-net b2 is three layers network of 2048-1204-128 units, whereas ResNet-50 is the most common backbone encoder, used projection head adapted from SimCLR, having 1024-128 units. encoder and projection heads are demonstrated in Fig.~\ref{fig:ssl_mpcs}. \subsection{Training Protocol} This section shares parameter configurations used in pre-training and fine-tuning. \subsubsection{SSL pre-training} Self-supervised pre-trainings of both encoders take place on the BreakHis dataset for 1000 epochs with temperature parameter $0.01$, learning rate 1e-05, and a set of augmentation methods such as color-jitter, flip, and rotation. Efficient-net b2 encoder pre-trained using Adam optimizer with a batch size of 128 and image input of $(341, 341)$. However, ResNet-50 adapted standard configurations of self-supervised practices and pre-trained using the LARS optimizer with a batch size of 1024 and input image size of 224x224. \subsubsection{Fine-tuning} The common training configurations across datasets for both encoders are learning rate of 2e-05, batch size of 32, image input of 224x224, augmentations methods such as random crop, flip, affine, rotation, and color-jitter, and using adam as optimizer. A dropout of 0.3 is used in the fully connected layer. \subsection{Experimentation Details} \begin{comment} As also mentioned in Section~\ref{related_work} careful analysis shows the following weakness in mentioned data-split strategy: \begin{enumerate} \item The stated data-split strategy does not have any mechanism to ensure the correctness of mean performance evaluation where contribution made by each data-point is ensured in equal or at least in one trial of experiment out of 5. The same is applicable to learning capacity. \item The stated strategy does not have any mechanism to reflect the data distribution of imbalanced classes during data-split in each trial out of 5. \end{enumerate} The reasons behind the weakness of 1) and 2) are the following: \begin{itemize} \item Repeated yet any independent random trial of data-splits does not endorse theoretical guarantee for selection of mutually exclusive data-points in the test set across the trials because any two consecutive trials are conditionally independent. \item Specifically chosen ratio of 70:30 for train-test data splits repeated over 5-times forces some data-points to occur in the test set with more frequency than others. \end{itemize} To cater above-stated issues in the experimentation strategy of dataset division, an improved strategy needs to be formulated which can guarantee the following two aspects: \begin{enumerate} \item Mean performance calculation must be contributed by every data-point of the dataset in an equal amount of participation. \item Skewed data distribution due to class imbalances must reflect in every test set from each trial to get real-world agnostic performance (more malignant cases than benign in the dataset). \end{enumerate} \end{comment} To ensure the reliability and consistency of the models, this work follows 5-cross validation data split scheme. This is applied to all three datasets in which each fold contained 20\% data, following class distribution from whole data. Four out of five folds are used for training \& validation, and the remaining one for testing. Thus all the results reported are in terms of mean value with standard deviation. In the above-stated 5-cross validation settings, both backbone encoders, ResNet-50 and Efficient-net b2, are being pre-trained on the first dataset BreakHis with all three variants (ordered pair, random pair, and fixed pair) of the proposed SSL method MPCS for learning domain-specific representations. Further, downstream-task-specific fine-tuning experiments are carried out to investigate the impact of learned representations for all three datasets e.g., BreakHis, BACH, and Breast Cancer Cell. Following are the details of experimentation for each dataset. Table~\ref{tab:experiments_details} describes fine-tuning experiments for the first dataset BreakHis. All the mentioned experiments for malignancy classification are conducted for both encoders, Efficient-net b2 and ResNet-50 for all four magnifications (40X, 100X, 200X, and 400X). Experiments (Exp-1 to Exp-4) in \textit{Limited Labeled Data Setting} evaluate model performance while using only 20\% labels, whereas experiments (Exp-5 to Exp-8) in \textit{Fully Supervised Setting} use all labels. The sole objective is to compare the performance of models pre-trained on proposed MPCS methods against ImageNet~\cite{deng2009imagenet} pre-trained model to analyze the effect of learned representations. Preferred names in discussion used as MPCS-X for ImageNet $\to$ MPCS-X. Experiments(Exp-9 to Exp-12) for the second dataset BACH~\cite{aresta2019bach} are described in Table~\ref{tab:exp_bach} based on the ResNet-50 encoder. All the images are divided into 512X512 size patches; thus, performance is measured patch-wise and image-wise (using majority voting suggested in~\cite{vesal2018classification}). The major objectives are 1) evaluating pre-trained models from the proposed method against the ImageNet-based transfer learning approach~\cite{aresta2019bach} 2) evaluating the ability to learn downstream tasks with limited labels ranging from 5\% to 100\% of labels from the train data portion. Finally, to evaluate the effect of learned domain-specific representations on small-scale data, a series of fine-tuning (all layers trained) and linear evaluation experiments (only fully-connected layers trainable) Exp-13 to Exp-16 are conducted Breast Cancer Cell dataset and described in Table~\ref{tab:exp_bisque}. Similar to BACH dataset, images for this dataset were also divided into sizes 224X224 for training and test, and performance was measured likewise. \begin{table}[t] \caption{Experiment details for BreakHis dataset} \vspace{-0.3cm} \label{tab:experiments_details} \resizebox{\columnwidth}{!}{% \begin{tabular}{ccccc} \hline \multicolumn{1}{c|}{\multirow{2}{*}{No.}} & \multicolumn{1}{c|}{\multirow{2}{*}{Pre-training Method}} & \multicolumn{1}{c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}BreakHis data\\ (\%)for SSL\end{tabular}}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Finetuning on \\ BreakHis Dataset\end{tabular}} \\ \cline{4-5} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Train (\%)} & Test (\%) \\ \hline \multicolumn{5}{c}{Limited Labeled Data Setting} \\ \hline \multicolumn{1}{c|}{Exp-1} & \multicolumn{1}{c|}{ImageNet} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{20\%} & 20\% \\ \multicolumn{1}{c|}{Exp-2} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Fixed Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{20\%} & 20\% \\ \multicolumn{1}{c|}{Exp-3} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Ordered Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{20\%} & 20\% \\ \multicolumn{1}{c|}{Exp-4} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Random Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{20\%} & 20\% \\ \hline \multicolumn{5}{c}{Fully Supervised Data Setting} \\ \hline \multicolumn{1}{c|}{Exp-5} & \multicolumn{1}{c|}{ImageNet} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{80\%} & 20\% \\ \multicolumn{1}{c|}{Exp-6} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Fixed Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{80\%} & 20\% \\ \multicolumn{1}{c|}{Exp-7} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Ordered Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{80\%} & 20\% \\ \multicolumn{1}{c|}{Exp-8} & \multicolumn{1}{c|}{ImageNet $\to$ MPCS-Random Pair} & \multicolumn{1}{c|}{60\%} & \multicolumn{1}{c|}{80\%} & 20\% \\ \hline \end{tabular}% } \vspace{-2mm} \end{table} \begin{table}[t] \caption{Experiment details for BACH dataset} \label{tab:exp_bach} \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|cc} \hline & & \multicolumn{2}{c}{Fine-tuning on BACH dataset} \\ \cline{3-4} \multirow{-2}{*}{No.} & \multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}Pre-training \\ method/weights\end{tabular}} & \multicolumn{1}{c|}{Labels(\%) from Train Data} & Test data (\%) \\ \hline Exp-9 & ImageNet~\cite{vesal2018classification}-re-implement & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 100\% (~80\% train data)}} & {\color[HTML]{1E1E1E} 20\%} \\ Exp-10 & MPCS-Fixed Pair (BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} {[}5\%, 10\%, 20\%, 40\%, 60\%, 80\%, 100\%{]}}} & {\color[HTML]{1E1E1E} 20\%} \\ Exp-11 & MPCS-Ordered Pair(BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} {[}5\%, 10\%, 20\%, 40\%, 60\%, 80\%, 100\%{]}}} & 20\% \\ Exp-12 & MPCS-Random Pair(BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} {[}5\%, 10\%, 20\%, 40\%, 60\%, 80\%, 100\%{]}}} & {\color[HTML]{1E1E1E} 20\%} \\ \hline \end{tabular}% } \vspace{-3mm} \end{table} \begin{table}[t] \caption{Experiment details for Breast Cancer Cell Dataset} \label{tab:exp_bisque} \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|cc|cc} \hline & & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Fine-tuning on \\ Breast Cancer Cell Dataset\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Linear-evaluation on \\ Breast Cancer Cell Dataset\end{tabular}} \\ \cline{3-6} \multirow{-2}{*}{No.} & \multirow{-2}{*}{\begin{tabular}[c]{@{}c@{}}Pre-training \\ method/weights\end{tabular}} & \multicolumn{1}{c|}{Train data(\%)} & Test data (\%) & \multicolumn{1}{c|}{Train data(\%)} & \multicolumn{1}{l}{Test data(\%)} \\ \hline Exp-13 & MPCS-Fixed Pair (BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & {\color[HTML]{1E1E1E} 20\%} & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & {\color[HTML]{1E1E1E} 20\%} \\ Exp-14 & MPCS-Ordered Pair(BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & 20\% & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & 20\% \\ Exp-15 & MPCS-Random Pair(BreakHis) & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & {\color[HTML]{1E1E1E} 20\%} & \multicolumn{1}{c|}{{\color[HTML]{1E1E1E} 80\%}} & {\color[HTML]{1E1E1E} 20\%} \\ \hline \end{tabular}% } \vspace{-3mm} \end{table} \subsection{Self-supervised method MPCS demonstrates label efficiency} \begin{figure}[t] \centering \includegraphics[width =0.6\linewidth]{sections/figures/SSL_label_wise_compare_bach.png} \caption{Comparison with DPCL~\cite{ciga2022self} for label efficiency on classification task on BACH dataset} \label{fig:ssl_dpcl} \vspace{-7mm} \end{figure} All three variants of the proposed method MPCS demonstrate label efficiency on downstream tasks. Results in Table~\ref{ft_20} On BreakHis dataset, MPCS fine-tuned models obtain (significant improvement p \textless 0.01) proportionally bigger margins of improvement of (2.52±0.02)\% over ImageNet pre-trained model while only 20 \% labels are used for all magnification scales. These results match the performance of other state-of-the-art methods mentioned in Table~\ref{ft_80_sota} which have been trained on 100\% labels. Following the trend, MPCS pre-trained models consistently outperform on the BACH dataset compared with the recent contrastive learning-based method DPCL~\cite{ciga2022self} in a complete range of labels from 5\% to 100\%, shown in Figure~\ref{fig:ssl_dpcl}. \subsection{Data prior enables self-supervision on small-scale datasets} The proposed MPCS method enables self-supervised representation learning on the small-scale dataset by extensively using a data prior (supervision signal from data) e.g. magnification factors (40X, 100X, 200X, and 400x). It decreases the dependence on human-curated priors e.g. augmentation method choices during self-supervised pre-training. \subsection{MPCS learns robust self-supervised representations} \begin{figure}[!ht] \centering \includegraphics[width =0.8\linewidth]{sections/figures/tsne_all_pair.png} \caption{tsne-visualization of the features from MPCS model (on BreakHis) without fine-tuning. Blue(benign); red(malignant). } \label{fig:tsne} \vspace{-4mm} \end{figure} Explicitly focusing on the robustness of learned representations, Figure~\ref{fig:tsne} strongly supports the fact that MPCS pre-trained models can capture and learn discriminative features across the classes during the self-supervised pre-training phase itself without knowing about actual human-provided labels. It is worth mentioning that data points of different classes are easily separable by either linear or non-linear boundaries for all four magnifications for all variants of the MPCS method. The CAM of fine-tuned models depicted in Figures~\ref{fig:cam_breakhis} and ~\ref{fig:cam_bach} also indicate that MPCS pre-trained models activate regions of interest (dark red colors show strong activation) more efficiently than the ImageNet pre-trained model for BreakHis and BACH datasets. \subsection{Preliminary support for the hypothesis about reducing human inducted priors} The MPCS-Ordered Pair inducts weaker human-prior in pair sampling. Thus, the MPCS method obtains one DoF for randomly choosing the first input view. In comparison, the MPCS-Fixed Pair, which inducts stronger human prior by choosing both views, 200x and 400x by human-prior, gives zero DoF to the MPCS method. The MPCS-Random Pair, in which the MPCS method obtains the highest degree of freedom since the human-prior is absent. \begin{figure}[!ht] \centering \includegraphics[width =0.6\linewidth]{sections/figures/degree_of_freedom_ILA_PLA_extended.png} \caption{Comparison for human priors - Indicates weaker human-prior(moderate DoF for method) outperforms stronger and no human-prior during limited (20\% labels) data\\} \label{fig:degree_of_freedom_ILA_PLA} \vspace{-8mm} \end{figure} Figure~\ref{fig:degree_of_freedom_ILA_PLA} explains the trends when fewer labels are used, that both encoders that weaker human-prior based pair sampling tends to outperform over extreme cases of stronger human prior or the absence of it. However, it requires detailed explorations of different datasets and tasks. Similar patterns were also observed on the BACH dataset that the ordered pair outperforms over other two variants. \section{Supplementary Material} \subsection{Self-supervised methods learn magnification invariant representations} The MPCS methods not only outperform for magnification-specific tasks stated in Tables~\ref{ft_20} and~\ref{ft_80_sota} but representations learned through the proposed method also demonstrate a consistent edge in classification performance in cross-magnification evaluation over the ImageNet model. These experiments were conducted on the Efficient-net b2 encoder only to prevent additional computation usage. However, the ResNet-50 encoder can be benchmarked, if needed. \input{sections/tables/trained_cross_magnification} \input{sections/tables/eval_cross_magnification} Table~\ref{x_mag_type1} shows (type-1 mean cross magnification evaluation) mean cross-magnification accuracy where the model is evaluated on other magnifications except the magnification on which the model was trained. The MPCS-Order Pair methods outperform the ImageNet model and other methods with a mean cross-magnification ILA of 80.99\% and PLA of 81.49\% when the model was trained on 40x magnification and evaluated on 100x, 200x, and 400x. Whereas MPCS-Random Pair outperforms with mean cross-magnification ILA 84.84\% and PLA 82.97\% when trained on 200x and ILA 84.83\% and PLA 83.76\% when trained on 400x. For 100x, MPCS-Ordered Pair obtains ILA 80.99\%, and MPCS-Random Pair obtains PLA 85.05\%. Further, Table~\ref{x_mag_type2} evaluates the mean performance of models trained on other magnifications except on which evaluation is performed (type-2 mean cross magnification evaluation). Interestingly, type-2 cross magnification evaluation also shows similar trends except in 400x, in which the ImageNet model obtained high PLA performance. Empirical analysis on type-1 and type-2 cross magnification suggests that MPCS self-supervised pre-trained models perform better than the ImageNet model by learning magnification invariant representations. \subsection{Additional ablation results on using 80\%, 60\%, and 40\% of labels of the training set on BreakHis dataset} This section describes the extended ablation study on using labels in an incremental manner. The main section of the results describes and compares all three variants of the MPCS method with ImageNet pre-trained models when fine-tuned on 20\% and 100\% labels of the training set are used. \input{sections/tables/mpcs_breakhis_on_80_labels_of_trainset} \input{sections/tables/mpcs_breakhis_on_60_labels_of_trainset} \input{sections/tables/mpcs_breakhis_on_40_labels_of_trainset} To continue the trend for completeness of analysis, this section adds the results for the same setting considering 40\%, 60\%, and 80\% label utilization in fine-tuning, results described in Tables~\ref{tab:breakhis_80_label_mpcs}, ~\ref{tab:breakhis_60_label_mpcs}, and~\ref{tab:breakhis_40_label_mpcs}, respectively. The most important observation is that MPCS methods consistently outperform the ImageNet model over the range of labels provided and specifically, the ordered pair method remains best performing in largely. Figure~\ref{fig:effnet_ila_compare} and \ref{fig:effnet_pla_compare} shows the comparisons for Efficient-net b2 encoder for ILA and PLA accuracy. Similarly, Figure~\ref{fig:resnet_ila_compare} and \ref{fig:resnet_pla_compare} shows the comparisons for ResNet-50 encoder for ILA and PLA accuracy. A common trend is evident that MPCS methods based models consistently performs better than ImageNet based model for entire range of labels. It clearly shows that self-supervised learned representations improve fine-tuning task performance overall range of available labels, similar to the trend observed on the BACH dataset. Besides being able to obtain relatively higher accuracy on limited label settings, more label additions are largely beneficial to self-supervised pre-trained models than ImageNet pre-trained models. \begin{figure}[] \centering \includegraphics[width =0.9\linewidth]{sections/figures/effnet_ila_labelwise.jpg} \caption{Performance comparison (ILA accuracy, Efficient-net b2 model) for MPCS pre-trained models with ImageNet pre-trained model over range of labels used.} \label{fig:effnet_ila_compare} \end{figure} \begin{figure}[] \centering \includegraphics[width =0.9\linewidth]{sections/figures/effnet_pla_labelwise.jpg} \caption{Performance comparison (PLA accuracy, Efficient-net b2 model) for MPCS pre-trained models with ImageNet pre-trained model over range of labels used.} \label{fig:effnet_pla_compare} \end{figure} \begin{figure}[t] \centering \includegraphics[width =0.9\linewidth]{sections/figures/resnet_ila_labelwise.jpg} \caption{Performance comparison (ILA accuracy, ResNet-50 model) for MPCS pre-trained models with ImageNet pre-trained model over range of labels used.} \label{fig:resnet_ila_compare} \end{figure} \begin{figure}[t] \centering \includegraphics[width =0.9\linewidth]{sections/figures/resnet_pla_labelwise.jpg} \caption{Performance comparison (PLA accuracy, ResNet-50 model) for MPCS pre-trained models with ImageNet pre-trained model over range of labels used.} \label{fig:resnet_pla_compare} \end{figure} \subsection{Experimentation statistics} \input{sections/tables/exp_stats} An extensive experimentation strategy was designed, and experiments were performed To evaluate all the variants of the proposed self-supervised pre-training method MPCS. Specifically, 15 pre-training experiments for Efficient-net b2 and 18 pre-training experiments for ResNet-50 were performed on the BreakHis dataset. Further learned representations from pre-training models are evaluated by 800 downstream task training experiments on the BreakHis dataset covering all four magnifications (40x, 100x, 200x, and 400x), 5-cross folds, and on a wide range of labels (5\% to 100\% train set labels). One hundred forty downstream task training experiments were performed on the BACH dataset using BreakHis MPCS pre-trained ResNet-50 models. Finally, 30 downstream tasks experiments were performed for the Breast Cell Cancer Dataset using ResNet-50 pre-trained models covering fine-tuning and linear evaluation. In this way, 1003 experiments are performed in the current work. Details are mentioned in Table~\ref{tab:exp_stats}.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Recent work has identified \emph{relational Horn logic} (RHL) and \emph{partial Horn logic} (PHL) as a semantically well-behaved extensions of Datalog \citep{phl-theory}. This paper describes how the Datalog evaluation algorithm can be generalized to RHL evaluation. An RHL theory contains declarations of the following data: \begin{itemize} \item A set $S$ of \emph{sorts}. \item A set $R$ of \emph{relations}. \item \emph{Arities} $r : s_1 \times \dots \times s_n$ for all relations $r \in R$. \item A set of \emph{sequents} (or \emph{rules}, or \emph{axioms}), each of the form $\mathcal{F} \implies \mathcal{G}$, where $\mathcal{F}$ and $\mathcal{G}$ are conjunctions $\phi_1 \land \dots \land \phi_n$ of \emph{atoms} $\phi_i$. \end{itemize} Instead of RHL sequents as above, Datalog engines typically accept rules of the following form: \begin{equation} \phi \; \text{:-} \; \phi_1, \dots, \phi_n. \end{equation} The \emph{head} $\phi$ corresponds to the conclusion of an RHL sequent and consists of a single atom. The \emph{body} $\phi_1, \dots, \phi_n$ corresponds to the premise of an RHL sequent, can contain multiple atoms and is interpreted as conjunction. The structure of RHL sequents is thus more general at first sight because the conclusion of an RHL sequents is allowed to be a conjunction of atoms. However, a single sequent with $n$ conclusion atoms is equivalent to $n$ sequents, each with a single conclusion atom. Thus, no generality is gained solely from allowing general conjunctions as conclusions. Where RHL generalizes Datalog is in what kind of atoms it allows, and how variables are handled. In Datalog, each atom is of the form $r(v_1, \dots, v_n)$ where $r$ is a relation symbol and the $v_i$ are variables whose sort match the arity of $r$. In addition to such \emph{relation atoms}, RHL recognizes also the following types of atoms: \begin{enumerate} \item An \emph{equality atom} \begin{equation} u \equiv v \end{equation} where $u$ and $v$ have the same sort. We reserve the symbol ${\equiv}$ for RHL syntax, whereas ${=}$ is used for meta-theoretical equality. \item A \emph{sort quantification} \begin{equation} v \downarrow \end{equation} where $v$ is a variable with known sort $s$. If the sort of $v$ is not determined by other atoms in the sequent, we also use the syntax $v : s$ as synonymous for $v \downarrow$ and the metatheoretical assertion that $v$ has sort $s$. \end{enumerate} If an equality atom $u \equiv v$ occurs in the premise of a sequent, then matches of the premise are only valid if $u$ and $v$ are interpreted as the same constant. Thus, equality atoms in a premise can be eliminated by removing all occurrences of one of the two variables in the sequent by the other variable. The semantics of an equality atom $u \equiv v$ in the conclusion of a sequent are non-trivial, however: Whenever the premise of such a sequent matches such that $u$ is interpreted as a constant $a$ and $v$ is interpreted as constant $b$, then we expect the evaluation engine to identify $a$ and $b$ in all contexts henceforth. For example, the premise of the transitivity axiom $\mathrm{Le}(u, v) \land \mathrm{Le}(v, w) \implies \mathrm{Le}(u, w)$ should match tuples $(a, b_1), (b_2, c) \in \mathrm{Le}$ if an equality $b_1 = b_2$ has been inferred previously. Partial functions can be encoded in RHL using relations representing the graphs of partial functions. Thus one identifies partial functions $f : s_1 \times \dots \times s_n \rightarrow s$ with relations $f : s_1 \times \dots s_n \times s$ where the first $n$ components of each entry represent an element in the domain of the function and the last component represents the value of the function. The \emph{functionality axiom} \begin{equation} \label{eq:functionality} f(v_1, \dots, v_n, u) \land f(v_1, \dots, v_n, w) \implies u \equiv w \end{equation} enforces that the relation $f$ does indeed correspond to a well-defined partial function. Sort quantifications $v : s$ in premises allow matching elements of a given sort $s$ that do not appear in any relation. In standard Datalog, all variables in the head of a sequent must also appear in the body. This requirement is removed in RHL. Variables that only appear in the conclusion are implicitly existentially quantified. If the premise of a sequent matches, then the evaluation engine must extend the match to the conclusion by finding an interpretation of variables that occur only in the conclusion such that the conclusion holds. If no such extension exists, then the evaluation engine must create new identifiers to interpret variables that only occur in the conclusion, and enforce that the atoms of the conclusion hold for this interpretation. We expect the evaluation engine to output a list of identifiers of each sort, including those identifiers that were created during evaluation. An RHL sequent in which all conclusion variables also occur in the premise is called \emph{surjective}. If an RHL theory contains non-surjective sequents, then evaluation need not terminate. The Souffle Datalog engine implements a similar mechanism in the form of its choice construction \citep{souffle-choice}. The presence of non-surjective sequents can not only lead to non-termination but also to non-deterministic results, in the sense that the result depends on the order in which sequents are matched. This is not the case for \emph{strong} RHL theories, in which the interpretion of conclusion variables is uniquely determined once all sequents are satisfied. For example, the RHL theory given by the functionality sequent \eqref{eq:functionality} and the sequent \begin{equation} \label{eq:totality} v_1 : s_1 \land \dots \land v_n : s_n \implies f(v_1, \dots, v_n, v) \end{equation} is strong, since if the functionality axiom is satisfied, then the interpretation of the variable $v$ in sequent \eqref{eq:totality} is uniquely determined. Unfortunately, it is undecidable whether a given RHL theory is strong. A further problem with RHL is that functions must be encoded via their graphs. This leads to excessive verbosity when formulating axioms involving complex expressions built up from function symbols. \emph{Partial Horn logic} (PHL) \citep{phl} is a syntactic layer on top of RHL that rectifies these shortcomings. In partial Horn logic, relations must be explicitly declared as predicates or partial functions. Predicates correspond directly to RHL relations, whereas functions are lowered into a relation corresponding to the graph of the function and the implicit functionality axiom \eqref{eq:functionality}. In positions where RHL accepts variables, PHL also allows composite terms \begin{equation} t \, \text{::=} \, v \mid f(t_1, \dots, t_n) \end{equation} which are recursively defined from variables and application of partial function symbols to terms whose sorts match the function signature. When lowering PHL to RHL, composite terms $t$ are recursively lowered into a result variable representing the term $t$ and additional RHL atoms. These additional atoms are inserted into premise or conclusion of the sequent, depending on where $t$ appears. To lower a composite term $t = f(t_1, \dots, t_n)$, we may assume that $t = f(v_1, \dots, v_n)$ for variables $v_i$ by recursively lowering the arguments $t_1, \dots, t_n$ first. We now choose a fresh variable $v$ representing $t$ and add the RHL atom $f(v_1, \dots, v_n, v)$. Since lowering a PHL formula reduces nested expressions into a flat sequence of atoms, the process of lowering PHL to RHL is also called \emph{flattening}. PHL sequents are \emph{epic} if all variables in the conclusion are already introduced in the premise. Note that lowering epic PHL sequents can result in non-surjective RHL sequents, because lowering composite terms can introduce fresh variables. Nevertheless, the lowered RHL theory resulting from an epic PHL theory (i.e. a PHL theory containing epic sequents only) is strong. Conversely, every strong RHL theory is equivalent to an epic PHL theory; see \citet[Section 4.3]{phl-theory} for details. Thus, epic PHL has the same descriptive strength as strong RHL. On the other hand, checking whether a PHL sequent is epic is trivial, whereas checking whether an RHL theory is strong is undecidable. This makes PHL more suitable as human-facing input language to an evaluation engine compared to RHL. The \emph{Eqlog} engine, whose underlying algorithm is described in this paper, accepts epic PHL as input language. Eqlog lowers a user-provided epic PHL theory to RHL, which is then transpiled to a Rust module. This is similar to the Souffle Datalog engine, which transpiles Datalog to C++. In contrast to Souffle, Eqlog is meant to be used as part of a larger project and not as standalone tool. Similarly to the \emph{egg} equality saturation library \citep{egg}, Eqlog supports online use-cases in which one alternates inserting new ground facts into the model and closing the model under PHL sequents. Refer to the Eqlog project homepage \citep{eqlog-homepage} for details. Independently of the work on Eqlog presented in this article, members of the Egg community have created a very similar tool that combines e-graphs with Datalog, which will be reported on in an upcoming article. \textbf{Outline.} In Section \ref{sec:rhl-evaluation}, we describe a basic algorithm to evaluate RHL, and a technique to detect termination in many circumstances. Then, in Section \ref{sec:optimizations}, we discuss optimizations of the RHL evaluation algorithm. Finally, in Section \ref{sec:applications}, we sketch applications of PHL and RHL evaluation to the implementation of programming languages. Section \ref{sec:conclusion} concludes. \textbf{Acknowledgements.} Jakob Botsch Nielsen contributed significantly to previous versions of the implementation of the Eqlog tool and its application to an experimental type checker for a dependently typed proof assistant. My own work on Eqlog and the algorithm presented in this paper commenced during my time as PhD student at Aarhus University, where I was advised by Bas Spitters and supported by the Air Force Office and Scientific Research project "Homotopy Type Theory and Probabilistic Computation", grant number 12595060. \section{RHL Evaluation} \label{sec:rhl-evaluation} In this section, we describe a basic algorithm to evaluate RHL theories. The input to the evaluation algorithm is a list of RHL sequents and a \emph{relational structure} representing ground facts. A relational structure is given by sets of numerical identifiers for each sort, representing the elements of the sort, and sets of tuples for each relation. From Section \ref{subsec:naive-rhl-evaluation} onward, we assume that relational structures contain also union-find data structures for each sort, representing semantic equality of sort elements. If RHL evaluation terminates, then we require that the output is a relational structure that is \emph{weakly free} with respect to the list of RHL sequents over the input relational structure \citep{phl-theory}. Intuitively, this means that the output must satisfy all sequents in the RHL theory, and that it must be obtained from the input relational structure only from matching sequent premises and adjoining data corresponding to conclusions. Weak freeness does not uniquely characterize a relational structure. In general, the output relational structure depends on the order of matching premises of RHL sequents. However, if the RHL theory is strong, then the output relational structure is \emph{(strongly) free} over the input relational structure, which determines it uniquely up to unique isomorphism (i.e. renaming of identifiers). Relevant classes of strong RHL theories are theories containing surjective sequents only, and theories that are obtained from lowering epic PHL theories \citep{phl-theory}. In Section \ref{subsec:naive-datalog}, we discuss the naive algorithm for Datalog evaluation and amend it with support for non-surjective sequents and sort quantification, but not equality. Then, in Section \ref{subsec:naive-rhl-evaluation}, we consider a well-known simple congruence closure algorithm, which we shall understand as a special-purpose algorithm for evaluation of functionality RHL sequents. The conclusion of functionality sequents is an equality, which our naive Datalog evaluation algorithm cannot process. \emph{Union-find data structures} and \emph{normalization} are aspects of this congruence closure algorithm that deal with equalities in particular. We incorporate these into our naive Datalog algorithm to obtain a naive RHL evaluation algorithm. In Section \ref{subsec:termination}, we discuss an example where our RHL evaluation algorithm does not terminate for a non-surjective RHL theory with finite free models. Based on the example, we show how the evaluation algorithm can be modified to terminate for this particular example and also a wide range of other RHL theories. \subsection{Naive Datalog evaluation} \label{subsec:naive-datalog} The naive Datalog algorithm is given by repeating \emph{premise matching} and \emph{conclusion application} phases until a fixed point is reached. The high-level structure of the algorithm can be expressed in Rust-like pseudo-code as follows: \begin{lstlisting}[language=rust,style=colouredRust] fn datalog(structure, sequents) { loop { // 1. Match premises. let matches = []; for sequent in sequents { matches.push(find_matches(structure, sequent.premise)); } // 2. Apply conclusions. let has_changed = false; for (sequent, matches) in sequents.zip(matches) { for match in matches { if apply_conclusion(structure, sequent.conclusion, match) { has_changed = true; } } } // Terminate if applying conclusions had no effect. if !changed { break; } } return structure; } \end{lstlisting} \texttt{find\_matches} is a subprocedure that returns a list of matches of the given formula in a relational structure. Each match is given by a mapping from the set of variables that occur in the formula to elements in the relational structure. A naive implementation of this function enumerates matches using a nested loop join. For example, matches of the formula $\mathrm{Le}(u, v) \land \mathrm{Ge}(w, v) \land \mathrm{Le}(w, x)$ can be enumerated as follows: \begin{lstlisting}[language=rust,style=colouredRust] for (u, v) in structure.rels[Le] { for (w, v1) in structure.rels[Ge] { if v1 != v { continue; } for (w1, x) in structure.rels[Le] { if w1 != w { continue; } matches.push({u, v, w, x}); } } } \end{lstlisting} Each relational atom translates into a nested loop over the corresponding relation in the relational structure, and each sort quantification translates into a loop over the corresponding list of elements. \texttt{apply\_conclusion} is a subprocedure that inserts data into a relational structure according to a provided conclusion and a substitution of variables for elements in the relational structure. It returns a boolean value indicating whether the operation had an effect, i.e. whether at least some of the concluded data was not already present in the relational structure. For surjective sequents without equalities, where every variable in the conclusion is already bound by a match of the premise, we substitute the variables in each relation atom and insert the corresponding tuple into the relational structure. For non-surjective sequents, we first check if the provided substitution of premise variables can be extended to interpretations of the conclusion variables such that the conclusion holds. This can be accomplished using a version of the \texttt{find\_matches} function that takes a list of already fixed interpretations of some of the variables in the formula. If no such extension exists, then we adjoin fresh elements to the relational structure to interpret the unbound conclusion variables and proceed as in the surjective case. \subsection{Congruence closure and naive RHL evaluation} \label{subsec:naive-rhl-evaluation} RHL equality atoms can be reduced to Datalog by introducing binary equality relations on each sort representing inferred equality. However, this \emph{setoid reduction} \citep[Section 3.4]{phl-theory} typically leads to inefficient Datalog programs. Semantically, an inferred equality often reduces the size of the relational structure, since equalities can collapse two previously distinct tuples in a relation into a single tuple. Instead, the setoid reduction leads to significant duplication due to congruence axioms, which assert that all relations must be closed in each argument under inferred equality. Our goal in this section is to rectify this deficiency: Every inferred equality should only shrink the relational structure. Observe that RHL can be used to solve, in particular, the congruence closure problem: Decide which equalities among a list $t_1, \dots, t_n$ of expression follow from a list of equalities among subexpressions. This problem can be encoded in RHL with a theory given by an $(n + 1)$-ary relation symbol $f$ representing the graph of an $n$-ary function symbol that occurs in the $t_i$ and the \emph{functionality axiom} \begin{equation} \label{eq:functionality} f(v_1, \dots, v_n, u) \land f(v_1, \dots, v_n, w) \implies u \equiv w. \end{equation} One then inserts data corresponding to the $t_i$ into a relational structure, imposes equalities among subexpressions, and closes the structure under functionality axioms. We may thus understand congruence closure algorithms as special-purpose evaluation algorithms for functionality RHL sequents, and try to generalize existing congruence closure algorithms to general RHL evaluation. Our inspiration here is the congruence closure algorithm described in \citet{congruence-closure}. Consider the following version of their naive algorithm 2.1, which they attribute to \citet{naive-congruence-closure}. The version presented here is specialized to a single binary function. The input of the algorithm is a list of triples representing the graph of the function. \begin{lstlisting}[language=rust,style=colouredRust] fn congruence_closure(graph) { uf = UnionFind::new(); loop { // 1. Match premises. let eqs = []; for (x0, x1, x2) in graph { for (y0, y1, y2) in graph { if x0 == y0 && x1 == y1 { eqs.push((x2, y2)); } } } // 2. Apply equalities. let has_changed = false; for (lhs, rhs) in eqs { lhs = uf.find(lhs); rhs = uf.find(rhs); if lhs != rhs { uf.union(lhs, rhs); has_changed = true; } } // Terminate if nothing has changed. if !has_changed { break; } // 3. Normalize. graph0 = []; for (x0, x1, x2) in graph { graph0.push(uf.find(x0), uf.find(x1), uf.find(x2)); } graph = graph0; } return uf; } \end{lstlisting} Similarly to the Datalog evaluation algorithm of Section \ref{subsec:naive-datalog}, this congruence closure algorithm repeats a number of steps until it reaches a fixed point. Step 1 corresponds to the \texttt{find\_matches} function for the premise of the functionality axiom \eqref{eq:functionality} of a binary function. Step 2 applies the conclusions $u \equiv v$ for each match that was found in step 1. The algorithm uses a union-find data structure to represent equality. A union-find data structure associates a canonical representative to each equivalence class. Equivalence classes are equal if and only if they have the same canonical representative. Union-find data structures support fast \texttt{find} and \texttt{union} operations in near-constant runtime. The \texttt{find} operation computes the canonical representative in the equivalence class of a given element. The \texttt{union} operation merges the equivalence classes of two canonical representatives. Step 3, which replaces all elements in entries of the \texttt{graph} relation by canonical representatives, does not have a counterpart in Datalog evaluation. Because of the use of the union-find data structure, only comparisons among canonical representatives reflect inferred equality. Note that, instead of the normalization step, we could also consult the union-find data structure in step 1 during premise matching when comparing elements. However, a separate normalization step makes the use of a number of optimizations possible, which we discuss in Section \ref{sec:optimizations}. By incorporating aspects of the congruence closure algorithm that deal with equalities into the naive Datalog evaluation algorithm of Section \ref{subsec:naive-datalog}, we now obtain our \emph{naive RHL evaluation algorithm}: \begin{enumerate} \item In addition to sets of elements and relation, relational structures now contain also union-find data structures for each sort, representing semantic equality. We maintain the invariant that the relations in the relational structure contain canonical representatives only before each iteration of the evaluation algorithm. \item \texttt{apply\_conclusion} handles equalities $u \equiv v$ by merging the equivalence classes of the interpretations of $u$ and $v$ with a call to \texttt{union}. \item Before the end of the loop body, we insert a normalization step, which replaces each element in a tuple in any relation with its canonical representative by calling \texttt{find}. \end{enumerate} Since relational structures store relations as sets without duplication, normalization can potentially reduce the number of elements of relations. The Souffle Datalog engine provides an efficient implementation of equivalence relations using union-find data structures \citep{souffle-union-find}, but it does not implement normalization. \subsection{Detecting termination} \label{subsec:termination} If all sequents in the RHL theory to be evaluated are surjective, then the algorithm we have described in Section \ref{subsec:naive-rhl-evaluation} is guaranteed to terminate. For non-surjective sequents, however, the (weakly) free model over a finite structure need not be finite. In these situations, RHL evaluation can thus not terminate and must instead be aborted after a timeout is exceeded or a desired property has been inferred. Nevertheless, there are RHL theories for which free models over finite relational structures are again finite despite the presence of non-surjective sequents. But even in these situations, the RHL evaluation algorithm we have discussed so far need not terminate. Consider, for example, the following PHL theory that axiomatizes pairs of maps $f : A \rightleftarrows B : g$ such that $g(f(x)) = x$ for all $x \in A$: \begin{enumerate} \item \label{itm:f-total} $x : A \implies f(x) \downarrow$ \item \label{itm:g-total} $y : B \implies g(y) \downarrow$ \item \label{itm:retract} $y = f(x) \implies g(y) = x$ \end{enumerate} Axiom \ref{itm:retract} is lowered to the RHL axiom $f(x, y) \implies g(y, x)$, which is surjective. Axioms \ref{itm:f-total} and \ref{itm:g-total}, however, are non-surjective. Nevertheless, free models of this theory over finite relational structures are always finite, as the following construction of free models proves: Given sets $A, B$ and relations $f \subseteq A \times B, g \subseteq B \times A$, one first identifies elements within $A$ and $B$ according to the functionality axioms for $f$ and $g$ and axiom \ref{itm:retract}. For each element $b \in B$ on which $g$ is not defined, we then adjoin new elements to $A$ and extend $g$ accordingly to a total function by axiom \ref{itm:g-total}. Similarly, we then adjoin for each $a \in A$ on which $f$ is not defined new elements to $B$ and extend $f$ accordingly to a total function by axiom \ref{itm:f-total}. Now every element in $B$ on which $g$ is not defined is of the form $g(a)$ for some unique $a \in A$, so by axiom \ref{itm:retract}, we may extend $g$ to a total function by setting $g(f(a)) = a$. Now $f$ and $g$ are total functions and $g \circ f$ is the identity function on $A$. On first thought, we might thus hope RHL evaluation for this theory to eventually reach a fixed point and terminate. Unfortunately, this is not the case for the RHL evaluation algorithm described in Section \ref{subsec:naive-rhl-evaluation}. Consider the iterations of evaluation with initial relational structure given by $A = \{ a_0 \}$, $B = \emptyset$ and $f$ and $g$ entirely undefined: \begin{enumerate} \item Axiom \ref{itm:f-total} matches on $a_0 \in A$, resulting in a new element $b_0 \in B$ and the tuple $(a_0, b_0) \in f$. \item Axiom \ref{itm:g-total} matches on $b_0 \in B$, and axiom \ref{itm:retract} matches on $(a_0, b_0)$. The former results in a new element $a_1 \in A$ and the tuple $(b_0, a_1) \in g$, while the latter results in the tuple $(b_0, a_0) \in g$. \item \label{itm:infinite-f-adjoining} Axiom \ref{itm:f-total} matches on $a_1 \in A$, and the implicit functionality axiom for $g$ matches on $(b_0, a_0), (b_0, a_1)$. The former results in a new element $b_1 \in B$ and the tuple $(a_1, b_1) \in f$, while the latter results in the equality $a_0 = a_1$. \item \label{itm:infinite-g-adjoining} Axiom \ref{itm:g-total} matches on $b_1 \in B$, and the implicit functionality axiom for $f$ matches on $(a_0, b_0), (a_1, b_1)$. The former results in a new element $a_2 \in A$ and the tuple $(b_1, a_2) \in g$, while the latter results in the equality $b_0 = b_1$. \item All further iterations alternate between variations of iterations \ref{itm:infinite-f-adjoining} and \ref{itm:infinite-g-adjoining}. \end{enumerate} What prevents termination for this theory is thus that the evaluation algorithm matches all sequents at once: Observe that our proof that free models are finite relies on applying sequents in some particular order. For this particular theory, non-termination can be prevented by carefully stating axioms so as to avoid alternating states, for example by replacing axioms \ref{itm:f-total} and \ref{itm:retract} with $x : A \implies g(f(x)) = x$. However, the following variant of the RHL evaluation algorithm described in Section \ref{subsec:naive-rhl-evaluation} terminates on a wide range of RHL theories, including the theory above. One splits the top-level evaluation loop into an inner loop responsible for surjective sequents and an outer loop responsible for non-surjective sequents. The algorithm thus alternates closing the relational structure under all surjective sequents (which always terminates) and a single step of matching and adjoining conclusions of non-surjective sequents. If eventually a non-surjective step does not change the relational structure, then all sequents are satisfied and evaluation terminates. However, I expect there to be theories where termination depends on a particular order in which non-surjective sequents are applied, and then this simple approach does not suffice. \section{Optimizations} \label{sec:optimizations} In this section, we consider optimizations of the naive RHL algorithm that we discussed in Section \ref{subsec:naive-rhl-evaluation}. Most of these techniques are adapted from optimizations that apply to Datalog evaluation, to the congruence closure problem or to both. Implemented together, these optimizations allow us to recover the fast congruence closure algorithm due to \citet{congruence-closure} as a special case of RHL evaluation for functionality axioms. \subsection{Semi-naive evaluation} \label{subsec:semi-naive} Semi-naive evaluation is a common Datalog evaluation optimization. It exploits the observation that matches of premises that were found in the previous iteration need not be considered again because conclusions to these matches have already been adjoined. A match has not been found in a previous iteration if at least one of the atoms in the premise is matched with new data, i.e. data was added only in the last iteration. To distinguish old data from new data, we store for each relation and each sort lists of tuples or elements that were added in the last iteration. An $n$-fold nested loop that matches the premise of a sequent can now be replaced with $n$ copies, where in the $i$th copy the $i$th loop iterates over new data only. For example, the nested loop described in Section \ref{subsec:naive-datalog} enumerating the premise of $\mathrm{Le}(u, v) \land \mathrm{Ge}(w, v) \land \mathrm{Le}(w, x)$ can be replaced by the following three nested loops: \begin{lstlisting}[language=rust,style=colouredRust] for (u, v) in structure.rels_new[Le] { for (w, v1) in structure.rels_all[Ge] { if v1 != v { continue; } for (w1, x) in structure.rels_all[Le] { if w1 != w { continue; } matches.push([u, v, w, x]); } } } for (u, v) in structure.rels_all[Le] { for (w, v1) in structure.rels_new[Ge] { if v1 != v { continue; } for (w1, x) in structure.rels_all[Le] { if w1 != w { continue; } matches.push([u, v, w, x]); } } } for (u, v) in structure.rels_all[Le] { for (w, v1) in structure.rels_all[Ge] { if v1 != v { continue; } for (w1, x) in structure.rels_new[Le] { if w1 != w { continue; } matches.push([u, v, w, x]); } } } \end{lstlisting} Observe that not only the conclusion application phase but also the normalization phase can lead to new data: If an element in the tuple of some relation changes as a result of normalization, then the tuple must be considered as new tuple. The optimized congruence closure algorithm described by \citet{congruence-closure} also implements semi-naive evaluation. Their \texttt{pending} list in algorithm 2.4 corresponds to our set of new tuples in the relation representing the graph of a function. Semi-naive evaluation is well-suited for online applications, where one alternates RHL evaluation and ad-hoc manipulation. If this manipulation consists only of adjoining data, then this data can be adjoined to the same data structures that hold new data during RHL evaluation. The first iteration of subsequent RHL evaluation need then only consider matches for this new data instead of matches in the full relational structure. \subsection{Symmetries} \label{subsec:symmetries} Semi-naive matching of the premise of the functionality axiom for a (binary) function results in two loops: \begin{lstlisting}[language=rust,style=colouredRust] for (x0, x1, x2) in structure.rels_new[f] { for (y0, y1, y2) in structure.rels_all[f] { ... } } for (x0, x1, x2) in structure.rels_all[f] { for (y0, y1, y2) in structure.rels_new[f] { ... } } \end{lstlisting} On the other hand, the congruence closure algorithm described by \citet{congruence-closure} requires a single loop only. Indeed, the second loop is unnecessary due to a \emph{symmetry} in the functionality axiom $f(v_0, v_1, u) \land f(v_0, v_1, w) \implies u \equiv w$. The symmetry is given by swapping $u$ and $w$. This results in a semantically equivalent premise, and swapping the variable has the same effect as swapping the two premise atoms. In such cases, it suffices to consider matches where the first of the two atoms is interpreted with new data. Another example where symmetries can be exploited is the anti-symmetry axiom $\mathrm{Le}(u, v) \land \mathrm{Le}(v, u) \implies u \equiv v$. \subsection{Indices and occurrence lists} \label{subsec:indices} Indices are meant to speed up the nested loops that enumerate matches of premises. The idea is to replace each inner loop by an efficient sublinear lookup with fixed projections. For example, matching the premise $\mathrm{Le}(u, v) \land \mathrm{Le}(v, w)$ of a transitivity axiom can be sped up with an index on the first projection. One thus maintains a map that allows fast lookup of all tuples $(v, w)$ for fixed $v$. The premise can then be enumerated as follows: \begin{lstlisting}[language=rust,style=colouredRust] for (u, v) in structure.rels[Le] { for (_, w) in structure.rels[Le].index[v] { matches.push((u, v, w)); } } \end{lstlisting} Indices are typically realized using variants of ordered search trees or hash maps. They can be maintained over all iterations, or recreated before and disposed immediately after the premise matching phase in each iteration. Recreating indices requires less memory compared to maintaining indices, since only indices needed to match a single sequent need to be stored in memory at any time. Fast Datalog engines often maintain indices over all iterations, which results in faster execution at the expense of increased memory usage. If indices are maintained over all iterations, new tuples must also be inserted into indices during the conclusion application phase. For RHL evaluation, however, index maintenance is problematic due to the normalization phase, in which elements in relations are replaced by their canonical representatives if needed. When using indices, they require normalization, too. We turn again to the fast congruence closure algorithm described by \citet[Section 2.4]{congruence-closure} to implement index normalization efficiently. Their \emph{signature table} is a hash map index that speeds up matching premises of functionality axioms. It is maintained throughout the iterations of the congruence closure algorithm. Efficient normalization is implemented using further data structures which we shall refer to as \emph{occurrence lists}. An occurrence list contains for each equivalence class the expression nodes which have at least one child in the equivalence class. In the normalization step, it suffices to normalize those tuples that occur in occurrence lists of elements that ceased to be canonical representatives. Occurrence lists can be adapted to RHL evaluation as follows. We associate to each canonical representative a list of tuples in any relation in which the canonical representative occurs. Tuples in occurrence lists need not be normalized. In the conclusion application phase, we insert tuples also in the occurrence lists of each element of the inserted tuple. When merging two equivalence classes, we save the element that ceases to be a canonical representative along with its occurrence list for use during the following normalization phase. The occurrence lists of the element that remains a canonical representative is set to the concatenation of the two original occurrence lists. Concatenation can be implemented asymptotically efficiently if occurrence lists are realized as linked lists or rope data structures. In the normalization phase, we remove each tuple in one of the occurrence lists we saved earlier, normalize the tuple and reinsert it into each index. When enforcing an equality during the conclusion phase, the algorithm described in \citet[Section 2.4]{congruence-closure} chooses the element that remains a canonical representative in a way that minimizes the amount of normalization necessary: Thus, the element with longer occurrence lists should remain canonical. This applies directly also to occurrence lists in RHL evaluation. To avoid normalizing tuples that were inserted in the current iteration, the conclusion application phase can be split into an \emph{equality application phase}, where we only consider equalities in conclusions, and a \emph{relation application phase}, where we only consider relation atoms. We then normalize between the equality application phase and the relation application phase. This has the benefit that new tuples need not be normalized directly after insertion. \subsection{Functional projections} We say that the $i$th projection of a relation $r : s_1 \times \dots \times s_n$ in an RHL theory is \emph{functional} if the $i$th projection $x_i$ of each tuple $(x_1, \dots, x_n) \in r$ is uniquely determined by the other components $x_1, \dots, x_{i - 1}, x_{i + 1}, \dots, x_n$ of the tuple. More generally, we can consider a set $I$ of projections which are uniquely determined by the complementary projections. As the name suggests, the functionality axiom for an $n$-ary function asserts that the $(n + 1)$th projection of the graph of the function is functional. Another example are injective functions, where the first $n$ projections of the graph depend functionally on the $(n + 1)$th projection. When indices are maintained on a relation with a functional projection, equality constraints can be generated already during insertion into the index instead of later during the matching phase. For example, if $r$ is an $(n + 1)$-ary relation representing the graph of a function, then an index on the first $n$ arguments can be maintained to match the premise of the functionality axiom. Without consideration of functionality of the $(n + 1)$th projection, we expect the index to allow lookups of \emph{lists} of tuples for fixed value of the first $n$ projections. Due to the functional dependency, we can instead enforce that lookups result into at most one tuple. Whenever a second tuple would be inserted into the index which would violate this property, then we generate equality constraints according to functional projections instead. These equalities are then enforced during the next conclusion application phase. The \emph{signature table} hash map in the efficient congruence closure algorithm described by \citet[Section 2.4]{congruence-closure} can be understood as an index with functional projection optimization. \section{Applications} \label{sec:applications} In this Section, we discuss two applications of RHL and PHL evaluation to the implementation of programming languages: Steensgaard's points-to analysis (Section \ref{subsec:steensgard}) and type inference (Section \ref{subsec:inference}). \subsection{Steensgaard's points-to analysis} \label{subsec:steensgard} Points-to-analysis aims to bound the set of heap objects a variable in a program can point to. The two well-known approaches are due to Andersen and Steensgaard. Both algorithms identify heap objects with their allocation sites, i.e. the program expressions that allocate objects. The Andersen and Steensgaard analyses thus give an over-approximating answer to the question of whether a given variable $x$ can point to an object that was allocated in expression $e$. Both analyses must consider the case where a variable $x$ is a assigned to a variable $y$. Andersen-style analysis computes for each variable $x$ a set of objects $e$ that $x$ can point to such that the following properties hold: \begin{enumerate} \item If there is a statement that assigns an allocating expression $e$ to a variable $x$, then $x$ can point to $e$. \item If a variable $y$ can take the value of a variable $x$ (e.g. as a result of an assignment or a function call), and $x$ can point to $e$, then $y$ can point to $e$. \end{enumerate} A minimal implementation of Andersen-style analysis using Datalog populates input relations $\mathrm{Alloc}(x, e)$ and $\mathrm{Assign}(x, y)$ with data derived from the program source code. The rules above governing the $\mathrm{PointsTo}(x, e)$ relation are then encoded in Datalog as follows: \begin{enumerate} \item $\mathrm{Alloc}(x, e) \implies \mathrm{PointsTo}(x, e)$ \item \label{itm:andersen-subset} $\mathrm{Assign}(x, y) \land \mathrm{PointsTo}(x, e) \implies \mathrm{PointsTo}(y, e)$ \end{enumerate} To summarize, Andersen's algorithm enforces \emph{subset constraints}: If a variable $y$ can take the value of a variable $x$, either through a function call or a direct assignment, then the points-to set of $x$ is a subset of the points-to set of $y$. Steensgaard's algorithm is a less precise but typically faster variation of Andersen's algorithm. It is based on \emph{equality constraints}. Thus if a variable $y$ can take the value of a variable $x$, then Andersen's algorithm \emph{equates} the points-to sets of $x$ and $y$. A direct implementation of Steensgaard's algorithm maintains a union-find data structure on the variables of a program, and a mapping that assigns to each canonical representative a points-to set of heap allocations. The algorithm scans the program source code and performs the following actions: \begin{enumerate} \item For each statement that assigns an allocation expression $e$ to a variable $x$, add $e$ to the points-to set of the canonical representative of $x$. \item If a variable $y$ can take the value of a variable $x$, unify the equivalence classes of $x$ and $y$. The points-to set of the unified equivalence class is the union of the points-to sets of the classes of $x$ and $y$. \end{enumerate} Steensgaard's algorithm is strictly less precise than Andersen's, but it typically requires less memory, since only one points-to set needs to be maintained for every equivalence class of variables. To encode Steensgaard's algorithm in RHL, we can simply replace rule \ref{itm:andersen-subset} above in Andersen's analysis with the rule \begin{equation} \mathrm{Assign}(x, y) \implies x \equiv y \end{equation} to enforce equality constraints. \subsection{Type inference} \label{subsec:inference} Type inference (or type reconstruction) is the task of assigning types to variables and expressions based on their usage in a program fragment. The constraint-based typing algorithm for the simply typed lambda calculus described in \citet[22.3, 22.4]{pierce-types-and-programming-languages} assigns to each term a separate, initially unrestricted type variable. It then collects constraints on type variables according to the usage and definition of the corresponding terms. This is accomplished by considering typing rules and their inverses. For example, based on the application typing rule \begin{equation} \label{eq:application-typing-rule} \inferrule { k : \sigma \rightarrow \tau \and s : \sigma } { k \, s : \tau } \end{equation} we infer the following constraints from a term $t_2 = t_0 \, t_1$: \begin{enumerate} \item \label{itm:app-constraints-first} If $t_0$ has type $S \rightarrow T$, then $t_1$ has type $S$. \item \label{itm:infer-dom-from-arg} Conversely, if $t_1$ has type $S$, then $t_0$ has type $S \rightarrow T$ for some $T$. \item If $t_0$ has type $S \rightarrow T$, then $t_2$ has type $T$. \item \label{itm:infer-cod-from-result} Conversely, if $t_2$ has type $T$, then $t_0$ has type $S \rightarrow T$ for some $S$. \end{enumerate} The implicit existentials in constraints \ref{itm:infer-dom-from-arg} and \ref{itm:infer-cod-from-result} generate new type variables $T$ and $S$, which must be chosen fresh for each instance of the constraint. In the following unification step, the generated constraints are checked for compatibility, and if so the most general substitution of type variables is created that satisfies all the constraints. For example, for the fragment $x \, y$ for variables $x$ and $y$, the algorithm outputs the substitution $[T_0 \mapsto (T_1 \rightarrow T_2), T_1 \mapsto T_1, T_2 \mapsto T_2]$, where $T_0, T_1$ and $T_2$ are the type variables initially assigned to $x, y$ and $x \, y$. This inference procedure can be implemented in PHL as follows. We introduce sorts $\mathrm{Tm}$ of terms and $\mathrm{Ty}$ of types. Each $n$-ary type or term constructor corresponds to an $n$-ary PHL function. For example, function types are encoded as binary function symbol $\mathrm{Fun} : \mathrm{Ty} \times \mathrm{Ty} \rightarrow \mathrm{Ty}$ and application as a function $\mathrm{Tm} \times \mathrm{Tm} \rightarrow \mathrm{Tm}$. To enforce injectivity of type constructors, we introduce inverse functions for each parameter of a type constructor. For function types, we add functions $\mathrm{Dom}: \mathrm{Ty} \rightarrow \mathrm{Ty}$ and $\mathrm{Cod} : \mathrm{Ty} \rightarrow \mathrm{Ty}$ and enforce axioms that $\mathrm{Dom}$ and $\mathrm{Cod}$ are indeed inverses to $\mathrm{Ty}$: \begin{mathpar} \mathrm{Dom}(\kappa) \downarrow \implies \mathrm{Cod}(\kappa) \downarrow \and \mathrm{Cod}(\kappa) \downarrow \implies \mathrm{Dom}(\kappa) \downarrow \\ \sigma = \mathrm{Dom}(\kappa) \land \tau = \mathrm{Dom}(\kappa) \implies \kappa = \mathrm{Fun}(\sigma, \tau) \\ \kappa = \mathrm{Fun}(\sigma, \tau) \implies \mathrm{Dom}(\kappa) = \sigma \land \mathrm{Cod}(\kappa) = \tau \end{mathpar} Thus $\mathrm{Dom}$ and $\mathrm{Cod}$ are defined on the same set of types: Those of the form $\mathrm{Fun}(\sigma, \tau)$ for types $\sigma$ and $\tau$. To detect violations of joined injectivity of type constructors, we introduce a nullary predicate $\bot$ and rules such as \begin{mathpar} \mathrm{Fun}(-, -) \equiv \mathrm{List}(-) \implies \bot() \end{mathpar} for every pair of distinct type constructors, for example function types and list types. We always arrange $\bot$ to be empty before PHL evaluation, so that $\bot$ is inhabited after evaluation if and only if a violation of joined injectivity was inferred. We encode the typing relation $t : \tau$ as a function $\mathrm{TmTy} : \mathrm{Tm} \rightarrow \mathrm{Ty}$ instead of a relation because each term has a unique type. The axiom $t : \mathrm{Tm} \implies \mathrm{TmTy}(t) \downarrow$ enforces that each term has a type. During evaluation, this non-surjective rule introduces a fresh identifier as type of each term if necessary. Finally, we encode term constructors as PHL functions and add axioms according to inference rules and their inverses. For example, the typing rule \eqref{eq:application-typing-rule} of function application results in a PHL function $\mathrm{App} : \mathrm{Tm} \times \mathrm{Tm} \rightarrow \mathrm{Tm}$ and the following PHL axioms governing it, corresponding to the constraints \ref{itm:app-constraints-first} -- \ref{itm:infer-cod-from-result} above: \begin{enumerate} \item $\mathrm{App}(t_0, t_1) \downarrow \land \mathrm{TmTy}(t_0) = \mathrm{Fun}(\sigma, -) \implies \mathrm{TmTy}(t_1) = \sigma$ \item \label{itm:axiom-dom-exists} $\mathrm{App}(t_0, t_1) \downarrow \land \mathrm{TmTy}(t_1) = \sigma \implies \mathrm{Dom}(\mathrm{TmTy}(t_0)) = \sigma$ \item $t_2 = \mathrm{App}(t_0, t_1) \land \mathrm{TmTy}(t_0) = \mathrm{Fun}(-, \tau) \implies \mathrm{TmTy}(t_2) = \tau$ \item \label{itm:axiom-cod-exists} $t_2 = \mathrm{App}(t_0, t_1) \land \mathrm{TmTy}(t_2) = \tau \implies \mathrm{Cod}(\mathrm{TmTy}(t_0)) = \tau$ \end{enumerate} Observe that the conclusions of axioms \ref{itm:axiom-dom-exists} and \ref{itm:axiom-cod-exists} assert that domains and codomains of certain types exist, which together with our axioms for $\mathrm{Dom}$ and $\mathrm{Cod}$ implies that these types are function types. If necessary, fresh type identifiers are adjoined during PHL evaluation by the non-surjective axioms asserting that the $\mathrm{Dom}$ and $\mathrm{Cod}$ functions are defined on the same set of types. With the PHL theory modeling the type system at hand, the full type inference algorithm can now be implemented in three steps: \begin{enumerate} \item Populate a relational structure based on the program fragment: We adjoin a unique term identifier for each term in the program fragment and add entries in the relations representing graphs of term constructors. \item Close the relational structure under the axioms described above. \item If the $\bot$ relation contains an element, output an error. Otherwise, there exists for each type identifier $T$ at most one type constructor $\mu$ and an entry of the form $(T_1, \dots, T_n, T)$ with $T$ as last component in the relation corresponding to $\mu$. Expand each type identifier recursively into a maximal syntax tree according to such entries. Output the maximal syntax tree representing $\mathrm{TmTy}(t)$ for each term $t$ in the input program fragment. \end{enumerate} \section{Conclusion} \label{sec:conclusion} Using Datalog to implement type checking and inference, as we sketched in Section \ref{subsec:inference}, is not a new idea. For example, the blog post \citet{lowering-rust-traits-to-logic} discusses an implementation of Rust's trait system using Datalog. One of the issues raised there is that Rust's associated types require reasoning about equalities of types. A similar issue would arise for Haskell's type families. The solution proposed in the blog post is to combine Datalog with a normalization algorithm. RHL's built-in equality might offer a declarative alternative to normalization that applies also in situations where no strongly normalizing rewrite system is available. Typing rules are typically specified using the notation of natural deduction (e.g. the application typing rule \eqref{eq:application-typing-rule}). Apart from syntactic differences, the structure of such rules and their semantics correspond almost precisely to PHL. Indeed, it is generally understood that many type systems can be encoded as \emph{essentially algebraic theories} \citep[Chapter 3.D]{locally-presentable-and-accessible-categories}, which have the same descriptive strength as epic PHL. From this perspective, program fragments can be identified with elements of the \emph{initial model} of the PHL theory axiomatizing the type system, i.e. the free model over the empty relational structure. These conceptual connections and the example in Section \ref{subsec:inference} suggest that PHL and RHL evaluation have the potential to assume a role in the implementation of programming languages paralleling that of parser generators: Where parser generators produce parsers from grammars, RHL evaluation engines produce type checkers from type systems. \bibliographystyle{abbrvnat} \setcitestyle{authoryear,open={(},close={)}}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Electromagnetically Induced Transparency (EIT) is a quantum interference phenomenon responsible for canceling the absorption of a weak probe laser by applying a strong electromagnetic control field in the same medium. In the last decades, much attention has been paid to study EIT and related phenomena leading to many different applications \cite{Harris1997, Marangos1998, Fleischhauer2005}. In its simplest configuration, two electromagnetic fields excite an ensemble of three-level atoms in $\Lambda$ configuration and the optical properties of the atomic medium are described by the first-order complex electric susceptibility $\chi_{e}^{(1)}$. Its real part Re$\left\{\chi_{e}^{(1)}\right\} $ is related to the index of refraction of the medium, featured by a region of anomalous dispersion leading to very small group velocities \cite{Hau1999, Scully1999, Yashchuk1999}. The zero absorption window is described by the imaginary part Im$\left\{\chi_{e}^{(1)}\right\} $, which allows applications ranging from high-resolution spectroscopy \cite{Marangos1998} to atomic clocks \cite{Vanier2005}. Mechanical and electric analogies of EIT in a $\Lambda$ configuration and their characteristics in equivalent systems have been noted since Alzar et al. \cite{Nussenzveig2002} reproduced the phenomenology of EIT using two coupled harmonic oscillators and RLC circuits. They were inspired by Hammer and Prentiss \cite{Prentiss1988}, who modeled classically the stimulated resonance Raman effect with a set of three coupled classical pendulums. Due to the considerable practical usefulness provided by the classical results, many efforts have been made towards representing EIT-related phenomena in different atomic systems using classical models \cite{Tokman2002, Huang2010, Serna2011, Huang2013}. Its importance has recently grown up even more owing the number of reported classical systems that follow the same dynamics, such as metamaterials \cite{Prosvirnin2008, Soukoulis2009, Bettiol2009, Giessen2009, Soukoulis2011, Chen2012}, cavity optomechanics \cite{Vahala2007, Kippenberg2008, Kippenberg2010, Painter2011}, multiple coupled photonic crystal cavities \cite{Wong2009}, acoustic structures \cite{Sheng2010}, coupled resonant systems \cite{Zhang2013}, and so on. To date, no completely correspondence between the quantum and classical models which yields a direct comparison between the results has been realized. We establish in this work, a one-to-one correspondence between the classical and quantum dynamic variables using two classical coupled harmonic oscillators to model EIT in $\Lambda$ configuration. We also show the role of a cavity mode in the mechanical system to model EIT-like phenomena observed in two coupled cavity modes and in systems comprised by a single two-level atom interacting with a single mode of a resonator considering two configurations, the driven cavity field and the driven atom. The analysis of the probe response for the driven cavity cases reveal that $\left\langle a\right\rangle $ is directly related to the electric susceptibility of the atom-cavity or cavity-cavity systems. The classical correspondence is also established for EIT-like observed in four-level atoms in the inverted-Y and tripod configurations, and for the cavity EIT (CEIT) system, considering three coupled classical harmonic oscillators. For the atomic tripod configuration we compare the classical analog obtained here with the analog published recently \cite{Huang2013}, showing the validity of both for different set of parameters. The analog for the CEIT system is presented for the first time and the result is compared with an experiment performed with $N\sim15$ atoms \cite{Muecke2010}. We show the validity and the limiting conditions to reproduce the quantum results using the classical models. This work can be considerably useful to provide a general mapping of EIT-like systems into a variety of classical systems. \section{Classical analog of EIT in different physical systems using two-coupled harmonic oscillators} Coupled harmonic oscillators are an intuitive model used as close analog for many phenomena, including the stimulated resonance Raman effect \cite{Prentiss1988}, electromagnetic induced transparency \cite{Nussenzveig2002, Tokman2002, Huang2010, Serna2011, Huang2013}, time dependent Josephson phenomena \cite{Zimmerman1971}, adiabatic and nonadiabatic processes \cite{Xiong1988, Romanenko2009}, level repulsion \cite{Brentano1994}, strongly interacting quantum systems \cite{Novotny2010}, one-half spin dynamics \cite{Glaser2003, Devaud2006}, coherent quantum states \cite{McKibben1977, Briggs2012, Eisfeld2012}, among others. EIT and their classical analogs can be obtained when suitable conditions are prescribed. In what follows, we will briefly review some of the EIT-related systems and derive their linear electric susceptibilities from the density matrix formalism. Our focus is to show how the behavior of the electric susceptibility of each atomic system can be reproduced using coupled oscillators, through the concept of mechanical susceptibility. \subsection{The phenomenology of EIT reproduced in two-coupled harmonic oscillators}\label{SecEIT} The phenomenon of EIT occurs in three level atomic systems in $\Lambda$ configuration with two ground states, $|1\rangle$ and $|2\rangle$, and an excited state $|3\rangle$, interacting with two classical coherent fields, probe and control, of frequencies $\omega_{p}$ and $\omega_{c}$, respectively, as illustrated in Fig.\ref{EsquemaEIT}a. The atomic transition $|1\rangle \leftrightarrow|3\rangle$ (frequency $\omega_{31}$) is driven by the probe field with Rabi frequency $2\Omega_{p}$, and the transition $|2\rangle\leftrightarrow|3\rangle$ (frequency $\omega_{32}$) is coupled by the control field with Rabi frequency $2\Omega_{c}$ \cite{Multilevel}. Introducing the electric dipole and rotating-wave approximations, the time-independent Hamiltonian which describes the atom-field interaction in a rotating frame is given by ($\hbar=1$) \cite{Fleischhauer2005} \begin{equation} H=(\Delta_{p}-\Delta_{c})\sigma_{22}+\Delta_{p}\sigma_{33}-(\Omega_{p}\sigma_{31}+\Omega_{c}\sigma_{32}+h.c.),\label{e0} \end{equation} where $\sigma_{ij}=\left\vert i\right\rangle \left\langle j\right\vert,i,j=1,2,3$ are the atomic raising and lowering operators ($i\neq j$), and atomic energy-level population operators ($i=j$). The detunings are given by $\Delta_{p}=\omega_{31}-\omega_{p}$, $\Delta_{c}=\omega_{32}-\omega_{c}$ and $h.c.$ stands for the Hermitian conjugate. The dynamics of the system is obtained by solving the master equation for the atomic density operator ($\rho$) \begin{align} \dot{\rho} & =-i[H,\rho]+\sum\limits_{m=1,2}\Gamma_{3m}(2\sigma_{m3}\rho\sigma_{3m}-\sigma_{33}\rho-\rho\sigma_{33})\nonumber\\ & +\sum\limits_{n=2,3}\gamma_{n}(2\sigma_{nn}\rho\sigma_{nn}-\sigma_{nn}\rho-\rho\sigma_{nn}),\label{e1} \end{align} where $\Gamma_{31}$, $\Gamma_{32}$ are the polarization decay rates of the excited level $|3\rangle$ to the levels $|1\rangle$ and $|2\rangle$, and $\gamma_{2}$, $\gamma_{3}$ the non-radiative atomic dephasing rates of states $|2\rangle$ and $|3\rangle$, respectively. \begin{figure [ptbh] \begin{center} \includegraphics[width=8cm]% {EsquemaEIT.jpg} \caption{(Color online). (a) Schematic energy level diagram of a three-level atom in $\Lambda$ configuration for EIT. It shows two classical electromagnetic fields, probe $(\omega_{p})$ and control $(\omega_{c})$, coupling the transitions $|1\rangle\leftrightarrow|3\rangle$ and $|2\rangle\leftrightarrow|3\rangle$, respectively, and their corresponding detunings. The decay rates are represented by $\gamma_{31}=\Gamma_{31}+\Gamma_{32}+\gamma_{3}$ and $\gamma_{2}$. (b) Coupled damped harmonic oscillators used to reproduce the phenomenology observed in EIT, showing two masses $m_{1}$ and $m_{2}$ displaced from their equilibrium positions by the distances $x_{1}$ and $x_{2}$, respectively, attached to three springs with spring constants $k_{1}% $, $k_{2}$ and $k_{12}$. A driving force of frequency $\omega_{s}$ acts on mass $m_{1}$ and the damping constant of the $j$th harmonic oscillator is represented by $\gamma_{j}$ ($j=1,2$). (c) Classical analog of EIT showing the equivalence of each parameter in the mechanical system. Each harmonic oscillator corresponds to a dipole-allowed transition with electronic dipole moment $\mu_{i3}$, ($i=1,2$).} \label{EsquemaEIT} \end{center} \end{figure} It is assumed that all $N$ atoms contained in a volume $V$ couple identically to the electromagnetic fields and that the medium is isotropic and homogenous. Considering that the atoms do not interact to each other and ignoring local-field effects, the optical response of the medium to the applied probe field $E(t)=E_{p}e^{-i\omega_{p}t}+c.c.$ can be obtained through the expectation value of the atomic polarizability \begin{equation} \mathbf{P}(t)=\chi_{e}^{(1)}\mathbf{E}(t), \label{e2} \end{equation} with $\chi_{e}^{(1)}$ denoting the linear electric susceptibility. The polarization can also be written in terms of the expectation value of the dipole moment operator $\mu$ per unit volume \begin{equation} \mathbf{P}(t)=-\frac{1}{V}\sum\limits_{i=1}^{N}\left\langle e\mathbf{r}_{i}(t)\right\rangle =\frac{N}{V}Tr(\mu\rho). \label{e3} \end{equation} In this way the linear response of the probe beam in the atomic sample can be directly related to the off-diagonal density matrix element $\rho_{31}$, \begin{equation} \chi_{e}^{(1)}(\omega_{p})=\frac{N\left\vert \mu_{13}\right\vert }{VE_{p}}\rho_{31}. \label{SFull} \end{equation} From eqs.\eqref{e0} and \eqref{e1} the full equations of motion for the density matrix are given by \begin{subequations}\label{EvolveEIT} \begin{eqnarray} \dot{\rho}_{31} &=& -i\left\{ \left(\Delta_{p} - i\gamma_{31}\right) \rho_{31} - \Omega_{p}\left(\rho_{11} - \rho_{33} \right)\right\} \nonumber\\ &+& i\Omega_{c}\rho_{21},\\ \nonumber\\ \dot{\rho}_{21} &=& -i\left\{ \left[ \left(\Delta_{p} - \Delta_{c}\right) - i\gamma_{2} \right] \rho_{21} + \Omega_{p} \rho_{23} \right\} \nonumber\\ &+& i\Omega_{c}\rho_{31},\\ \nonumber\\ \dot{\rho}_{23} &=& -i\left\{ \left[-\Delta_{c} - i\left( \gamma_{31} - \gamma_{2}\right)\right] \rho_{23} + \Omega_{p} \rho_{21} \right\} \nonumber\\ &+& i\Omega_{c}\left( \rho_{33} - \rho_{22} \right), \end{eqnarray} \end{subequations} where $\gamma_{31} = \Gamma_{31} + \Gamma_{32} + \gamma_{3}$. As described in detail by Fleischhauer et al. \cite{Fleischhauer2005}, EIT occurs when the population of the system is initially in the ground state $|1\rangle$. The state of zero absorption, referred to as the dark state, is usually attributed to the result of quantum interference between two indistinguishable paths. This state corresponds to $|1\rangle$ if the conditions $\Omega_{p}<<\Omega_{c}$ and $\gamma_{2}<<\gamma_{31}$ are prescribed to yield $\rho_{11}\approx1$ and consequently $\rho_{22} \approx 0$. The state $|3\rangle$ is never populated ($\rho _{33} = 0$) in the dark state. Using these conditions in eqs.\eqref{EvolveEIT}, the steady-state solutions ($\dot{\rho}_{ij}=0$) for $\rho_{21}$ and $\rho_{31}$ can be determined through the equations \begin{subequations}\label{EvolveEITB} \begin{eqnarray} \left(\Delta_{p} - i\gamma_{31}\right) \rho_{31} - \Omega_{c}\rho_{21} &=& \Omega_{p},\\ \nonumber\\ \left( \delta - i\gamma_{2} \right) \rho_{21} - \Omega_{c}\rho_{31} &=& 0, \end{eqnarray} \end{subequations} yielding, \begin{equation} \rho_{31}(\omega_{p})=\frac{\Omega_{p}\left(\delta-i\gamma_{2}\right)}{\left(\Delta_{p}-i\gamma_{31}\right)\left(\delta-i\gamma_{2}\right)-\Omega_{c}^{2}}, \label{e5}% \end{equation} where we have introduced the two-photon detuning $\delta=\Delta_{p}-\Delta_{c}$. Hereafter, the susceptibility stated in eq.\eqref{SFull} will be replaced by a reduced susceptibility that does not depend on the specific details of the physical system. Then, for EIT it reads, \begin{equation} \tilde{\chi}_{e}(\omega_{p})=\frac{VE_{p}}{N\left\vert \mu_{13}\right\vert}\chi_{e}^{(1)}(\omega_{p})=\rho_{31}(\omega_{p}). \label{e6} \end{equation} Thus, the main characteristics of EIT regarding absorption, gain and the control of the group velocity of light in a medium can be obtained from the imaginary and real parts of $\rho_{31}$. Note that the essential features of EIT are derived using a semiclassical model, where it is assumed two classical fields interacting with an atomic ensemble with microscopic coherences treated quantum mechanically. Under the assumption of low atomic excitation ($\rho_{11}\approx1$), which is experimentally justified by choosing an appropriately low pump intensity, implying that $\Omega_{p}<<\Omega_{c}$, effects of atomic saturation are neglected. In this way, the expectation values of the atomic operators $\rho_{ij}=\left\langle \sigma_{ji}\right\rangle $ can be replaced by classical amplitudes. The mechanical model used to demonstrate the classical analog of EIT consists of two coupled, damped harmonic oscillators with one of them driven by a harmonic force $F_{s}(t)=Fe^{-i\left(\omega_{s}t+\phi_{s}\right)}+c.c.$, for $\phi_{s}=0$ and frequency $\omega_{s}$ \cite{Nussenzveig2002}. It is considered two particles $1$ and $2$ with equal masses $m_{1}=m_{2}=m$ and three springs arranged as illustrated in Fig.\ref{EsquemaEIT} b. The two outside spring constants are $k_{1}$ and $k_{2}$. The third spring couples linearly the two particles and its constant spring is $k_{12}$. It is assumed that the whole system moves in only one dimension $x$ and the distances $x_{1}$ and $x_{2}$ measure the displacements of particles $1$ and $2$ from their respective equilibrium positions. The equations of motion for the two masses are \begin{subequations} \begin{eqnarray} m\ddot{x}_{1} &=& -k_{1}x_{1} - \eta_{1}\dot{x}_{1} - k_{12}\left(x_{1} - x_{2}\right) + F_{s}(t)\\ \label{ClassA} m\ddot{x}_{2} &=& -k_{2}x_{2} - \eta_{2}\dot{x}_{2} - k_{12}\left(x_{2} - x_{1}\right) \label{ClassB} \end{eqnarray} \end{subequations} which are usually written as, \begin{subequations}\label{classicA} \begin{align} \ddot{x}_{1} +\omega_{1}^{2}x_{1} + 2\gamma_{1}\dot{x}_{1} - \omega_{12}^{2}x_{2} &= \frac{F_{s}(t)}{m}\\ \ddot{x}_{2} +\omega_{2}^{2}x_{2} + 2\gamma_{2}\dot{x}_{2} - \omega_{12}^{2}x_{1} &= 0 \end{align} \end{subequations} where $\omega_{j}^{2} = \left(k_{j} + k_{12}\right)/m$, $\omega_{12}^{2} = k_{12}/m$ and the damping constant of the $j$th harmonic oscillator is $2\gamma_{j} = \eta_{j}/m$, $j = 1,2$. Assuming that the steady-state solution of equations above has the form $x_{j} = N_{j}e^{-i\omega_{s}t} + c.c.$ we find \begin{subequations}\label{EvolveHO} \begin{eqnarray} \left(-\omega_{s}^2 + \omega_{1}^{2} - 2i\gamma_{1}\omega_{s}\right)N_{1} - \omega_{12}^{2}N_{2} &= \frac{F}{m},\\ \left(-\omega_{s}^2 + \omega_{2}^{2} - 2i\gamma_{2}\omega_{s}\right)N_{2} - \omega_{12}^{2}N_{1} &= 0, \end{eqnarray} \end{subequations} where the complex conjugate solution ($c.c.$) was omitted for simplicity. Note that eqs.\eqref{EvolveHO} for $N_{1}$ and $N_{2}$ have the same struture as eqs.\eqref{EvolveEITB} for $\rho_{31}$ and $\rho_{21}$, respectively. Solving eqs.\eqref{EvolveHO} for the displacement of the driven oscillator $x_{1}\left(t\right)$ and considering $\omega_{s}$ near to the natural oscillation frequencies $\omega_{j}$ ($j=1,2$), so that $\omega_{j}^{2}-\omega_{s}^{2}\approx 2\omega_{j}(\omega_{j}-\omega_{s})$ and $\gamma_{j}\omega_{s} \approx \gamma_{j}\omega_{j}$, we have \begin{equation} x_{1}(t)\simeq\frac{F/\left(2m\omega_{1}\right) \left(\Delta_{2} - i\gamma_{2}\right)}{\left(\Delta_{1}-i\gamma_{1}\right) \left(\Delta_{2}-i\gamma_{2}\right) -\Omega_{12}^{2}}e^{-i\omega_{s}t}+c.c., \label{e7} \end{equation} where we have defined the detunings $\Delta_{j}=\omega_{j}-\omega_{s}$ and the classical coupling rate between particles $1$ and $2$ as $2\Omega_{12}=\omega_{12}^{2}/\sqrt{\omega_{1}\omega_{2}}$, in analogy to the Rabi frequency of the control field ($2\Omega_{c}$). The quantity $F/m\omega_{1}=2\Omega_{s}C_{1}$ has dimension of frequency $(2\Omega_{s})$ times length $(C_{1})$. The first term makes the role of the Rabi frequency of the probe field ($2\Omega_{p}$). Then, eq.(\ref{e7}) can be reduced to the form \begin{equation} x_{1}(t)=C_{1}\rho_{co}e^{-i\omega_{s}t}+c.c., \label{e8} \end{equation} where the dimensionless complex amplitude $\rho_{co}$ is given by \begin{equation} \rho_{co}(\omega_{s})=\frac{\Omega_{s}\left(\Delta_{2}-i\gamma_{2}\right)}{\left( \Delta_{1}-i\gamma_{1}\right) \left(\Delta_{2}-i\gamma_{2}\right) -\Omega_{12}^{2}}. \label{e9} \end{equation} An equation similar to \eqref{e8} can be derived for the atomic system by making $\left\vert \mathbf{r}_{i}(t)\right\vert =x(t)$ in eq.\eqref{e3} for $N=1$ and using eq.\eqref{e2}, eq.\eqref{SFull} and the expression for the applied probe field $E(t) = E_{p}e^{-i\omega_{p}t} + c.c.$, yielding, \begin{equation} x(t)=C_{2}\rho_{31}e^{-i\omega_{p}t}+c.c.=C_{2}\tilde{\chi}_{e}e^{-i\omega_{p}t}+c.c., \label{xc2} \end{equation} where $C_{2}=\left\vert \mu_{13}\right\vert /e$, similarly to $C_{1}$, bears dimension of length. By comparing eq.\eqref{e8} with the first equality of eq.\eqref{xc2} we find the analog $C_{1}\equiv C_{2}$, $\omega_{s}\equiv \omega_{p}$ and $\rho_{co}\equiv\rho_{31}$. The analog is obtained for the steady-state solution of both systems, EIT and coupled oscillators. In Appendix A we used the Hamiltonian formalism to show that the dynamics of the EIT system, given by $\dot{\rho}_{31}$ and $\dot{\rho}_{21}$, is also similar to the dynamics of the classical oscillators. This formalism is also advantageous to obtain a direct definition of the classical pumping rate $\Omega_{s}$ as a function of the parameters of the mechanical system, which is $\Omega_{s}=\sqrt{F^{2}/2m\omega_{1}}$ meaning that $C_{1}=\sqrt{1/2m\omega_{1}}$. In analogy to the EIT system, eq.\eqref{xc2}, we define a reduced mechanical susceptibility $\tilde{\chi}_{M}(\omega_{s})=\rho_{co}(\omega_{s})$. The concept of susceptibility of a mechanical oscillator is widely used in optomechanics \cite{Vahala2007, Kippenberg2008, Kippenberg2010, Painter2011}. Here we are extending this idea to a set of coupled oscillators. By inspection in eqs.\eqref{e5} and \eqref{e9} we see that $\rho_{31}$ and $\rho_{co}$ are perfectly equivalent. Thus, the classical analog of each parameter of EIT in atomic physics can be identified formally in the mechanical system, as summarized in table \ref{TableEIT} and illustrated in Fig.\ref{EsquemaEIT}(c). Each harmonic oscillator is identified as a dipole-allowed transition with electronic dipole moment $\mu_{i3}$ ($i=1,2$). The classical analog for the two-photon detuning $\delta=\Delta_{p}-\Delta_{c}$ is $\Delta_{2}=\Delta_{1}-\Delta_{21}$, where $\Delta_{21}$ accounts for the detuning of the resonant frequencies between oscillator 2 and oscillator 1. It can be obtained readily by setting $k_{2}=k_{1}\pm\Delta k$. The detuning $\Delta_{21}$ is responsible for reproducing the shift observed in the dark state when $\Delta_{c}\neq0$. The atomic transitions of the EIT system are considered to have fixed resonant frequencies $\omega_{31}$ and $\omega_{32}$, meaning that the detuning $\Delta_{c}$ is performed by changing the frequency of the control field $\omega_{c}$. In the mechanical system the equivalent of $\omega_{c}$ is $\omega_{12}$ but the classical detuning $\Delta_{21}$ is performed by changing the spring constants $k_{1}$ or $k_{2}$ and not $k_{12}$. This is because $\omega_{1}$ and $\omega_{2}$ depends on $k_{12}$ in the same way. Then we have to keep $\omega_{12}$ constant by fixing $k_{12}$ and change the resonant frequencies $\omega_{1}$ and $\omega_{2}$ through $k_{1}$ and $k_{2}$ to produce the detuning $\Delta_{21}$. For perfect control field resonance $\Delta_{c}=0$, we have $\delta =\Delta_{p}$, which corresponds to $\Delta_{1}=\Delta_{2}$ in eq.\eqref{e9}, implying that $\omega_{1}=\omega_{2}$ and consequently $k_{1}=k_{2}$ for the coupled oscillators. \begin{table}[th] \caption{Classical analog of EIT using two mechanical coupled harmonic oscillators (2-MCHO).}% \label{TableEIT} \centering \begin{tabular} {c c} \hline\hline EIT $\left(\rho_{31}\right)$ & 2-MCHO $\left(\rho_{co}\right)$\\[1ex] \hline $\Delta_{p}$ & $\Delta_{1}$\\ $\delta$ & $\Delta_{2}$\\ $\Omega_{p}$ & $\Omega_{s}$\\ $\Omega_{c}$ & $\Omega_{12}$\\ $\gamma_{31}$ & $\gamma_{1}$\\ $\gamma_{2}$ & $\gamma_{2}$\\ [1ex] \hline \end{tabular} \end{table} In Fig.\ref{FigCOEIT} we show the imaginary and real parts of the reduced electric susceptibility $\tilde{\chi}_{e}$ vs the normalized probe-atom detuning $\Delta_{p}/\gamma_{31}$ for the EIT system in comparison with its mechanical counterpart $\tilde{\chi}_{M}$, obtained using two coupled oscillators. The parameters in the classical system are set to be the same as in the EIT following the analog presented in Table \ref{TableEIT}, for $\Omega_{p} = 0.02\gamma_{31}$, $\gamma_{2} = 0$, $\Delta_{c} = 0$ and different values of the Rabi frequency of the control field $\Omega_{c}$. For the set of parameters used in Figs.\ref{FigCOEIT}(a) and \ref{FigCOEIT}(b) the EIT condition $\Omega_{p}<<\Omega_{c}$ is not deeply satisfied. Once $\rho_{11}\neq 1$ in these cases, the classical model do not reproduce the atomic result satisfactorily. When the condition is fulfilled, $\rho_{11} \approx 1$, we have perfect equivalence between the classical and semiclassical results, as depicted in Figs.\ref{FigCOEIT}(c) and \ref{FigCOEIT}(d). \begin{figure}[!ht] \includegraphics[width=8.5cm]{FigCOEIT.jpg} \caption{(Color online). Imaginary and real parts of the reduced electric susceptibility $\tilde{\chi}_{e}$ vs the normalized probe-atom detuning $\Delta_{p}/\gamma_{31}$ for the EIT system in comparison with its classical counterpart $\tilde{\chi}_{M}$ for $\Omega_{p}=0.02\gamma_{31}$, $\gamma_{2}=0$, $\Delta_{c}=0$ and different values of the Rabi frequency of the control field (a) $\Omega_{c}=0.02\gamma_{31}$, (b) $0.08\gamma_{31}$, (c) $0.8\gamma_{31}$ and (d) $2.0\gamma_{31}$. For the mechanical system we use the same set of parameters following the analog presented in Table \ref{TableEIT}.} \label{FigCOEIT} \end{figure} If the EIT condition $\Omega_{p}<<\Omega_{c}$ is deeply satisfied the absorption profile of EIT presented in Fig.\ref{FigCOEIT}, for $\gamma_{2} = 0$, remains observable even for nonvanishing $\gamma_{2}$, since the condition $\gamma_{2}<<\gamma_{31}$ is prescribed \cite{Fleischhauer2005}. In this way, the classical model reproduces the atomic system for any set of parameters. If we recall the dressed states analysis for EIT, the dark state, given by the transparency window observed between the two peaks of absorption, is writen as the superposition between the bare ground states $\left|1\right\rangle$ and $\left|2\right\rangle$ and not the excited state $\left|3\right\rangle$. This means that an atom in this state has no probability of absorbing or emitting a photon. The idea of quantum interference process behind the cancelation of absorption in EIT systems is widely described in the literature \cite{Harris1997, Marangos1998, Fleischhauer2005}. When classical analogies for such systems are presented, like the one we are discussing here, many questions arises as to: What physical property is transparent for the coupled oscillators? What is interfering in this system? And most importantly, what is the classical dark state in this case? The first question was already responded by Alzar et. al. \cite{Nussenzveig2002}. They showed that the classical observable related to the EIT absorption profile is given by the real part of the average power absorbed by oscillator 1, owing the application of the harmonic force $F_{s}(t)$, while the dispersive behavior is contained in the real part of the frequency dependence of the amplitude of $x_{1}$. Note that these observables are in fully agreement with our definition of the reduced mechanical susceptibility $\tilde{\chi}_{M}(\omega_{s})=\rho_{co}(\omega_{s})$. The power absorbed by oscillator 1 is given by $P_{s}(t) = F_{s}(t)\dot{x}_{1}(t) = -i\omega_{s}F_{s}(t)x_{1}(t)$. The relation between $P_{s}$ and $\rho_{co}$ is drawn from eq.\eqref{e8} through $x_{1}(t)$. Once $P_{s}$ is multiplied by the imaginary number $i$, the imaginary part of $\rho_{co}$ depicted in Fig.\ref{FigCOEIT} is related to the real part of $P_{s}$. Equation \eqref{e8} also provides a straightforward relation between the dispersive behavior, defined by Alzar, and the real part of $\rho_{co}$ in Fig.\ref{FigCOEIT}, once $\rho_{co}$ is contained in the amplitude of $x_{1}$. In analogy to the dressed states analysis, if we recall the normal modes description for the coupled oscillators system we can answer to the remaining questions readily. All calculations are described in detail in Appendix B. Considering the simplified case where $m_{1,2} = m$, $k_{1,2} = k$ and the definition of the normal coordinates $X_{+}$ and $X_{-}$, which are linear combinations of $x_{1}(t)$ and $x_{2}(t)$, the coupled Hamiltonian \eqref{HamiltA}, described in appendix A, can be written as a combination of two uncoupled forced harmonic oscillators with normal resonance frequencies $\omega_{+} = \sqrt{k/m}$ and $\omega_{-} = \sqrt{\omega^{2}_{+} + 2\omega^{2}_{12}}$. These are the resonance frequencies of the two normal modes of the system, usually named as the symmetric $NM_{(+)}$ and asymmetric $NM_{(-)}$ modes. In $NM_{(+)}$ both masses move in exactly the same way, meaning that the middle spring is never stretched, while in $NM_{(-)}$ the masses move oppositely. This means that any arbitrary motion of the system, like the displacement of oscillator 1 or 2, is a linear combination of those two normal modes. In other words, $x_{1,2}(t)$ can be seen as a superposition of two harmonic motions. The EIT-like profile is observed when the damping forces are considered. In this case the eqs.\eqref{NCX} in Appendix B for the normal modes, become coupled through the damping constants $\gamma_{1}$ and $\gamma_{2}$. Solving for the steady-state solution we find a relationship between the normal coordinates $X_{+}$ and $X_{-}$ which depends on the frequencies of the normal modes $\omega_{\pm}$, the frequency of the force $\omega_{s}$ and the damping constant of oscillator 2, $\gamma_{2}$, see eq.\eqref{relNM}. As we are probing the response of oscillator 1 due to the harmonic force $F_{s}(t)$, the classical dark state is observed when $\omega_{s} = \omega_{1}$, with $\omega_{1}^{2} = \omega_{+}^{2} + \omega_{12}^{2}$. Note that the frequency $\omega_{1}$ sits in the range between $\omega_{+}$ and $\omega_{-}$, which is a region of high probability to occurs interference between the normal modes. As we have discussed the EIT transparency window, which characterizes the dark state, is observed when the conditions $\Omega_{p}<<\Omega_{c}$ and $\gamma_{2}<<\gamma_{31}$ are prescribed. According to Table \ref{TableEIT} the classical analog for these conditions are $\Omega_{s}<<\Omega_{12}$ and $\gamma_{2}<<\gamma_{1}$. Considering $\gamma_{2} \rightarrow 0$, as in Fig.\ref{FigCOEIT}, and $\omega_{s} = \omega_{1}$ we find that $X_{+} = - X_{-}$. Once the displacement of oscillators 1 and 2 are given by $x_{1,2} = \sqrt{2}/2\left(X_{+} \pm X_{-}\right)$, we have $x_{1} = 0$ and $x_{2} \neq 0$ in this case. From eq.\eqref{e8} $x_{1} = 0$ is fulfilled for $\rho_{co} = 0$, as observed in Fig.\ref{FigCOEIT} for zero detuning. Thus, the classical dark state is obtained when oscillator 1 stays stationary while oscillator 2 oscillates harmonically. In other words, oscillator 1 becomes transparent to the effect of the driving force for $\omega_{s} = \omega_{1}$ conducting to zero power absorption, which is a consequence of a destructive interference between the two normal modes $NM_{(\pm)}$ in the displacement of oscillator 1. The first classical condition $\Omega_{s}<<\Omega_{12}$ becomes necessary for small but non-zero $\gamma_{2}$, i.e., $\gamma_{2}<<\gamma_{1}$. In this case the classical dark state remains observable for $k_{12}>>k_{1}$, meaning that $X_{+} \approx - X_{-}$, see eq.\eqref{Nmodes} in Appendix B. From the definitions of $\Omega_{s}$ and $\Omega_{12}$ one can find readily that $\Omega_{s} = F\sqrt{\Omega_{12}/k_{12}}$, showing that the relation between $\Omega_{s}$ and $\Omega_{12}$ also depends on $F$ as expected. Similarly to the atomic system, where the probe field is turned on slowly for the state $\left|1\right\rangle$ evolving into the dark state and decouple from the other states, in the classical system the strength of the force, given by the amplitude $F$, is also very small to guarantee the usual approximation of small oscillations. Then, if $F$ is relatively small and $k_{12}>>k_{1}$ we have the condition $\Omega_{s}<<\Omega_{12}$ for nonvanishing $\gamma_{2}$ but $\gamma_{2}<<\gamma_{1}$. Thus, the conditions to observe the phenomenology of EIT can be completely mapped onto the classical system composed by two coupled damped harmonic oscillators, showing that Im$\left\{\tilde{\chi}_{e}(\omega_{p})\right\}\equiv$ Im$\left\{\tilde{\chi}_{M}(\omega_{s})\right\}$ and Re$\left\{\tilde{\chi}_{e}(\omega_{p})\right\}\equiv$ Re$\left\{\tilde{\chi}_{M}(\omega_{s})\right\}$ since $\Omega_{p}<<\Omega_{c}$ and $\gamma_{2}<<\gamma_{31}$. The similarities obtained between the EIT atomic system and the mechanical coupled oscillators are not surprising. Many aspects of the atom-field interaction can be described by the classical theory of optical dispersion \cite{Christy1972, Lipson2011}. According to this theory systems which can be approximated by two discrete levels are represented as classical harmonic oscillators. Then, the classical picture of a two-level atomic system consists of a massive positive nucleus surrounded by an electron cloud with an equal negative charge. The electron of charge $q$ and mass $m$ is supposed to be bound to the immovable nucleus by a linear restoring force $-kx$, where $x$ is the distance between their centres of mass and charge. For the static case these centres are coincident and the atom has zero dipole moment. The energy loss is introduced phenomenologically as a damping force proportional to velocity $-\eta\dot{x}$. If the atom is disturbed by an electromagnetic field $E$, there is also an applied force on the electron $F_{q}=qE$, and then, the electron cloud oscillates along the centre of mass. Thus, we have an oscillating dipole with dynamics described by the same equation of motion of a forced, damped harmonic oscillator, $m\ddot{x}+\eta\dot{x}+kx=F_{q}$, which is the same obtained previously for the first oscillator if $k_{12}=0$. Once the EIT phenomenon is observed in an ensemble of noninteracting three-level atoms in their ground states, it provides an instructive example of the extension of the classical theory of optical dispersion for multi-level systems. Each atomic transition behaves as a harmonic oscillator which loses energy by some mechanical friction mechanism. If we turn back to the physical analogy between EIT and the classical model reported by Alzar et al. \cite{Nussenzveig2002}, the atom is represented by oscillator 1. According to the classical theory presented previously, this would be correct if the atom has two discrete levels of energy, i.e., only one dipole-allowed transition, which is not the case. As we are dealing with three-level atoms, the correct is to represent each dipole transition as a harmonic oscillator. According to the classical picture for the atom, displayed in Fig.\ref{EsquemaEIT}(c), the dipole transition frequencies $\omega_{31}$ and $\omega_{32}$ correspond to the natural frequencies of particles 1 and 2, respectively. The analog for the control and probe fields are equivalent to those presented in \cite{Nussenzveig2002}, where they are identified by the coupling spring and by the harmonic force acting on particle 1, respectively. For other classical systems, like RLC coupled circuits and acoustic structures, the analog of the EIT absorption is also obtained from the real part of the power absorbed by the pumped oscillator \cite{Nussenzveig2002, Tokman2002, Huang2010, Serna2011, Huang2013}. In what follows the classical analog for different quantum systems are presented using the same configuration for the two mechanical coupled harmonic oscillators model discussed here. \subsection{EIT-like in two coupled optical cavities} Once we can reproduce the phenomenology of EIT with two classical coupled oscillators it is natural to consider the oscillators quantum mechanically and see the consequences of it in the EIT-like phenomenon and its conditions \cite{Ponte2005}. For this end, we used a model consisting of two coupled optical cavities with one of them pumped by a coherent field. The use of optical cavities is convenient because we will show the classical analog for EIT-related phenomena in systems comprised by a single two- or three-level atom coupling a single mode of an optical resonator. The two single electromagnetic modes of frequencies $\omega_{cav}^{(a)}$ and $\omega_{cav}^{(b)}$ of optical resonators $a$ and $b$, respectively, exchange energy with Rabi frequency $2\lambda$. Cavity $a$ is driven by a coherent field (probe) of frequency $\omega_{p}$ and strength $\varepsilon$, as illustrated in Fig.\ref{CoupledCavity}(a). \begin{figure}[!ht] \includegraphics[width=6cm]{CoupledCavity.jpg} \caption{(Color online). (a) Two coupled cavities showing their respective single cavity modes with frequencies $\omega_{cav}^{(a)}$, $\omega_{cav}^{(b)}$ and cavity decay rates $\kappa_{a}$, $\kappa_{b}$. Cavity $A$ is driven by a classical probe field with frequency $\omega_{p}$ and strength $\varepsilon$. The electromagnetic modes exchange energy with Rabi frequency $2\lambda$. (b) Classical analog showing the equivalence for each parameter of the coupled cavity modes in the mechanical system.} \label{CoupledCavity} \end{figure} Introducing the rotating-wave approximation (RWA) and considering identical frequencies $\omega_{cav}^{(a)}=\omega_{cav}^{(b)}=\omega_{cav}$ for simplicity, the time-independent Hamiltonian which describes the cavity-cavity coupling in the probe laser rotating frame is given by \begin{equation} H=\Delta_{cav}\left(a^{\dagger}a+b^{\dagger}b\right) + \lambda\left(ab^{\dagger}+a^{\dagger}b\right) +\varepsilon\left( a+a^{\dagger}\right).\label{e10} \end{equation} Since the cavity modes are quantized, they are expressed in terms of creation ($a^{\dagger},b^{\dagger}$) and annihilation ($a,b$) operators. $\Delta_{cav}=\omega_{cav}-\omega_{p}$ is the probe-cavity detuning. The master equation for the cavity-cavity density operator is \begin{equation} \dot{\rho}=-i[H,\rho]+\sum\limits_{\alpha=a,b}\kappa_{\alpha}(2\alpha \rho\alpha^{\dagger}-\alpha^{\dagger}\alpha\rho-\rho\alpha^{\dagger}\alpha) \label{e11} \end{equation} where $\kappa_{\alpha}$ is the cavity mode decay rate of cavity $\alpha$. The time evolution for the expectation value of the field operators are \begin{subequations}\label{EvoCav} \begin{align} \left\langle \dot{a}\right\rangle &= -i\left\{ \left(\Delta_{cav} - i\kappa_{a}\right)\left\langle a\right\rangle + \lambda\left\langle b\right\rangle + \varepsilon \right\},\\ \nonumber\\ \left\langle \dot{b}\right\rangle &= -i\left\{ \left(\Delta_{cav} - i\kappa_{b}\right)\left\langle b\right\rangle + \lambda\left\langle a\right\rangle \right\}, \end{align} \end{subequations}\label{} which exhibits essentially the same structure as eqs.\eqref{HRhosB}, Appendix A, for $\dot{\rho}_{\alpha}$ and $\dot{\rho}_{\beta}$, respectively, in the description of the dynamics of the coupled oscillators system. Once the cavity mode $a$ absorbs photons from the pumping field and communicates them to cavity $b$, through the coupling $\lambda$, we represent the probe response of the cavity-cavity system as a reduced electric susceptibility given by the expectation value of the driven cavity field, i.e., $\tilde{\chi}_{CC}(\omega_{p})=\left\langle a\right\rangle $. Note that it is precisely what was done for the EIT medium, where $\tilde{\chi}_{e}(\omega_{p})=\rho_{31}$. A formal correspondence between $\rho_{31}$ in atomic physics and the intracavity field $\left\langle a\right\rangle$ was already pointed out by Stefan Weiss et al. in their work about optomechanically induced transparency \cite{Kippenberg2010}. The steady state solutions of the expectation value of field operators in eqs.\eqref{EvoCav} provide the solution for the intracavity field of cavity $a$: \begin{equation} \left\langle a\right\rangle =\frac{-\varepsilon\left( \Delta_{cav}-i\kappa_{b}\right)}{\left(\Delta_{cav}-i\kappa_{a}\right) \left(\Delta_{cav}-i\kappa_{b}\right) -\lambda^{2}}, \label{e12} \end{equation} which is identical to the reduced mechanical susceptibility $\tilde{\chi}_{M}(\omega_{s})=\rho_{co}$ obtained for the two coupled harmonic oscillators in eq.\eqref{e9} for $\Delta_{1}=\Delta_{2}=\Delta_{s}$. The negative signal observed in eq.\eqref{e12} can be reproduced from the classical equations by considering the phase $\phi_{s}=\pi$ in the applied force on oscillator 1, which is equivalent to make $-F$ in eq.\eqref{e9}. Once it is considered only one force in the classical analog the phase is not relevant. Nonetheless it becomes important for atomic systems with more than three-levels, like in the four-level tripod configuration we show afterwards, in which the classical analog is obtained by considering two oscillating forces out of phase by $\pi$. The classical analog of each parameter of the coupled cavity modes is summarized in table \ref{TableEITCC}. The cavity EIT-like condition is given by $\varepsilon<< \lambda$ and $\kappa_{b} << \kappa_{a}$ and the classical analog is obtained for any set of parameters. \begin{table}[th] \caption{Classical analog of EIT-like in two coupled cavity modes (EIT-CCM) using two mechanical coupled harmonic oscillators (2-MCHO).} \label{TableEITCC} \centering \begin{tabular} {c c} \hline\hline EIT-CCM $\left(\left\langle a\right\rangle \right) $ & 2-MCHO $\left(\rho_{co}\right)$ \\[1ex] \hline $\Delta_{cav}$ & $\Delta_{s}$\\ $\varepsilon$ & $\Omega_{s}$\\ $\lambda$ & $\Omega_{12}$\\ $\kappa_{a}$ & $\gamma_{1}$\\ $\kappa_{b}$ & $\gamma_{2}$ \\[1ex] \hline \end{tabular} \end{table} The agreement between the cavity-field and oscillator-force responses is somehow expected. In the quantum theory of radiation \cite{Scully1997} a general multimode field is represented by a collection of harmonic oscillators, one for each mode. Then, the single mode of the electromagnetic field of cavity $a$ or $b$ is dynamically equivalent to a simple harmonic oscillator. Once we have two coupled cavity modes, naturally it will be equivalent to two coupled oscillators. Narducci \textit{et al.} \cite{Narducci1968} showed that differences in the dynamics of two coupled quantum oscillators may arise between the approximated Hamiltonian given by eq.\eqref{e10} and its exact solution, where the counter rotating-wave terms $a^{\dagger}b^{\dagger}$ and $ab$ are considered. They established the limits of validity of the RWA in terms of the strength of coupling $\lambda$. Our results show that, if the RWA is assumed to be valid, the quantum dynamics of two coupled cavity modes can be reproduced by the classical dynamics of two coupled harmonic oscillators. Thus, to obtain the classical analog for systems which involve a cavity mode, we can represent it as a harmonic oscillator with natural frequency $\omega_{cav}$, similarly to an atomic dipole-allowed transition in the low atomic excitation condition. The result obtained in eq.\eqref{e12} goes beyond than the perfect agreement between quantum and classical models. It opens the possibility of a physical interpretation for the expectation value of the photon annihilation operator $\left\langle a\right\rangle$, showing that it is directly related to the electric susceptibility of a cavity mode. In what follows we show that this interpretation can also be used for systems comprised by two- and three-level atoms interacting with a single cavity mode driven by a coherent field. \subsection{EIT-like in two-level atom coupled to an optical cavity mode} The absorption spectrum of EIT is also observed when a single two-level atom is coupled to a single cavity mode. This effect was predicted by Rice and Brecha \cite{Brecha1996} and termed as cavity induced transparency (CIT). They found that under specific conditions an atom-cavity transmission window, usually referred to as intracavity dark state, arises as a consequence of quantum interference between two absorption paths and not as a result of vacuum-Rabi splitting. They showed the analogous in the weak-probe limit considering the driven cavity and the driven atom cases. We will examine both configurations and show their classical equivalent using two coupled oscillators. First we consider the driven cavity case. The system is comprised of a single atom with two energy levels, $\left\vert g\right\rangle$ and $\left\vert e\right\rangle$, coupled to a single electromagnetic mode of frequency $\omega_{cav}$ of an optical resonator. The cavity is driven by a coherent field (probe) with frequency $\omega_{p}$ and strength $\varepsilon_{c}$. The atomic transition $|g\rangle\leftrightarrow|e\rangle$ (frequency $\omega_{0}$) is coupled by the cavity mode with vacuum Rabi frequency $2g$. The time-independent Hamiltonian which describes the atom-field coupling in a rotating frame is obtained using the driven Jaynes-Cummings model \begin{equation} H = \Delta_{0}\sigma_{ee}+\Delta_{c}a^{\dagger}a+g\left(a\sigma_{eg}+a^{\dagger}\sigma_{ge}\right) + \varepsilon_{c}\left(a+a^{\dagger}\right), \label{e13} \end{equation} with detunings given by $\Delta_{0}=\omega_{0}-\omega_{p}$, $\Delta_{c}=\omega_{cav}-\omega_{p}$. The master equation for the atom-cavity density operator is \begin{align} \label{e14} \dot{\rho} & = -i[H,\rho] + \kappa(2a\rho a^{\dagger} - a^{\dagger}a\rho - \rho a^{\dagger}a) \nonumber\\ & + \Gamma_{eg}(2\sigma_{ge}\rho\sigma_{eg} - \sigma_{ee}\rho - \rho\sigma_{ee}) \nonumber\\ & + \gamma_{e}(2\sigma_{ee}\rho\sigma_{ee} - \sigma_{ee}\rho - \rho\sigma_{ee}), \end{align} where $\kappa$ is the cavity-field decay rate, $\Gamma_{eg}$ the polarization decay rate of the excited level $|e\rangle$ to the level $|g\rangle$, and $\gamma_{e}$ the non-radiative atomic dephasing rate of state $|e\rangle$. By using the commutation relation $\left[ a,a^{\dagger}\right] =1$ and considering perfect atom-cavity resonance $\omega_{0}=\omega_{cav}$, implying that $\Delta_{0}=\Delta_{c}$, the time evolution of the expected values of the atomic and field operators are given by \begin{subequations}\label{e15} \begin{align} \left\langle \dot{a}\right\rangle &= -i\left\{ \left(\Delta_{c} - i\kappa\right)\left\langle a\right\rangle + g\left\langle \sigma_{ge}\right\rangle + \varepsilon_{c} \right\},\label{e15a}\\ \left\langle \dot{\sigma}_{ge}\right\rangle &= -i\left\{ \left(\Delta_{c} - i\gamma_{eg}\right) \left\langle \sigma_{ge}\right\rangle - g\left\langle a\right\rangle \left\langle \sigma_{z} \right\rangle \right\}, \label{e15b} \end{align} \end{subequations} where $\gamma_{eg}=\Gamma_{eg}+\gamma_{e}$ and $\left\langle \sigma_{z}\right\rangle =\left\langle \sigma_{ee}\right\rangle -\left\langle\sigma_{gg}\right\rangle $. The closed set of coupled equations above are obtained by using a semiclassical approximation \cite{Ren1994}, which consists of factoring joint operator moments $\left\langle a\sigma\right\rangle \rightarrow\left\langle a\right\rangle \left\langle \sigma\right\rangle $. Thereby, the cavity field is described by a complex amplitude $\left\langle a\right\rangle = \alpha$ rather than a quantum mechanical operator. The EIT-like phenomenon in this system is observed when the Rabi frequency of the cavity field $g\left\langle a\right\rangle _{max}$ is large compared to the Rabi frequency of the probe field, $\varepsilon_{c}<<g\left\langle a\right\rangle _{max}$, and also when $\gamma_{eg}<<\kappa$. The average $\left\langle a\right\rangle_{max}=\varepsilon_{c}/\left( \Delta_{c}-i\kappa\right)$ is the maximum value of $\left\langle a\right\rangle$ in the absence of atoms $(g=0)$. As we have seen in the previous section, the optical response of the atom-cavity medium is proportional to the expectation value of the cavity field $\left\langle a\right\rangle $, once the cavity mode is pumped weakly by the probe field. Then, we will represent the probe response as an atom-cavity reduced susceptibility $\tilde{\chi}_{AC}(\omega_{p})=\left\langle a\right\rangle $. The real part of $\tilde{\chi}_{AC}$ is related to the absorption spectrum of the system and its imaginary part to the phase of the outgoing light field of the cavity. In the steady state, $\dot{\rho}=0$, the equations above give for the expectation value of the photon annihilation operator, \begin{equation} \left\langle a\right\rangle =\frac{-\varepsilon_{c}\left(\Delta_{c}-i\gamma_{eg}\right)}{\left(\Delta_{c}-i\kappa\right) \left(\Delta_{c}-i\gamma_{eg}\right) + g^{2}\left\langle \sigma_{z}\right\rangle}.\label{e16} \end{equation} If $\left\langle \sigma_{z}\right\rangle =-1$, $\left\langle a\right\rangle$ becomes identical to the reduced mechanical susceptibility $\tilde{\chi}_{M}(\omega_{s})=\rho_{co}$, see eq.\eqref{e9}. Mathematically, $\left\langle\sigma_{z}\right\rangle =-1$ is the limit to reach low atomic excitation, meaning that the probe field is so weak that we can consider only the zero- and one-photon states ($\left\vert 0\right\rangle ,\left\vert 1\right\rangle$) of the cavity mode. As illustrated in Fig.\ref{AtomCavity}(a), the atom-field system will be limited to the first splitting of the dressed states which forms the anharmonic Jaynes-Cummings ladder. \begin{figure}[!ht] \includegraphics[width=8cm]{AtomCavity.jpg} \caption{(Color online). (a) \textit{Top:} Single two-level atom with resonance frequency $\omega_{0}$ and atomic polarization decay rate $\gamma_{eg}$, interacting with a single mode of an optical resonator with frequency $\omega_{cav}$ and cavity decay rate $\kappa$. The atom-field dipole coupling is described by the vacuum Rabi frequency $2g$. A classical probe field with frequency $\omega_{p}$ and strength $\varepsilon$ pumps either the cavity or the atom. \textit{Bottom:} First doublet of dressed-states of the Jaynes-Cummings ladder as a result of the coupling between the bare cavity ($\left\vert 0\right\rangle ,\left\vert 1\right\rangle $) and the bare atom ($\left\vert g\right\rangle ,\left\vert e\right\rangle $). (b) and (c) show the atom-field classical analogs for the driven cavity and driven atom cases, respectively.} \label{AtomCavity} \end{figure} The atom-field classical analog for the driven cavity case is shown in Fig.\ref{AtomCavity}(b) and each parameter is identified as in table \ref{ACsystems}. It is also interesting to make comparisons between the original EIT-$\Lambda$ scheme and other quantum systems. In this case, the cavity makes the role of the atomic transition $|1\rangle\leftrightarrow |3\rangle$ and the atom represents the transition $|2\rangle\leftrightarrow |3\rangle$, see Figs.\ref{EsquemaEIT}(a) and \ref{EsquemaEIT}(c). Figure \ref{TLPC} shows the imaginary and real parts of the reduced susceptibility $\tilde{\chi}_{AC}(\omega_{p})$ vs the normalized probe-cavity detuning $\Delta_{c}/\kappa$ for different set of parameters in comparison with its classical analog $\tilde{\chi}_{M}(\omega_{s})$. The full quantum atom-cavity description is solved for the steady state of $\rho$ following the method presented in \cite{Tan1999}, where the cavity field Fock basis is truncated according to the probe strength. In Figs.\ref{TLPC}(a) and \ref{TLPC}(b) the EIT-like condition $\varepsilon_{c}<<g\left\langle a\right\rangle_{max}$ is not deeply satisfied, showing that the intracavity dark-state $\left\langle a\right\rangle =0$ for $\Delta_{c}=0$ is not observed, differently for its classical counterpart once $\gamma_{eg} \equiv \gamma_{2} = 0$, see Appendix B. When such condition is fulfilled the results show perfect agreement even for nonvanishing $\gamma_{eg}$, like in Figs.\ref{TLPC}(c) and \ref{TLPC}(d), since $\gamma_{eg}<<\kappa$. \begin{figure}[!ht] \includegraphics[width=8.5cm]{TLPC.jpg} \caption{(Color online). Imaginary and real parts of the reduced atom-cavity susceptibility $\tilde{\chi}_{AC}$ vs the normalized probe-cavity detuning $\Delta_{c}/\kappa$ for the two-level atom interacting with a single mode of a driven optical cavity in comparison with its mechanical analog $\tilde{\chi}_{M}$. The parameters are (a) $\varepsilon_{c}=0.02\kappa$, $g=0.02\kappa$, $\gamma_{eg}=0.0$, (b) $0.5\kappa$, $1.0\kappa$, $0.0$, (c) $0.02\kappa$, $0.8\kappa$, $0.01\kappa$ and (d) $0.02\kappa$, $2.0\kappa$, $0.01\kappa$. The classical results were obtained using the same set of parameters following the analog depicted in table \ref{ACsystems}.} \label{TLPC} \end{figure} As we have mentioned the condition $\left\langle \sigma_{z}\right\rangle =-1$ in eq.\eqref{e16} means the atom-cavity field can be described by the first doublets of dressed-states of the Jaynes-Cummings ladder, see Fig.\ref{AtomCavity}(a), regardless the atom-cavity system being considered in the strong coupling regime $g>>(\gamma_{eg},\kappa)$, like in Fig.\ref{TLPC}(d). Thus, the quantum atom-field correlations can be completely neglected and then, atom and cavity field can be treated in the same footing as harmonic oscillators. In ref.\cite{Rempe2014} the authors used the full classical result, given by eq.\eqref{e16}, to analyze experimentally the measurement of antiresonances in a strongly-coupled atom-cavity system by using heterodyne detection. The aspects of EIT-like phenomenon regarding the spectrum of absorption obtained from the imaginary part of $\left\langle a\right\rangle $, can also be observed through the calculation of cavity transmission. It is provided by the average photon number $\left\langle a^{\dagger}a\right\rangle $. Once we have the classical analog for $\left\langle a\right\rangle \equiv\rho_{co}$, one can see readily that $\left\langle a^{\dagger}a\right\rangle \equiv \rho^{\ast}_{co}\rho_{co}$. For the driven atom case, the probe field with strength $\varepsilon_{0}$ pumps the atom instead of the cavity mode. For this system, the time-independent Hamiltonian in a rotating frame reads \begin{equation} H = \Delta_{0}\sigma_{ee}+\Delta_{c}a^{\dagger}a+g\left( a\sigma_{eg}+a^{\dagger}\sigma_{ge}\right) +\varepsilon_{0}\left(\sigma_{eg}+\sigma_{ge}\right). \label{e17} \end{equation} As before we consider atom and cavity on resonance $\omega_{0}=\omega_{cav}$, then $\Delta_{c}=\Delta_{0}$, where $\Delta_{0}=\omega_{0}-\omega_{p}$ is the probe-atom detuning. Once the probe field couples directly to the atom, the probe absorption is related to the density matrix element $\rho_{eg}=\left\langle \sigma_{ge}\right\rangle$, in analogy with $\rho_{31}$ in eq.\eqref{SFull}. Then, the atom-cavity reduced susceptibility is represented by $\tilde{\chi}_{AC}(\omega_{p})=\left\langle \sigma_{ge}\right\rangle $. Using the master equation (\ref{e14}) to obtain the time evolution for the atomic and field operators, we solve for the expectation value of the lowering atomic operator in the steady state, \begin{equation} \left\langle \sigma_{ge}\right\rangle =\frac{\varepsilon_{0}\left\langle\sigma_{z}\right\rangle \left( \Delta_{0}-i\kappa\right)}{\left(\Delta_{0}-i\gamma_{eg}\right) \left(\Delta_{0}-i\kappa\right)+g^{2}\left\langle \sigma_{z}\right\rangle }, \label{ACPA} \end{equation} which is also identical to the mechanical reduced susceptibility $\tilde{\chi}_{M}=\rho_{co}$ for $\left\langle \sigma_{z}\right\rangle =-1$. Note that eq.\eqref{e16} can be recovered from eq.\eqref{ACPA} by changing $\gamma_{eg}\leftrightarrow\kappa$. Thus, the first EIT-like condition $\varepsilon_{0}<<g\left\langle a\right\rangle_{max}$ remains the same and the second is now switched to $\kappa<<\gamma_{eg}$. The classical analog for this system is illustrated in Fig.\ref{AtomCavity}(d) and each atom-cavity parameter is identified classically in table \ref{ACsystems}. Differently from Figs.\ref{TLPC}(a) and \ref{TLPC}(b), the dark state is observed in the driven atom for both, classical and quantum responses. Like in the original EIT configuration presented in Fig.\ref{FigCOEIT}, the maximum absorption peaks in the quantum system decreases when the condition $\varepsilon_{0} << g\left\langle a\right\rangle_{max}$ is not deeply satisfied, meaning that the approximation $\left\langle \sigma_{z}\right\rangle = -1$ is not valid. The dissipative rates $\gamma_{eg}$ and $\kappa$ for the driven cavity ($\gamma_{eg} << \kappa$) and driven atom ($\kappa<< \gamma_{eg}$) cases, respectively, make the role of the non-radiative atomic dephasing rate of state $|2\rangle$, $\gamma_{2}$, in the EIT system. If those parameters are relatively large the intracavity dark state will be no longer perfect \cite{Fleischhauer2005}. Next sections are dedicated to show the classical analog for atomic systems with more than three-levels of energy using three coupled harmonic oscillators. \begin{table}[ptb] \caption{Classical analog of EIT for different quantum systems using two mechanical coupled harmonic oscillators (2-MCHO). We present the analogs for the three-level atom in $\Lambda$ configuration (EIT-$\Lambda$), two-coupled cavity modes (EIT-CCM) and two-level atom-cavity systems for the driven cavity (EIT-DC) and driven atom (EIT-DA) cases.} \label{ACsystems} \begin{center} \begin{tabular}{c c c c c} \hline\hline EIT-$\Lambda$ & EIT-CCM & EIT-DC & EIT-DA & 2-MCHO\\ $\rho_{31}$ & $\left\langle a\right\rangle $ & $\left\langle a\right\rangle$ & $\left\langle \sigma_{ge}\right\rangle$ & $\rho_{co}$ \\[1ex] \hline $\Delta_{p}$ & $\Delta_{p}$ & $\Delta_{c}$ & $\Delta_{0}$ & $\Delta_{s}$\\ $\Omega_{p}$ & $\varepsilon$ & $\varepsilon_{c}$ & $\varepsilon_{0}$ & $\Omega_{s}$\\ $\Omega_{c}$ & $\lambda$ & $g$ & $g$ & $\Omega_{12}$\\ $\gamma_{31}$ & $\kappa_{a}$ & $\kappa$ & $\gamma_{eg}$ & $\gamma_{1}$\\ $\gamma_{2}$ & $\kappa_{b}$ & $\gamma_{eg}$ & $\kappa$ & $\gamma_{2}$ \\[1ex] \hline \end{tabular} \end{center} \end{table} \section{Classical analog of EIT in different physical systems using three-coupled harmonic oscillators} Now we show how to represent mechanically the EIT-related phenomena observed in four-level atoms in the inverted-Y, tripod and cavity EIT configurations. As we are adding an atomic allowed transition, coupled by a laser field, to the original atomic three-level EIT system, we have to add their classical equivalent in the mechanical system. Then, the mechanical configuration is now composed by three coupled harmonic oscillators as shown in Fig.\ref{TMHCO}. Hereafter we will follow the same reasoning and notation used for the two coupled oscillators described previously. Considering the general case, where each particle is driven by a coherent force $F_{js}(t)=F_{j}e^{-i(\omega_{s}t+\phi_{s})}+c.c.$ $(j=1,2,3)$ and assuming the solutions $x_{j}=N_{j}e^{-i\omega_{s}t}+c.c.$, the equations of motion on the three masses give rise to the following equations: \begin{subequations} \label{CETL}% \begin{align} \left(-\omega_{s}^{2}+\omega_{1}^{2}-2i\gamma_{1}\omega_{s}\right)N_{1}-\omega_{12}^{2}N_{2}-\omega_{13}^{2}N_{3} & = \frac{F_{1}}{m}e^{-i\phi_{1}}, \label{CETLA}\\ \left(-\omega_{s}^{2}+\omega_{2}^{2}-2i\gamma_{2}\omega_{s}\right)N_{2}-\omega_{12}^{2}N_{1} & = \frac{F_{2}}{m}e^{-i\phi_{2}}, \label{CETLB}\\ \left(-\omega_{s}^{2}+\omega_{3}^{2}-2i\gamma_{3}\omega_{s}\right)N_{3}-\omega_{13}^{2}N_{1} & = \frac{F_{3}}{m}e^{-i\phi_{3}}, \label{CETLC} \end{align} \end{subequations} where $\omega_{1}^{2}=\left(k_{1}+k_{12}+k_{13}\right)/m$, $\omega_{2}^{2}=\left(k_{2}+k_{12}\right)/m$, $\omega_{3}^{2}=\left(k_{3}+k_{13}\right)/m$, $\omega_{12}^{2}=k_{12}/m$, $\omega_{13}^{2}=k_{13}/m$ and $\phi_{j}$ ($j=1,2,3$) the respective phases. As before we consider identical masses $m_{1}=m_{2}=m_{3}=m$ and frequencies $\omega_{j}$ ($j=1,2,3$) near to $\omega_{s}$, implying that the approximations $\omega_{j}^{2}-\omega_{s}^{2}\approx2\omega_{j}(\omega_{j}-\omega_{s})$ and $\gamma_{j}\omega_{s} \approx \gamma_{j}\omega_{j}$ can be used and the corresponding detunings $\Delta_{j}=\omega_{j}-\omega_{s}$ properly defined. As before we have omitted the complex conjugate solution ($c.c.$) for simplicity. The mechanical representation of the atomic systems we are about to show are more complicated owing the amount of dipole transitions and coupling fields. Depending on the atomic configuration, we will choose which particle or particles in the classical system are driven by the corresponding forces $F_{js}(t)$. The collective motion of the system for the configuration presented in Fig.\ref{TMHCO} is described by three normal modes, owing the addition of the third mass. Considering the simple case, where $k_{i} = k$ ($i=1,2,3$) and $k_{1j} = k_{\alpha}$ with $\omega_{1j}^{2} = \omega^{2} = k_{\alpha}/m$ ($j=2,3$), the resonance frequencies are $\omega_{0} = \sqrt{k/m}$, $\omega_{+} = \sqrt{\omega_{0}^{2} + \omega^{2}}$ and $\omega_{-} = \sqrt{\omega_{0}^{2} + 3\omega^{2}}$, which are the frequencies of the normal modes $NM_{(0)}$, $NM_{(+)}$ and $NM_{(-)}$, respectively. The modes $NM_{(0)}$ and $NM_{(-)}$ are similar to the two normal modes described in Sec.\ref{SecEIT}. In $NM_{(0)}$ the three masses move in phase while in $NM_{(-)}$, $m_{1}$ moves oppositely to $m_{2}$ and $m_{3}$. In the third mode, $NM_{(+)}$, $m_{1}$ stays stationary while $m_{2}$ and $m_{3}$ oscillate harmonically exactly out of phase with each other. The analysis performed in Appendix B can be extended to the present case by defining the normal coordinates $X_{0}$, $X_{+}$ and $X_{-}$, which are proportional to $x_{1} + x_{2} + x_{3}$, $x_{2} - x_{3}$ and $x_{1} - x_{2} - x_{3}$, respectively, meaning that any arbitrary motion of the system is a superposition of those three normal modes. The classical dark state is defined according to the EIT-like conditions for each system. \begin{figure}[!ht] \includegraphics[width=5cm]{TMHCO.jpg} \caption{(Color online). Mechanical model comprised by three coupled damped harmonic oscillators used to reproduce the EIT-related phenomenology observed in multi-level atomic systems. It consists of three masses $m_{1}$, $m_{2}$ and $m_{3}$ attached to five springs with constant springs $k_{1}$, $k_{2}$, $k_{3}$ for the outside springs and $k_{12}$, $k_{13}$ for the coupling springs. For the general case, a driving force $F_{js}(t)$ of frequency $\omega_{s}$ acts on mass $m_{j}$ and the damping constant of the $j$th harmonic oscillator is represented by $\gamma_{j}$ ($j=1,2,3$).} \label{TMHCO}% \end{figure} \subsection{EIT in four-level atoms in the inverted-Y configuration} The effect of two or more electromagnetic fields interacting with multi-level atomic systems has been extensively explored theoretically and experimentally in recent years \cite{Xiao2003}. The absorption spectrum of a variety of four-level atomic systems exposed to three laser fields is characterized by a double dark resonance. This effect is named as double EIT. The four-level atom in the inverted-Y configuration can be seen as a three-level atom in $\Lambda$ configuration, composed by the states $\left\vert 1\right\rangle $, $\left\vert 2\right\rangle $ and $\left\vert 3\right\rangle $, plus a second excited state $\left\vert 4\right\rangle $, as shown in Fig.\ref{Yinvertido}(a). Transitions $\left\vert 1\right\rangle \leftrightarrow\left\vert 3\right\rangle $ and $\left\vert 2\right\rangle\leftrightarrow\left\vert 3\right\rangle$ interact with the probe and control fields as in the usual three-level $\Lambda$ type. A third coupling field of frequency $\omega_{r}$ and Rabi frequency $2\Omega_{r}$, named as pumping field, couples the transition $\left\vert 3\right\rangle \leftrightarrow \left\vert 4\right\rangle$. \begin{figure}[!ht] \includegraphics[width=7.5cm]{Yinvertido.jpg} \caption{(Color online). (a) Schematic energy level diagram of a four-level atom in the inverted-Y configuration, showing three classical electromagnetic fields, probe $(\omega_{p})$, control $(\omega_{c})$ and pump $(\omega_{r})$, coupling the transitions $|1\rangle\leftrightarrow|3\rangle$, $|2\rangle\leftrightarrow |3\rangle$ and $|3\rangle\leftrightarrow|4\rangle$, respectively, and their corresponding detunings. The atomic decay rates are represented by $\gamma_{31}=\Gamma_{31}+\Gamma_{32}+\gamma_{3}$, $\gamma_{43}=\Gamma_{43}+\gamma_{4}$ and $\gamma_{2}$. The classical analog shown in (b) consists of only one force acting on mass $m_{1}$, meaning that $F_{2s}=F_{3s}=0$ in Fig.\ref{TMHCO}.} \label{Yinvertido} \end{figure} By introducing the dipole and rotating-wave approximations, the time-independent Hamiltonian for this system can be written as \begin{align} \label{HYinv1} H &= -\Delta_{p}\sigma_{11}-\Delta_{c}\sigma_{22}-\Delta_{r}\sigma _{44}-\Omega_{p}\left( \sigma_{13}+\sigma_{31}\right) \nonumber\\ &- \Omega_{c}\left( \sigma_{23}+\sigma_{32}\right) -\Omega_{r}\left(\sigma_{43}+\sigma_{34}\right), \end{align} where the detunings are given by $\Delta_{p}=\omega_{31}-\omega_{p}$, $\Delta_{c}=\omega_{32}-\omega_{c}$ and $\Delta_{r}=\omega_{43}-\omega_{r}$. Its dynamics is obtained numerically by solving the master equation for the atomic density operator \begin{align} \dot{\rho} &= - i[H,\rho]+\sum\limits_{m=1,2}\Gamma_{3m}(2\sigma_{m3}\rho\sigma_{3m}-\sigma_{33}\rho-\rho\sigma_{33})\nonumber\\ &+ \Gamma_{43}(2\sigma_{34}\rho\sigma_{43}-\sigma_{44}\rho-\rho\sigma_{44})\nonumber\\ &+ \sum\limits_{n=2,3,4}\gamma_{n}(2\sigma_{nn}\rho\sigma_{nn}-\sigma_{nn}\rho-\rho\sigma_{nn}), \label{e18} \end{align} with the polarization decay rate $\Gamma_{43}$ and non-radiative atomic dephasing rate $\gamma_{4}$, accounting for the additional state $\left\vert 4\right\rangle $. The information about absorption and dispersion of the probe field in the four-level atomic medium is obtained through the reduced electric susceptibility $\tilde{\chi}_{e}(\omega_{p})=\rho_{31}(\omega_{p})$, in analogy with previous definitions. For the inverted-Y system we also used the weak probe field approximation, $\Omega_{p}<<\left(\Omega_{c},\Omega_{r}\right)$, implying that almost all the atomic population is in the ground state $\rho_{11}\approx1$. From the full density-matrix equations of motion and assuming that the values of $\rho_{43}$ and $\rho_{23}$ are approximately zero \cite{Xiao2003}, we solved for the steady state of $\rho$ to find \begin{equation} \rho_{31}(\omega_{p})=\frac{\Omega_{p}\left(\delta_{2}-i\gamma_{2}\right)\left(\delta_{4}-i\gamma_{43}\right)}{\Upsilon_{Q}-\Omega_{c}^{2}\left(\delta_{4}-i\gamma_{43}\right) - \Omega_{r}^{2}\left(\delta_{2}-i\gamma_{2}\right)}, \label{RhoY} \end{equation} where $\Upsilon_{Q}=\left(\Delta_{p}-i\gamma_{31}\right) \left(\delta_{2}-i\gamma_{2}\right) \left(\delta_{4}-i\gamma_{43}\right)$, $\gamma_{31}=\Gamma_{31}+\Gamma_{32}+\gamma_{3}$ and $\gamma_{43}=\Gamma_{43}+\gamma_{4}$. Here we introduced the two-photon detunings $\delta_{2}=\Delta_{p}-\Delta_{c}$ and $\delta_{4}=\Delta_{p}-\Delta_{r}$. Note that when $\Omega_{r}=0$, eq.\eqref{RhoY} reduces to eq.\eqref{e5} for the three-level EIT-$\Lambda$ configuration. The classical analog to demonstrate double EIT in four-level atoms in the inverted-Y configuration was proposed by Serna et al. \cite{Serna2011}. They used a mechanical system comprised by three coupled harmonic oscillators and also an electric analog composed by three coupled RLC circuits. Here we used the same configuration as in \cite{Serna2011} in order to identify an one-to-one correspondence between the classical and quantum dynamic variables for this system. Its corresponding reduced mechanical susceptibility $\tilde{\chi}_{M}(\omega_{s})=\rho_{co}(\omega_{s})$ is obtained from eqs.\eqref{CETL} by setting $F_{2s}=F_{3s}=0$ and solving for the displacement of particle 1 for $\phi_{1}=0$, \begin{equation} \rho_{co}(\omega_{s})=\frac{\Omega_{s}\left(\Delta_{2}-i\gamma_{2}\right)\left(\Delta_{3}-i\gamma_{3}\right)}{\Upsilon_{C}-\Omega_{12}^{2}\left(\Delta_{3}-i\gamma_{3}\right) -\Omega_{13}^{2}\left(\Delta_{2} - i\gamma_{2}\right)}, \label{e19} \end{equation} where $\Upsilon_{C}=\left(\Delta_{1}-i\gamma_{1}\right) \left(\Delta_{2}-i\gamma_{2}\right)\left(\Delta_{3}-i\gamma_{3}\right)$, the coupling rates $\Omega_{12}=\omega_{12}^{2}/2\sqrt{\omega_{1}\omega_{2}}$, $\Omega_{13}=\omega_{13}^{2}/2\sqrt{\omega_{1}\omega_{3}}$ and the pumping rate $\Omega_{s}=\sqrt{F_{1}^{2}/2m\omega_{1}}$. As we have discussed in Sec.IIA the coupling-field detunings $\Delta_{c}$ and $\Delta_{r}$ in eq.\eqref{RhoY} can be reproduced readily in the classical system by setting $\Delta_{1}=\Delta_{s}$, $\Delta_{2}=\Delta_{s}-\Delta_{21}$ and $\Delta_{3}=\Delta_{s}-\Delta_{31}$, where $\Delta_{21}$ and $\Delta_{31}$ account for the detuning between the frequencies of the oscillators 2-1 and 3-1, respectively. For perfect resonances $\Delta_{c}=\Delta_{r}=0$, the classical detunings are reduced to $\Delta_{1}=\Delta_{2}=\Delta_{3}=\Delta_{s}$. Note that even for $k_{2}=k_{3}$ we have $\omega_{2}\neq\omega_{3}$ so that, for the resonance case the analog is complete by adjusting the detunings to be identical through $k_{1}$, $k_{12}$ and $k_{13}$. Comparing $\rho_{31}(\omega_{p})$, eq.\eqref{RhoY}, and $\rho_{co}(\omega_{s})$, eq.\eqref{e19}, we identify classically each parameter of the atomic system as in Table \ref{AYinv}. The classical analog is illustrated in Fig.\ref{Yinvertido}(b). As shown before, each atomic dipole-allowed transition corresponds to a harmonic oscillator in the mechanical system. Then, the addition of state $\left\vert 4\right\rangle $ and the coupling field of frequency $\omega_{r}$ imply the addition of one more harmonic oscillator ($m_{3}$), to account for the atomic transition $|3\rangle\leftrightarrow|4\rangle$, and a second coupling spring ($k_{13}$) to communicate energy to the pumped oscillator $m_{1}$. \begin{table} \label{AYinv} \caption{Classical analog of EIT-like for the four-level atom in an inverted-Y configuration (EIT-4Y) using three mechanical coupled harmonic oscillators (3-MCHO).} \begin{center} \begin{tabular} {c c} \hline\hline EIT-4Y $\left(\rho_{31}\right)$ & 3-MCHO $\left(\rho_{co}\right)$ \\[1ex] \hline $\Delta_{p}$ & $\Delta_{1}$\\ $\delta_{2}$ & $\Delta_{2}$\\ $\delta_{4}$ & $\Delta_{3}$\\ $\Omega_{p}$ & $\Omega_{s}$\\ $\Omega_{c}$ & $\Omega_{12}$\\ $\Omega_{r}$ & $\Omega_{13}$\\ $\gamma_{31}$ & $\gamma_{1}$\\ $\gamma_{2}$ & $\gamma_{2}$\\ $\gamma_{43}$ & $\gamma_{3}$\\[1ex] \hline \end{tabular} \end{center} \end{table} \begin{figure}[!ht] \includegraphics[width=8.5cm]{YinvA.jpg} \caption{(Color online). Imaginary and real parts of the reduced electric susceptibility ($\tilde{\chi}_{e}$) vs normalized probe-atom detuning $\Delta_{p}/\gamma_{31}$ for the four-level atom in a inverted-Y configuration in comparison with its classical counterpart ($\tilde{\chi}_{M}$) obtained using three coupled harmonic oscillators. The parameters are $\Omega _{p}=0.02\gamma_{31}$, $\Gamma_{43}=0.5\gamma_{31}$, $\gamma_{2}=0.0$, (a) $\Omega_{c}=\Omega_{r}=0.08\gamma_{31}$, (b) $\Omega_{c}=0.08\gamma_{31}$, $\Omega_{r}=1.0\gamma_{31}$, (c) $\Omega_{c}=0.8\gamma_{31}$, $\Omega_{r}=1.0\gamma_{31}$ and (d) $\Omega_{c}=\Omega_{r}=2.0\gamma_{31}$. The coupling-field detunings $\Delta_{c}$, $\Delta_{r}$ are zero in (a), (b), (c) and (d) $\Delta_{c}=1.0\gamma_{31}$, $\Delta_{r}=-1.0\gamma_{31}$. For the classical system we use the same set of parameters following the analog presented in table \ref{AYinv}.} \label{YinvA} \end{figure} The imaginary and real parts of the reduced electric susceptibility $\tilde{\chi}_{e}(\omega_{p})$ are depicted in Fig.\ref{YinvA} as a function of the normalized probe-atom detuning $\Delta_{p}/\gamma_{31}$ in comparison with its classical counterpart $\tilde{\chi}_{M}(\omega_{s})$. Figures \ref{YinvA}(a) and \ref{YinvA}(b) show disagreement between the results, meaning that the condition $\Omega_{p}<<\left( \Omega_{c},\Omega_{r}\right)$ is not deeply satisfied and part of the atomic population is not in the ground state $\left\vert 1\right\rangle $. In Fig.\ref{YinvA}(c) and Fig.\ref{YinvA}(d) the condition is satisfied with classical and quantum results showing excellent agreement. The classical dark state in this case is also produced when oscillator 1 stays stationary while oscillators 2 and 3 oscillate harmonically. Note that when $\omega_{s} = \omega_{1} = \sqrt{\left(k_{1}+k_{12}+k_{13}\right)/m}$ the system is pumped in the range between the normal frequencies $\omega_{0}$ and $\omega_{-}$, which is a region of high probability to occur interference between the normal modes $NM_{(0)}$ and $NM_{(-)}$. Once $x_{1} = 0$, it is featured by zero absorption power of oscillator 1, which is equivalent to $\tilde{\chi}_{M} = 0$ for zero detuning. Figure \ref{YinvA}(d) shows that a third resonance peak appears as a consequence of making the coupling-atom detunings $\Delta_{c}$ and $\Delta_{r}$ different of zero. If we set $\Omega_{c} = \Omega_{r}$ the peaks become symmetric giving rise to two transmission windows, which characterizes double EIT. By manipulating the parameters of the system we can control the two EIT dips from a narrow to a wider splitting of the Autler-Townes doublets. We see that all these resonant features can be reproduced with the mechanism of classical interference of the normal modes of the three coupled harmonic oscillators in the displacement of oscillator 1. \subsection{EIT in four-level atom in a tripod configuration} The four-level atom in a tripod configuration is also based on a three-level EIT system and it is promising for many applications, ranging from the realization of polarization quantum phase gates to quantum information processes \cite{Friedmann2004, Corbalan2004, Malakyan2004, Wang2008}. Differently of the inverted-Y configuration, here the atomic level $\left\vert 4\right\rangle $ is a ground state, see Fig.\ref{Tripod}(a). The time-independent Hamiltonian is essentially the same as eq.\eqref{HYinv1} and the master equation is slightly modified as, \begin{align} \dot{\rho} &= -i[H,\rho]+\sum\limits_{m=1,2,4}\Gamma_{3m}(2\sigma_{m3}\rho\sigma_{3m}-\sigma_{33}\rho-\rho\sigma_{33})\nonumber\\ &+ \sum\limits_{n=2,3,4}\gamma_{n}(2\sigma_{nn}\rho\sigma_{nn}-\sigma_{nn}\rho-\rho\sigma_{nn}),\label{e20} \end{align} where we introduce the polarization decay rate $\Gamma_{34}$ of the excited level $|3\rangle$ to the level $|4\rangle$. \begin{figure}[!ht] \includegraphics[width=8.5cm]{Tripod.jpg} \caption{(Color online). (a) Schematic energy level diagram of a four-level atom in a tripod configuration, showing three classical electromagnetic fields, probe $(\omega_{p})$, control $(\omega_{c})$ and pump $(\omega_{r})$, coupling the transitions $|1\rangle\leftrightarrow|3\rangle$, $|2\rangle\leftrightarrow |3\rangle$ and $|3\rangle\leftrightarrow|4\rangle$, respectively, and their corresponding detunings. The atomic decay rates are represented by $\gamma_{34}=\Gamma_{31}+\Gamma_{32}+\Gamma_{34}+\gamma_{3}$, $\gamma_{2}$ and $\gamma_{4}$. The classical analog is obtained considering a force acting in each harmonic oscillator with phases $\phi_{1}=\phi_{3}=0$ and $\phi_{2}=\pi$, as shown in (b).} \label{Tripod} \end{figure} In the same way as in the inverted-Y configuration the response of the probe field is given by the reduced electric susceptibility $\tilde{\chi}_{e}=\rho_{31}$. Solving for $\rho_{31}$ and considering the limit of low atomic excitation $\rho_{11}\approx1$ we have, \begin{equation} \rho_{31}=\frac{\Omega_{p}\left(\Delta_{p}-i\gamma_{2}\right) \left(\Delta_{p}-i\gamma_{4}\right) - \Omega_{p}\Omega_{c}\Upsilon_{23}-\Omega_{p}\Omega_{r}\Upsilon_{43}}{\Upsilon_{Q}-\Omega_{c}^{2}\left(\Delta_{p}-i\gamma_{4}\right) - \Omega_{r}^{2}\left(\Delta_{p}-i\gamma_{2}\right)}, \label{FQtripod} \end{equation} where $\Upsilon_{23}=\left(\Delta_{p}-i\gamma_{4}\right) \rho_{23}$, $\Upsilon_{43}=\left(\Delta_{p}-i\gamma_{2}\right) \rho_{43}$ and $\Upsilon_{Q}=\left(\Delta_{p}-i\gamma_{34}\right) \left(\Delta_{p}-i\gamma_{2}\right) \left(\Delta_{p}-i\gamma_{4}\right)$ with $\gamma_{34}=\Gamma_{31}+\Gamma_{32}+\Gamma_{34}+\gamma_{3}$. The real and imaginary parts of the nondiagonal density matrix element $\rho_{23}$ are identical to the same for $\rho_{43}$, as shown in Fig.\ref{sigma3234}. Despite their small values they are not neglected here, like in the inverted-Y configuration. Note that the real parts of $\rho_{23,43}$ change their signal with $\Delta_{p}$, while the signal of the imaginary parts are kept the same. These details are essential to obtain the correct classical analog for the atomic tripod configuration. \begin{figure}[!ht] \includegraphics[width=7cm]{sigma3234.jpg} \caption{(Color online). Imaginary and real parts of $\rho_{23}$ and $\rho_{43}$ vs the normalized probe-atom detuning $\Delta_{p}/\gamma_{34}$ for perfect atom-field resonances $\Delta_{c}=\Delta_{r}=0$, using the parameters $\Omega_{p}=0.002\gamma_{34}$, $\Omega_{c}=\Omega_{r}=1.0\gamma_{34}$ and $\gamma_{2}=\gamma_{4}=0$.} \label{sigma3234} \end{figure} If we consider $\Omega_{r}=0$ in eq.\eqref{FQtripod} we end up with, \begin{equation} \rho_{31}=\frac{\Omega_{p}\left(\Delta_{p}-i\gamma_{2}\right) - \Omega_{p}\Omega_{c}\rho_{23}}{\left(\Delta_{p}-i\gamma_{34}\right) \left(\Delta_{p}-i\gamma_{2}\right) - \Omega_{c}^{2}}. \label{FQtripodB} \end{equation} Apart from the dimensionless term $\rho_{23}$, the equation above has the same form of a mechanical model comprised by two harmonic oscillators with two forces acting on particles 1 and 2 out of phase by $\pi$. In eqs.\eqref{CETL} we would have $F_{2}=-F_{1}$ for $k_{13}=0$, or $F_{3}=-F_{1}$ for $k_{12}=0$, once the same is observed for $\Omega_{c}=0$. Then, as a first suggestion, one could propose the classical analog for the atomic tripod configuration by considering the forces $F_{2s}$ and $F_{3s}$ out of phase with $F_{1s}$ by $\pi$ , i.e., $\phi_{1}=0$ and $\phi_{2}=\phi_{3}=\pi$. But Fig.\ref{sigma3234} shows that the real parts of $\rho_{23,43}$ are in phase with their corresponding imaginary parts for $\Delta_{p}<0$ and out of phase by $\pi$ for $\Delta_{p}>0$. As additional transitions, $\rho_{23}$ and $\rho_{43}$, represent additional harmonic oscillators we reproduce this effect by assuming only the force acting on particle 2 out of phase by $\pi$ with the force applied on particle 1, meaning that $F_{2}=-F_{1}$ and $F_{3}=F_{1}$. This classical model mimics the EIT features presented by the tripod configuration in very good agreement. Taking into account the considerations above the reduced mechanical susceptibility is obtained from equations \eqref{CETL} for the displacement of oscillator 1 as follows, \begin{equation} \rho_{co}=\frac{\Omega_{s}^{(1)}\left(\Delta_{2}-i\gamma_{2}\right) \left(\Delta_{3}-i\gamma_{3}\right) -\Omega_{s}^{(2)}\Omega_{12}\Upsilon_{3}+\Omega_{s}^{(3)}\Omega_{13}\Upsilon_{2}}{\Upsilon_{C}-\Omega_{12}^{2}\left(\Delta_{3}-i\gamma_{3}\right) - \Omega_{13}^{2}\left( \Delta_{2}-i\gamma_{2}\right)}, \label{Ctripod} \end{equation} where $\Upsilon_{3}=\Delta_{3}-i\gamma_{3}$, $\Upsilon_{2}=\Delta_{2} - i\gamma_{2}$ and $\Upsilon_{C}=\left(\Delta_{1}-i\gamma_{1}\right)\left(\Delta_{2}-i\gamma_{2}\right) \left( \Delta_{3}-i\gamma_{3}\right)$, $\Omega_{12}=\omega_{12}^{2}/2\sqrt{\omega_{1}\omega_{2}}$ and $\Omega_{13}=\omega_{13}^{2}/2\sqrt{\omega_{1}\omega_{3}}$. The mechanical pumping rates are given by $\Omega_{s}^{(j)}=\sqrt{F_{j}^{2}/(2m\omega_{j})}$ and they are related to the force $F_{j}$ acting on the $j$-th oscillator, $j=1,2,3$. Once there is only one probe field applied to the atomic system with Rabi frequency $\Omega_{p}$, eq.\eqref{FQtripod}, the classical pumping rates have to be the same, i.e., $\Omega_{s}^{(j)}=\Omega_{s}$. Consequently $\omega_{1}=\omega_{2}=\omega_{3}$, implying that $k_{2}=k_{1}+k_{13}$ and $k_{3}=k_{1}+k_{12}$. This also conducts to $\Delta_{1}=\Delta_{2}=\Delta_{3}=\Delta_{s}$. Considering all these conditions, eq.\eqref{Ctripod} becomes identical to eq.\eqref{FQtripod} for the atomic system. The classical analog for each parameter is depicted in table \ref{TableTripod} and illustrated in Fig.\ref{Tripod}(b). Huang et al. \cite{Huang2013} proposed recently a classical analog for the atomic tripod configuration, considering $F_{1}=0$ and $F_{2}=F_{3}$ in eqs.\eqref{CETL}. According to them their classical analog, or in our terms, their reduced mechanical susceptibility $\tilde{\chi}_{M}^{H}=\rho_{co}^{H}$ is obtained solving for the displacement of oscillators 2 or 3. Using these conditions and the same definitions above we have, \begin{equation} \rho_{co}^{H}=\frac{\Omega_{s}\left(\Delta_{s}-i\gamma_{1}\right) \left(\Delta_{s}-i\gamma_{3}\right) - \Omega_{s}\Omega_{13}^{2}+\Omega_{s}\Omega_{12}\Omega_{13}}{\Upsilon_{C}-\Omega_{12}^{2}\left( \Delta_{s} - i\gamma_{3}\right) - \Omega_{13}^{2}\left(\Delta_{s}-i\gamma_{2}\right)}. \label{RhoCOHuang} \end{equation} Comparing eq.\eqref{RhoCOHuang} with $\rho_{31}$, eq.\eqref{FQtripod}, we see that it is not possible to establish a one-to-one classical correspondence for the quantum variables $\Upsilon_{23}=\left(\Delta_{p}-i\gamma_{4}\right) \rho_{23}$ and $\Upsilon_{43} = \left(\Delta_{p}-i\gamma_{2}\right)\rho_{43}$. According to eq.\eqref{RhoCOHuang} we would have $\Omega_{c}\Upsilon_{23}\equiv\Omega_{13}^{2}$ and $-\Omega_{r}\Upsilon_{43}\equiv\Omega_{12}\Omega_{13}$. The classical analog for the other variables are shown in table \ref{TableTripod}. Note that we have two constraints for the classical variables in this case, $\gamma_{1}=\gamma_{2}$ and $\Omega_{12}=\Omega_{13}$. \begin{table} \caption{Classical analog of EIT-like in a four-level atom in a tripod configuration (EIT-Tripod) using three mechanical coupled harmonic oscillators considering the forces acting on the three particles as $F_{2}=-F_{1}$ and $F_{3}=F_{1}$ for our model (3CO) and $F_{2}=F_{3}$, $F_{1}=0$ for Huang's model (3CO-H) \cite{Huang2013}.} \begin{center} \begin{tabular}{c c c } \hline\hline EIT-Tripod $\left(\rho_{31}\right)$ & 3CO $\left(\rho_{co}\right)$ & 3CO-H $\left(\rho^{H}_{co}\right)$ \\[1ex] \hline $\Delta_{p}$ & $\Delta_{s}$ & $\Delta_{s}$\\ $\Omega_{p}$ & $\Omega_{s}$ & $\Omega_{s}$\\ $\Omega_{c}$ & $\Omega_{12}$ & $\Omega_{12}$, $\Omega_{13}$\\ $\Omega_{r}$ & $\Omega_{13}$ & $\Omega_{12}$, $\Omega_{13}$\\ $\gamma_{34}$ & $\gamma_{1}$ & $\gamma_{1}$\\ $\gamma_{2}$ & $\gamma_{2}$ & $\gamma_{1}, \gamma_{2}$\\ $\gamma_{4}$ & $\gamma_{3}$ & $\gamma_{3}$\\ $\Upsilon_{23}$ & $\Upsilon_{3}$ & -\\ $-\Upsilon_{43}$ & $\Upsilon_{2}$ & -\\[1ex] \hline \end{tabular} \end{center} \label{TableTripod} \end{table} \begin{figure}[!ht] \includegraphics[width=8.5cm]{Tripod3M.jpg} \caption{(Color online). Imaginary and real parts of the reduced electric susceptibility $\tilde{\chi}_{e}$ vs normalized probe-atom detuning $\Delta_{p}/\gamma_{34}$ for the four-level atom in a tripod configuration in comparison with its classical counterparts $\tilde{\chi}_{M}$, eq.\eqref{Ctripod}, and $\tilde{\chi}_{M}^{H}$, eq.\eqref{RhoCOHuang}, obtained using three coupled harmonic oscillators. The parameters are $\Omega_{p}=0.002\gamma_{34}$, $\Delta_{c}=\Delta_{r}=0$, $\gamma_{2}=\gamma_{4}=0$ for different values of the Rabi frequencies of the coupling $\Omega_{c}$ and pumping $\Omega_{r}$ fields. It is considered $\Omega_{c}=\Omega_{r}$ with values (a) $0.08\gamma_{34}$, (b) $0.8\gamma_{34}$, (c) $1.5\gamma_{34}$ and (d) $2.0\gamma_{34}$. For the classical models we obtain $\tilde{\chi}_{M}$ and $\tilde{\chi}_{M}^{H}$ using the same set of parameters following the analog presented in table \ref{TableTripod}.} \label{Tripod3M} \end{figure} In Fig.\ref{Tripod3M} we plot the real and imaginary parts of the reduced electric susceptibility $\tilde{\chi}_{e}$ for the atomic system as a function of the normalized probe-atom detuning $\Delta_{p}/\gamma_{34}$ in comparison with its two classical counterparts $\tilde{\chi}_{M}$ and $\tilde{\chi}_{M}^{H}$ obtained from eqs.\eqref{Ctripod} and \eqref{RhoCOHuang}, respectively. We consider the weak-probe limit $\Omega_{p}<<(\Omega_{c},\Omega_{r})$ with $\Omega_{p}=0.002\gamma_{34}$ for perfect coupling-field resonances $\Delta_{c}=\Delta_{r}=0$ and $\gamma_{2}=\gamma_{4}=0$. For all cases we consider $\Omega_{c}=\Omega_{r}$ owing the constraint obtained from eq.\eqref{RhoCOHuang}, where $\Omega_{12}=\Omega_{13}$. Figure \ref{Tripod3M}(a) shows that both classical analogs reproduce the EIT features calculated for the atomic tripod system in very good agreement. When the Rabi frequencies of the coupling $(\Omega_{c})$ and pumping $(\Omega_{r})$ fields increase, Figs.\ref{Tripod3M}(b), \ref{Tripod3M}(c) and \ref{Tripod3M}(d), show that only the mechanical susceptibility $\tilde{\chi}_{M}$, given by eq.\eqref{Ctripod}, reproduces satisfactorily the behavior of the atomic system. Although the impossibility of obtaining a one-to-one correspondence between classical and quantum variables, the classical analog proposed in ref.\cite{Huang2013}, eq.\eqref{RhoCOHuang}, exhibits a similar behavior as the tripod configuration, but total agreement is observed only for small values of $\Omega_{12}$, $\Omega_{13}$. If the EIT-like condition $\Omega_{p}<<(\Omega_{c},\Omega_{r})$ is deeply satisfied, the analog proposed here shows perfect agreement for any set of parameters. \subsection{Cavity EIT (CEIT)} In Sec.IIC we have shown the classical analog for a system consisting of a single two-level atom coupled to a single cavity mode. In this section we present for the first time the analog for the extended system considering a three-level atom placed inside an optical cavity. This system also exhibits EIT features being usually referred to as intracavity EIT or simply cavity EIT (CEIT). The optical cavity enhances the main characteristics of EIT, regarding atomic coherence and interference, which may be useful for a variety of fundamental studies and practical applications \cite{Scully1998, Zhu2007, Xiao2008, Souza2013}. The system is comprised of a single atom with three energy levels in $\Lambda$ configuration, as in Fig.\ref{EsquemaEIT}(a), coupled to a single electromagnetic mode of frequency $\omega_{cav}$ of an optical resonator, see Fig.\ref{CEIT}(a). The cavity is driven by a coherent field (probe) of strength $\varepsilon$ and frequency $\omega_{p}$. The atomic transitions $|1\rangle\leftrightarrow |3\rangle$ (frequency $\omega_{31}$) and $|2\rangle\leftrightarrow|3\rangle$ (frequency $\omega_{32}$) are coupled by the cavity mode with vacuum Rabi frequency $2g$ and by a classical field (control) with frequency $\omega_{c}$ and Rabi frequency $2\Omega_{c}$, respectively. The time-independent Hamiltonian which describes the atom-field coupling in a rotating frame is given by \begin{align} H &= - \Delta_{p}\sigma_{11}+\left( \Delta_{1}-\Delta_{2}\right) \sigma_{22} + \Delta_{1}\sigma_{33}+\Delta_{p}a^{\dagger}a \nonumber\\ &+ \left( ga\sigma_{31}+\Omega_{c}\sigma_{32}+\varepsilon a+h.c.\right), \label{e22} \end{align} where the detunings are $\Delta_{p}=\omega_{cav}-\omega_{p}$, $\Delta_{1}=\omega_{31}-\omega_{cav}$ and $\Delta_{2}=\omega_{32}-\omega_{c}$. The master equation for the atom-cavity density operator is the same as eq.\eqref{e14}, where we have to consider the cavity-field decay rate $\kappa$, the polarization decay rates $\Gamma_{3m}$ $(m=1,2)$ of the excited level $|3\rangle$ to the levels $|m\rangle$ and the non-radiative atomic dephasing rates $\gamma_{n}$ $(n=2,3)$ of states $|n\rangle$. Similarly to the standard two-level atom-cavity system (CQED), in the EIT-like condition $\Omega_{c} >> g\left\langle a\right\rangle _{max}$, with $\left\langle a\right\rangle _{max} = \varepsilon/\left(\Delta_{p} - i\kappa\right)$, the CEIT system will be limited to the first splitting of the dressed states, Autler-Townes-like effect, separated by $2\sqrt{g^{2} + \Omega_{c}^{2}}$. Additionally, there are the intracavity dark states which causes an empty-cavity-like transmission, not observed in the two-level CQED configuration. The CEIT dressed states also compose a kind of anharmonic Jaynes-Cummings ladder structure \cite{Souza2013}. The probe response is given by the reduced atom-cavity susceptibility which is represented by the expectation value of the cavity field $\tilde{\chi}_{CEIT}(\omega_{p})=\left\langle a\right\rangle $. In the steady state $\dot{\rho}=0$ and considering the low atomic excitation limit $\left\langle\sigma_{11}\right\rangle \approx 1$ we have \begin{equation} \left\langle a\right\rangle =\frac{-\varepsilon\left(\delta_{1}-i\gamma_{31}\right) \left(\delta_{2}-i\gamma_{2}\right) + \varepsilon\Omega_{c}^{2}}{\Upsilon_{Q}-\Omega_{c}^{2}\left( \Delta_{p}-i\kappa\right) - g^{2}\left(\delta_{2}-i\gamma_{2}\right)}, \label{e23} \end{equation} where $\Upsilon_{Q}=\left(\delta_{1}-i\gamma_{31}\right) \left(\delta_{2}-i\gamma_{2}\right) \left(\Delta_{p}-i\kappa\right)$ with $\gamma_{31}=\Gamma_{31}+\Gamma_{32}+\gamma_{3}$, $\delta_{1}=\Delta_{p}-\Delta_{1}$ and $\delta_{2}=\delta_{1}-\Delta_{2}$. Once the atom-cavity system consists of two atomic dipole allowed transitions and one cavity mode, its classical analog is also modeled on three coupled harmonic oscillators. The analysis of the probe response for the tripod system, given by $\rho_{31}$, revealed that more than one mechanical force have to be taken into account in the mechanical configuration. For all other systems considered before we see that the probe field is represented by a coherent force applied only on the harmonic oscillator corresponding to the respective atomic transition or cavity mode. By inspection of the expectation value of $\sigma_{13}$, written as follows, \begin{equation} \left\langle \sigma_{13}\right\rangle =\frac{-g\left\langle a\right\rangle \left(\delta_{2}-i\gamma_{2}\right)}{\left(\delta_{1} - i\gamma_{31}\right)\left(\delta_{2}-i\gamma_{2}\right) - \Omega_{c}^{2}},\label{sigma13} \end{equation} we see that, it is basically the equation for two coupled harmonic oscillators pumped by the Rabi frequency of the cavity field $g\left\langle a\right\rangle$, as illustrated in Fig.\ref{CEIT}(b). Thus, for the classical analog of CEIT we also consider only one force applied on the harmonic oscillator representing the cavity mode, which is driven by the probe field. \begin{figure}[!ht] \includegraphics[width=8.5cm]{CEIT.jpg} \caption{(Color online). Three-level atom in a $\Lambda$ configuration inside an optical resonator showing the quantum cavity field with frequency $\omega_{cav}$ and vacuum Rabi frequency $2g$ coupling the atomic transition $|1\rangle\leftrightarrow|3\rangle$. The control field with frequency $\omega_{c}$ couples the transition $|2\rangle\leftrightarrow|3\rangle$ and the probe field with frequency $\omega_{p}$ and strength $\varepsilon$ drives the cavity mode. (b) Classical analog for $\left\langle \sigma_{13}\right\rangle $ given by eq.\eqref{sigma13} corresponding to two coupled harmonic oscillators pumped by the Rabi frequency of the cavity field $g\left\langle a\right\rangle $. (c) Classical analog for each parameter of the CEIT system.} \label{CEIT} \end{figure} Then, the classical analog is obtained from eqs.\eqref{CETL} considering $F_{1s}=F_{2s}=0$. Solving for the displacement of particle 3 and considering $\phi_{3}=\pi$ we find for the reduced mechanical susceptibility $\tilde{\chi}_{M}=\rho_{co}$, \begin{equation} \rho_{co}(\omega_{s})=\frac{-\Omega_{s}\left(\Delta_{1}-i\gamma_{1}\right)\left(\Delta_{2}-i\gamma_{2}\right) + \Omega_{s}\Omega_{12}^{2}}{\Upsilon_{C}-\Omega_{12}^{2}\left(\Delta_{3}-i\gamma_{3}\right) - \Omega_{13}^{2}\left(\Delta_{2}-i\gamma_{2}\right)}, \label{CCEIT} \end{equation} where $\Upsilon_{C}=\left(\Delta_{1}-i\gamma_{1}\right) \left(\Delta_{2}-i\gamma_{2}\right) \left(\Delta_{3}-i\gamma_{3}\right)$, $\Omega_{12}=\omega_{12}^{2}/2\sqrt{\omega_{1}\omega_{2}}$, $\Omega_{13}=\omega_{13}^{2}/2\sqrt{\omega_{1}\omega_{3}}$ and $\Omega_{s}=\sqrt{F_{3}^{2}/2m\omega_{3}}$. Note that eqs.\eqref{e23} and \eqref{CCEIT} are identical. The classical analog for each parameter of the CEIT system is shown in table \ref{TableCEIT} and illustrated in Fig.\ref{CEIT}(c). \begin{table}[th] \caption{Classical analog of EIT-like for the cavity EIT system (CEIT) using three mechanical coupled harmonic oscillators (3-MCHO).} \begin{center} \begin{tabular}{c c} \hline\hline CEIT $\left(\left\langle a\right\rangle \right)$ & 3-MCHO $\left(\rho_{co}\right)$ \\[1ex] \hline $\delta_{1}$ & $\Delta_{1}$\\ $\delta_{2}$ & $\Delta_{2}$\\ $\Delta_{p}$ & $\Delta_{3}$\\ $\varepsilon$ & $\Omega_{s}$\\ $\Omega_{c}$ & $\Omega_{12}$\\ $g$ & $\Omega_{13}$\\ $\gamma_{31}$ & $\gamma_{1}$\\ $\gamma_{2}$ & $\gamma_{2}$\\ $\kappa$ & $\gamma_{3}$\\[1ex] \hline \end{tabular} \end{center} \label{TableCEIT} \end{table} Figures \ref{CEITA} and \ref{CEITB} show the real and imaginary parts of the reduced atom-cavity susceptibility $\tilde{\chi}_{CEIT}$ vs the normalized probe-cavity detuning $\Delta_{p}/\kappa$ for perfect atom-field resonances $\Delta_{1} = \Delta_{2} = 0$ in comparison with its classical counterpart $\tilde{\chi}_{M}$. The Rabi frequency of the probe field is set to be $\Omega_{p} = 0.02\kappa$ in Fig.\ref{CEITA}, Fig.\ref{CEITB}(c), Fig.\ref{CEITB}(d) and $\Omega_{p} = 0.5\kappa$ in Fig.\ref{CEITB}(a), Fig.\ref{CEITB}(b), while the dissipation rates are fixed at $\gamma_{31} = 0.1\kappa$, $\gamma_{2} = 0$. In Fig.\ref{CEITA} the vacuum Rabi frequency is fixed at $g = 1.0\kappa$ and the steady state of $\left\langle a\right\rangle$ is calculated for different values of the Rabi frequency of the control field $\Omega_{c}$. In Fig.\ref{CEITB} we do the opposite, fixing $\Omega_{c} = 1.0\kappa$ and varying $g$. Note that there is a small difference between the classical and quantum results in Fig.\ref{CEITA}(a). If we increase the magnitude of $\Omega_{p}$ the difference becomes more pronounced as displayed in Figs.\ref{CEITB}(a) and \ref{CEITB}(b). In these cases the CEIT condition $\Omega_{c}>>g\left\langle a\right\rangle _{max}$ is not deeply satisfied and $\left\langle \sigma_{11}\right\rangle \neq1$. For all other set of parameters the results show perfect agreement. The classical dark state, equivalent to the intracavity dark state of the CEIT system, is now observed when oscillator 3 is driven resonantly $\omega_{s} = \omega_{3} = \sqrt{(k_{3} + k_{13})/m}$. Note that this is exactly the resonance frequency $\omega_{+}$ of the normal mode $NM_{(+)}$, where $m_{1}$ stays stationary while $m_{2}$ and $m_{3}$ oscillate harmonically out of phase with each other. Thus, the classical dark state is naturally identified as a peak in $\omega_{3}$, meaning that the power transferred from the harmonic source to oscillator 3 is total and featured by Im$\left\{\tilde{\chi}_{M}\right\} = 1$ in Fig.\ref{CEITA} and Fig.\ref{CEITB} for zero detuning. \begin{figure}[!ht] \includegraphics[width=8.5cm]{CEITA.jpg} \caption{(Color online). Imaginary and real parts of the reduced atom-cavity electric susceptibility $\tilde{\chi}_{CEIT}$ vs the normalized probe-cavity detuning $\Delta_{p}/\kappa$ for the CEIT system in comparison with its classical counterpart $\tilde{\chi}_{M}$ for $\Omega_{p}=0.02\kappa$, $g=1.0\kappa$, $\gamma_{31}=0.1\kappa$, $\gamma_{2}=0$, $\Delta_{1}=\Delta_{2}=0$ and different values of the Rabi frequency of the control field (a) $\Omega_{c}=0.02\kappa$, (b) $0.5\kappa$, (c) $2.0\kappa$ and (d) $3.0\kappa$. For the classical system we use the same set of parameters following the analog presented in table \ref{TableCEIT}.} \label{CEITA} \end{figure} \begin{figure}[!ht] \includegraphics[width=8.5cm]{CEITB.jpg} \caption{(Color online). The same as in Fig.\ref{CEITA} for $\Omega_{c}=1.0\kappa$, $\gamma_{31}=0.1\kappa$, $\gamma_{2}=0$, $\Delta_{1}=\Delta_{2}=0$ and (a) $\Omega_{p}=0.5\kappa$, $g=0.5\kappa$, (b) $0.5\kappa$, $1.0\kappa$, (c) $0.02\kappa$, $2.0\kappa$ and (d) $0.02\kappa$, $3.0\kappa$.} \label{CEITB} \end{figure} Figure \ref{CEITT} displays the transmission spectrum of cavity EIT obtained experimentally by M\"{u}cke \textit{et al.} for 15 atoms, on average, trapped inside a high finesse cavity \cite{Muecke2010}, in comparison with a semiclassical and the classical analog models. As mentioned before, the semiclassical model is obtained from the semiclassical approximation $\left\langle a\sigma\right\rangle \rightarrow\left\langle a\right\rangle\left\langle \sigma\right\rangle$ where only the field is treated classically. It means that the quantized nature of the three-state atom is respected with $\left\langle a\sigma_{11}\right\rangle \neq\left\langle a\right\rangle $, differently from the full classical case given by eq.\eqref{e23}. The red dotted line in Fig.\ref{CEITT}, named as SCMA, shows the semiclassical result for $N=15$ resting atoms and the black dash-dotted line (SCMB) shows the same semiclassical model but considering atomic motion as in ref.\cite{Muecke2010}. The parameters were adjusted in order to obtain the best fitting. The dephasing rate of state $\left\vert 2\right\rangle$ and the atom-cavity detuning, for example, were set to be $\gamma_{2}=0.001\kappa$ and $\Delta_{1}=-0.3\kappa$, respectively, owing the decreasing in the transmission and the shifting of the central intracavity dark state peak. We can model mechanically $N$\ atoms by considering $N$\ pairs of harmonic oscillators, like in Fig.\ref{CEIT}(b), coupling independently to oscillator 3, which represents the driven cavity mode. The dynamics of the three-level atom pumped by the Rabi frequency of the cavity can be obtained from the displacement of particle 1 in eqs.\eqref{CETL}. Substituting $N_{2}$\ from eq.\eqref{CETLB} in eq.\eqref{CETLA} we have, \begin{equation} N_{1} = \frac{\Omega_{13}\tilde{N}_{3}\left(\Delta_{2}-i\gamma_{2}\right)}{\left(\Delta_{1}-i\gamma_{1}\right)\left(\Delta_{2}-i\gamma_{2}\right) -\Omega_{12}^{2}}, \label{e24} \end{equation} where $\tilde{N}_{3}=\sqrt{\omega_{3}/\omega_{1}}N_{3}$. Note that eq.\eqref{e24} is the classical analog for $\left\langle \sigma_{13}\right\rangle$ given by eq.\eqref{sigma13}. It represents the mechanical atom being pumped by the third harmonic oscillator with pumping rate $\Omega_{13}\tilde{N}_{3}$, in analogy to the Rabi frequency of the cavity field $g\left\langle a\right\rangle$ in the quantum model. Then, if we want to model mechanically $N$ atoms independently coupled to a single cavity mode we have to consider $N\times N_{1}$ in eq.\eqref{CETLC}. Thus, substituting eq.\eqref{e24} in eq.\eqref{CETLC} for $\phi_{3}=\pi$ we end up with, \begin{equation} \rho_{Nco}=\frac{-\Omega_{s}\left(\Delta_{1}-i\gamma_{1}\right) \left(\Delta_{2}-i\gamma_{2}\right) + \Omega_{s}\Omega_{12}^{2}}{\Upsilon_{C}-\Omega_{12}^{2}\left(\Delta_{3}-i\gamma_{3}\right) - N\Omega_{13}^{2}\left(\Delta_{2}-i\gamma_{2}\right)}. \label{e25} \end{equation} We see that the only difference between eqs.\eqref{CCEIT} and \eqref{e25} is to change the mechanical coupling rate $\Omega_{13}$ for the effective coupling $\Omega_{13}^{(eff)}=\sqrt{N}\Omega_{13}$, where $N$ is the number of pairs of harmonic oscillators as in Fig.\ref{CEIT}(b). Then, to resemble the quantum mechanical average photon number $\left\langle a^{\dagger}a\right\rangle $, which provides the transmission spectrum depicted in Fig.\ref{CEITT}, we have to calculate $\rho_{Nco}^{\ast}\rho_{Nco}$ from eq.\eqref{e25} for $N=15$. As stated before the atom-cavity detuning can be modeled by setting $\Delta_{3}=\Delta_{s}$ and $\Delta_{1}=\Delta_{s}+\Delta_{13}$, where $\Delta_{13}$ accounts for the detuning of the resonant frequencies between oscillators 1-3. Using the same set of parameters for the semiclassical model, following the analog depicted in table \ref{TableCEIT}, the full classical result is plotted in Fig.\ref{CEITT}, solid blue line, showing excellent agreement with the semiclassical model SCMA. It indicates that the experiment was performed by considering the CEIT conditions deeply, where $\left\langle \sigma_{11}\right\rangle \approx1$, once the difference between the experimental data and the SCMA theory is solved by taking into account the movement of the atoms inside the cavity, which is corroborated by the SCMB model. \begin{figure}[!ht] \includegraphics[width=7cm]{CEITT.jpg} \caption{(Color online). Experimental transmission spectrum (open circles) vs normalized probe-cavity detuning $\Delta_{p}/\kappa$ for the CEIT system reported in ref.\cite{Muecke2010} for $N\approx15$ atoms in comparison with a semiclassical model and the classical harmonic oscillators. The parameters used for the semiclassical theory, which considers $15$ resting atoms (SCMA - red dotted line), are $\varepsilon=\sqrt{0.02}\kappa$, $g=0.85\kappa$, $\Omega_{c}=1.5\kappa$, $\gamma_{31}=1.04\kappa$, $\gamma_{2}=0.001\kappa$, $\Delta_{1}=-0.3\kappa$, $\Delta_{2}=0$. For the mechanical system, solid blue line (NCO), we make use of the classical analog for $N$ oscillators in eq.\eqref{e25} to calculate $\rho_{Nco}^{\ast}\rho_{Nco}$, using the same set of parameters according to table \ref{TableCEIT} and the analog for the atom-cavity detuning $\Delta_{13}=-0.3\gamma_{3}$. The black dash-dotted line is obtained from the same semiclassical theory as SCMA, but considering the atoms inside the cavity in movement (SCMB). This is performed by changing randomly the parameters $g$, $\Delta_{1}$ and $\Delta_{2}$ in an interval of values specified from experimental considerations.} \label{CEITT} \end{figure} \section{Conclusions} In this work we showed that mechanical analogs can be obtained for atomic systems which present EIT-related phenomena, if they are considered deeply in the EIT-like conditions. In this case atoms and single cavity modes behave as oscillating dipoles and all dissipative and coherent atom-field processes can be reproduced with systems composed by coupled damped harmonic oscillators. The frequencies of the spectral lines of the atom are equivalent to the natural oscillation frequencies of the oscillators, showing that each atomic-dipole allowed transition corresponds to a classical harmonic damped oscillator. We also showed that the classical dark state is caused by a destructive interference between the normal modes of the system in the displacement of the driven oscillator, and it is observed in analogous conditions with the dark state of the corresponding EIT system. Through the concept of mechanical susceptibility, with its imaginary part corresponding to the power absorbed by the driven oscillator and its real part related to its amplitude, the classical models presented here describe correctly the action of the atom interacting with an electromagnetic field, reproducing the imaginary and real behavior of the electric susceptibility, respectively. Nevertheless, when the population of the atomic system is shared between its bare states ($\rho_{11} \neq 1$) or when anharmonic effects takes place, owing the excitation of high energy states, the classical models does not provide a detailed description of the phenomena the way the full quantum theory does. It would be interesting to introduce anharmonicities in the dynamics of the coupled oscillators in order to further explore the connection between these with quantum effects when the EIT-like conditions are not deeply prescribed. Furthermore, the probe response of driven cavity modes and atom-cavity configurations provide a physical interpretation for the average photon annihilation operator $\left\langle a\right\rangle$, revealing that it can be directly related to the electric susceptibility of the system. In conclusion, the fact that we can reproduce the phenomenology of EIT with classical harmonic oscillators does not mean EIT is a classical phenomenon. We are just showing that the quantum interference process behind EIT has its equivalent in classical systems, where two or more normal modes interfere to each other to perform such phenomenologies. The patterns of interference observed in the mechanical scheme can be considerably useful to provide a general mapping of EIT-like systems into a variety of classical systems for practical device applications without the necessity of sophisticated technologies required for atomic systems. \begin{acknowledgments} We acknowledge fruitful discussions with D. Z. Rossatto. J. A. S. and C.J.V.-B gratefully acknowledge support by the Brazilian founding agency S\~{a}o Paulo Research Foundation (FAPESP) grants \#2013/01182-5, \#2013/04162-5, \#2014/07350-0 and \#2012/00176-9, the Brazilian National Council of Scientific and Technological Development (CNPq) and the Brazilian National Institute of Science and Technology for Quantum Information (INCT-IQ). \end{acknowledgments} \section{Appendix} \subsection {The dynamics of two-coupled harmonic oscillators} In this appendix we used the Hamiltonian formalism to show that, additionally to the steady-state solution of the EIT system, its dynamics is also equivalent to the dynamics of two coupled harmonic oscillators. Hence, we showed how to obtain $\rho_{co}$, drawn from the Newtonian formalism in Sec.\ref{SecEIT}, eq.\eqref{e9}, using the Hamiltonian of the system. If we recall from introductory physics the total Hamiltonian for two coupled harmonic oscillators is obtained from the displacement $x_{j}$ and linear momentum $p_{j}$ of the $j$th oscillator as \begin{eqnarray}\label{HamiltA} H = \sum^{2}_{j=1}\left(\frac{p_{j}^{2}}{2m} + \frac{1}{2}m\omega_{j}^{2}x_{j}^{2}\right) - m\omega_{12}^{2} x_{1}x_{2} - x_{1}F_{s}(t), \end{eqnarray} where we consider the masses to be equal to $m_{1,2} = m$, $\omega_{j}^{2} = \left(k_{j} + k_{12}\right)/m$ ($j = 1,2$), $\omega_{12}^{2} = k_{12}/m$ and the force applied on oscillator 1, $F_{s}(t) = Fe^{-i(\omega_{s} + \phi_{s})t} + c.c.$ for $\phi_{s} = 0$, as illustrated in Fig.\ref{EsquemaEIT}(b). By defining the classical variables $\alpha = \left(m\omega_{1}x_{1} + ip_{1}\right)/\sqrt{2\hbar m\omega_{1}}$ and $\beta = \left(m\omega_{2}x_{2} + ip_{2}\right)/\sqrt{2\hbar m\omega_{2}}$ and considering the simplified case where the natural frequencies of the oscillators are the same, $\omega_{1,2} = \omega$ meaning that $k_{1,2} = k$, the equation above for $\hbar = 1$ takes the form, \begin{eqnarray}\label{DefOs} H &=& \omega\left(\alpha^{*}\alpha + \beta^{*}\beta\right) - \frac{\omega_{12}^{2}}{2\omega}\left(\alpha^{*}\beta^{*} + \alpha\beta + \alpha^{*}\beta + \alpha\beta^{*}\right) \nonumber\\ &-& \sqrt{\frac{F^{2}}{2m\omega}}\left(\alpha^{*} + \alpha\right)\left(e^{i\omega_{s}t} + e^{-i\omega_{s}t}\right). \end{eqnarray} The same way as in eq.\eqref{e7} the coupling rate between particles $1$ and $2$ is defined as $\Omega_{12} = \omega_{12}^{2}/2\omega$. Here we are able to find a direct expression for the pumping rate $\Omega_{s}$ as a function of the parameters of the classical system without the necessity of considering the constant $C_{1}$, like in eq.\eqref{e8}. From eq.\eqref{DefOs} we have $\Omega_{s} = \sqrt{F^{2}/2m\omega}$, which is analogous to the Rabi frequency of the probe field ($\Omega_{p}$). Now we make an approximation in order to discard fast oscillatory terms like $e^{\pm 2i\omega_{s}t}$ for $\omega \approx \omega_{s}$. This is similar to the rotating wave approximation used in the quantum case. By performing the transformation $\alpha(t) = \tilde{\alpha}(t) e^{-i\omega t}$, likewise for $\beta$, we have, \begin{eqnarray}\label{HamiA} H &=& \omega\left(\alpha^{*}\alpha + \beta^{*}\beta\right) - \Omega_{12}\left(\alpha^{*}\beta + \alpha\beta^{*}\right) \nonumber\\ &-& \Omega_{s}\left(\alpha e^{i\omega_{s}t} + \alpha^{*} e^{-i\omega_{s}t}\right). \end{eqnarray} From the Poisson brackets $\dot{\rho} = \left\{\rho,H\right\} = -i\partial H/\partial \rho^{*}$ $(\rho = \alpha, \beta)$ the time evolution of $\alpha$ and $\beta$ are given by, \begin{subequations}\label{HRhos} \begin{align} \dot{\alpha} &= -i\left( \omega\alpha - \Omega_{12}\beta - \Omega_{s}e^{-i\omega_{s}t} -i\gamma_{1}\alpha\right),\\ \nonumber\\ \dot{\beta} &= -i\left( \omega\beta - \Omega_{12}\alpha -i\gamma_{2}\beta \right), \end{align} \end{subequations} where we have added phenomenologically the dissipation terms $\gamma_{1}$ and $\gamma_{2}$ in analogy to the master equation formalism. By performing the transformation $\alpha(t) = \rho_{\alpha}(t) e^{-i\omega_{s}t}$, the same way for $\beta$, eqs.\eqref{HRhos} are writen as \begin{subequations}\label{HRhosB} \begin{align} \dot{\rho}_{\alpha} &= -i \left\{\left( \Delta_{s} - i\gamma_{1} \right)\rho_{\alpha} - \Omega_{12}\rho_{\beta} - \Omega_{s} \right\},\\ \nonumber\\ \dot{\rho}_{\beta} &= -i \left\{\left( \Delta_{s} - i\gamma_{2} \right)\rho_{\beta} - \Omega_{12}\rho_{\alpha} \right\}, \end{align} \end{subequations} with $\Delta_{s} = \omega - \omega_{s}$. Note the equations above are completely equivalent to eqs.\eqref{EvolveEITB} for $\rho_{31}$ and $\rho_{21}$, respectively, if we consider the stationary solution $\dot{\rho}_{\alpha,\beta}(t) = 0$. It shows that the dynamics of both systems, EIT and coupled oscillators, are also equivalent with $\rho_{31} \equiv \rho_{\alpha}$ and $\rho_{21} \equiv \rho_{\beta}$. In the steady state eqs.\eqref{HRhosB} gives for $\rho_{\alpha}$, \begin{align}\label{HamRho} \rho_{\alpha}(\omega_{s}) =\frac{\Omega_{s}\left(\Delta_{s} - i\gamma_{2} \right)}{\left(\Delta_{s} - i\gamma_{1}\right)\left(\Delta_{s} - i\gamma_{2}\right) - \Omega_{12}^{2}}, \end{align} showing that $\rho_{\alpha} = \rho_{co}$ for $\Delta_{1,2} = \Delta_{s}$ in eq.\eqref{e8}, as expected, once the Hamiltonian is equivalent to the Newtonian formalism. \subsection {The classical dark state} Here we explain the Physics underlying the classical dark state for two coupled harmonic oscillators. For this we used the concepts of normal coordinates and normal modes to describe the collective motion of the system. This state is obtained when oscillator 1 is driven resonantly ($\omega_{s} = \omega_{1}$) by the harmonic force $F_{s}(t)$, causing the cancelation of the reduced mechanical susceptibility $\tilde{\chi}_{M}(\omega_{s})=\rho_{co}(\omega_{s})$ defined in Sec.\ref{SecEIT}. We consider the simple case where $m_{1,2} = m$ and $\omega_{1,2} = \omega$. From the definition of the normal coordinates \begin{subequations}\label{MNXMm} \begin{eqnarray} X_{+} &=&\left(x_{1}+x_{2}\right) /\sqrt{2}, \\ X_{-} &=&\left(x_{1}-x_{2}\right) /\sqrt{2}, \end{eqnarray} \end{subequations} and the normal momenta \begin{subequations}\label{MNPMm} \begin{eqnarray} P_{+} &=&\left(p_{1}+p_{2}\right) /\sqrt{2}, \\ P_{-} &=&\left(p_{1}-p_{2}\right) /\sqrt{2}, \end{eqnarray} \end{subequations} the coupled Hamiltonian given in eq.\eqref{HamiltA}, Appendix A, is now written as a combination of two uncoupled forced harmonic oscillators: \begin{eqnarray} H_{nm} = \sum_{i=+,-} \left( \frac{P_{i}^{2}}{2m}+\frac{1}{2}m\omega_{i}^{2}X_{i}^{2} - \frac{\sqrt{2}}{2}F_{s}(t)X_{i} \right), \end{eqnarray} where $\omega_{+} = \sqrt{k/m}$ and $\omega_{-} = \sqrt{\omega^{2}_{+} + 2\omega^{2}_{12}}$ are the resonance frequencies of the two normal modes of the system. Those are usually labeled as symmetric ($NM_{(+)}$) and asymmetric ($NM_{(-)}$) modes, owing the collective motion performed by each other. In $NM_{(+)}$ both masses move in phase with frequency $\omega_{+}$ and the amplitudes are equal. In $NM_{(-)}$ both masses move oppositely, outward and then inward, with frequency $\omega_{-}$, which is higher than $\omega_{+}$ because the middle spring is now stretched or compressed adding its effect to the restoring force. As we have seen, the equations of motion \eqref{classicA} described in Sec.\ref{SecEIT} are obtained by adding the damping force $-\eta_{j}\dot{x}_{j}$ to the resultant force of each oscillator, with $\eta_{j} = 2m\gamma_{j}$ $(j = 1, 2)$. From eqs.\eqref{HamiltA}, \eqref{MNXMm}, \eqref{MNPMm} and the Hamilton equation, \begin{eqnarray} \dot{p}_{j} = -\frac{\partial H}{\partial x_{j}} - 2m\gamma_{j}\dot{x}_{j}, \end{eqnarray} the equations of motion for the normal coordinates are \begin{subequations}\label{NCX} \begin{eqnarray} \ddot{X}_{+} + \Gamma\dot{X}_{+} + \gamma\dot{X}_{-} + \omega_{+}^{2}X_{+} &=& \frac{F_{s}(t)}{m\sqrt{2}}, \\ \ddot{X}_{-} + \gamma\dot{X}_{+} + \Gamma\dot{X}_{-} + \omega_{-}^{2}X_{-} &=& \frac{F_{s}(t)}{m\sqrt{2}}, \end{eqnarray} \end{subequations} with $\Gamma =\left(\gamma_{1} + \gamma_{2}\right)$ and $\gamma =\left(\gamma_{1} - \gamma_{2}\right)$. Note that the collective motions, provided by the normal modes, become uncoupled for $\gamma_{1} = \gamma_{2}$, once the coupling is performed through the asymmetric dissipation $\gamma$. As before, we assume that the steady-state solution for the normal coordinates has the form $X_{i} = N_{i}e^{-i\omega_{s}t} + c.c.$, which conducts to the relationship \begin{eqnarray}\label{relNM} N_{+} = \left[\frac{\omega_{-}^{2} - \omega_{s}^{2} + 2i\gamma_{2}\omega_{s}}{\omega_{+}^{2} - \omega_{s}^{2} + 2i\gamma_{2}\omega_{s}}\right]N_{-}. \end{eqnarray} Using the explicit values of $\omega_{+}$ and $\omega_{-}$ defined previously, the classical dark state is obtained when $\omega_{s} = \omega$, with $\omega^2 = \omega_{+}^2 + \omega_{12}^2$. Then, \begin{eqnarray}\label{Nmodes} N_{+} = \left[\frac{\omega_{12}^{2} - 2i\gamma_{2}\omega}{-\omega_{12}^{2} - 2i\gamma_{2}\omega}\right]N_{-}. \end{eqnarray} Note that the system is pumped in a region of high interference between the normal modes, once $\omega_{s} = \omega$ is a frequency in the range between $\omega_{+}$ and $\omega_{-}$. To see how this state looks like we have to apply the classical analog for the EIT condition, which is $\Omega_{12}>>\Omega_{s}$ and $\gamma_{2}<<\gamma_{1}$, see Sec.\ref{SecEIT} for more details. For $\gamma_{2} \rightarrow 0$, eq.\eqref{Nmodes} provides $N_{+} = - N_{-}$ and consequently $X_{+} = - X_{-}$. From eqs.\eqref{MNXMm} it can be shown readily that $x_{1} = \sqrt{2}/2 \left(X_{+}+X_{-}\right)$ and $x_{2} = \sqrt{2}/2 \left(X_{+}-X_{-}\right)$. Note that the displacement of both oscillators can be described as a superposition of the two normal modes of the system. In this particular case we have $x_{1} = 0$ and $x_{2} \neq 0$. Then, the classical dark state is obtained when oscillator 1 stays stationary while oscillator 2 oscillates harmonically, meaning that it is featured by zero absorption power of oscillator 1. From eq.\eqref{e8} wee see that $\rho_{co}(\omega_{s}) \propto \left(N_{+}+N_{-}\right)$, justifying why $\rho_{co}(\omega_{s}) = 0$ throughout the paper for zero detuning, like in Fig.\ref{FigCOEIT}. The first EIT-like condition $\Omega_{s}<<\Omega_{12}$ is demonstrated for $\gamma_{2} \neq 0$. If $\gamma_{2}<<1$, eq.\eqref{Nmodes} becomes \begin{eqnarray}\label{NmodesB} N_{+} = - \left[1 - \frac{4i\gamma_{2}\omega}{\omega_{12}^{2}}\right]N_{-}. \end{eqnarray} The condition above is equivalent to $\gamma_{2}<<\gamma_{1}$, because all parameters of the system are scaled to $\gamma_{1}$. In this case the classical dark state remains observable when $k_{12}>>k_{1}$, which implies that $\omega \approx \omega_{12} = \sqrt{k_{12}/m}$ and then $N_{+} \approx -N_{-}$. If the frequency $\omega$ of the driven oscillator is taken from the expressions for the classical pumping $\Omega_{s} = \sqrt{F^{2}/2m\omega}$ and coupling $\Omega_{12} = \omega_{12}^{2}/2\omega$ rates, we have $\Omega_{s} = F\sqrt{\Omega_{12}/k_{12}}$. In the usual approximation of small oscillations the strength of the force, given by the amplitude $F$, is very small. Then, if $k_{12}>>1$, which is fulfilled for $k_{12}>>k_{1}$, the condition $\Omega_{s}<<\Omega_{12}$ must be prescribed for $\gamma_{2} \neq 0$, in analogy to the EIT system, where $\Omega_{p}<<\Omega_{c}$ since $\gamma_{2}<<\gamma_{31}$ for nonvanishing $\gamma_{2}$. Thus, we show that the classical dark state is caused by a destructive interference between the normal modes $NM_{(\pm)}$ in the displacement of oscillator 1, and it is observed in analogous conditions with the dark state of the EIT system. The normal modes description performed here can be extended to the case of three coupled harmonic oscillators, as discussed in Sec.III, where the classical dark state is defined according to the configuration of the system.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} When we look at the eight major planets of our solar system, we cannot stop being curious by many questions - and many puzzles to ask. From inside out, the Mercury, the Venus, the Earth, and the Mars, these might look like the earthlings, and then the Jupiter and the Saturn might be mini-Suns, the other two we don't really know. The fact that all eight planets fall in the same plane with the same direction might indicate that they might form in similar or related times. For those earthlings, the Mercury, the Venus, the Earth, and the Mars, why only on Earth are there living things? The Sun provides the energy resources of all kind - the light, the electromagnetic waves of all frequencies, the neutrinos, and the cosmic-ray particles of all kinds - the main provider of the extraterrestrial origin. Besides the light, solar neutrinos, which come from the nuclear reactions in the core of the Sun, also carry away a huge amount of energy. Differing from the light, solar neutrinos, once produced, would travel up to the astronomical distance without suffering second (weak) interactions. Solar neutrinos and all other neutrinos would be the thing that would "shine" the dark world, after all the lights cease to ignite - another life of the Universe if the Universe ceases to expand or start to contract. It would be a lot more interesting to look at neutrinos and antineutrinos more intimately. \section{Solar Neutrinos} When the Sun is shining on us, a significant amount of the solar energy get carried away by neutrinos. Solar neutrinos are elusive because they only participate weak interactions - so almost all of them pass away by us without being noticed. In fact, solar neutrinos are even more elusive than, e.g., antineutrinos because charged weak interactions do not operate between solar neutrinos and the ordinary low$-Z$ matter, i.e. break-up of light nuclei by solar neutrinos being negligible - they are made of from the matter rather than the antimatter. Solar neutrinos come from the most important reactions in the so-called $pp$-I chain,\cite{Commins} \begin{equation} p+p \to D+e^+ +\nu_e, \qquad (E_\nu^{max}=0.42\,MeV: \, \phi_\nu=6.0\times 10^{10}cm^{-2} sec^{-1}), \end{equation} \begin{equation} p+p+e^-\to D+\nu_e, \qquad (E_\nu=1.44: \,\phi_\nu=1.5\times 10^8), \end{equation} or from the $pp$-II chain, \begin{equation} ^7Be+e^-\to ^7Li+\nu_e,\qquad (E_\nu=0.86\,MeV:\, \phi_\nu=2.7\times 10^9;\quad E_\nu=0.38:\,3.0\times 10^8), \end{equation} or from the $pp$-III chain, \begin{equation} ^8B\to ^8Be*+e^+ +\nu_e,\qquad (E_\nu^{max}=14.06;\, \phi_\nu=3.0\times 10^6), \end{equation} or from the C-N-O cycle, \begin{equation} ^{13}N \to ^{13}C+e^+ +\nu_e, \qquad (E=1.19: \, 3.0\times 10^8), \end{equation} \begin{equation} ^{15}O\to ^{15}N+e^+ +\nu_e, \qquad (E=1.70: \, 2.0\times 10^8). \end{equation} Here the neutrino fluxes $\phi_\nu$ are measured at the sea level on Earth, in units of $cm^{-2}sec^{-1}$. Of course, the electron-like neutrinos may oscillate into muon-like or tao-like specifies but fortunately neutral weak interactions do not differentiate among them; other types of neutrino oscillations, so far less likely, could be relevant though. The average distance of the planet Jupiter from the Sun is 5.203 $a.u.$ with the Jupiter year 11.9 our years. The radius of the Jupiter is 71,398 $km$, much bigger than the Earth's 6,378 $km$. In terms of the mass, the Jupiter's $1.901\times 10^{27}\,Kg$ is about 300 times than the Earth's $5.974\times 10^{24}\,Kg$. It is believed that the composition of the Jupiter is similar to our Sun, mostly the hydrogen plus a certain fraction of the helium. Therefore, when solar neutrinos encounter the Jupiter, we anticipate that the following weak interactions will dominate: \begin{equation} \nu + p \to \nu +p, \qquad \nu + ^4He \to \nu + ^4He, \end{equation} while the reaction $\nu + e^- \to \nu + e^-$ would serve as a small correction. \section{Estimate of the Mean Free Paths} For the neutral-current weak reaction induced by solar neutrinos on the protons, \begin{equation} \nu(p_\nu)+p(p) \to \nu(p'_\nu)+p(p'), \end{equation} the transition amplitude is given by\cite{Hwang} \begin{equation} T={G\over \sqrt 2}i {\bar u}_\nu(p'_\nu)\gamma_\lambda(1+\gamma_5) u_\nu(p_\nu) \cdot <p(p')\mid N_\lambda \mid p(p)>. \end{equation} We may proceed to parameterize the neutral-current matrix element as follows\cite{Hwang}: \begin{eqnarray} &<p(p')\mid N_\lambda(0)\mid p(p)> \nonumber\\ =&i\bar u(p') \{\gamma_\lambda f_V^N(q^2)-{\sigma_{\lambda\eta} q_\eta\over 2m_p}f_M^N(q^2) +\gamma_\lambda \gamma_5 f_A^N (q^2) +{i2Mq_\lambda \gamma_5\over m_\pi^2}f_P^N(q^2) \}u(p), \end{eqnarray} with $q^2\equiv \vec q\,^2-q_0^2$, $q_\lambda=(p'-p)_\lambda$, and $2M=m_p+m_n$. Here $f_V^N(q^2)$, $f_M^N(q^2)$, $f_A^N(q^2)$, and $f_P^N(q^2)$, respectively, the (neutral-current) vector, weak magnetism, axial, and pseudoscalar form factors. The differential cross section is given by \begin{eqnarray} &{d\sigma\over d\Omega_\nu} (\nu + p \to \nu + p) \nonumber\\ =&{G^2(E'_\nu)^2\over 2\pi^2} {E'_\nu \over E_\nu} \{ [(f_V^N(q^2))^2 +(f_M^N(q^2))^2 {q^2\over 4m_p^2} + (f_A^N(q^2))^2] cos^2 {\theta_\nu\over 2} \nonumber\\ & +2[(f_V^N(q^2)+ f_M^N(q^2))^2{q^2\over 4m_p^2} +(f_A^N(q^2))^2(1+ {q^2\over 4m_p^2}) \nonumber\\ & +4{E'_\nu \over m_p}(1+{E_\nu\over m_p} sin^2{\theta_\nu\over 2})f_A^N(q^2) (f_V^N(q^2)+f_M^N(q^2))]sin^2{\theta_\nu \over 2} \}. \end{eqnarray} In the tree approximation in the standard model of particle physics, we have \begin{equation} N_\lambda=(1-2sin^2\theta_W)I_\lambda^3-sin^2\theta_W Y_\lambda +I_\lambda^{3(5)} - {1\over 2}Y_\lambda^s - {1\over 2}Y_\lambda^{s(5)}, \end{equation} so that, for example, \begin{equation} f_V^N(q^2)=(1-2sin^2\theta_W)\cdot {1\over 2}(e_p(q^2)-e_n(q^2)) -sin^2\theta_W \cdot (e_p(q^2)+ e_n(q^2))-{1\over 2}f_V^S(q^2). \end{equation} \begin{equation} f_M^N(q^2)=(1-2sin^2\theta_W)\cdot {1\over 2}(\mu_p(q^2) -\mu_n(q^2))-sin^2\theta_W\cdot (\mu_p(q^2)-\mu_n(q^2)) -{1\over 2}f_M^S(q^2). \end{equation} \begin{equation} f_A^N(q^2)={1\over 2}f_A(q^2) -{1\over 2}f_A^S(q^2). \end{equation} As a reasonable estimate, we could use $q^2\approx 0$ and neglect all terms higher order in $q^2/m_p^2$ and $E_\nu/(2m_p)$. The integration over $d\Omega$ yields \begin{eqnarray} \sigma & \cong & {G^2E_\nu^2\over \pi}\cdot \{({\bar f}_V^2+ {\bar f}_A^2+...)(1+{2E_\nu\over m_p})^{-1}\nonumber\\ && + (2 {\bar f}_A^2+ ...)(1+{2E_\nu\over m_p})^{-2}\}\nonumber\\ & \approx & 1.686\times 10^{-20}\cdot ({\bar f}_V^2 + 3{\bar f}_A^2)\cdot ({E_\nu\over 1\,MeV})^2\cdot barn, \end{eqnarray} where ${\bar f}_V$ and ${\bar f}_A$ are suitable averages of $f^N_V(q^2)$ and $f^N_A(q^2)$, respectively. The neutrinos could come from either the three-body modes (i.e. the $\beta^+$ decays) or the two-body modes (such as the $\beta^+$ capture reactions). For the three-body modes, we could use the phase factors to do very good estimates for the neutrino spectra; we adopt this approximation in this paper. Our estimate, from Eqs. (1)-(6), for the average flux times the cross section, $\phi_\nu \sigma$, is given by \begin{equation} \phi_\nu\sigma=4.838\times 10^{-36}({\bar f}_V^2+3{\bar f}_A^2) sec^{-1}. \end{equation} The average density of the Jupiter is 1.2469 $gm/cm^3$. The inverse of the mean free path $n\sigma$ is given by \begin{equation} n\sigma=2.102\times 10^{-36} ({\bar f}_V^2+3{\bar f}_A^2) cm^{-1}. \end{equation} The neutrino flux suitably weighted by the energy factor, measured on the surface of the Jupiter, is \begin{equation} \phi_\nu=2.869\times 10^8 cm^{-2}sec^{-1}. \end{equation} This factor is already used before, calculated from from Eqs. (1)-(6) adjusted by the distance from the Jupiter and the Sun. As another estimate, we could compare how much energy the solar neutrinos deposit in the Jupiter to that in the Earth, \begin{equation} ({1\over 5.203})^2\times ({71,398 km \over 6,378 km})^3=51.82, \end{equation} modulated by small difference in the densities. Another interesting question to ask: The Sun produces a lot neutrinos (antineutrinos) per unit of time but we could ask which stellar object catches the most of them? How many of them get lost in the empty space? So, the Jupiter does come into play. \section{Discussions} Unless the neutrino species would oscillate into the antineutrino species, the relevant weak interactions are relatively simple in our problem - only neutral-current weak interactions and, in terms of the energy range, mostly the elastic channels. So, whether neutrino-antineutrino oscillation occurs is an important issue. Neutrino oscillations is now established to be of importance in the Sun - thus, we could speculate that it is also true in the Jupiter, the Mini-Sun, a factor of 10 smaller (in diameter). The scenario for the oscillations is still unknown and yet to be established, and we feel that the oscillations into antineutrinos ($\Delta L =2$) or into sterile species, if happens, would add a lot of fun in the game. Most of these issues can in principle be investigated in experiments on the Earth - there is no need to go to the Jupiter to enhance our knowledge. The stopping power of solar neutrinos may be the only signature, however small it could be (in terms of the temperature that it increases over a time span). It comes from the interior of the Jupiter, so it's interesting in this regard. \bigskip \section*{Acknowledgments} The Taiwan CosPA project is funded by the Ministry of Education (89-N-FA01-1-0 up to 89-N-FA01-1-5) and the National Science Council (NSC 96-2752-M-002-007-PAE). This research is also supported in part as another National Science Council project (NSC 96-2112-M-002-023-MY3).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The journal \textit{Monthly Notices of the Royal Astronomical Society} (MNRAS) encourages authors to prepare their papers using \LaTeX. The style file \verb'mnras.cls' can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers. This document, \verb'mnras_guide.tex', provides guidance on using that style file and the features it enables. This is not a general guide on how to use \LaTeX, of which many excellent examples already exist. We particularly recommend \textit{Wikibooks \LaTeX}\footnote{\url{https://en.wikibooks.org/wiki/LaTeX}}, a collaborative online textbook which is of use to both beginners and experts. Alternatively there are several other online resources, and most academic libraries also hold suitable beginner's guides. For guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors\footnote{\label{foot:itas}\url{http://www.oxfordjournals.org/our_journals/mnras/for_authors/}}. Only technical issues with the \LaTeX\ class are considered here. \section{Obtaining and installing the MNRAS package} Some \LaTeX\ distributions come with the MNRAS package by default. If yours does not, you can either install it using your distribution's package manager, or download it from the Comprehensive \TeX\ Archive Network\footnote{\url{http://www.ctan.org/tex-archive/macros/latex/contrib/mnras}} (CTAN). The files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your \LaTeX\ distribution), or used temporarily by placing them in the working directory for your paper. To use the MNRAS package, simply specify \verb'mnras' as the document class at the start of a \verb'.tex' file: \begin{verbatim} \documentclass{mnras} \end{verbatim} Then compile \LaTeX\ (and if necessary \bibtex) in the usual way. \section{Preparing and submitting a paper} We recommend that you start with a copy of the \texttt{mnras\_template.tex} file. Rename the file, update the information on the title page, and then work on the text of your paper. Guidelines for content, style etc. are given in the instructions to authors on the journal's website$^{\ref{foot:itas}}$. Note that this document does not follow all the aspects of MNRAS journal style (e.g. it has a table of contents). If a paper is accepted, it is professionally typeset and copyedited by the publishers. It is therefore likely that minor changes to presentation will occur. For this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process. Papers must be submitted electronically via the online submission system; paper submissions are not permitted. For full guidance on how to submit a paper, see the instructions to authors. \section{Class options} \label{sec:options} There are several options which can be added to the document class line like this: \begin{verbatim} \documentclass[option1,option2]{mnras} \end{verbatim} The available options are: \begin{itemize} \item \verb'letters' -- used for papers in the journal's Letters section. \item \verb'onecolumn' -- single column, instead of the default two columns. This should be used {\it only} if necessary for the display of numerous very long equations. \item \verb'doublespacing' -- text has double line spacing. Please don't submit papers in this format. \item \verb'referee' -- \textit{(deprecated)} single column, double spaced, larger text, bigger margins. Please don't submit papers in this format. \item \verb'galley' -- \textit{(deprecated)} no running headers, no attempt to align the bottom of columns. \item \verb'landscape' -- \textit{(deprecated)} sets the whole document on landscape paper. \item \verb"usenatbib" -- \textit{(all papers should use this)} this uses Patrick Daly's \verb"natbib.sty" package for citations. \item \verb"usegraphicx" -- \textit{(most papers will need this)} includes the \verb'graphicx' package, for inclusion of figures and images. \item \verb'useAMS' -- adds support for upright Greek characters \verb'\upi', \verb'\umu' and \verb'\upartial' ($\upi$, $\umu$ and $\upartial$). Only these three are included, if you require other symbols you will need to include the \verb'amsmath' or \verb'amsymb' packages (see section~\ref{sec:packages}). \item \verb"usedcolumn" -- includes the package \verb"dcolumn", which includes two new types of column alignment for use in tables. \end{itemize} Some of these options are deprecated and retained for backwards compatibility only. Others are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems. If you want to include any other packages, see section~\ref{sec:packages}. \section{Title page} If you are using \texttt{mnras\_template.tex} the necessary code for generating the title page, headers and footers is already present. Simply edit the title, author list, institutions, abstract and keywords as described below. \subsection{Title} There are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the `running head'). Enter them with \verb'\title[]{}' like this: \begin{verbatim} \title[Running head]{Full title of the paper} \end{verbatim} The full title can be multiple lines (use \verb'\\' to start a new line) and may be as long as necessary, although we encourage authors to use concise titles. The running head must be $\le~45$ characters on a single line. See appendix~\ref{sec:advanced} for more complicated examples. \subsection{Authors and institutions} Like the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the \verb'\author[]{}' command. If the author list is more than one line long, start a new line using \verb'\newauthor'. Use \verb'\\' to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list. For example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location: \begin{verbatim} \author[K. T. Smith et al.]{ Keith T. Smith,$^{1}$ A. N. Other,$^{2}$ and Third Author$^{2,3}$ \\ $^{1}$Affiliation 1\\ $^{2}$Affiliation 2\\ $^{3}$Affiliation 3} \end{verbatim} Affiliations should be in the format `Department, Institution, Street Address, City and Postal Code, Country'. Email addresses can be inserted with the \verb'\thanks{}' command which adds a title page footnote. If you want to list more than one email, put them all in the same \verb'\thanks' and use \verb'\footnotemark[]' to refer to the same footnote multiple times. Present addresses (if different to those where the work was performed) can also be added with a \verb'\thanks' command. \subsection{Abstract and keywords} The abstract is entered in an \verb'abstract' environment: \begin{verbatim} \begin{abstract} The abstract of the paper. \end{abstract} \end{verbatim} \noindent Note that there is a word limit on the length of abstracts. For the current word limit, see the journal instructions to authors$^{\ref{foot:itas}}$. Immediately following the abstract, a set of keywords is entered in a \verb'keywords' environment: \begin{verbatim} \begin{keywords} keyword 1 -- keyword 2 -- keyword 3 \end{keywords} \end{verbatim} \noindent There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years. Do \emph{not} make up new keywords! For the current list of allowed keywords, see the journal's instructions to authors$^{\ref{foot:itas}}$. \section{Sections and lists} Sections and lists are generally the same as in the standard \LaTeX\ classes. \subsection{Sections} \label{sec:sections} Sections are entered in the usual way, using \verb'\section{}' and its variants. It is possible to nest up to four section levels: \begin{verbatim} \section{Main section} \subsection{Subsection} \subsubsection{Subsubsection} \paragraph{Lowest level section} \end{verbatim} \noindent The other \LaTeX\ sectioning commands \verb'\part', \verb'\chapter' and \verb'\subparagraph{}' are deprecated and should not be used. Some sections are not numbered as part of journal style (e.g. the Acknowledgements). To insert an unnumbered section use the `starred' version of the command: \verb'\section*{}'. See appendix~\ref{sec:advanced} for more complicated examples. \subsection{Lists} Two forms of lists can be used in MNRAS -- numbered and unnumbered. For a numbered list, use the \verb'enumerate' environment: \begin{verbatim} \begin{enumerate} \item First item \item Second item \item etc. \end{enumerate} \end{verbatim} \noindent which produces \begin{enumerate} \item First item \item Second item \item etc. \end{enumerate} Note that the list uses lowercase Roman numerals, rather than the \LaTeX\ default Arabic numerals. For an unnumbered list, use the \verb'description' environment without the optional argument: \begin{verbatim} \begin{description} \item First item \item Second item \item etc. \end{description} \end{verbatim} \noindent which produces \begin{description} \item First item \item Second item \item etc. \end{description} Bulleted lists using the \verb'itemize' environment should not be used in MNRAS; it is retained for backwards compatibility only. \section{Mathematics and symbols} The MNRAS class mostly adopts standard \LaTeX\ handling of mathematics, which is briefly summarised here. See also section~\ref{sec:packages} for packages that support more advanced mathematics. Mathematics can be inserted into the running text using the syntax \verb'$1+1=2$', which produces $1+1=2$. Use this only for short expressions or when referring to mathematical quantities; equations should be entered as described below. \subsection{Equations} Equations should be entered using the \verb'equation' environment, which automatically numbers them: \begin{verbatim} \begin{equation} a^2=b^2+c^2 \end{equation} \end{verbatim} \noindent which produces \begin{equation} a^2=b^2+c^2 \end{equation} By default, the equations are numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command \verb'\numberwithin{equation}{section}' to the preamble. It is also possible to produce un-numbered equations by using the \LaTeX\ built-in \verb'\['\textellipsis\verb'\]' and \verb'$$'\textellipsis\verb'$$' commands; however MNRAS requires that all equations are numbered, so these commands should be avoided. \subsection{Special symbols} \begin{table} \caption{Additional commands for special symbols commonly used in astronomy. These can be used anywhere.} \label{tab:anysymbols} \begin{tabular}{lll} \hline Command & Output & Meaning\\ \hline \verb'\sun' & \sun & Sun, solar\\[2pt] \verb'\earth' & \earth & Earth, terrestrial\\[2pt] \verb'\micron' & \micron & microns\\[2pt] \verb'\degr' & \degr & degrees\\[2pt] \verb'\arcmin' & \arcmin & arcminutes\\[2pt] \verb'\arcsec' & \arcsec & arcseconds\\[2pt] \verb'\fdg' & \fdg & fraction of a degree\\[2pt] \verb'\farcm' & \farcm & fraction of an arcminute\\[2pt] \verb'\farcs' & \farcs & fraction of an arcsecond\\[2pt] \verb'\fd' & \fd & fraction of a day\\[2pt] \verb'\fh' & \fh & fraction of an hour\\[2pt] \verb'\fm' & \fm & fraction of a minute\\[2pt] \verb'\fs' & \fs & fraction of a second\\[2pt] \verb'\fp' & \fp & fraction of a period\\[2pt] \verb'\diameter' & \diameter & diameter\\[2pt] \verb'\sq' & \sq & square, Q.E.D.\\[2pt] \hline \end{tabular} \end{table} \begin{table} \caption{Additional commands for mathematical symbols. These can only be used in maths mode.} \label{tab:mathssymbols} \begin{tabular}{lll} \hline Command & Output & Meaning\\ \hline \verb'\upi' & $\upi$ & upright pi\\[2pt] \verb'\umu' & $\umu$ & upright mu\\[2pt] \verb'\upartial' & $\upartial$ & upright partial derivative\\[2pt] \verb'\lid' & $\lid$ & less than or equal to\\[2pt] \verb'\gid' & $\gid$ & greater than or equal to\\[2pt] \verb'\la' & $\la$ & less than of order\\[2pt] \verb'\ga' & $\ga$ & greater than of order\\[2pt] \verb'\loa' & $\loa$ & less than approximately\\[2pt] \verb'\goa' & $\goa$ & greater than approximately\\[2pt] \verb'\cor' & $\cor$ & corresponds to\\[2pt] \verb'\sol' & $\sol$ & similar to or less than\\[2pt] \verb'\sog' & $\sog$ & similar to or greater than\\[2pt] \verb'\lse' & $\lse$ & less than or homotopic to \\[2pt] \verb'\gse' & $\gse$ & greater than or homotopic to\\[2pt] \verb'\getsto' & $\getsto$ & from over to\\[2pt] \verb'\grole' & $\grole$ & greater over less\\[2pt] \verb'\leogr' & $\leogr$ & less over greater\\ \hline \end{tabular} \end{table} Some additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables~\ref{tab:anysymbols}--\ref{tab:mathssymbols}. The command names are -- as far as possible -- the same as those used in other major astronomy journals. Many other mathematical symbols are also available, either built into \LaTeX\ or via additional packages. If you want to insert a specific symbol but don't know the \LaTeX\ command, we recommend using the Detexify website\footnote{\url{http://detexify.kirelabs.org}}. Sometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. There is no need to worry about this as it will be corrected by the typesetter during production. To produce bold symbols in mathematics, use \verb'\bmath' for simple variables, and the \verb'bm' package for more complex symbols (see section~\ref{sec:packages}). Vectors are set in bold italic, using \verb'\mathbfit{}'. For matrices, use \verb'\mathbfss{}' to produce a bold sans-serif font e.g. \mathbfss{H}; this works even outside maths mode, but not all symbols are available (e.g. Greek). For $\nabla$ (del, used in gradients, divergence etc.) use \verb'$\nabla$'. \subsection{Ions} A new \verb'\ion{}{}' command has been added to the class file, for the correct typesetting of ionisation states. For example, to typeset singly ionised calcium use \verb'\ion{Ca}{ii}', which produces \ion{Ca}{ii}. \section{Figures and tables} \label{sec:fig_table} Figures and tables (collectively called `floats') are mostly the same as built into \LaTeX. \subsection{Basic examples} \begin{figure} \includegraphics[width=\columnwidth]{example} \caption{An example figure.} \label{fig:example} \end{figure} Figures are inserted in the usual way using a \verb'figure' environment and \verb'\includegraphics'. The example Figure~\ref{fig:example} was generated using the code: \begin{verbatim} \begin{figure} \includegraphics[width=\columnwidth]{example} \caption{An example figure.} \label{fig:example} \end{figure} \end{verbatim} \begin{table} \caption{An example table.} \label{tab:example} \begin{tabular}{lcc} \hline Star & Mass & Luminosity\\ & $M_{\sun}$ & $L_{\sun}$\\ \hline Sun & 1.00 & 1.00\\ $\alpha$~Cen~A & 1.10 & 1.52\\ $\epsilon$~Eri & 0.82 & 0.34\\ \hline \end{tabular} \end{table} The example Table~\ref{tab:example} was generated using the code: \begin{verbatim} \begin{table} \caption{An example table.} \label{tab:example} \begin{tabular}{lcc} \hline Star & Mass & Luminosity\\ & $M_{\sun}$ & $L_{\sun}$\\ \hline Sun & 1.00 & 1.00\\ $\alpha$~Cen~A & 1.10 & 1.52\\ $\epsilon$~Eri & 0.82 & 0.34\\ \hline \end{tabular} \end{table} \end{verbatim} \subsection{Captions and placement} Captions go \emph{above} tables but \emph{below} figures, as in the examples above. The \LaTeX\ float placement commands \verb'[htbp]' are intentionally disabled. Layout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort. Simply place the \LaTeX\ code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers. By default a figure or table will occupy one column of the page. To produce a wider version which covers both columns, use the \verb'figure*' or \verb'table*' environment. If a figure or table is too long to fit on a single page it can be split it into several parts. Create an additional figure or table which uses \verb'\contcaption{}' instead of \verb'\caption{}'. This will automatically correct the numbering and add `\emph{continued}' at the start of the caption. \begin{table} \contcaption{A table continued from the previous one.} \label{tab:continued} \begin{tabular}{lcc} \hline Star & Mass & Luminosity\\ & $M_{\sun}$ & $L_{\sun}$\\ \hline $\tau$~Cet & 0.78 & 0.52\\ $\delta$~Pav & 0.99 & 1.22\\ $\sigma$~Dra & 0.87 & 0.43\\ \hline \end{tabular} \end{table} Table~\ref{tab:continued} was generated using the code: \begin{verbatim} \begin{table} \contcaption{A table continued from the previous one.} \label{tab:continued} \begin{tabular}{lcc} \hline Star & Mass & Luminosity\\ & $M_{\sun}$ & $L_{\sun}$\\ \hline $\tau$~Cet & 0.78 & 0.52\\ $\delta$~Pav & 0.99 & 1.22\\ $\sigma$~Dra & 0.87 & 0.43\\ \hline \end{tabular} \end{table} \end{verbatim} To produce a landscape figure or table, use the \verb'pdflscape' package and the \verb'landscape' environment. The landscape Table~\ref{tab:landscape} was produced using the code: \begin{verbatim} \begin{landscape} \begin{table} \caption{An example landscape table.} \label{tab:landscape} \begin{tabular}{cccccccccc} \hline Header & Header & ...\\ Unit & Unit & ...\\ \hline Data & Data & ...\\ Data & Data & ...\\ ...\\ \hline \end{tabular} \end{table} \end{landscape} \end{verbatim} Unfortunately this method will force a page break before the table appears. More complicated solutions are possible, but authors shouldn't worry about this. \begin{landscape} \begin{table} \caption{An example landscape table.} \label{tab:landscape} \begin{tabular}{cccccccccc} \hline Header & Header & Header & Header & Header & Header & Header & Header & Header & Header\\ Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit \\ \hline Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\ Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\ Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\ Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\ Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\ Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\ Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\ Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\ \hline \end{tabular} \end{table} \end{landscape} \section{References and citations} \subsection{Cross-referencing} The usual \LaTeX\ commands \verb'\label{}' and \verb'\ref{}' can be used for cross-referencing within the same paper. We recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly. This ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler). It is best to give each section, figure and table a logical label. For example, Table~\ref{tab:mathssymbols} has the label \verb'tab:mathssymbols', whilst section~\ref{sec:packages} has the label \verb'sec:packages'. Add the label \emph{after} the section or caption command, as in the examples in sections~\ref{sec:sections} and \ref{sec:fig_table}. Enter the cross-reference with a non-breaking space between the type of object and the number, like this: \verb'see Figure~\ref{fig:example}'. The \verb'\autoref{}' command can be used to automatically fill out the type of object, saving on typing. It also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges. For example, \verb'\autoref{tab:journal_abbr}' produces \autoref{tab:journal_abbr}. \subsection{Citations} \label{sec:cite} MNRAS uses the Harvard -- author (year) -- citation style, e.g. \citet{author2013}. This is implemented in \LaTeX\ via the \verb'natbib' package, which in turn is included via the \verb'usenatbib' package option (see section~\ref{sec:options}), which should be used in all papers. Each entry in the reference list has a `key' (see section~\ref{sec:ref_list}) which is used to generate citations. There are two basic \verb'natbib' commands: \begin{description} \item \verb'\citet{key}' produces an in-text citation: \citet{author2013} \item \verb'\citep{key}' produces a bracketed (parenthetical) citation: \citep{author2013} \end{description} Citations will include clickable links to the relevant entry in the reference list, if supported by your \LaTeX\ compiler. \defcitealias{smith2014}{Paper~I} \begin{table*} \caption{Common citation commands, provided by the \texttt{natbib} package.} \label{tab:natbib} \begin{tabular}{lll} \hline Command & Ouput & Note\\ \hline \verb'\citet{key}' & \citet{smith2014} & \\ \verb'\citep{key}' & \citep{smith2014} & \\ \verb'\citep{key,key2}' & \citep{smith2014,jones2015} & Multiple papers\\ \verb'\citet[table 4]{key}' & \citet[table 4]{smith2014} & \\ \verb'\citep[see][figure 7]{key}' & \citep[see][figure 7]{smith2014} & \\ \verb'\citealt{key}' & \citealt{smith2014} & For use with manual brackets\\ \verb'\citeauthor{key}' & \citeauthor{smith2014} & If already cited in close proximity\\ \verb'\defcitealias{key}{Paper~I}' & & Define an alias (doesn't work in floats)\\ \verb'\citetalias{key}' & \citetalias{smith2014} & \\ \verb'\citepalias{key}' & \citepalias{smith2014} & \\ \hline \end{tabular} \end{table*} There are a number of other \verb'natbib' commands which can be used for more complicated citations. The most commonly used ones are listed in Table~\ref{tab:natbib}. For full guidance on their use, consult the \verb'natbib' documentation\footnote{\url{http://www.ctan.org/pkg/natbib}}. If a reference has several authors, \verb'natbib' will automatically use `et al.' if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use `et al.' thereafter. If you are using \bibtex\ (see section~\ref{sec:ref_list}) then this is handled automatically. If not, the \verb'\citet*{}' and \verb'\citep*{}' commands can be used at the first citation to include all of the authors. \subsection{The list of references} \label{sec:ref_list} It is possible to enter references manually using the usual \LaTeX\ commands, but we strongly encourage authors to use \bibtex\ instead. \bibtex\ ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers -- saving time hunting down reference details. An MNRAS \bibtex\ style file, \verb'mnras.bst', is distributed as part of this package. The rest of this section will assume you are using \bibtex. References are entered into a separate \verb'.bib' file in standard \bibtex\ formatting. This can be done manually, or there are several software packages which make editing the \verb'.bib' file much easier. We particularly recommend \textsc{JabRef}\footnote{\url{http://jabref.sourceforge.net/}}, which works on all major operating systems. \bibtex\ entries can be obtained from the NASA Astrophysics Data System\footnote{\label{foot:ads}\url{http://adsabs.harvard.edu}} (ADS) by clicking on `Bibtex entry for this abstract' on any entry. Simply copy this into your \verb'.bib' file or into the `BibTeX source' tab in \textsc{JabRef}. Each entry in the \verb'.bib' file must specify a unique `key' to identify the paper, the format of which is up to the author. Simply cite it in the usual way, as described in section~\ref{sec:cite}, using the specified key. Compile the paper as usual, but add an extra step to run the \texttt{bibtex} command. Consult the documentation for your compiler or latex distribution. Correct formatting of the reference list will be handled by \bibtex\ in almost all cases, provided that the correct information was entered into the \verb'.bib' file. Note that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited. If in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\ref{foot:itas}}$ for the current guidelines on how to format the list of references. \section{Appendices and online material} To start an appendix, simply place the \verb' \section{ Introduction} \label{sec:introduction} Quasars, often referred to as Quasi-Stellar Objects (QSOs), represent the most luminous subset of the overall population of Active Galactic Nuclei (AGN). Their prodigious output (bolometric luminosities up to 10$^{47}$\,erg\,s$^{-1}$) is indicative of rapid accretion, at or near the Eddington limit, onto a supermassive black hole (BH; 10$^9$\,M$_\odot$) that resides at the heart of the host galaxy. Quasars radiate across the electromagnetic spectrum, from radio to X-rays. The primary emission comes from the accretion onto the BH (in the form of an accretion disc) which produces thermal emission at ultraviolet (UV)--optical wavelengths. Surrounding the accretion disc is a geometrically and optically thick structure of cold molecular gas and dust (commonly referred to as the dusty ``torus'') which obscures a direct view of the accretion disc from some viewing angles. The dust in the ``torus'' is heated by the accretion disc and emits thermally at mid-infrared wavelengths (MIR; 5\,--\,40\,$\mu$m), dropping off steeply to the far-infrared band \citep[FIR; 40~--~500\,$\mu$m; e.g.][]{elvis1994,richards2006,netzer2007,mullaney2011}. In the radio waveband ($>$1~cm) emission produced by relativistic jets emerging from the vicinity of the black hole (BH) is sometimes detected; overall $\approx$~5--10\% of optically selected quasars are powerful in the radio waveband \citep[e.g.][]{condon2013}. A standard spectroscopic signature of a quasar is the presence of broad emission lines superimposed onto a blue power-law continuum. However, the discovery of a subset of quasars with redder continuum emission (i.e.,\ red optical/IR colours) has challenged the conventional view. Despite many studies in the literature \citep[e.g.][]{rieke1982, webster1995, benn1998, kim1999, francis2000, richards2003, glikman2004, glikman2007, glikman2012, urrutia2009, banerji2012, stern2012, assef2013, ross2015, lamassa2016, tsai2017, kim2018}, the nature of red quasars remains uncertain. The majority of studies ascribe the red colours as due to dust-reddening of the accretion-disc emission \citep[e.g.][]{webster1995, wilkes2002, glikman2004, glikman2007, rose2013, kim2018}. Nonetheless, depending on the selection and luminosity of the quasar, the red colours can also be due to (1) an excess of flux at longer wavelengths either from a red synchrotron component or the ``contamination'' of starlight from the host galaxy \citep[e.g.][]{serjeant1996, benn1998, francis2000, whiting2001}, or (2) an intrinsically red continuum due to differences in the accretion disc and/or Eddington ratio when compared to normal quasars \citep[e.g.][]{richards2003,young2008}. Ever since the first focused studies of red quasars over 20 years ago there has been debate over the relationship between red and blue quasars. If the red-blue quasar dichotomy is equivalent to that between nearby obscured and unobscured AGN then the different optical colours will be largely due to the orientation of an anisotropic obscuring structure \citep[i.e.,\ the dusty ``torus''; e.g.,][]{antonucci1993,urry1995}. In this scenario red quasars will be more obscured by dust since they are viewed at inclinations closer to the equatorial plane of the dusty ``torus'' than blue quasars (i.e.,\ a red quasar is simply an inclined blue quasar whereby the observed viewing angle intersects a larger column of dust). Alternatively, a competing paradigm postulates that red and blue quasars are related within an evolutionary sequence that connects dust-obscured star formation (SF) with quasar activity through gas inflow and outflow/feedback \citep[e.g.,][]{sanders1988,hopkins2008,alexander2012}. With this model the rare red quasar population represents a brief transitional phase \citep[a few tens of Myr;][]{hopkins2008} between the dust-obscured SF and the blue quasar phase during which winds/jets drive away the obscuring dust, ultimately shutting down the SF and revealing an unobscured blue quasar. Distinguishing between these competing scenarios is a major focus of red quasar studies. On the basis of emission line kinematics and X-ray analysis, some studies favour the orientation scenario to explain the nature of red quasars \citep[e.g.,][]{wilkes2002, rose2013}. Conversely, through a slew of different selection approaches (e.g.,\ optical; near-infrared (NIR); radio) and multi-wavelength analyses, other studies have presented evidence that many red quasars show the properties expected for the brief evolutionary phase \citep[e.g.,][]{urrutia2008,banerji2012,banerji2017,glikman2012,glikman2015}: major-merger driven gas inflows, dust-obscured SF, and energetic outflows. Irregardless of the fundamental nature of red quasars, all of these studies agree that the origin of the red quasar colours is due to the obscuration of a blue quasar continuum by dust; however, the location of the obscuring dust (i.e.,\ in the nucleus versus the host galaxy) and its origin (i.e.,\ a consequence of the inclination of the dusty ``torus'' versus the aftermath of a galaxy major merger event) remains uncertain. A major challenge in comparing between different red quasar studies is the broad range of selection approaches adopted \citep[e.g.][]{webster1995, cutri2001, gregg2002, glikman2007, banerji2012, tsai2017}; e.g.,\ optical--NIR--MIR colour criteria; point-source morphologies; bright radio emission. Furthermore, most red quasar studies do not uniformly select a normal blue quasar sample to provide a reliable ``control" to demonstrate that any observed differences (e.g.,\ in redshift, luminosity, BH mass, Eddington ratio) are specific to the red quasar population rather than being a consequence of the different selection approaches. To robustly demonstrate that there are fundamental differences between red and blue quasars which cannot be attributed just to orientation, the quasar samples must be selected in a uniform manner, carefully controlling for any additional selection effects. In this work we have used the Sloan Digital Sky Survey \citep[SDSS;][]{york2000} to undertake a uniform selection of red and blue quasars to search for fundamental differences, and to hence distinguish between the evolutionary and orientation scenarios. In \cref{sec:sample selection} we outline the multi-wavelength data we used to construct a quasar parent sample which is uniform in selection and unbiased in the radio waveband. We also define our red and blue quasar samples and constrain the amount of dust obscuration required to produce the optical colours of the red quasars. Based on our findings we construct luminosity-redshift matched subsamples which we use throughout the paper as a comparison to the full colour-selected quasar subsamples. In \cref{subsec:radio-detection fraction result} we explore the radio properties of red and blue quasars and find that red quasars show an enhanced radio-detection fraction in comparison to blue quasars across all redshifts. In \cref{subsec:radio morphologies} we investigate the radio morphologies to determine which morphological structures are associated with the surfeit of radio-detected red quasars, and in \cref{subsec:radio luminosities} we explore the relation between the radio-detection fraction and the radio luminosity (${L_{\rm 1.4~GHz}}$ and ${L_{\rm 1.4~GHz}}/{L_{\rm 6\mu m}}$) of red quasars. Overall we find a significant enhancement in the detection of compact and faint radio sources in the red quasar population, a result that becomes stronger towards radio-quiet quasars. These results strongly argue against a simple orientation model but are in broad agreement with an evolutionary model, which we discuss in \cref{sec:discussion}. In this work we adopted a concordance flat $\Lambda$-cosmology with $H_0$ = 70\,km\,s$^{-1}$\,Mpc$^{-1}$ , $\Omega_{M}$ = 0.3, and $\Omega_\Lambda$ = 0.7. \section{Data sets and quasar sample definition} \label{sec:sample selection} In this study we explore the physical properties of red quasars at $0.2 < z < 2.4$ to understand whether they are intrinsically different to the overall quasar population at the same epochs. The multi-wavelength data and catalogues we used to select red and blue quasars are highlighted in \cref{subsec:data} and the details of our careful selection criteria are described in \cref{subsec:sample definition}; Figure~\ref{fig:flowchart} presents a flowchart that summarizes the different steps that are taken in defining our full colour-selected subsamples. In \cref{subsec:sample properties} we utilise the MIR emission as a robust tracer of the nuclear power of quasars and define a $L_{\rm 6 \mu m}-z$ matched sample to minimise luminosity and redshift effects on our results. \begin{figure*} \centering \includegraphics[width=38pc]{figures/flowchart.png} \caption{A schematic diagram of our selection process. We started with quasars from \citet{schneider2010} and selected those with the uniform selection flag and excluded sources which were solely targeted for spectroscopic followup due to their FIRST pre-selection; i.e.,\ sources which were selected only due to having a FIRST counterpart and did not satisfy any of the other selection criteria outlined in \citet{richards2002}. To construct a parent sample we included sources at redshifts of $0.2 < z < 2.4$ with {\it WISE} detections in bands $W1$, $W2$ and $W3$ (SNR $> 2$), and with bolometric luminosity and BH mass measurements in \citet{shen2011}. The parent sample was then equally subdivided into the bottom, middle and top 10\% of the redshift dependent $g^* - i^*$ distributions representing our full colour-selected samples termed as bQSOs, cQSOs and rQSOs, respectively (see Figure~\ref{fig:gi_z_distribution}). Additionally, the bQSOs, cQSOs and rQSOs were matched in 6\,$\mu$m luminosity and redshift to construct $L_{\rm 6 \mu m}-z$ matched colour-selected subsamples. Following the colour selection and luminosity-redshift matching, we searched for FIRST radio counterparts within a 10$''$ search radius; these samples represent FIRST-detected quasars termed as FbQSO, FcQSOs and FrQSOs for the FIRST-detected bQSOs, cQSOs and rQSOs, respectively. The source statistics for the different subsamples split into redshift bins are tabulated in Table~\ref{tab:source_stats}.} \label{fig:flowchart} \end{figure*} \subsection{Multi-wavelength data and catalogues} \label{subsec:data} Our quasar selection is based on the SDSS DR7 Quasar Catalogue \citep[][]{schneider2010}. We utilise this catalogue in combination with MIR data from the Wide-field Infrared Survey Explorer \citep[{\it WISE};][]{wright2010} and radio data from the Faint Images of the Radio Sky at Twenty-Centimeters \citep[FIRST;][]{becker1995,becker2012,helfand2015} and the NRAO VLA Sky Survey \citep[NVSS;][]{condon1998} to refine our selection approach and quasar analyses. \subsubsection{Optical data: the SDSS DR7 Quasar Catalogue} \label{subsubsec:sdss data} The SDSS DR7 Quasar Catalogue \citep[][hereafter S10]{schneider2010} consists of 105,783 spectroscopically confirmed quasars with luminosities of $M_i < -22.0$ out to $z = 5.48$ which exhibit at least one emission line with a full width at half-maximum (FWHM) $> 1000$\,km\,s$^{-1}$ or exhibit broad absorption line features. We briefly describe the construction of the quasar catalogue below. The quasar selection algorithm described in \citet{richards2002} was used to select primary candidate quasars for spectroscopic followup \citep[see also][]{richards2001,stoughton2002,schneider2005,vandenberk2005}. This algorithm distinguishes between quasars and the much more numerous stars and galaxies by (1) using their nonstellar colours obtained from the $u^*$\,(3543\,\AA{}), $g^*$\,(4770\,\AA{}), $r^*$\,(6231\,\AA{}), $i^*$\,(7625\,\AA{}) and $z^*$\,(9134\,\AA{}) broadband photometry and (2) searching for counterparts in the FIRST survey to all unresolved objects brighter than $i^*_{\rm dered}$ = 19.1.\footnote{Hereafter all SDSS PSF magnitudes used in this paper refer to the Galactic-extinction corrected magnitudes provided in \citet{shen2011}.} The target colour selection is sensitive to red quasars \citep[e.g.,\ quasars with $E(B-V)$ = 0.1 have a high probability of being selected; see \S2.2 of][]{richards2003}, although the most reddened quasars will be missed. In addition, non-quasar selection algorithms were used to supplement the primary quasar selection, including (1) objects with \textit{ROSAT} All-sky Survey \citep[RASS,][]{voges1999,voges2000} counterparts, (2) objects targeted as members of certain stellar populations (e.g.,\ \textit{F} stars and Main-sequence turnoff stars) but whose spectra showed them to be quasars, and (3) serendipitous objects \citep[FIRST matches or objects with peculiar colours; see][for further details on the SDSS selection algorithms]{stoughton2002,anderson2003,vandenberk2005,richards2006_sdss,shen2007,schneider2010,shen2011}. These candidates were only assigned fibres once the main samples of galaxies, Luminous Red Galaxies (LRGs) and quasars were tiled. Hence, these samples are incomplete and in combination with the inclusion of the ``Special Plates" in the Galactic cap \citep[e.g.,\ Stripe 82][]{stoughton2002,adelman-mccarthy2006}, were designed to explore the limits of the primary selection algorithm, to go deeper and to target objects with atypical colours. The selection of quasars via earlier versions of the algorithm or from the Special Plates introduced a non-uniformity in the selection of the S10 quasar sample and, therefore, approximately half of these objects are not suitable for statistical analyses \citep{shen2011, kratzer2015, park2015}. However, S10 identify targets that satisfy {\it a posteriori} the \citet{richards2002} selection algorithm, indicated with the uniform flag \citep[{\sc uniform\_target} = 1; see][]{shen2011,park2015,kratzer2015}, which provides a statistically reliable sample of 59,514 quasars up to $z = 5.48$. Of these, only 259 quasars were targeted uniquely by FIRST pre-selection. \subsubsection{Radio data: searching for FIRST counterparts} \label{subsubsec:first data} The VLA Faint Images of the Radio Sky at Twenty-Centimeters \citep[FIRST;][]{becker1995,becker2012,helfand2015} is a 1.4~GHz radio survey that observed $\approx$\,10,000~deg$^2$ of the SDSS region at a spatial resolution of 5$''$. The 5\,$\sigma$ source detection threshold of 1\,mJy makes it beneficial to detect quasars down to low radio luminosities; i.e.,\ including even radio-quiet AGN. The FIRST survey comprises 946,432 radio sources of which 30\% have a spectroscopic SDSS counterpart \citep[e.g.,][]{ivezic2002}. When cross-matching between different surveys there is a trade-off between completeness and the number of false associations. Since the majority of the SDSS quasars are likely to be unresolved in FIRST, a high completeness and low random association can be achieved even when adopting a small search radius. Based on the analysis of \citet{lu2007}, we adopted a 10$''$ cross-matching radius. They showed that the false association rate within 10$''$ of the QSO position is only 0.2\%, with just 2\% of radio quasars in SDSS having radio structures that extend beyond 10$''$ and have an undetected radio core in FIRST. Therefore, we consider all FIRST radio sources with centroidal positions within 10$''$ of the quasar to be directly associated, and summed their integrated flux. In practice only 6\% of all our radio-detected quasars have multiple FIRST counterparts treated in this fashion (see \cref{subsec:radio morphologies}). To explore our incompleteness to large radio sources we used additional data from the NVSS \citep[][]{condon1998} which has a beam size of 45$''$, significantly larger than FIRST and our cross-match radius (\cref{subsec:radio morphologies}). We calculated the 1.4\,GHz luminosities using the methodology described in \citet{alexander2003}, assuming a uniform radio spectral index of $\alpha = 0.5$ to compute a K-correction \citep[which is the division between steep and flat radio spectrum quasars; e.g.,][]{wall1975,kimball2008}.\footnote{We define the spectral index $\alpha$ as $f_\nu \propto \nu^{-\alpha}$.} In this work we used the FIRST integrated flux ($F_{\rm int}$) to compute the radio luminosities and the FIRST peak flux ($F_{\rm peak}$) to establish whether a source is radio faint (i.e.,\ $F_{\rm peak} < 3$\,mJy). \subsubsection{Infrared data: searching for {\it WISE} counterparts} \label{subsubsec:WISE data} To explore the dust properties of red quasars in comparison to blue quasars, we searched for MIR counterparts from the Wide-field Infrared Survey Explorer \citep[{\it WISE};][]{wright2010} which mapped the entire sky in four bands: $W1$ ($\lambda$~=~3.4\,$\mu$m; $PSF$~=~6.1$''$), $W2$ ($\lambda$~=~4.6\,$\mu$m; $PSF$~=~6.4$''$), $W3$ ($\lambda$~=~12\,$\mu$m; $PSF$~=~6.5$''$) and $W4$ ($\lambda$~=~22\,$\mu$m; $PSF$~=~12.0$''$). Using the Query Engine from the Infrared Science Archive at NASA/IPAC we matched the uniformly selected S10 quasars to the All-Sky WISE Source Catalogue (ALLWISE) within a 2.7$''$ radius. This ensured a 99.5\% certainty that the optical source is matched to the correct MIR counterpart \citep{lake2012}. We found a WISE counterpart for 58,137 uniformly selected S10 quasars. Since the MIR emission is a robust tracer of the reprocessed accretion disc emission from dust that is free of the effects of extinction, we determined the intrinsic luminosities of the quasars on the basis of the MIR fluxes after ensuring that the MIR emission is not significantly contaminated by non-AGN processes (see \cref{subsubsec: WISE properties}). We computed the rest-frame $6\mu$m luminosity (L$_{\rm 6\mu m}$) by log-linear interpolation or extrapolation of the fluxes in the $W2$ and $W3$ bands, assuming that they are equivalent to monochromatic fluxes at the effective wavelength of the filters. \subsection{Distinguishing between red and blue quasars: full colour-selected samples} \label{subsec:sample definition} A variety of different approaches have been adopted in the literature to select red quasars which makes it difficult to draw conclusions about their properties with respect to blue quasars. Our main aim is to construct a carefully controlled and redshift-sensitive experiment in which both red and blue quasars are drawn from the same parent sample to allow for a systematic exploration of their multi-wavelength properties. In \cref{subsubsec:parent sample} we construct a parent sample using the aforementioned catalogues/surveys and in \cref{subsubsec:colour selected samples} we define our red and blue quasar samples, referred to here as our full colour-selected samples. \subsubsection{Defining the quasar parent sample} \label{subsubsec:parent sample} As mentioned in \cref{subsubsec:sdss data}, the SDSS quasar selection approach is complex which introduces some non-uniformity in the overall quasar sample. We therefore sought a selection approach which is uniform and minimises radio biases introduced by the FIRST pre-selection of SDSS quasars. Figure~\ref{fig:flowchart} shows a flowchart that summarizes the different steps we took in defining our parent sample to then select red and blue quasars (\cref{subsubsec:colour selected samples}); the source statistics in each sample are reported in Table~\ref{tab:source_stats}. To construct a uniform parent sample we, firstly, selected S10 quasars with the uniform flag and then, using the hexadecimal bit value of the `BEST' flag \citep[see][]{stoughton2002,schneider2010}, we identified and removed sources pre-selected solely on the basis of their FIRST detection ({\sc bestflag} = 0\,$\times$\,8 for sources at high Galactic latitude and {\sc bestflag} = 0\,$\times$\,10 for sources at low Galactic latitude). In addition to uniform selection, our sample is restricted to the redshift range $0.2 < z < 2.4$ and required a SNR $> 2$ in the $WISE$ $W1$, $W2$, and $W3$ bands to allow the estimation of $L_{\rm 6\mu m}$. We excluded 70 sources of the uniform sample which lacked bolometric luminosity measurements. This yielded a final parent sample of 48,964 quasars that are uniform in selection and unbiased in the radio waveband within our selected redshift range. \bgroup \def1.3{1.3} \begin{table} \begin{center} \small \caption{\label{tab:source_stats} Source statistics for the quasar samples in four different redshift bins. We present the number of sources for the uniformly selected parent sample, the full colour-selected samples (bQSOs, cQSOs and rQSOs), the $L_{\rm 6\mu m}-z$ matched colour samples (bQSO$^{L_{\rm 6\mu m}}_{z}$, cQSO$^{L_{\rm 6\mu m}}_{z}$ and rQSO$^{L_{\rm 6\mu m}}_{z}$) and their respective FIRST-detected subsamples. } \begin{tabular}[c]{lllllllccccccc} \hline \hline Sample & N$_{z1-z4}$ & N$_{z1}$ & N$_{z2}$ & N$_{z3}$ & N$_{z4}$ \\ (1) & (2) & (3) & (4) & (5) & (6) \\ \hline \hline Parent & 48,964 & 5,402 & 6,021 & 18,286 & 19,255 \\ \\ \hline \multicolumn{6}{c}{Colour-selected QSO samples}\\ \hline bQSOs & 4,900 & 535 & 613 & 1,822 & 1,923 \\ cQSOs & 4,899 & 543 & 597 & 1,826 & 1,929\\ rQSOs & 4,899 & 545 & 590 & 1,833 & 1,930\\ \\ FbQSOs & 348 & 62 & 71 & 110 & 105 \\ FcQSOs & 318 & 52 & 48 & 121 & 97\\ FrQSOs & 821 & 99 & 127 & 298 & 297\\ \\ \hline \multicolumn{6}{c}{Matched $L_{\rm 6\mu m}$--$z$ QSO samples} \\ \hline bQSO$^{L_{\rm 6\mu m}}_{z}$ & 1,967 & 159 & 256 & 781 & 771 \\ cQSO$^{L_{\rm 6\mu m}}_{z}$& 1,967 & 161 & 252 & 780 & 773\\ rQSO$^{L_{\rm 6\mu m}}_{z}$ & 1,967 & 161 & 252 & 781 & 772\\ \\ FbQSO$^{L_{\rm 6\mu m}}_{z}$& 129 & 12 & 32 & 44 & 41 \\ FcQSO$^{L_{\rm 6\mu m}}_{z}$ & 113 & 13 & 20 & 43 & 37\\ FrQSO$^{L_{\rm 6\mu m}}_{z}$ & 323 & 39 & 51 & 121 & 112\\ \\ \hline \hline \end{tabular} \end{center} {\bf Notes.} (1): Target samples used in this study: Parent QSOs represents the uniformly selected S10 quasars with $W1-W3$ detections, b-, c-, rQSOs are the $g^* - i^*$ colour-selected samples and F-b-, c-, rQSOs are the radio-bright quasars matched to FIRST (see Figure~\ref{fig:gi_z_distribution}). (2)--(6): Source statistics for each sample within the respective redshift bins; i.e.,\ across the full $z$ range in our study, $0.2 < z_1< 0.5$, $0.5 < z_2 < 0.8$, $0.8 < z_3 < 1.5$ and $1.5 < z_4 < 2.4$. \end{table} \egroup \subsubsection{Defining the full colour-selected samples} \label{subsubsec:colour selected samples} To distinguish between red and blue quasars we produced $g^* - i^*$ colour distributions of our uniformly selected parent sample (\cref{subsubsec:WISE data}) as a function of redshift. We define red quasars (rQSOs) and blue quasars (bQSOs) by selecting the respective reddest 10\% and the bluest 10\% of the colour distribution. Additionally we selected the 10\% of quasars around the median of the $g^* - i^*$ distribution to construct a control sample (cQSOs). Hereafter we refer to all non-red quasars (including the bQSO and cQSO subsamples) as blue quasars. In Figure~\ref{fig:gi_z_distribution} we illustrate the selection of the three colour-selected quasar samples. In order to construct a rQSO selection which is sensitive to the redshift evolution of quasar SEDs we sorted the quasars by redshift and constructed the $g^* - i^*$ distributions in contiguous redshift bins consisting of 1000 sources as shown in the zoom-in panel in Figure~\ref{fig:gi_z_distribution}. We restricted our analysis to $z \leq 2.4$ due to the low optical completeness of quasars in S10 at $2.5 < z < 3.0$. This is a consequence of the crossing between the stellar and quasar loci in the SDSS multicolour space \citep[see e.g.,][]{richards2002}. \begin{figure*} \centering \includegraphics[width=40pc]{figures/gi_z.png} \caption{The $g^* - i^*$ colours versus redshift for the bQSOs (blue circles), cQSOs (green circles) and rQSOs (red circles) explored in this study. Our selected quasars are superimposed on the distribution of S10 \citep[grey;][]{schneider2010}. Also indicated is the redshift range of our study ($z = 0.2-2.4$; dash-dot black line). {\it Zoom-in panel}: An illustration of the selection of the \mbox{bQSOs} (bottom 10\%), cQSOs (middle 10\%) and rQSOs (top 10\%) using the $g^* - i^*$ colours and redshift for an example bin of 1000 sources. The $g^* - i^*$ distribution is shown on the right. } \label{fig:gi_z_distribution} \end{figure*} Even though the $u^* - z^*$ colour provides the broadest wavelength baseline for colour separation, the SDSS photometry in both of these bands is shallow and is more affected by atmospheric attenuation \mbox{\citep[e.g.,][]{ivezic2002}.} Furthermore, the $u^*$-band is heavily affected by the Lyman break at $z \geq 1.9$ which motivates our use of the $g^*$-band (only influenced at higher redshifts, i.e.,\ $z \geq 2.5$). Therefore, the $g^* - i^*$ colour was used to select red and blue quasars with the broadest possible wavelength range while optimising photometric depth \citep[see e.g.,][]{richards2003}. To verify that our colour selection does indeed reliably identify quasars with red and blue colours, and is not strongly influenced by broad emission line contamination in the $g^*$ or $i^*$ bands, we compared the $g^* - i^*$ to $u^* - r^*$ colours of the bQSOs, cQSOs and rQSOs as shown in Figure~\ref{fig:sdss colour-colour}. We note that the rQSOs selected using $g^* - i^*$ are also red in $u^* - r^*$ when comparing to the bQSOs and cQSOs. Approximately 2\% of the cQSOs have $g^* - i^* > 0.5$ (all of these sources are at $z < 0.3$; see Figure~\ref{fig:gi_z_distribution}). However, this is not a failure of our quasar selection approach and is a natural consequence of our redshift-sensitive selection technique. Overall, within the full colour-selected sample there are $\approx$~4900 quasars in each of the bQSO, cQSO, and rQSO sub samples, of which 348, 318, and 821 are radio detected, respectively. \begin{figure} \centering \includegraphics[width=20pc]{figures/sdss_ur_gi_all.png} \caption{A colour-colour diagram of the $u^* - r^*$ vs. $g^* - i^*$ colours for the full colour-selected samples; the bQSOs, cQSOs and rQSOs are indicated using the colour scheme in Figure~\ref{fig:gi_z_distribution} for redshifts in the range $0.2 < z < 2.4$. It is evident that the colours of red quasars are more broadly distributed in the colour-colour parameter space than more blue quasars, even in bands unrelated to our red quasar selection approach. Furthermore, our $g^* - i^*$ colour selection appears to be broadly consistent with other optical colours in selecting quasars, e.g.,\ $u^* - r^*$. A minority of the cQSOs have $g^* - i^* > 0.5$ which is not a failure of our approach but a normal consequence of our redshift-sensitive selection technique; see also Figure~\ref{fig:gi_z_distribution} where this is evident for quasars at $z < 0.3$.} \label{fig:sdss colour-colour} \end{figure} \subsubsection{Dust extinction in red quasars} \label{subsubsec: dust extinction} The most common explanation for the optical colours of red quasars is dust extinction, whether due to the inclination of a dusty torus resulting from an increase in dust along the line-of-sight or an obscuring dust envelope in which the young red quasar is embedded. We note that, due to the optical selection criteria, the SDSS will miss the most reddened quasars both because of their colours (red quasars overlay the stellar locus in most SDSS colour-colour diagrams) and because of the optical survey flux limit \citep[e.g.,][use NIR data and select SDSS optical drop outs to define their red quasar samples]{glikman2004,banerji2012}. In Figure~\ref{fig:delta_gi_fig} we visually examine the amount of dust reddening implied by our selection technique by plotting the $\Delta (g^* - i^*)$ colour as a function of redshift for the bQSOs, cQSOs and rQSOs. The dashed lines denote the effect of SMC-type extinction \citep{prevot1984} as a function of redshift with $E(B-V)$ = 0.04, 0.12 and 0.2 \citep[e.g.,][]{richards2003} on the emission of a typical quasar. Our selection of red quasars is broadly consistent with $E(B-V) > 0.04$ at $z > 0.8$ for a blue quasar SED. This corresponds to an equivalent hydrogen column density of $N_{\rm H} > $ 2.8 $\times$ $10^{20}$\,cm$^{-2}$ assuming an SMC-like dust-to-gas ($E(B-V)/N_{\rm H}$) ratio, comparable to that found towards the HII-regions in normal galaxies \citep{buchner2017}; we note that \citet{lamassa2016} and \citet{glikman2017} find that radio-selected red quasars have dust-to-gas ratios up-to an order of magnitude below the Galactic value, which would lead to higher hydrogen column densities by up-to an order of magnitude. The range in $\Delta (g^* - i^*)$ colours suggest that the majority of our rQSOs require $A_{\rm V} \sim 0.1-0.5$\,mag to produce the observed optical colours. By comparison, NIR based selection techniques are sensitive to selecting red quasars with a dust extinction of up to $A_{\rm V} \sim 1-6$\,mag \citep[e.g.][]{glikman2004,banerji2012}, hence the rQSOs in our study represent less extreme dust-reddened red quasars. \begin{figure} \centering \includegraphics[width=20pc]{figures/delta_GI_extinction.png} \caption{Redshift vs. $\Delta (g^* - i^*)$ for the bQSOs, cQSOs and rQSOs using the colour scheme in Figure~\ref{fig:gi_z_distribution}. We illustrate the relative colours for our parent sample as well (faded grey points). The dashed lines are the dust-reddening for $E(B-V)$ = 0.04, 0.12 and 0.2 \citep[]{richards2003}, which correspond to $A_{\rm V}$ = 0.1, 0.3, and 0.5~mags for a typical quasar SED. The range in the relative colours of the rQSOs suggests that the majority require $A_{\rm V} \sim 0.1-0.5$\,mag to produce the observed optical colours.} \label{fig:delta_gi_fig} \end{figure} \subsection{Defining the AGN power at rest-frame 6\,$\mu$m: luminosity-redshift matched samples} \label{subsec:sample properties} To reliably infer any fundamental differences in the properties of the rQSOs from blue quasars (i.e.,\ the bQSOs and cQSOs) we must ensure that our results are not driven by differences in the bolometric luminosities of the quasars. \citet{shen2011} provide bolometric luminosities ($L_{\rm bol}$) for the S10 quasar sample, however, they are inferred from rest-frame UV-optical continuum measurements and have not been corrected for dust extinction. Consequently, the $L_{\rm bol}$ values are likely to be significantly underestimated in the rQSOs, which will also lead to unreliable BH masses and Eddington ratios. We therefore calculate the bolometric luminosities for our quasar subsamples using the MIR {\it WISE} data, as described below, and utilise these to create luminosity-redshift matched samples. \subsubsection{MIR measurements: a more robust approach} \label{subsubsec: WISE properties} In order to accurately quantify the MIR emission from the quasar, it is imperative to verify that it is not contaminated by the host galaxy. Since the dusty AGN torus radiates predominantly at MIR wavelengths while star formation from the host galaxy peaks at FIR wavelengths, we would expect the MIR emission from our quasars to be dominated by the AGN. To verify this, we make use of the diagnostic $WISE$ three-band colour-colour diagram on which \citet{mateos2012} define a region (called the ``AGN wedge"; Figure~\ref{fig:WISE_wedge}) which identifies AGN with red MIR power-law SEDs with a spectral index $\alpha \leq -0.3$. Host star-formation with a MIR luminosity $>10$\% of that of the AGN would systematically move QSOs out of the lower right of the wedge; however, this requires high star formation rate (SFR) levels (e.g.,\ $> 100$ M$_{\odot}$/yr at $z > 1$). In Figure~\ref{fig:WISE_wedge} we plot the WISE colours ($W1-W2$ vs $W2-W3$) of our colour-selected quasars and indicate the AGN wedge to show where AGN are expected to lie. The bulk ($\sim$95-99\%) of the quasars lie within the wedge; however, a small fraction lie outside as reported in Table~\ref{tab:outside_wedge}. Most of the bQSO and cQSO outliers are higher redshift sources ($z~>~1.5$), while the majority of the rQSO outliers tend to be at lower redshifts ($z~<~1.5$); the same result is found for the FIRST-detected samples. The colours of the sources outside of the wedge at low redshifts can potentially be attributed to dominant host galaxy emission in low-luminosity quasars. However, since only a small fraction of the quasars lie outside of the AGN wedge we can confidently assert that $>90$\% of the MIR light in most of our quasars come from an AGN, and for these, we can reliably estimate their MIR AGN luminosities ($L_{\rm 6\mu m}$; see \cref{subsubsec:WISE data} for the details of the luminosity calculation). \begin{figure*} \centering \includegraphics[width=35pc]{figures/wise_wedge_combined.png} \caption{A {\it WISE} colour-colour diagram for our optically selected quasars colour coded by redshift. The \citet{mateos2012} AGN wedge is indicated with black solid lines. The bQSOs, cQSOs and rQSOs are shown in the top panels from left to right, and the radio-detected FbQSOs, FcQSOs and FrQSOs are presented in the bottom panels from left to right, respectively. The majority of the quasars lie in the AGN wedge; see Table~\ref{tab:outside_wedge}.} \label{fig:WISE_wedge} \end{figure*} \begin{table} \begin{center} \small \caption{\label{tab:outside_wedge} The percentage of bQSOs, cQSOs and rQSOs which lie outside the \citet{mateos2012} AGN wedge across $0.2 < z < 2.4$. Tabulated are both the colour-selected SDSS samples, as well as the FIRST-detected QSOs. } \begin{tabular}[]{ccccc} \hline \hline Sample & Nu. outside & \multicolumn{3}{c}{Percentage outside} \\ & & $0.2 < z < 2.4$ &$z < 1.5 $ & $z > 1.5 $ \\ \hline \hline \multicolumn{5}{c}{Colour-selected quasars} \\ \hline bQSO & 56 & 1\% & 0.3\% & 0.8\% \\ cQSO & 82 & 2\% & 0.8\% & 0.9\% \\ rQSO & 305 & 6\% & 4\% & 2\% \\ \hline \multicolumn{5}{c}{FIRST-detected quasars} \\ \hline FbQSO & 8 & 2\% & 0.9\% & 1\% \\ FcQSO & 9 & 3\% & 1\% & 2\% \\ FrQSO & 43 & 5\% & 3\% & 2\% \\ \hline \hline \end{tabular} \end{center} \end{table} In Figure~\ref{fig:logL6um vs z} we plot the $L_{\rm 6\,\mu m}$ as a function of redshift. The median $\log(L_{6 \mu m})$ of the colour-selected (square) and FIRST-detected (cross) quasars are also plotted in the respective $z$-bins. \begin{figure*} \centering \includegraphics[width=35pc]{figures/logL6um_z.png} \caption{Rest-frame 6\,$\mu m$ luminosity versus redshift for the bQSO, cQSO and rQSO quasars with the radio-detected FbQSOs, FcQSOs and FrQSOs overlaid. The median values for full colour-selected samples (X's) and FIRST-detected samples (squares) in each redshift bin are also plotted. } \label{fig:logL6um vs z} \end{figure*} We find lower median $\log(L_{6 \mu m})$ values for the rQSOs than the bQSOs at low redshifts ($z < 0.8$). This is also mirrored in the higher fraction of rQSO outliers in the AGN wedge at these redshifts and may be the consequence of host-galaxy dilution affecting the colours of the lower luminosity quasars at lower redshifts. However, at higher redshifts ($0.8 < z < 2.4$) the opposite trend is seen where the rQSOs peak at higher $6\,\mu$m luminosities in comparison to the bQSOs and cQSOs (both in terms of higher median luminosities and larger maximum values). These effects could be a consequence of the optical selection of SDSS quasars which would mean that rQSOs that are obscured in the optical/UV by dust will only satisfy the SDSS quasar selection if they are intrinsically more luminous. If the rQSOs are more subject to dust extinction, then we would expect the optical AGN emission to be relatively weaker when compared to the $L_{\rm 6 \mu m}$ than the bQSOs and cQSOs. In Figure~\ref{fig:L6mu vs Lbol fraction} this is demonstrated by a comparison between the inferred bolometric luminosities from \citet[][]{shen2011} ($L_{\rm bol,Shen}$) to those derived from the $6\,\mu$m luminosity: $L_{\rm bol,6\mu m} = BC_{\rm 6\mu m}~\times~L_{\rm 6\mu m}$, where we adopted $BC_{\rm 6\mu m} = 8$ from \citet{stanley2015}. The continuum luminosities at rest-frame 5100\,\AA{} (left), 3000\,\AA{} (middle) and 1350\,\AA{} (right) provided in \citet{shen2011} were used as the optical bolometric luminosity. The median luminosity ratios (vertical dash-dot lines) computed from the 5100\,\AA{} luminosity are relatively consistent for the bQSOs, cQSOs and rQSOs. Contrariwise, as we approach shorter wavelengths, going from 3000\,\AA{} to 1350\,\AA{} luminosities, the median luminosity ratios of the red quasars decrease with respect to that of the blue quasars. This is the signature that we would expect for dust reddening as the shorter wavelength emission will be more suppressed for a fixed amount of obscuration than longer wavelength emission. Conversely to the underestimated luminosities, a few red quasars have notable overestimated optical bolometric luminosities (lower tail) which may be due to host contamination in lower luminosity quasars at low redshifts. At $z < 0.5$ the $\Delta (g^* - i^*)$ values of the blue and red quasars are seen to deviate from the overall population (see Figure~\ref{fig:delta_gi_fig}); the blue quasars have bluer colours than the median quasar at the specific redshift, whereas the red quasars have redder colours. A possible explanation for the change in the apparent shape of the distribution is likely host galaxy contamination at lower redshifts. \begin{figure*} \centering \begin{minipage}[c]{\textwidth} \centering \includegraphics[width=40pc]{figures/L6mu_vs_Lbol_hist_lines.pdf} \caption{Luminosity ratio distributions using the continuum luminosities at rest-frame 5100\,\AA{} (left, BC = 9.26), 3000\,\AA{} (middle, BC = 5.15) and 1350\,\AA{} (right, BC = 3.81) adopted from \citet{shen2011} and the inferred bolometric luminosity from the rest-frame $6\,\mu$m luminosity with a bolometric correction of BC $= 8$. The vertical dash-dot lines illustrate the median luminosity ratio for the respective radio-detected quasar subsamples.} \label{fig:L6mu vs Lbol fraction} \end{minipage} \end{figure*} \subsubsection{Defining the luminosity-redshift matched colour-selected quasar samples } \label{subsubsec:matched L6um-z} The different optical and MIR luminosity distributions between the bQSOs, cQSOs and rQSOs shown in \cref{subsubsec: WISE properties} could skew any differences in the intrinsic properties between red and blue quasars. We have therefore adopted a luminosity and redshift matching approach to refine our quasar samples and exclude any biases in our results introduced by luminosity differences. We luminosity matched the bQSOs, cQSOs and rQSOs in rest-frame $6\,\mu$m luminosity and redshift using tolerances of 0.2~dex and 0.05, respectively, with the following procedure (see Figure~\ref{fig:flowchart} and Table~\ref{tab:source_stats}): (1) we used the the 2-d Cartesian Anisotropic algorithm from the `Triple Match' function in {\sc topcat} \citep{taylor2005,taylor2006} keeping the best symmetric matches of the bQSOs, cQSOs and rQSOs in $L_{\rm 6\mu m}$ and redshift (bQSO$^{L_{\rm 6\mu m}}_{z}$, cQSO$^{L_{\rm 6\mu m}}_{z}$ and rQSO$^{L_{\rm 6\mu m}}_{z}$) and (2) we associated the quasars in these matched samples to FIRST within a 10$''$ radius (FbQSO$^{L_{\rm 6\mu m}}_{z}$, FcQSO$^{L_{\rm 6\mu m}}_{z}$ and FrQSO$^{L_{\rm 6\mu m}}_{z}$), as described in \cref{subsubsec:first data}. Overall, within the $L_{\rm 6\mu m}-{z}$ matched colour-selected sample there are 1967 quasars in each of the bQSO, cQSO, and rQSO sub samples, of which 129, 113, and 323 are radio detected, respectively.\\ \section{Results} \label{sec:results} In this section we use our two defined samples (the full colour-selected sample and the $L_{\rm 6\mu m}-{z}$ matched colour-selected sample) to search for fundamental differences between the radio properties of red and blue quasars. We explore the radio-detection rates in \cref{subsec:radio-detection fraction result}, the radio morphologies in \cref{subsec:radio morphologies}, and the radio luminosities ($L_{\rm 1.4GHz}$ and $L_{\rm 1.4GHz}/L_{\rm 6 \mu m}$) in \cref{subsec:radio luminosities}. \subsection{FIRST detection rates of red versus blue quasars} \label{subsec:radio-detection fraction result} The FIRST radio-detection rate as a function of the median redshift in each of the four redshift bins for the colour-selected samples is presented in Figure~\ref{fig:radio_detection_frac}. We used the Bayesian binomial confidence interval technique discussed in \citet{cameron2011} to calculate 1$\sigma$ uncertainties on the radio-detection fractions for both the full and $L_{\rm 6\mu m}-z$ matched colour-selected samples. In Table~\ref{tab:radio detection frac} we show the source statistics for the different samples. The bQSOs and cQSOs have similar radio-detection fractions of $\approx$\,5\% -- 10\% across all redshifts. However, it is apparent that rQSOs always exhibit a significantly higher FIRST detection rate ($\approx$\,15\% -- 20\%) relative to non-red quasars; i.e.,\ the radio-detection fraction of red quasars is a factor of $\approx$\,2 -- 3 times larger than blue quasars. This enhancement is also apparent if we compute the radio-detection fraction for non-red quasars in our sample; i.e.,\ all quasars in the $g^* - i^*$ colour distribution excluding the reddest 10\%. The FIRST detection rates for these quasars are consistent with those of the bQSOs and cQSOs and, hence, they represent the typical quasar population of which only 5--10\% are radio bright. Our results are consistent between the full and $L_{\rm 6\mu m}-z$ matched colour-selected samples. \begin{figure*} \centering \includegraphics[width=35pc]{figures/radio_detection_frac.pdf} \caption{The FIRST 1.4~GHz radio-detection fraction as a function of redshift for the blue (bQSOs), control (cQSOs) and red (rQSOs) quasars using the colour scheme in Figure~\ref{fig:gi_z_distribution}; we also plot on the radio-detection fraction for non-red QSOs, which is defined as all quasars excluding the rQSOs. The data are taken from Table~\ref{tab:radio detection frac} and the errorbars correspond to the 1$\sigma$ binomial uncertainties. The horizontal bars denote the redshift ranges of the four $z$-bins. The rQSOs have a factor $\approx$~2--3 times larger radio-detection fraction in comparison to the bQSOs and cQSOs. The shaded regions represent the FIRST detection rates for the $L_{\rm 6\mu m}-z$ matched bQSOs, cQSOs and rQSOs.} \label{fig:radio_detection_frac} \end{figure*} \bgroup \def1.3{1.6} \begin{table} \begin{center} \small \caption{\label{tab:radio detection frac} The percentage of bQSOs, cQSOs, and rQSOs detected with FIRST. We also report the the detection fraction for non-red quasars; i.e.,\ excluding the 10\% of the $g^* - i^*$ distribution. Also given are the 1$\sigma$ uncertainties on the radio-detection fractions obtained from Bayesian statistics. See Table~\ref{tab:source_stats} for the source statistics of each sample in the respective redshift ranges and the sample descriptions. } \begin{tabular}[c]{ccccc} \hline \hline Redshift & \multicolumn{4}{c}{FIRST-detected percentage (\%)} \\ & bQSO & cQSO & rQSO & non-red QSO\\ \hline \hline $0.2 < z < 0.5$ & $11.6_{-1.2}^{+1.5}$ & $9.6_{-1.1}^{+1.4} $ & $18.2_{-1.5}^{+1.8}$ & $7.1_{-0.3}^{+0.4}$ \\ $0.5 < z < 0.8$ & $11.6_{-1.2}^{+1.4}$ & $8.0_{-0.9}^{+1.3} $ & $21.5_{-1.6}^{+1.8}$ & $6.7_{-0.2}^{+0.2}$ \\ $0.8 < z < 1.5$ & $6.0_{-0.5}^{+0.6} $ & $6.6_{-0.5}^{+0.6} $ & $16.3_{-0.8}^{+0.9}$ & $5.0_{-0.2}^{+0.2}$ \\ $1.5 < z < 2.4$ & $5.5_{-0.5 }^{+0.6}$ & $5.0_{-0.5 }^{+0.5}$ & $15.4_{-0.8}^{+0.8}$ & $7.6_{-0.4}^{+0.4}$ \\ \hline \hline \end{tabular} \end{center} \end{table} \egroup In our colour sample definitions we have taken the top 10\% to define rQSOs (see \cref{subsec:sample definition}). In Figure~\ref{fig:selection_10percent_bins} we show how the radio-detection fraction changes from the bluest to reddest quasars: the top 10\% of the reddest SDSS quasars always show the largest radio-detection fraction; however, we note that a more refined colour-based selection of red quasars could result in even more significant differences from blue quasars. \begin{figure*} \centering \includegraphics[width=35pc]{figures/radiodetection_frac_10per.pdf} \caption{The FIRST 1.4~GHz radio-detection fraction as a function of the $g^* - i^*$ quantile split into 10\% selection bins in the four respective redshift bins. The top 10\% of the reddest SDSS quasars always show the largest radio-detection fraction. } \label{fig:selection_10percent_bins} \end{figure*} \subsection{Radio morphologies of red versus blue quasars} \label{subsec:radio morphologies} Given the differences in the radio-detection fractions of red and blue quasars it is useful to determine whether there are associated differences in the radio morphologies; i.e.,\ whether red quasars favour a specific radio morphological class above another. Radio-detected quasars often exhibit large scale radio jets and lobes which extend to tens of kpc, and in extreme cases can reach up to several Mpc \citep[e.g.,][]{muxlow1991}. Their radio morphologies are diverse but can be broadly characterised into two groups: core and extended radio sources \citep[e.g.,][]{peterson1997,lu2007}. Core radio sources are spatially unresolved at resolutions similar to FIRST and tend to have flat radio spectra ($\alpha < 0.5$). In contrast, quasars with extended radio morphologies have radio spectra that are steep ($\alpha > 0.5$) and generally have the signature of two symmetric lobes (spatially resolved), although they can exhibit asymmetric lobes, jet-tail structures and other extended features. Extended radio sources are further differentiated into two main classes based on the surface brightness of the core to lobes, i.e.,\ Fanaroff-Riley (FR) type I and II \citep{fanaroffriley1974}, where the former are core-dominated systems and the latter are lobe-dominated systems \citep[e.g.,][]{fanaroffriley1974,bicknell1995}. With the modest resolution of FIRST we can explore radio morphologies for sources that extend beyond the 5$''$ beam size of the survey (this corresponds to projected sizes in the range 17--41\,kpc at $z = 0.2-2.4$ for our assumed cosmology). However, given the diverse range of radio morphologies of quasars, some will be too extended to be picked up by FIRST. Given these challenges we developed a comprehensive strategy to identify potentially extended radio sources, which we then visually classify using a simple approach that captures the morphological diversity of radio-detected quasars. Our approach to identify and classify potentially extended sources is illustrated in Figure~\ref{fig:first morphology flowchart} and is described below. \begin{figure} \centering \includegraphics[width=20pc]{figures/FIRST_flowchart.png} \caption{Schematic flow diagram of our radio morphology classification for the FrQSOs, FcQSOs and FbQSOs. Four subcategories were selected to be visually assessed to accurately determine the morphological class. For completeness we also visually checked all of the other radio-detected quasars.} \label{fig:first morphology flowchart} \end{figure} The FIRST survey decomposes radio sources into multiple gaussian components and reports size measurements for these components \citep{white1997}. Therefore the simplest approach in identifying extended sources is to search for cases where the gaussian sizes are larger than the FIRST beam size (5$''$) or with multiple radio counterparts within our cross-match radius of 10$''.$\footnote{Most of these FIRST-detected colour-selected quasars are associated with single radio counterparts within 10$''$ of the optical position. Only 4\% of the FrQSOs have multiple radio components, whilst 8\% of the FbQSOs and 10\% of the FcQSOs have multiple radio components.} This approach will identify the vast majority of extended sources but will miss the most extended systems that lack bright radio cores; i.e.,\ sources with very large lobe structures that lie beyond our 10$''$ cross-match radius and remain undetected by FIRST due to a faint radio core. To overcome this limitation we employed the 1.4~GHz NVSS survey which overlaps with the FIRST and SDSS footprint, but has a beam size of $45''$, significantly larger than FIRST. This allows for the inclusion of large-scale radio structures beyond 10$''$. Using the methodology given in \citet{lu2007}, we estimate that only $\approx$\,0.2\% will have spurious matches or unassociated sources for $r$\,=\,10$''$ (this increases to $\approx$\,4\% for $r$\,=\,45$''$). We note that since NVSS is 2.5 times less sensitive than FIRST, faint diffuse extended sources will be missed in this analysis. Figure~\ref{fig:first summed fluxes} shows the NVSS versus FIRST fluxes for the FbQSOs, FcQSOs and FrQSOs. The majority of the sources follow a 1:1 trend. In this figure sources indicated with crosses have NVSS fluxes that are at least twice their FIRST fluxes: 12\%, 10\% and 7\% of the FbQSOs, FcQSOs and FrQSOs are highlighted. For these sources FIRST may have underestimated their extended emission and they were visually assessed (see Figure~\ref{fig:first morphology flowchart}). While radio variability may also explain these single component outliers \citep[e.g.,][]{heeschen1987,kraus2003,barvainis2005,czerny2008}, we nevertheless include them as potential extended sources. \begin{figure*} \centering \includegraphics[width=40pc]{figures/FIRST_summed_fluxes.pdf} \caption{ NVSS flux vs. FIRST flux at 1.4\,GHz for the NVSS detected FbQSO, FcQSO and FrQSO indicated with faded blue, green and red circles. Sources with underestimated FIRST fluxes are indicated with dark blue, green and red crosses. The solid lines indicate a 1:1 relation and the dash-dot lines a 1:2 relation.} \label{fig:first summed fluxes} \end{figure*} All potentially extended sources highlighted by the process illustrated in Figure~\ref{fig:first morphology flowchart} were visually classified by three people (LK, DMA, DJR) using cutout images at 1.4~GHz extracted from the \href{https://third.ucllnl.org/cgi-bin/firstcutout}{VLA FIRST server}. Overall we inspected 396 FIRST cutouts of 2$'$~$\times$~2$'$ fields centred on the S10 QSO positions. This field size is sufficient since all but 7 of our radio-detected quasars have smaller angular sizes than 2$'$ \citep[see e.g.,][]{devries2006}, which corresponds to projected sizes of $<$~0.4--1.0\,Mpc at $z = 0.2-2.4$. In our visual assessment we employed a simplified classification scheme that captures the main morphological classes of radio-detected quasars and minimises redshift- and resolution-dependent effects: (1) faint, (2) compact, (3) extended, (4) FR\,II-like and (5) compact FR\,II; see Table~\ref{tab:morphology classes} for a more detailed description. Of all the FIRST-detected quasars in our study, 27\% required visual assessment according to our approach with the remaining 73\% of radio-detected quasars categorized either as compact (non-extended; single components with fitted major axes less than 5$''$) or faint ($F_{\rm peak} < 3$\,mJy) radio sources. Although the compact and faint radio sources are not expected to be extended, for completeness we performed a visual check of their morphologies: we found 5\% of the faint and compact samples to have either extended or FR~II-like morphologies and we reclassified these systems. \bgroup \def1.3{1.3} \begin{table*} \begin{center} \footnotesize \caption{\label{tab:morphology classes} The radio morphology classes used to classify the FIRST images for the radio-detected quasars. } \begin{tabular}[c]{llcc} \hline \hline Classification & Description \\ \hline \hline \vspace{0.2cm} Faint & \makecell[l]{Sources detected close to the FIRST detection threshold with peak fluxes of $F_{\rm peak} < 3$\,mJy.} \\ \vspace{0.2cm} Compact & \makecell[l]{Sources that are point-like with non-extended emission (fitted major axes $<$ 5$''$). } \\ \vspace{0.2cm} Extended & \makecell[l]{Single radio sources that are extended. \\ This class includes FR\,I-like systems where both lobes are fainter than the core; $F_{\rm lobe} < F_{\rm peak}$.} \\ \vspace{0.2cm} FR\,II-like & \makecell[l]{Double lobed systems with approximately the same brightness and offset from QSO position. \\ At least one lobe should be brighter than than QSO core ($F_{\rm lobe} > F_{\rm peak}$).} \\ \vspace{0.2cm} Compact FR\,II & \makecell[l]{FR\,II-like systems on small scales, i.e within the 10$''$ radius circle (which corresponds to a \\projected size of $\sim$\,85\,kpc at $z = 1.5$).} \\ \hline \hline \end{tabular} \end{center} \end{table*} \egroup Figure~\ref{fig:first morphology summaries} summarizes the results of our morphological analysis by showing the percentages of the bQSOs, cQSOs and rQSOs in each of the morphology classes; see also Table~\ref{tab:radio morph stats}. We calculated the 1$\sigma$ binomial uncertainties on our percentages for both the full and $L_{\rm 6\mu m}-z$ matched colour-selected samples following \citet{cameron2011}. This figure shows that rQSOs have a different radio morphological mix to blue quasars. There are no significant differences in the fraction of rQSOs and blue quasars with extended radio morphologies, such as the classical FRII-type systems. However, a factor 2--6 more rQSOs have either compact radio emission or are radio faint, in comparison to blue quasars, a result that is consistent between both the full and $L_{\rm 6\mu m}-z$ matched colour-selected samples. It is these compact and faint radio sources that are responsible for the increase in the overall radio-detection fraction between rQSOs and blue quasars in Figure~\ref{fig:radio_detection_frac}. \begin{figure*} \centering \includegraphics[width=35pc]{figures/FIRST_morphology_fractions.png} \caption{The percentage of both the full colour-selected (filled markers) and the $L_{6\mu m}-z$ matched (open markers) samples (using the colour scheme in Figure~\ref{fig:gi_z_distribution}) with different radio morphologies. The fractions are reported in five categories: faint sources detected near the sensitivity limit ($F_{\rm peak} < 3$\,mJy), bright compact radio sources, bright extended radio sources, FR\,II-like systems and compact FR\,IIs (small scale lobe-systems); see Table~\ref{tab:morphology classes} for more details. The errorbars correspond to the 1$\sigma$ binomial uncertainties and the vertical dash lines separate the different categories. Example 2$'$~$\times$~2$'$ FIRST images of each morphological class are illustrated in the top panel. The white circle represents our cross-matching radius of 10$''$. Extended radio emission is found among a similar fraction of all quasars, but red quasars show a surfeit of compact and faint systems. } \label{fig:first morphology summaries} \end{figure*} \bgroup \def1.3{1.3} \begin{table} \begin{center} \small \caption{\label{tab:radio morph stats} The number and percentage of quasars in each of the morphology classes described in Table~\ref{tab:morphology classes}; see Figure~\ref{fig:first morphology summaries}. } \begin{tabular}[c]{llllcccc} \hline \hline Classification & bQSO & cQSO & rQSO \\ \hline \hline \multicolumn{4}{c}{The number of quasars in each class:} \\ \hline Faint & 73 & 97 & 357 \\ Compact & 142 & 107 & 348 \\ Extended & 86 & 67 & 69 \\ FR\,II-like & 33 & 34 & 32 \\ Compact FR\,II & 6 & 10 & 10 \\ \\ \hline \multicolumn{4}{c}{The percentage in each class (see Figure~\ref{fig:first morphology summaries}):} \\ & (\%) & (\%) & (\%) \\ \hline Faint & $1.49 _{- 0.153 }^{+ 0.193 } $ & $1.98 _{- 0.179 }^{+ 0.218 } $ & $7.287 _{- 0.352 }^{+ 0.387 }$ \\ Compact & $2.898 _{- 0.22 }^{+ 0.258 } $ & $2.184 _{- 0.189 }^{+ 0.228 }$ & $7.103 _{- 0.348 }^{+ 0.383 }$ \\ Extended & $1.755 _{- 0.168 }^{+ 0.207 }$ & $1.368 _{- 0.146 }^{+ 0.186 }$ & $1.408 _{- 0.148 }^{+ 0.188 }$ \\ FR\,II-like & $0.673 _{- 0.097 }^{+ 0.137 }$ & $0.694 _{- 0.099 }^{+ 0.139 }$ & $0.653 _{- 0.096 }^{+ 0.136 }$ \\ Compact FR\,II & $0.122 _{- 0.032 }^{+ 0.073 }$ & $0.204 _{- 0.046 }^{+ 0.086 }$ & $0.204 _{- 0.046 }^{+ 0.086 }$ \\ \\ \hline \hline \end{tabular} \end{center} \end{table} \egroup \subsection{Radio luminosities of red versus blue quasars} \label{subsec:radio luminosities} The excess of faint radio detections in red quasars compared to blue quasars suggests that a larger fraction of FrQSOs have low radio luminosities. In addition to the radio morphologies we therefore also explored the radio luminosities ($L_{\rm 1.4GHz}$) and ``radio-loudness" (here defined as $L_{\rm 1.4GHz}/L_{\rm 6 \mu m}$) of the FIRST-detected quasars. For consistency, in these analyses we used the total FIRST fluxes of all radio-detected components within 10$''$ to calculate the 1.4\,GHz luminosities, even if their NVSS fluxes differed from FIRST (see \cref{subsubsec:first data}). In Figure~\ref{fig:luminosity vs z} we plot $L_{\rm 1.4GHz}$ vs. redshift for the FbQSOs, FcQSOs and FrQSOs. The radio luminosity distributions of all three samples changes with redshift due to the flux limit of the FIRST survey. Beyond $z > 0.5$, $L_{\rm 1.4GHz}$ is higher than that of the strongest star-forming galaxies, indicating that the radio emission must be dominated by the AGN in almost all cases \citep[e.g.,\ Arp~220 is the most powerful nearby starburst with $\log(L_{\rm 1.4GHz}/{\rm W~Hz^{-1}})$ = 23.4;][]{condon2013}. Noticeably, as indicated by our morphology analysis, there is a strong concentration of FrQSOs close to the FIRST detection limit. \begin{figure*} \centering \includegraphics[width=40pc]{figures/Lradio_vs_z_summed_fluxes.pdf} \caption{Radio luminosity at 1.4\,GHz versus redshift for the radio-detected FbQSO (left), FcQSO (middle) and FrQSO (right) quasars. The FIRST fluxes of quasars with multiple radio counterparts within 10$''$ have been added to compute the total luminosity. The median $\log(L_{\rm 1.4GHz})$ in each of the 4 redshift bins are indicated with crosses. The dash-dot lines indicate radio luminosities corresponding to star formation rates (SFRs) of 100 and 1000 M$_\odot$\,yr$^{-1}$ \citep[see][for details on the SFR calculations]{hopkins2001}.} \label{fig:luminosity vs z} \end{figure*} An alternative way to investigate the radio power is to explore the relative ratio of the quasar in the radio band to the overall accretion power. This is effectively a measure of the ``radio loudness", which we define here as $R$. Many studies use the 5\,GHz-to-2500\,\AA{} flux ratio to define radio-loud quasars as objects with $R > 10$ \citep[e.g.,][]{kellerman1989,stocke1992, urry1995,ivezic2002,glikman2004,zakamska2014,mehdipour2019}. The transition between the radio-quiet and radio-loud regime is not definite and often quasars with $3 < R < 100$ are considered to be radio-intermediate. Using the $R$-values provided in \citet{shen2011} and the standard definition of radio loudness, we find that 81\%, 78\% and 83\% of the FbQSOs, FcQSOs and FrQSOs are radio-loud quasars with $R>10$. To reliably constrain the radio loudness in our analyses here we use the rest-frame 1.4\,GHz\,--\,6\,$\mu$m luminosity ratio ($L_{\rm 1.4GHz}/L_{\rm 6 \mu m}$), which will be less affected by dust extinction and redshift effects. In Figure~\ref{fig:radio-loudness} we plot the relative fraction of FrQSOs and FbQSOs compared to FcQSOs as a function of $R$ for both the full (filled markers) and $L_{\rm 6\mu m}$--z matched (shaded regions) colour-selected samples. We estimate the transition between radio-quiet and radio-loud quasars by requiring that 90\% of the \citet{shen2011} identified radio-loud FcQSOs satisfies our radio loudness definition: $\log_{10}(L_{\rm 1.4GHz}/L_{\rm 6 \mu m}) \approx -4.2$ (dot-dashed line); we used the FcQSOs since they will have typical optical quasar colours unaffected by dust reddening. This figure shows a factor $\approx$~3--4 enhancement in the fraction of FrQSOs with respect to both FbQSOs and FcQSOs towards lower values of $R$, indicating that the enhanced radio-detection fraction for the FrQSOs is from systems around the radio-loud--radio-quiet threshold; we quantified this threshold ($\log_{10}(L_{\rm 1.4GHz}/L_{\rm 6 \mu m}) \approx -4.2$) in terms of the mechanical-to-radiative power to be $P_{\rm mech,sync}/P_{\rm rad,L6\mu m}\approx0.001$.\footnote{On the basis of the methodology given in \citet{willot1999}, we calculate the mechanical power from the jet and find that for our assumed radio quiet-radio loud threshold the mechanical-to-radiative power ($P_{\rm mech,sync}/P_{\rm rad,L6\mu m}$) corresponds to $\sim$\,0.1\%. In this calculation we assume a normalization factor $\digamma_{\rm W}$ = 5 \citep[see][]{daly2012}; the normalization factor ranges between 1\,--\,20, and combines several factors such as the lobe filling factor and the amount of energy from non-radiating particles.} As may be expected, given our earlier radio-morphology results, the enhancement in the fraction of rQSOs at low $R$ values comes from systems with either faint or compact radio morphologies; the number of sources in each bin are indicated in Figure~\ref{fig:radio-loudness} and tabulated in Table~\ref{tab:radio loudness source stats}. No significant differences are found between red and blue quasars within the classical extended radio-loud systems. \bgroup \def1.3{1.3} \begin{table} \begin{center} \small \caption{\label{tab:radio loudness source stats} The number of FbQSOs, FcQSOs and FrQSOs in the bins of $\log_{10}(L_{\rm 1.4GHz}/L_{\rm 6 \mu m})$ plotted in Figure~\ref{fig:radio-loudness}. Each bin is subdivided into our radio morphology classes; faint, compact and extended sources (extended, FR~II and compact FR~II). } \begin{tabular}[c]{lllllccccc} \hline \hline $\log_{10}(L_{\rm 1.4GHz}/L_{\rm 6 \mu m})$ & Sample & Faint & Compact & Extended \\ \hline \hline -4.75 & FbQSO & 53 & 14 & 7 \\ & FcQSO & 84 & 16 & 11 \\ & FrQSO & 321 & 63 & 13 \\ \hline -3.75 & FbQSO & 17 & 26 & 29 \\ & FcQSO & 11 & 25 & 16 \\ & FrQSO & 33 & 108 & 27 \\ \hline -3.25 & FbQSO & 2 & 41 & 39 \\ & FcQSO & 0 & 26 & 34 \\ & FrQSO & 0 & 85 & 27 \\ \hline -2.25 & FbQSO & 0 & 61 & 50 \\ & FcQSO & 0 & 40 & 50 \\ & FrQSO & 0 & 87 & 43 \\ \hline \hline \end{tabular} \end{center} \end{table} \egroup \begin{figure*} \centering \includegraphics[width=35pc]{figures/radio_detection_frac_radioloudness.png} \caption{The FIRST 1.4~GHz radio-detection fraction in bins of radio loudness, computed using the 1.4\,GHz and 6\,$\mu$m luminosities, of the full (coloured markers) and $L_{\rm 6\mu m}-z$ matched (coloured shaded regions) colour samples. The relative fraction of FIRST-detected C\,{\sc iv} BALQSOs compared to FcQSOs at $1.5 < z < 2.4$ are also plotted as a function of $R$ (grey crosses; see \cref{subsec: empirical constraints evolution model}). The blue, red and grey markers are offset for illustration purposes. The grey shaded region indicates the radio-quiet regime and the dashed-dot line show the transition between radio-quiet and radio-loud quasars, quantified in terms of the mechanical-to-radiative power (see footnote~4 and \cref{subsec:radio luminosities}). The number of sources in each $R$-bin is subdivided into faint and compact morphologies (F+C) and extended (E; including extended, FR~IIs and compact FR~IIs) morphologies. The enhanced radio-detection fraction for the rQSOs is predominantly from systems around the radio quiet-radio loud threshold with faint or compact radio morphologies.} \label{fig:radio-loudness} \end{figure*} \section{Discussion} \label{sec:discussion} We have analysed the SDSS DR7 Quasar Catalogue from S10 to look for fundamental differences in the radio properties between red and blue quasars. By carefully selecting rQSOs, cQSOs and bQSOs from the top, middle and bottom 10\% of the observed $g^* - i^*$ colour distributions we have generated uniformly selected samples unbiased in their radio properties. Overall, we have found that rQSOs have a FIRST radio-detection fraction of $\approx 15 - 20$\%, which is a factor of $\approx\,2-3$ times larger than that of blue quasars (cQSOs and bQSOs). Through a visual inspection of the FIRST images and an assessment of the radio luminosities (${L_{\rm 1.4~GHz}}$ and ${L_{\rm 1.4~GHz}}/{L_{\rm 6\mu m}}$) we find that the radio-detection excess for rQSOs is primarily due to compact and radio-faint quasars (those around the radio quiet-radio loud threshold). No significant differences are found between rQSOs, cQSOs, and bQSOs within the classical extended radio-loud systems. We find consistent results between our full and $L_{\rm 6\mu m}$--z matched colour-selected samples. Given that the radio luminosities of the quasars are at least an order of magnitude above those expected from star-formation (e.g.,\ $L_{\rm 1.4GHz} > 10^{25}$\,W\,Hz$^{-1}$ at $z > 1.5$), the differences in the radio properties will be driven by AGN-related processes (e.g.,\ radio core, jets, lobes, winds). Many previous studies have explored the optical properties of radio-detected quasars, finding that they tend to have redder optical colours than radio-undetected quasars \citep[e.g.,][]{ivezic2002,white2003}. While other studies have explored the radio morphologies of radio-detected quasars and found that the quasars with unresolved radio emission tend to have redder optical colours than the quasars with extended radio emission \citep[e.g.,][]{lu2007,kimball2011}. However, fewer studies have explored the radio properties of carefully selected red and blue quasars. \citet[][]{tsai2017} distinguished between red and blue quasars based on their spectral colours (flux ratio of the rest frame 4000\AA{} to 3000\AA{} continuum emission) and, similarly to our work, found a larger number of red quasars to be detected with FIRST; however, their analysis was restricted to low redshifts ($0.3 < z < 1.2$). Similarly, \citet{richards2003} reported that dust-reddened SDSS quasars have average FIRST detection-fractions $\approx$\,2\,$\times$ larger than intrinsically blue quasars while \citet{white2007} stacked the FIRST data of SDSS quasars as a function of $g^*-r^*$ colour and found an increase in the radio-flux density towards redder optical colours. We also note that \citet{georgakakis2009} tentatively found that a higher fraction of 2MASS NIR selected quasars are associated with radio emission than optically selected SDSS quasars. \\ However, our study is the first to systematically explore the optical colour dependence on the radio-detection fraction as a function of radio morphology, luminosity, and ``radio-loudness". A strength of our approach is that we have carefully selected comparison samples (rQSO, cQSO, and bQSO) from the same quasar population, allowing us to rule out significant selection and luminosity effects in our analyses. Overall, given the connection between the rQSOs and radio emission it is natural to ask whether they are produced by the same physical process. The only process likely to contribute to both the optical and radio wavebands is synchrotron radiation, which is the dominant physical mechanism in the radio waveband. Qualitatively, it appears unlikely that synchrotron radiation can explain the enhanced radio emission in rQSOs since the significant difference between rQSOs and blue quasars occurs in comparatively radio-weak systems. Indeed, from a more quantitative analysis, where we scale a synchrotron-dominated SED to the radio flux, we predict that only $\approx$~6\% of the FbQSOs and FcQSOs and $\approx$~8\% of FrQSOs are likely to suffer from significant contamination of the optical emission from synchrotron processes; see \cref{sec:appendix} for more details. Our results are qualitatively similar to those of \citet{glikman2007} who undertook a similar analysis but for a brighter NIR selected red-quasar sample. We therefore conclude that dust extinction is the most plausible explanation for the optical colours of the majority of red quasars, in agreement with previous works \citep[e.g.,][]{webster1995,gregg2002,richards2003,glikman2004,glikman2007,glikman2012,kim2018}, particularly at $z > 0.5$ where dilution from the host galaxy is likely to be weak (see \cref{subsubsec: WISE properties}). On the basis of our results how can we explain the connection between the red colours and different radio properties of the rQSOs when compared to blue quasars? Below we discuss our results within the context of the two competing models for red quasars, the orientation model (\cref{subsec: evidence agains the orientation model}) and the evolutionary model (\cref{subsec: empirical constraints evolution model}). \subsection{Evidence against a simple orientation-dependent model} \label{subsec: evidence agains the orientation model} In a simple orientation-dependent model we would not expect physical differences between red and blue quasars; in this model any observed differences would merely be a consequence of more dust along the line-of-sight. Since radio emission is not affected by dust, we would therefore expect no significant differences in the radio-detection fraction between different quasar sub populations, in stark contrast to our result (factor $\approx$~3 times more radio-detected rQSOs when compared to bQSOs and cQSOs, even when controlling for luminosity and redshift effects). In fact, since blue quasars would be more face on than the red quasars, on the basis of the orientation model, we would actually expect the opposite result (i.e.,\ a relatively larger fraction of radio-detected bQSOs and cQSOs than rQSOs). The differences in the radio morphologies and the radio luminosities of the rQSOs and blue quasars also argue against the orientation model. In the orientation model the rQSOs are more inclined than blue quasars and so the radio emission would be more extended (on average) while we find an excess of compact radio morphologies for the rQSOs, again the opposite to what we would expect. The larger fraction of rQSOs with low radio luminosities (i.e.,\ either ${L_{\rm 1.4~GHz}}$ or ${L_{\rm 1.4~GHz}}/{L_{\rm 6\mu m}}$) is also inconsistent with the orientation model which would predict no significant differences in the radio luminosities of the different quasar sub populations. \subsection{Empirical constraints for an evolutionary model} \label{subsec: empirical constraints evolution model} Since our radio results cannot be explained (solely) by orientation, they must imply fundamental differences between red and blue quasars. These differences could be driven by changes in the physical properties of the quasars themselves (e.g.,\ the BH accretion disc) or they could be related to the larger-scale ``environment"; we use the term ``environment" here to describe a broad range of potential physical phenomena over a wide range of size scales (e.g.,\ pc--Mpc scales). We briefly discuss the empirical constraints that our study can place on these scenarios below. Several studies have found that a small number of X-ray, optical, and NIR selected quasars are characterized by intrinsically red continua, potentially related to different accretion rates in the red quasars compared to blue quasars \citep{puchnarewicz1998,richards2003,young2008,ruiz2014,kim2018}. Since it has also been noted that radio emission is enhanced in low Eddington rate AGN \citep[e.g.][]{heckman2004,kauffmann2008}, it is therefore reasonable to consider whether the rQSOs have lower accretion rates in comparison to bQSOs and cQSOs due to their enhanced radio-detection fractions. We test this here by estimating the range in Eddington ratios between the $L_{\rm 6\mu m}-z$ matched colour samples using the bolometric luminosities (estimated from $L_{\rm 6\mu m}$; see \cref{subsubsec: WISE properties}) and the FWHM of the broad lines as a proxy of the BH masses (i.e.,\ based on virial BH masses). In Figure~\ref{fig:fwhm all matched} we show the estimated bolometric luminosity ($L_{\rm bol,6\mu m}$) as a function of FWHM for the b,c,rQSO$^{L_{\rm 6\mu m}}_{z}$ quasars. The median values for the different broad lines (different $z$-bins) are overlaid and we calculated the median absolute deviation (MAD) as the uncertainty on the median. Eddington ratio tracks ranging from $0.01 < \lambda_{\rm Edd} < 1$ are also plotted, calibrated against values from \citet{shen2011} for the cQSOs. A large scatter in the FWHMs is observed, especially for the rQSOs which often have larger associated uncertainties due to lower SNR spectra. Nevertheless, from the median values it is clear that there are no statistically significant differences in the FWHM values between the b,c,rQSO$^{L_{\rm 6\mu m}}_{z}$ quasars. These results would appear to be in disagreement with \citet{richards2003} who found a trend of narrower Balmer lines towards redder optical colours; however, we note that this result is based on composite quasar spectra with higher SNR allowing them to see more subtle trends than in our analysis. Using the bolometric and BH mass relation for the Eddington ratio, we estimated the median Eddington ratio for each bin of the bQSOs, cQSOs, and rQSOs: all are broadly consistent with an increase from low-to-high redshift of $\lambda_{\rm Edd} \approx $~0.1 -- 0.5. Since the quasars are matched in luminosity we therefore argue that there are no strong differences in the {\it average} accretion rates between red and blue quasars. This result appears to be in disagreement with some previous studies which have found that radio-selected red quasars have higher accretion rates than blue quasars \citep[e.g.,][]{urrutia2012,kim2015_accretion}, on the basis of optical--NIR spectra of small quasar samples. However, our current analysis is limited by the large uncertainties in individual FWHM measurements from \citet{shen2011}. We aim to clarify the connection between red quasars and accretion properties in upcoming work using a matched sample of rQSOs and cQSOs with high-quality VLT-XSHOOTER optical--NIR spectroscopy. \begin{figure*} \centering \includegraphics[width=40pc]{figures/fwhm_L6um.png} \caption{The bolometric luminosity estimated using the $L_{\rm 6\mu m}$ vs. FWHM of the broad-lines H$\beta$ (circles), Mg~{\sc ii} (crosses) and C~{\sc iv} (X's) of the b,c,rQSO$^{L_{\rm 6\mu m}}_{z}$ quasars. The median values and median absolute deviations (MAD) are plotted for each emission line. Also plotted are Eddington ratio tracks (dash-dot grey lines) determined from the Eddington ratios provided \citet{shen2011} for the cQSOs (since they represent typical quasars). There are no strong differences in the FWHM measurements and implied Eddington ratios between the red quasars and blue quasars.} \label{fig:fwhm all matched} \end{figure*} While our analyses of the accretion properties do not show significant differences between the red and blue quasars {\it on average}, the enhancement in the fraction of rQSOs with compact or faint radio emission suggest fundamental differences between these sub populations. The radio compactness indicates that the source of the differences in the radio properties emerges on nuclear or galaxy scales ($<40$\,kpc). Within the context of the evolutionary model, this could imply that the red quasars are in a younger transitional phase with small, but expanding, radio jets \citep[such as Compact Steep Spectrum (CSS) and gigahertz-peaked spectrum (GPS); see also e.g.,][]{odea1997, murgia1999, rossetti2006, randall2011, dallacasa2013, orienti2016}; indeed, \citet{georgakakis2012} has argued a similar interpretation for a small sample of FIRST-detected dust-reddened quasars. In this model red quasars represent an early obscured phase during which energetic winds drive away the obscuring dust and gas revealing an unobscured view of the accretion disc (i.e.,\ a blue quasar). As the dust cocoon expands it confines a young radio source which remains compact on galaxy scales. Strong shocks associated with these interactions could sustain their radio synchrotron emission \citep[e.g.,][]{hwang2018}, leading to the enhanced radio-detection fractions which we observe. A prediction of the evolutionary model is that red quasars will host more powerful winds than blue quasars. Interestingly, \citet{mehdipour2019} recently showed an anti-correlation between the column density of ionised winds in quasars and the ``radio-loudness", whereby the systems with the weaker radio emission have stronger winds. The enhancement in the fraction of rQSOs with low $R = {L_{\rm 1.4~GHz}}/{L_{\rm 6\mu m}}$ values therefore indirectly suggests that they will host stronger winds than bQSOs. On the basis of the current data, we are unable to test whether the rQSOs have stronger winds than the bQSOs. However, we can use the sub population of broad absorption line quasars \citep[BALQSOs;][]{foltz1987,weymann1991}, which are known to host powerful winds, to see whether they are preferentially radio weak and to therefore provide an indirect test of this relationship. In Figure~\ref{fig:radio-loudness} we plot the relative fraction of FIRST-detected BALQSOs compared to FcQSOs as a function of $R$; we identified BALQSOs as systems with C~{\sc iv} ($1.5 < z < 2.4$) broad absorption lines from our parent sample (see Figure~\ref{fig:flowchart}) using the data provided in \citet{shen2011} ({\sc bal\_flag} = 1 or {\sc bal\_flag} = 3). We find an enhancement of BALQSOs at low values of $R$ and a deficiency at high values of $R$, results which are in general agreement with those obtained by \citet{morabito2019} for deeper low-frequency LOFAR radio data. The behaviour at high values of $R$ is strikingly similar to that found for the rQSOs, suggesting that rQSOs may also host powerful winds, in good agreement with the evolutionary model; see also \citet{urrutia2009} for evidence of an enhancement of BALQSOs in the red-quasar population. We will more directly test this hypothesis in a future VLT-XSHOOTER spectroscopic study and also determine whether the red quasars from our sample host more powerful winds than the blue quasars. \section{Conclusions} \label{sec:conclusion} We have taken a novel approach to search for fundamental differences between red and blue quasars at $0.2 < z < 2.4$ to allow us to test between the two main competing models: the unified orientation model and an evolutionary model. Our quasar selection is based on the SDSS survey and is uniformly selected and unbiased in the radio. We distinguished between red (rQSOs), control (cQSOs) and blue (bQSOs) quasars using carefully constructed $g^*-i^*$ colour distributions as a function of redshift. Our rQSO selection is therefore sensitive to the redshift evolution of quasar SEDs. We found that the red colours of the rQSOs suggest that the majority require $A_{\rm V} \sim 0.1-0.5$\,mag of dust reddening to the median cQSO to produce the observed optical colours. From a systematic comparison of the radio properties of the rQSOs, cQSOs and bQSOs, we have identified fundamental differences that cannot be attributed to just the orientation model. Our results are consistent between our full and $L_{\rm 6\mu m}-z$ matched colour-selected samples: \begin{itemize} \item{{\bf Enhanced radio emission from the AGN in rQSOs (see Figure~\ref{fig:radio_detection_frac}):} a larger fraction of rQSOs (by a factor of 2--3) are detected in the 1.4~GHz FIRST survey when compared to cQSOs and bQSOs. The average FIRST detection rate across $0.2 < z <2.4$ are 5\%--10\% for the cQSOs and bQSOs and 15\%--20\% for the rQSOs. The radio luminosities are at least an order of magnitude above those expected from SF (e.g.,\ $L_{\rm 1.4GHz} > 10^{25}$\,W\,Hz$^{-1}$ at $z > 1.5$), indicating that they are driven by AGN-related processes. See \cref{subsec:radio-detection fraction result}. } \item{{\bf rQSOs have differences in their arcsecond-scale radio morphologies (see Figure~\ref{fig:first morphology summaries}):} from a visual assessment of the FIRST cutouts we have found that the incidence of systems with extended and FR~II-like radio morphologies among the rQSOs is the same as among the cQSOs/bQSOs, but they have a much higher incidence of faint and compact radio counterparts (by a factor of 2--6). The radio-detection enhancement of the rQSOs therefore occurs in the compact and faint radio sources rather than the classical extended radio-loud systems. See \cref{subsec:radio morphologies}. } \item{{\bf rQSOs have lower radio--MIR luminosity ratios (${L_{\rm 1.4~GHz}}/{L_{\rm 6\mu m}}$; see Figure~\ref{fig:radio-loudness}):} we found a factor $\approx$~3--4 enhancement of rQSOs at low ${L_{\rm 1.4~GHz}}/{L_{\rm 6\mu m}}$ values, around the radio quiet-radio loud threshold, when compared to cQSOs and bQSOs. These differences are dominated by the compact and faint radio sources that are responsible for the enhanced radio-detection fraction of rQSOs. We see no significant differences in the classical extended radio-loud systems. See \cref{subsec:radio luminosities}. } \end{itemize} By linking the enhanced radio-detection rates and dust extinction of rQSOs we conclude that these sources are not blue quasars with additional dust along the line-of-sight due to a larger viewing angle as a simple orientation model would predict. By comparison we argue that the radio properties of the rQSOs are consistent with the evolutionary paradigm where rQSOs contain younger, more compact radio sources, possibly in a brief transitional phase where powerful winds are driving away the obscuring gas and dust. In future work we will investigate radio spectral indices with multi-frequency radio data, which provide a more comprehensive understanding of the radio properties of the rQSOs. We will also further explore the origin of redness in the rQSOs from optical--NIR spectral analysis, constrain the star-formation rates from the host galaxy using {\it Herschel}--ALMA far-IR and mm imaging, and search for merger-driven signatures from high spatial resolution optical--NIR imaging. \section{Acknowledgements} We would like to acknowledge Elizabeth Wetherell for her assistance in the original discovery of the enhanced radio-detection fraction for red quasars. We also thank the anonymous referee for their useful comments which greatly improved the presentation and discussion of the results. We would also like to express our gratitude to the following people for their concise feedback and contributions: Manda Banerji, Alastair Edge, Richard McMahon, Andrea Merloni, Adam D. Myers, Gordon T. Richards, Nicholas P. Ross and Benny Trakhtenbrot. We acknowledge the Faculty of Science Durham Doctoral Scholarship (LK), the Science and Technology Facilities Council (DMA, DJR, through grant code ST/P000541/1), a European Union COFUND/Durham Junior Research Fellowship (EL, through EU grant agreement no. 609412) and the financial support from the Swiss National Science Foundation (SF). Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. \bibliographystyle{mnras}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Understanding of the electroweak symmetry breaking mechanism is one of the main tasks in particle physics. The establishment of the structure of the Higgs sector would be a break-through in our knowledge about matter. So it is important to think about alternatives to the Standard Model Higgs sector especially if they lead to a dilution of the signal. The simplest possible extension is the addition of scalar fields which are singlets under the gauge group of the Standard Model. Radiative corrections to weak processes are not sensitive to the presence of singlets in the theory, because no Feynman graphs containing singlets appear at the one--loop level. Since effects at the two--loop level are below the experimental precision, the presence of a singlet sector is not ruled out by any of the LEP1 precision data. The only connection to such a hidden sector is a possible Higgs singlet coupling, leading to a nonstandard invisible Higgs decay. Whereas the invisible decay of the Higgs boson with a width comparable to the Standard Model leads to relatively sharp missing energy signals, e.g. well known from discussions on Majoron models \cite{valle}, a strongly coupled hidden sector could lead to fast Higgs decay and thereby to wide resonances. This would disturb the signal to background ratio if necessary cuts are imposed. To check the influence of a hidden sector we will study the coupling of a Higgs boson to an O(N) symmetric set of scalars, which is one of the simplest possibilities, introducing only a few extra parameters in the theory. The effect of the extra scalars is practically the presence of a possibly large invisible decay width of the Higgs particle. When the coupling is large enough the Higgs resonance can become wide even for a light Higgs boson. It was shown earlier that there is a range of parameters, where such a Higgs boson can be seen neither at LEP nor at the LHC \cite{vladimir,lep2report,DPRoy}. \section{The model} The scalar sector of the model consists of the usual Higgs sector coupled to a real N--component vector $\vec\varphi$ of scalar fields, denoted by Phions in the following. The lagrangian density is given by, \begin{eqnarray} \label{definition} {\cal L} &=& - \partial_{\mu}\phi^+ \partial^{\mu}\phi -\lambda (\phi^+\phi - v^2/2)^2 - 1/2\,\partial_{\mu} \vec\varphi \partial^{\mu}\vec\varphi -1/2 \, m^2 \,\vec\varphi^2 \nonumber \\ &&- \kappa/(8N) \, (\vec\varphi^2 )^2 -\omega/(2\sqrt{N})\, \, \vec\varphi^2 \,\phi^+\phi \nonumber \end{eqnarray} where $\phi$ is the standard Higgs doublet. Couplings to fermions and vector bosons are the same as in the Standard Model. The ordinary Higgs field acquires the vacuum expectation value $v/\sqrt{2}$. For positive $\omega$ the $\vec\varphi$--field acquires no vacuum expectation value. After spontaneous symmetry breaking one is left with the ordinary Higgs boson, coupled to the Phions into which it decays. Also the Phions receive an induced mass from the spontaneous symmetry breaking which is suppressed by a factor $1/\sqrt{N}$. If the factor N is taken to be large, the model can be analysed with $1/N$--expansion techniques. By taking this limit the Phion mass is suppressed, whereas the decay width of the Higgs boson is not. Because the Higgs width is now depending on the Higgs Phion coupling its value is arbitrary. Therefore the main effect of the presence of the Phions is to give a possibly large invisible decay rate to the Higgs boson. The invisible decay width is given by \begin{equation} \Gamma_H =\frac {\omega^2 v^2}{32 \pi M_H} = \frac {\omega^2 (\sin\theta_W\cos\theta_W M_Z)^2)}{32 \pi^2 \alpha_{em} M_H}\quad .\nonumber \end{equation} The Higgs width is compared with the width in the Standard Model for various choices of the coupling $\omega$ in Fig.~\ref{width}. The model is different from Majoron models \cite{valle}, since the width is not necessarily small. The model is similar to the technicolor--like model of Ref.~\cite{chivukula}. \begin{figure}[hbt] \vspace{0.1cm} \centerline{\epsfig{figure=lcwidth.eps,height=8.cm,angle=0}} \caption{\it Higgs width in comparison with the Standard Model.} \label{width} \end{figure} Consistency of the model requires two conditions. One condition is the absence of a Landau pole below a certain scale $\Lambda$. The other follows from the stability of the vacuum up to a certain scale. An example of such limits is given in Fig.~\ref{stability}, where $\kappa=0$ was taken at the scale $2m_Z$, which allows for the widest parameter range. The regions of validity up to a given scale $\Lambda$ is sandwiched between the upper--right and the lower--left contour lines in the figure. The first stem from the Landau pole, the second from instability of the vacuum at that scale. \begin{figure}[htb] \vspace{0.1cm} \centerline{\epsfig{figure=lcstab.eps,height=8.cm,angle=0}} \caption{\it Theoretical limits on the parameters of the model in the $\omega$ vs. $M_H$ plane. The contour lines correspond to the cutoff scales $\Lambda = 10^{19}$, $10^6$, $10^4$ and $10^3$ GeV.} \label{stability} \end{figure} To search for the Higgs boson there are basically two channels, one is the standard decay, which is reduced in branching ratio due to the decay into Phions. The other is the invisible decay, which rapidly becomes dominant, eventually making the Higgs resonance wide (see Fig.~\ref{width}). In order to give the bounds we neglect the coupling $\kappa$ as this is a small effect. We also neglect the Phion mass. (For other values of the Phion mass the bounds can be found by rescaling the decay widths with the appropriate phase space factor.) \section{LC bounds} At a linear collider (LC) the upper limits on the couplings in the present model come essentially from the invisible decay, as the branching ratio into visible particles drops with increasing $\varphi$--Higgs coupling, whereas for the Higgs mass limits one has to consider visible decays, too. The $WW$--fusion process can not be used to look for invisible Higgs decay. One is therefore left with the Higgsstrahlung und $ZZ$--fusion reaction. For energies up to 500 GeV the Higgsstrahlungs cross section is dominant and still comparable if one multiplies with the branching ratio $B(Z\rightarrow e^+e^-,\mu^+\mu^-)$. The Higgsstrahlungs reaction is preferred, because one can tag the on-shell Z boson. Thus we only have considered reactions containing an on shell Z boson with its decay into $e^+e^-$ or $\mu^+\mu^-$. The signal cross section is the well known Higgsstrahlungs cross section modified by the non standard Higgs width due to Phion decay. With the invariant mass of the invisible Phion system, $s_I$, it reads: \begin{equation} \sigma_{(e^+e^-\rightarrow Z+E\!\!\!/)} = \int ds_I \, \sigma_{(e^+e^-\rightarrow ZH)}(s_I) \, \frac{\sqrt{s_I} \quad \Gamma(H\rightarrow E\!\!\!/)} {\pi ((M_H^2-s_I)^2+s_I\,\Gamma(H\rightarrow \mbox{All})^2)}\nonumber\end{equation} To reduce the $Z \nu\nu$ background \cite{mele}, we used the fact that the angular distribution of the Z--boson for the signal peaks for small values of $|\cos\theta_Z|$ in contrast to the background. Thus we imposed the cut $|\cos\theta_Z|<0.7$. Because we assume the reconstruction of the on-shell Z--boson we use the kinematical relation \newline $E_Z=(\sqrt{s}-M_Z^2+s_I)/(2\sqrt{s})$ between the Z energy and the invariant mass of the invisible system to define a second cut. Because the differential cross section $d\sigma/ds_I$ peaks at $M_H^2$, we impose the following condition on the Z energy: \begin{equation} \frac{\sqrt{s}-M_Z^2+(M_H-\Delta_H)^2}{2\sqrt{s}}<E_Z< \frac{\sqrt{s}-M_Z^2+(M_H-\Delta_H)^2}{2\sqrt{s}}\nonumber\end{equation} For the choice of $\Delta_H$ a comment is in order. As long as the Higgs width is small one is allowed to use small $\Delta_H$, which reduces the background considerably keeping most of the signal events. But in the case of large $\varphi$--Higgs coupling, $\omega$, one looses valuable events. To compromise between both effects we took $\Delta_H=30 (100)$ GeV for colliders with center of mass energy of $500 (1400)$ GeV, respectively. For the exclusion limits we assumed an integrated luminosity of $500$ ($1000$) $fb^{-1}$ for the two center of mass energies. To define the $95 \%$ confidence level we used Poisson statistics as in Ref. \cite{lep2report}. The result is given in Fig.~\ref{exclu1}. We conclude from the above that a LC with the proposed high luminosities can essentially cover the parameter range up to the theoretically allowed limit with a completely clean signal, consisting of leptons plus missing energy. Such a LC appears to be the unique machine to be sensitive to this class of models. \newpage \begin{figure}[htb] \vspace{0.1cm} \centerline{\epsfig{figure=lcexclusion.eps,height=6.5cm,angle=0}} \caption{\it Exclusion limits at a LC at an energy of 500 (1400) GeV and luminosity 500 (1000) $fb^{-1}$, respectively. } \label{exclu1} \end{figure} \section*{References}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{s:intro} \textit{Clustering problems} are widely studied in Combinatorial Optimization literature due to their vast applications in Operational Research, Machine Learning, Data Science and Engineering \cite{WS11,LIN,CGTS99,VGRMMV01,CG99,JV01,K12,Y00,LS16,CL12,KSS10,SS18}. Typically a fixed number of centers must be placed in a metric space such that a set of clients is served the best possible way. The quality of a clustering solution is captured through the \textit{$p$-norm} of the vector consisting of the distance of each client to its closest center, for some $p\geq 1$ or $p = \infty$. For example \textit{$k$-median} and \textit{$k$-means} assume $p=1$ and $2$ respectively, while \textit{$k$-center} assumes $p=\infty$ \cite{LIN,KSS10,SS18}. Today's access on vast data (that may be frequently updated over time) has motivated the study of clustering problems in case of \textit{time-evolving clients}, which dynamically change positions over time \cite{KW18,FKKLSZ19,EMS14,ANS17}. In time-evolving clustering problems, centers may also change position over time so as to better capture the clients' trajectories. For example, a city may want to reallocate the units performing rapid tests for Covid-19 so as to better serve neighborhoods with more cases, the distribution of which may substantially change from day to day. Other interesting applications of dynamic clustering include viral marketing, epidemiology, facility location (e.g. schools, hospitals), conference planning etc. \cite{JV11,EMS14,N3,PS01,CBK07}. Our work is motivated by the fact that in most settings of interest, clients can move in fairly complicated and unpredictable ways, and thus, an \textit{a-priori knowledge} on such trajectories is heavily under question (most of the previous work assumes perfect knowledge on clients' positions over time \cite{EMS14,ANS17,KW18,FKKLSZ19}). To capture this lack of information we cast clustering problems under the perspective of \textit{online learning} \cite{H16}. We study an online learning problem called \textit{Dynamic $k$-Clustering} in which a \textit{learner} selects at each round $t$, the positions of $k$ centers trying to minimize the connection cost of some clients, the positions of which are unknown to the learner prior to the selection of the centers. \begin{online_problem}[Dynamic $k$-Clustering] Given a metric space $d:V \times V \mapsto \mathbb{R}_{\geq 0}$. At each round $t$, \begin{enumerate} \item The learner selects a set $F_t \subseteq V$, with $|F_t| = k$, at which centers are placed. \item The adversary selects the positions of the clients, denoted as $R_t$ (after the selection of the positions of the centers by the learner). \item The learner suffers the connection cost of the clients, \[C_{R_t}(F_t) = \left(\sum_{j \in R_t} d(j,F_t)^p\right)^{1/p}\] where $d(j,F_t)$ is the distance of client $j$ to the closest center, $d(j,F_t) = \min_{i \in F_t}d_{ij}$. \end{enumerate} \end{online_problem} Based on the past positions of the clients $R_1,R_2,\ldots, R_{t-1}$ an \textit{online learning algorithm} must select at each round $t$, a set of $k$ centers $F_t \subseteq V$ such that the connection cost of the clients over time is close to the connection cost of the \textit{optimal (static) solution} $F^\ast$. If the cost of the online learning algorithm is at most $\alpha$ times the cost of $F^\ast$, the algorithm is called $\alpha$-regret, whereas in case $\alpha = 1$, the algorithm is called \textit{no-regret} \cite{H16}. Intuitively, a low-regret online learning algorithm converges to the optimal positions of the centers (with respect to the overall trajectories of the clients) by just observing the clients' dynamics. \begin{example}\label{ex:1} The clients are randomly generated according to a time-varying uniform distribution with radius $0.3$ and center following the periodic trajectory $\left(\sin ( \frac{2\pi \cdot t}{T}),\cos ( \frac{2\pi \cdot t}{T})\right)$ for $t=1,\ldots,T$. \begin{figure}[!htb] \centering {\includegraphics[width=0.7\linewidth]{images/uniform_circle/k=0.png}\label{fig:sub1}}\hfill \label{f:circle} \end{figure} The centers placed by a (sufficiently) low-regret algorithm would converge to positions similar in structure to the ones illustrated in Figure~\ref{f:circle2} (for $k=1,2,4$ and $k=8$) which are clearly close to the optimal (static) solution for the different values of $k$. \begin{figure}[!htb] \centering {\includegraphics[width=0.45\linewidth]{images/uniform_circle/k=1.png}\label{fig:sub1}}\hfill {\includegraphics[width=0.45\linewidth]{images/uniform_circle/k=2.png}\label{fig:sub2}}\hfill {\includegraphics[width=0.45\linewidth]{images/uniform_circle/k=4.png}\label{fig:sub2}}\hfill {\includegraphics[width=0.45\linewidth]{images/uniform_circle/k=8.png}\label{fig:sub2}}\hfill \caption{The figure depicts the actual centers at which a low-regret algorithm, that we subsequently propose, converges. For further details see Section~\ref{s:experiments}.} \label{f:circle2} \end{figure} \end{example} \textbf{Efficient Online Learning for Dynamic $k$-Clustering.} The existence of no-regret online learning algorithms for Dynamic $k$-Clustering immediately follows by standard results in online learning literature \cite{H16}. Dynamic $k$-Clustering is a special case of \textit{Learning from Expert Advice} problem for which the famous \textit{Multiplicative Weights Update Algorithm} achieves no-regret \cite{H16}. Unfortunately using the $\mathrm{MWU}$ for Dynamic $k$-Clustering is not really an option due to the huge time and space complexity that $\mathrm{MWU}$ requires. In particular $\mathrm{MWU}$ keeps a different weight (probability) for each of the possible ${|V|}\choose{k}$ possible placements of the $k$ centers, rendering it inapplicable even for small values of $|V|$ and $k$. Our work aims to shed light on the following question. \begin{question}\label{q:main} Is there an online learning algorithm for Dynamic $k$-Clustering that runs in polynomial time and achieves $\alpha$-regret? \end{question} \smallskip \textbf{Our Contribution and Techniques.} We first show that constant regret cannot be achieved in polynomial time for Dynamic $k$-Clustering. In particular we prove that any $O(1)$-regret polynomial-time online learning algorithm for Dynamic $k$-Clustering implies the existence of an $O(1)$-approximation algorithm for the \textit{Minimum-$p$-Union problem} \cite{CDKKR16}. Recent works on the theory of computational complexity establish that unless well-established cryptographic conjectures fail, there is no $O(1)$-approximation algorithm for $\mathrm{Min}$-$p$-$\mathrm{Union}$ \cite{CDKKR16,A12,CDM17}. This result narrows the plausible regret bounds achievable in polynomial time, and reveals an interesting gap between Dynamic $k$-Clustering and its offline counterparts, which admit polynomial-time $O(1)$-approximation algorithms. Our main technical contribution consists of polynomial-time online learning algorithms for Dynamic $k$-Clustering with non trivial regret bounds. We present a $\Theta(k)$-regret polynomial-time deterministic online learning algorithm and a $\Theta(r)$-regret polynomial-time randomized online learning algorithm, where $r$ is the maximum number of clients appearing in a single round ($r = \max_{1\leq t \leq T}|R_t|$). Combining these algorithms, one can achieve $\Theta\left( \min(k,r) \right)$-regret for Dynamic $k$-Clustering, which (to the best of our knowledge) is the first guarantee on the regret achievable in polynomial time. The regret bounds above are independent of the selected $p$-norm, and hold for any $p \geq 1$ and for $p = \infty$. At a technical level, our approach consists of two major steps. In the first step, we consider an online learning problem, that can be regarded as the \textit{fractional relaxation} of the Dynamic $k$-Clustering (see Section~\ref{s:fractional}), where the \textit{fractional connection cost} is given by the optimal value of an appropriate convex program and the action space of the learner is the $|V|$-dimensional simplex. For this intermediate problem, we design a \textit{no-regret} polynomial-time online learning algorithm through the use of the subgradients of the fractional connection cost. We show that such subgradient vectors can be computed in polynomial time via the solution of the dual program of the fractional connection cost. In the second step of our approach (see Section~\ref{s:det} and Section~\ref{s:rand}), we provide computationally efficient online (deterministic and randomized) rounding schemes converting a vector lying in the $|V|$-dimensional simplex (the action space of Fractional Dynamic $k$-Clustering) into $k$ locations for the centers on the metric space $V$ (the action space of Dynamic $k$-Clustering). In Section~\ref{s:det}, we present a deterministic rounding scheme that, combined with the no-regret algorithm for Fractional Dynamic $k$-Clustering, leads to a $\Theta(k)$-regret polynomial-time deterministic online learning algorithm for the original Dynamic $k$-Clustering. Interestingly, this regret bound is approximately optimal for all deterministic algorithms. In Section~\ref{s:rand}, we show that combining the no-regret algorithm for Fractional Dynamic $k$-Clustering with a randomized rounding scheme proposed in \cite{CL12}\footnote{This randomized rounding scheme was part of a $4$-approximation algorithm for $k$-median \cite{CL12}} leads to a $\Theta(r)$-regret randomized algorithm running in polynomial time. Combining these two online learning algorithms, we obtain a $\Theta(\min(k,r))$-regret polynomial-time online learning algorithm for Dynamic $k$-Clustering, which is the main technical contribution of this work. Finally, in Section~\ref{s:experiments}, we present the results of an experimental evaluation, indicating that for client locations generated in a variety of natural and practically relevant ways, the realized regret of the proposed algorithms is way smaller than $\Theta\left( \min(k,r) \right)$. \begin{remark} Our two-step approach provides a structured framework for designing polynomial-time low-regret algorithms in various combinatorial domains. The first step extends far beyond the context of Dynamic $k$-Clustering and provides a systematic approach to the design of \textit{polynomial-time no-regret online learning algorithms} for the \textbf{fractional relaxation} of the combinatorial online learning problem of interest. Combining such no-regret algorithms with online rounding schemes, which convert fractional solutions into integral solutions of the original online learning problem, may lead to polynomial time low-regret algorithms for various combinatorial settings. Obviously, designing such rounding schemes is usually far from trivial, since the specific combinatorial structure of each specific problem must be taken into account. \end{remark} \textbf{Related Work.} Our work relates with the research line of Combinatorial Online Learning. There exists a long line of research studying low-regret online learning algorithms for various combinatorial domains such that online routing \cite{HS97,AK08}, selection of permutations \cite{TW00,YHKSTT11,FLPS20,A14,HW07}, selection of binary search trees \cite{TM03}, submodular optimization \cite{HK12a,JB11,SG08}, matrix completion \cite{HKS12}, contextual bandits \cite{ALLS14,DHKKLRZ11} and many more. Finally, in combinatorial games agents need to learn to play optimally against each other over complex domains \cite{ITLMPT11,dehghani2016price}. As in the case of Dynamic $k$-Clustering in all the above online learning problems, MWU is not an option, due to the exponential number of possible actions. Another research direction of Combinatorial Online Learning studies \textit{black-box reductions} converting polynomial time offline algorithm (full information on the data) into polynomial time online learning algorithms. \cite{kalai03} showed that any (offline) algorithm solving optimally and in polynomial time the objective function, that the \textit{Follow the Leader framework} suggests, can be converted into a no-regret online learning algorithm. \cite{kakade07} extended the previous result for specific class of online learning problems called \textit{linear optimization problems} for which they showed that any $\alpha$-approximation (offline) can be converted into an $\alpha$-regret online learning algorithm. They also provide a surprising counterexample showing that such black-box reductions do not hold for general combinatorial online learning problems. Both the time efficiency and the regret bounds of the reductions of \cite{kalai03} and \cite{kakade07} were subsequently improved by \cite{rahmanian17,suehiro12,koolen10,balcan06,syrganis17,hazan16,fujita13,garber17,wei18}. We remark that the above results do not apply in our setting since Dynamic $k$-Clustering can neither be optimally solved in polynomial-time nor is a linear optimization problem. Our works also relates with the more recent line of research studying clustering problems with \textit{time-evolving clients}. \cite{EMS14} and \cite{ANS17} respectively provide $\Theta\left( \log (nT)\right)$ and $O(1)$-approximation algorithm for a generalization of the facility location problem in which clients change their positions over time. The first difference of Dynamic $k$-Clustering with this setting is that in the former case there is no constraint on the number of centers that can open and furthermore, crucially perfect knowledge of the positions of the clients is presumed. More closely related to our work are \cite{KW18,FKKLSZ19}, where the special case of Dynamic $k$-Clustering on a line is studied (the clients move on a line over time). Despite the fact that both works study online algorithms, which do not require knowledge on the clients' future positions, they only provided positive results for $k=1$~and~$2$. \section{Preliminaries and Our Results}\label{s:prelim} In this section we introduce notation and several key notions as long as present the formal Statements of our results. We denote by $D$ the diameter of the metric space, $D = \max_{i \in V, j \in V} d_{ij}$. We denote with $n$ the cardinality of the metric space $\left(|V| =n\right)$ and with $r$ the maximum number of clients appearing in a single round, $r = \max_{1\leq t \leq T}|R_t|$. Finally we denote with $\Delta_{n}^k$ the $n$-dimensional simplex, $\Delta_{n}^k= \{y \in \mathbb{R}^n:~ \sum_{i \in V} y_i = k ~\mathrm{and}~y_i \geq 0\}$. Following the standard notion of regret in online learning \cite{H16}, we provide the formal definition of an \textit{$\alpha$-regret} online learning algorithm for \textit{Dynamic $k$-Clustering}. \begin{definition}\label{d:regret} An online learning algorithm for the \textit{Dynamic $k$-Clustering} is $\alpha$-regret if and only if for any sequence of clients' positions $R_1,\ldots,R_T \subseteq V$, \[\sum_{t=1}^T C_{R_t}(F_t) \leq \alpha \cdot \min_{|F^\ast| \leq k} \sum_{t=1}^T C_{R_t}(F^\ast) + \Theta\left(\mathrm{poly}(n,D) \cdot T^\beta\right)\] where $F_1,\ldots,F_T$ are the positions of the centers produced by the algorithm for the sequence $R_1,\ldots,R_T$ and $\beta < 1$. \end{definition} Next, we introduce the \textit{Minimum-$p$-Union} problem, the inapproximability results of which allow us to establish that constant regret cannot be achieved in polynomial time for Dynamic $k$-Clustering. \begin{problem}[$\mathrm{Min-}p\mathrm{-Union}$] Given a universe of elements $\mathbb{E}$ and a collection of sets $\mathbb{U} =\{S_1, \ldots, S_m\}$ where $S_i \subseteq \mathbb{E}$. Select $\mathbb{U}' \subseteq \mathbb{U}$ such that $|\mathbb{U'}| =p$ and $|\cup_{S_i \in \mathbb{U}'}S_i|$ is minimized. \end{problem} As already mentioned, the existence of an $O(1)$-approximation algorithm for $\mathrm{Min-}p\mathrm{-Union}$ violates several widely believed conjectures in computational complexity theory\cite{CDKKR16,A12,CDM17}. In Theorem~\ref{t:hardnes} we establish the fact that the exact same conjectures are violated in case there exists an online learning algorithm for \textit{Dynamic $k$-Clustering} that runs in polynomial-time and achieves $O(1)$-regret. \begin{theorem}\label{t:hardnes} Any $c$-regret polynomial-time online learning algorithm for the Dynamic $k$-Clustering implies a $(c+1)$-approximation polynomial-time algorithm for $\mathrm{Min-}p\mathrm{-Union}$. \end{theorem} In Section~\ref{s:det}, we present a polynomial-time deterministic online learning algorithm achieving \textit{$\Theta(k)$}-regret. \begin{theorem}\label{t:det-regret} There exists a $6k$-regret deterministic online learning algorithm for Dynamic $k$-Clustering that runs in polynomial time (Algorithm~\ref{alg:det}). More precisely, \[\sum_{t=1}^T\mathrm{C}_{R_t}(F_t) \leq 6k \cdot \min_{|F^\ast|=k} \sum_{t=1}^T\mathrm{C}_{R_t}(F^\ast) + \Theta \left (k D n \sqrt{\log n T} \right)\] where $F_1,\ldots,F_T$ are the positions in which Algorithm~\ref{alg:det} places the centers for the sequence of clients' positions $R_1,\ldots,R_T$. \end{theorem} In Theorem~\ref{t:lower_bound_det} we prove that the $\Theta(k)$ bound on the regret of Algorithm~\ref{alg:det} cannot be significantly ameliorated with deterministic online learning algorithm even if the algorithm uses exponential time and space. \begin{theorem}\label{t:lower_bound_det} For any deterministic online learning algorithm for Dynamic $k$-Clustering problem, there exists a sequence of clients $R_1,\ldots,R_T$ such as the regret is at least $k+1$. \end{theorem} In Section~\ref{s:rand} we present a randomized online learning algorithm the regret of which depends on the parameter $r$. \begin{theorem}\label{t:rand-regret} There exists a $\Theta(r)$-regret randomized algorithm that runs in polynomial time (Algorithm~\ref{alg:rand}). For any sequence of clients' positions $R_1,\ldots,R_T$ with $|R_t| \leq r$, \begin{equation*} \begin{split} \sum_{t=1}^T \mathbb{E}\left[C_{R_t}(F_t)\right] & = 4r \cdot \min_{|F^\ast|=k} \sum_{t=1}^T\mathrm{C}_{R_t}(F^\ast)\\ &+ \Theta \left (k D n \sqrt{\log n T} \right) \end{split} \end{equation*} where $F_t$ is the random variable denoting the $k$ positions at which Algorithm~\ref{alg:rand} places the centers at round $t$. \end{theorem} By combining Algorithm~\ref{alg:det} and Algorithm~\ref{alg:rand} we can achieve $\Theta \left(\min(k,r)\right)$-regret in polynomial time. \begin{theorem}\label{t:main} There exists an online learning algorithm for Dynamic $k$-Clustering that runs in polynomial-time and achieves $\min\left(6k,4r \right)$-regret. \end{theorem} \begin{remark} In case the value $r = \min_{1\leq t \leq T}|R_t|$ is initially known to the learner, then Theorem~\ref{t:main} follows directly by Theorem~\ref{t:det-regret}~and~\ref{t:rand-regret}. However even if $r$ is not initially known, the learner can run a Multiplicative Weight Update Algorithm that at each round follows either Algorithm~\ref{alg:det} or Algorithm~\ref{alg:rand} with some probability distribution depending on the cost of each algorithm so far. By standard results for MWU \cite{H16}, this meta-algorithm admits time-average cost less than the best of Algorithm~\ref{alg:det}~and~\ref{alg:rand}. \end{remark} \section{Fractional Dynamic $k$-Clustering }\label{s:fractional} In this section we present the \textit{Fractional Dynamic $k$-Clustering} problem for which we provide a polynomial-time no-regret online learning algorithm. This online learning algorithm serves as a primitive for both Algorithm~\ref{alg:det} and Algorithm~\ref{alg:rand} of the subsequent sections concerning the original Dynamic $k$-Clustering. The basic difference between Dynamic $k$-Clustering and Fractional Dynamic $k$-Clustering is that in the second case the learner can \textit{fractionally} place a center at some point of the metric space $V$. Such a fractional opening is described by a vector $y \in \Delta_{n}^k$. \begin{online_problem}\label{pr:frac} [Fractional Dynamic $k$-Clustering]At each round $t \geq 1$, \begin{enumerate} \item The learner selects a vector $y_t \in \Delta_{n}^k$. The value $y_i^t$ stands for the fractional amount of center that the learner opens in position $i \in V$. \item The adversary selects the positions of the clients denoted by $R_t \subseteq V$ (after the selection of the vector $y_t$). \item The learner incurs fractional connection cost $\mathrm{FC}_{R_t}(y_t)$ described in Definition~\ref{d:frac_cost}. \end{enumerate} \end{online_problem} \begin{definition}[Fractional Connection Cost]\label{d:frac_cost} Given the positions of the clients $R \subseteq V$, we define the fractional connection cost $\mathrm{FC}_{R}(\cdot)$ of a vector $y \in \Delta_n^k$ as the optimal value of the following convex program. \begin{equation} \begin{array}{lr@{}ll} \mbox{\emph{minimize}} \left(\sum_{j \in R}\beta_j^p \right)^{1/p} \\ \\ \mathrm{}{s.t.}~~~~ \beta_j = \sum\limits_{i \in V} d_{ij} \cdot x_{ij} \,\,~~~~\forall j \in R\\ ~~~~~~~~~ \sum\limits_{i \in V} x_{ij} = 1 \,\,~~~~~~~~~~~~~~~\forall j \in R\\ ~~~~~~~~ x_{ij} \leq y_i \,\,~~~~~~~\forall j \in R,~\forall i \in V\\ ~~~~~~~~ x_{ij} \geq 0 \,\,~~~~~~~~\forall j \in R,~\forall i \in V \end{array} \end{equation} \end{definition} It is not hard to see that once the convex program of Definition~\ref{d:frac_cost} is formulated with respect to an \textit{integral vector} $y\in \Delta_n^k$ ($y_i$ is either $0$ or $1$) the fractional connection cost $\mathrm{FC}_{R}(y)$ equals the original connection cost $\mathrm{C}_{R}(y)$. As a result, the cost of the optimal solution $y^\ast \in \Delta_{k}^n$ of Fractional Dynamic $k$-Clustering is upper bounded by the cost of the optimal positioning of the centers $F^\ast$ in the original Dynamic $k$-Clustering. \begin{lemma}\label{l:frac_int} For any sequence of clients' positions $R_1,\ldots,R_T$, the cost of the optimal fractional solution $y^\ast$ for Fractional Dynamic $k$-Clustering is smaller than the cost of the optimal positioning $F^\ast$ for Dynamic $k$-Clustering, \[ \min_{y^\ast \in \Delta_n^k}\sum_{t=1}^T \mathrm{FC}_{R_t}(y^\ast) \leq \min_{ |F^\ast| =k}\sum_{t=1}^T \mathrm{C}_{R_t}(F^\ast)\] \end{lemma} Lemma~\ref{l:frac_int} will be used in the next sections where the online learning algorithms for the original Dynamic $k$-Clustering are presented. To this end, we dedicate the rest of this section to design a polynomial time no-regret algorithm for Fractional Dynamic $k$-Clustering. A key step towards this direction is the use of the subgradient vectors of $\mathrm{FC}_{R_t}(\cdot)$. \begin{definition}[Subgradient]\label{d:subgradients} Given a function $f:\mathbb{R}^n \mapsto \mathbb{R}$, a vector $g \in \mathbb{R}^n$ belongs in the subgradient of $f$ at point $x\in \mathbb{R}^n$,$g \in \partial f(x)$, if and only if $f(y) \geq f(x) + g^\top (y -x)~$, for all $y \in \mathbb{R}^n$. \end{definition} Computing the subgradient vectors of functions, as complicated as $\mathrm{FC}_{R_t}(\cdot)$, is in general a computationally hard task. One of our main technical contributions consists in showing that the latter can be done through the solution of an adequate convex program corresponding to the dual of the convex program of Definition~\ref{d:frac_cost}. \begin{lemma}\label{l:dual} Consider the convex program of Definition~\ref{d:frac_cost} formulated with respect to a vector $y \in \Delta_n^k$ and the clients' positions $R$. Then the following convex program is its dual. \begin{equation}\label{eq:ALP} \begin{array}{lr@{}ll} \mbox{\emph{maximize}}~~~ \sum_{j \in R}A_j - \sum_{i \in V}\sum_{j \in R}k_{ij}\cdot y_i\\ \\ \mathrm{s.t.}~~~~ ||\lambda||_{p}^\ast \leq 1 \\ ~~~~~~~~~ d_{ij} \cdot \lambda_j + k_{ij} \geq A_j \,\,~~~~\forall i \in V, j \in R\\ ~~~~~~~~~~k_{ij} \geq 0 \,\,~~~~~~~~~~~~~~~~~~~~~~~\forall i \in V, j \in R\\ \end{array} \end{equation} where $|| \cdot||_{p}^\ast$ is the dual norm of $||\cdot ||_p$ \end{lemma} In the following lemma we establish the fact that a subgradient vector of $\partial \mathrm{FC}_{R_t}(\cdot)$ can be computed through the optimal solution of the convex program in Lemma~\ref{l:dual}. \begin{lemma}\label{l:subgradients} Let $k_{ij}^\ast$ denote the value of the variables $k_{ij}$ in the optimal solution of the convex program in Lemma~\ref{l:dual} formulated with respect to vector $y \in \Delta_n^k$ and the clients' positions $R$. Then for any vector $y' \in \Delta_n^k$, \[\mathrm{FC}_{R_t}(y') \geq \mathrm{FC}_{R_t}(y) + \sum_{i \in V} \left(-\sum_{j \in R}k_{ij}^{\ast} \right)\cdot \left( y_i' - y_i \right) \] Moreover there exits an $\Theta(r \cdot |V|)$ algorithm for solving the dual program (Algorithm~\ref{alg:dual}) and additionally $|k_{ij}^\ast| \leq D$. \end{lemma} \begin{algorithm}[H] \caption{A time-efficient algorithm for solving the dual program of Lemma~\ref{l:dual}} \begin{algorithmic}[1] \State\textbf{Input:} A vector $y \in \Delta_{n}^k$ and a set of clients $R \subseteq V$. \State \textbf{Output:} An optimal solution for the convex program of Lemma~\ref{l:dual}. \For{ each client $j \in R$,} \State Sort the nodes $i \in V$ in increasing order according to $d_{ij}$. \State $\mathrm{Rem} \leftarrow 1$ \For{each each $i \in V$} \State $x_{ij} \leftarrow \min(y_i , \mathrm{Rem})$. \State $\mathrm{Rem} \leftarrow \mathrm{Rem} - x_{ij}$. \EndFor \EndFor \For{ each client $j \in R$} \State $V_j^+ \leftarrow \{i \in V:~ x_{ij} > 0\}$ and $D_j \leftarrow \max_{i \in V_{j}^+} d_{ij}$. \State $\beta_j \leftarrow \sum_{i \in V}d_{ij} \cdot x_{ij}$ \State $\lambda_j \leftarrow \left[ \frac{\beta_j}{||\beta||_p} \right]^{p-1}$ \State $A_j \leftarrow \lambda_j \cdot D_j$ \State $k_{ij} \leftarrow \min \left(\lambda_j\cdot \frac{x_{ij}}{y_i} \cdot \left(D_j - d_{ij}\right) , 0 \right)$ \EndFor \end{algorithmic} \label{alg:dual} \end{algorithm} \begin{remark} Algorithm~\ref{alg:dual} is not only a computationally efficient way to solve the convex program of Lemma~\ref{l:dual}, but most importantly guarantees that the value $k_{ij}^\ast$ are bounded by $D$ (this is formally Stated and proven in Lemma~\ref{l:dual}). The latter property is crucial for developing the no-regret algorithm for Fractional Dynamic $k$-Clustering. \end{remark} Up next we present the no-regret algorithm for Fractional Dynamic $k$-Clustering. \begin{algorithm}[H] \caption{A no-regret algorithm for Fractional Dynamic $k$-Clustering} \begin{algorithmic}[1] \State Initially, the learner selects $y^1_i = k/n$ for all $i \in V$. \For{ rounds $t = 1 \cdots T$} \State The learner selects $y_t \in \Delta_{n}^k$. \State The adversary selects the positions of the clients $R_t \subseteq V$. \State The learner receives cost, $\mathrm{FC}_{R_t}(y_t)$. \State The learner runs Algorithm~\ref{alg:dual} with input $y_t$ and $R_t$ and sets $g_i^t = -\sum_{ j \in R_t}k_{ij}^t$ \For{ each $i \in V$} \State \[y_i^{t+1} = \frac{ y_i^t \cdot e^{-\epsilon g_i^t}}{\sum_{i\in V}y_i^t \cdot e^{- \epsilon g_i^t}}\] where $\epsilon =\frac{\sqrt{ \log n}}{D r \sqrt{T}}$ \EndFor \EndFor \end{algorithmic} \label{alg:frac_no_regret} \end{algorithm} We conclude the section with Theorem~\ref{t:no-regret-frac} that establishes the no-regret property of Algorithm~\ref{alg:frac_no_regret} and the proof of which is deferred to the Appendix~\ref{app:fractional}. \begin{theorem}\label{t:no-regret-frac} Let $y_1,\ldots,y_T$ be the sequence of vectors in $\Delta_n^k$ produced by Algorithm~\ref{alg:frac_no_regret} for the clients' positions $R_1,\ldots,R_T$. Then, \[\sum_{t=1}^T\mathrm{FC}_{R_t}(y_t) \leq \min_{y^\ast \in \Delta_k} \sum_{t=1}^T\mathrm{FC}_{R_t}(y^\ast) + \Theta \left (k D n \sqrt{\log n T} \right)\] \end{theorem} \section{A $\Theta(k)$-Regret Deterministic Online Learning Algorithm}\label{s:det} In this section we show how one can use Algorithm~\ref{alg:frac_no_regret} described in Section~\ref{s:fractional} to derive $\Theta(k)$-regret for the Dynamic $k$-Clustering in polynomial-time. The basic idea is to use a rounding scheme that given a vector $y\in \Delta_n^k$ produces a placement of the $k$ centers $F_y \subseteq V$ (with $|F_y| \leq k$) such that \textit{for any set of clients' positions $R$}, the connection cost $C_{R}(F_y)$ is approximately bounded by the factional connection cost $\mathrm{FC}_{R}(y)$. This rounding scheme is described in Algorithm~\ref{alg:rounding}. \begin{algorithm} \caption{Deterministic Rounding Scheme} \begin{algorithmic}[1] \State \textbf{Input}: A vector $y \in \Delta_{n}^k$. \State \textbf{Output}: A set $F_y \subseteq V$ at which centers are opened. \State Run Algorithm~\ref{alg:dual} with input $y$ and $R = V$. \State Sort the positions $i \in V$ according to the values $\beta_i$ produced by Algorithm~\ref{alg:dual}. \State $F_y \leftarrow \emptyset$ \For{ $i = 1$ \bfseries{ to } $V$} \If{ $\min _{j \in F_y}d_{ij} > 6k \cdot \beta_i$} \State $F_y \leftarrow F_y \cup \{i\}$ \EndIf \EndFor \end{algorithmic} \label{alg:rounding} \end{algorithm} \begin{lemma}\label{l:rounding_lemma}[Rounding Lemma] Let $F_y$ denote the positions of the centers produced by Algorithm~\ref{alg:rounding} for input $y \in \Delta_n^k$. Then the following properties hold, \begin{itemize} \item For any set of clients $R$, \[\mathrm{C}_R(F_y)~\leq 6k \cdot \mathrm{FC}_{R}(y)\] \item The cardinality of $\mathrm{F}_y$ is at most $k$, $|\mathrm{F}_y| \leq k$. \end{itemize} \end{lemma} Up next we show how the deterministic rounding scheme described in Algorithm~\ref{alg:rounding} can be combined with Algorithm~\ref{alg:frac_no_regret} to produce an $\Theta(k)$-regret deterministic online learning algorithm that runs in polynomial-time. The overall online learning algorithm is described in Algorithm~\ref{alg:det} and its regret bound is formally Stated and proven in Theorem~\ref{t:det-regret}. \begin{algorithm}[H] \caption{A $\Theta(k)$-regret deterministic online learning algorithm for Dynamic $k$-Clustering} \label{alg:det} \begin{algorithmic}[1] \For{ rounds $t = 1 \cdots T$} \State The learner computes the vector $y_t \in \Delta_{n}^k$ by running Algorithm~\ref{alg:frac_no_regret} for the sequence of clients' positions $(R_1,\ldots,R_{t-1})$. \State The learner places centers to the positions $F_{y_t}$ produced by Algorithm~\ref{alg:rounding} given input $y_t$. \State The adversary selects the clients' positions $R_t \subseteq V$. \State The learner suffers connection cost $C_{R_t}(F_{y_t})$ \EndFor \end{algorithmic} \label{alg:det} \end{algorithm} We conclude the section with the proof of Theorem~\ref{t:det-regret} in which the regret bounds of Algorithm~\ref{alg:det} are established. \begin{proof}[Proof of Theorem~\ref{t:det-regret}] The second case of Lemma~\ref{l:rounding_lemma} ensures that $|F_{t}|\leq k$ and thus Algorithm~\ref{alg:det} opens at most $k$ facilities at each round. Applying the first case of Lemma~\ref{l:rounding_lemma} for $R=R_t$ we get that $C_{R_t}(F_t)\leq 6k \cdot \mathrm{FC}_{R_t}(y_t)$. As a result, \begin{eqnarray*} &&\sum_{t=1}^T C_{R_t}(F_t) \leq \sum_{t=1}^T 6k \cdot \mathrm{FC}_{R_t}(y_t)\\ &&\leq 6k \min_{y^\ast \in \Delta_k} \sum_{t=1}^T \mathrm{FC}_{R_t}(y^\ast) + \Theta \left (k D n \sqrt{\log n T} \right) \end{eqnarray*} where the last inequality follows by Theorem~\ref{t:no-regret-frac}. However Lemma~\ref{l:frac_int} ensures that \[\min_{y^\ast \in \Delta_k} \sum_{t=1}^T \mathrm{FC}_{R_t}(y^\ast) \leq \min_{F^\ast: |F^\ast|=k} \sum_{t=1}^T \mathrm{C}_{R_t}(F^\ast)\] \end{proof} \section{A \textbf{$\Theta(r)$}-Regret Randomized Online Learning Algorithm}\label{s:rand} In this section we present a $\Theta(r)$-regret randomized online learning algorithm. This algorithm is described in Algorithm~\ref{alg:rand} and is based on the randomized rounding developed by Charikar and Li for the $k$-median problem \cite{CL12}. \begin{lemma}[\cite{CL12}]\label{l:Charikar-Lin} There exists a polynomial-time randomized rounding scheme that given a vector $y \in \Delta_n^k$ produces a probability distribution, denoted as $\mathrm{CL}(y)$, over the subsets of $V$ such that, \begin{enumerate} \item with probability $1$ exactly $k$ facilities are opened, $\mathbb{P}_{F \sim \mathrm{CL}(y)}\left[|F| = k\right] = 1$. \item for any position $j \in V$, \[\mathbb{E}_{F \sim \mathrm{CL}(y)}\left[C_{\{j\}}(F_y) \right] \leq 4 \cdot \mathrm{FC}_{\{j\}}(y).\] \end{enumerate} \end{lemma} Similarly with the previous section, combining the randomized rounding of Charikar-Li with Algorithm~$1$ produces a $\Theta(r)$-regret randomized online learning algorithm that runs in polynomial-time. \begin{algorithm}[H] \caption{A $\Theta(r)$-regret randomized online learning algorithm} \label{alg:rand} \begin{algorithmic}[1] \For{ rounds $t = 1 \cdots T$} \State The learner computes the vector $y_t \in \Delta_{n}^k$ by running Algorithm~\ref{alg:frac_no_regret} for the sequence of clients' positions $(R_1,\ldots,R_{t-1})$. \State The learner places centers to the positions $F_t \subseteq V$ produced by the Charikar-Li randomized rounding with input $y_t$, $F_t \sim \mathrm{CL}(y_t)$. \State The adversary selects a request $R_t \subseteq V$. \State The learner suffers connection cost $C_{R_t}(F_t)$ \EndFor \end{algorithmic} \end{algorithm} The proof of Theorem~\ref{t:rand-regret} that establishes the regret bound of Algorithm~\ref{alg:rand} follows by Lemma~\ref{l:Charikar-Lin} and Theorem~\ref{t:no-regret-frac} and is deferred to the Appendix~\ref{app:rand}. \section{Experimental Evaluations}\label{s:experiments} In this section we evaluate the performance of our online learning algorithm against adversaries that select the positions of the clients according to time-evolving probability distributions. We remark that the regret bounds established in Theorem~\ref{t:det-regret} and Theorem~\ref{t:rand-regret} hold even if the adversary \textit{maliciously} selects the positions of the clients at each round so as to maximize the connection cost. As a result, in case clients arrive according to some (unknown and possibly time-varying) probability distribution that does not depend on the algorithm's actions, we expect the regret of to be way smaller. In this section we empirically evaluate the regret of Algorithm~\ref{alg:det} for Dynamic $k$-Clustering in case $p =\infty$. We assume that at each round $t$, $20$ clients arrive according to several static or time-varying two-dimensional probability distributions with support on the $[-1,1] \times [-1,1]$ square and the possible positions for the centers being the discretized grid with $\epsilon = 0.1$. In order to monitor the quality of the solutions produced by Algorithm~\ref{alg:det}, we compare the time-average connection cost of Algorithm~\ref{alg:det} with the time-average \textit{fractional connection cost} of Algorithm~\ref{alg:frac_no_regret}. Theorem~\ref{t:no-regret-frac} ensures that for $T=\Theta(k^2 D^2/\epsilon^2)$ the time-average fractional connection cost of Algorithm~\ref{alg:frac_no_regret} is at most $\epsilon$ greater than the time-average connection cost of the optimal static solution for Dynamic $k$-Clustering. In the following simulations we select $\epsilon = 0.1$ and track the ratio between the time-average cost of Algorithm~\ref{alg:det} and of Algorithm~\ref{alg:frac_no_regret} which acts as an upper bound on the regret. \textbf{Uniform Square} In this case the $20$ clients arrive \textit{uniformly at random} in the $[-1,1] \times [-1,1]$ square. Figure~\ref{f:uniform_square} illustrates the solutions at which Algorithm~\ref{alg:det} converges for $k=2,3$ and $8$ as long as the achieved regret. \begin{figure}[!htb] \centering {\includegraphics[width=0.49\linewidth]{images/uniform_square/k=2,T=400,Clients=20.png}\label{fig:sub2}}\hfill {\includegraphics[width=0.49\linewidth]{images/uniform_square/k=2.png}\label{fig:sub3}}\hfill {\includegraphics[width=0.49\linewidth]{images/uniform_square/k=3,T=9000,Cleints=20.png}\label{fig:sub2}}\hfill {\includegraphics[width=0.49\linewidth]{images/uniform_square/k=3.png}\label{fig:sub3}}\hfill {\includegraphics[width=0.49\linewidth]{images/uniform_square/k=8,T=20000,Clients=20.png}\label{fig:sub2}}\hfill {\includegraphics[width=0.49\linewidth]{images/uniform_square/k=8.png}\label{fig:sub3}}\hfill \caption{The \textcolor{green} {green curve} depicts the time-average connection cost Algorithm~\ref{alg:det}, the \textcolor{red}{red curve} depicts the time-average fractional connection cost of Algorithm~\ref{alg:frac_no_regret} and the \textcolor{blue}{blue curve} depicts their ratio that acts as an upper bound on the regret. }\label{f:uniform_square} \end{figure} \textbf{Uniform Distribution with Time-Evolving Centers} In this case the $20$ clients arrive according to the uniform distribution with radius $0.3$ and a time-varying center that periodically follows the trajectory described in Example~\ref{ex:1}. Figure~\ref{f:circle2} depicts the centers at which Algorithm~\ref{alg:det} converges after $100k^2$ rounds which are clearly close to the optimal ones. \textbf{Moving-Clients on the Ellipse} In this case the $20$ clients move in the ellipse $\left(\frac{x}{1.2}\right)^2 + \left(\frac{y}{0.6}\right)^2=1$ with different speeds and initial positions. The position of client $i$ is given by $\left(x_i(t),y_i(t)\right) = \left(1.2 \cos ( 2\pi f_i t + \theta_i ) , 0.6 \sin \left( 2\pi f_i t + \theta_i \right)\right)$ where each $f_i,\theta_i$ was selected uniformly at random in $[0,1]$. Figure~\ref{fig:circle} illustrates how Algorithm~\ref{alg:det} converges to the underlying ellipse as the number of rounds increases. \begin{figure}[!htb] \centering {\includegraphics[width=0.45\linewidth]{images/ellipse/ellipse_T=1000.png}\label{fig:sub1}}\hfill {\includegraphics[width=0.45\linewidth]{images/ellipse/ellipse_T=10000.png}\label{fig:sub1}}\hfill {\includegraphics[width=0.5\linewidth]{images/ellipse/ellipse_T=100000.png}\label{fig:sub1}}\hfill \caption{The solution produced by Algorithm~\ref{alg:det} for $k=8$ after $100$, $1000$ and $10000$ rounds.} \label{fig:circle} \end{figure} \textbf{Mixture of Multivariate Guassians} In this case 15 clients arrive according to the Gaussian with $\mu_1 = (-0.7,0.7)$ and $\Sigma_1 =[[0.3,0],[0,0.3]]$ and $5$ according to the Gaussian with $\mu_2 = (0.7,-0.7)$ and $\Sigma_2 =[[0.3,0],[0,0.3]]$. All the clients outside the $[-1,1]\times [-1,1]$ are projected back to the square. Figure~\ref{f:gaussian} illustrates the solutions at which Algorithm~\ref{alg:det} converges for $k=2,8$ and $16$. \begin{figure}[!htb] \centering {\includegraphics[width=0.49\linewidth]{images/Gaussian/k=2,Gaussian,T=2000}\label{fig:sub2}}\hfill {\includegraphics[width=0.49\linewidth]{images/Gaussian/k=2,cost.png}\label{fig:sub3}}\hfill {\includegraphics[width=0.49\linewidth]{images/Gaussian/Gaussian,k=8,T=20000}\label{fig:sub2}}\hfill {\includegraphics[width=0.49\linewidth]{images/Gaussian/k=8,cost.png}\label{fig:sub3}}\hfill {\includegraphics[width=0.49\linewidth]{images/Gaussian/k=16}\label{fig:sub2}}\hfill {\includegraphics[width=0.49\linewidth]{images/Gaussian/k=16,cost.png}\label{fig:sub3}}\hfill \caption{On the left, the solutions which Algorithm~\ref{alg:det} converges for $k=2,8$ and $k=16$. On the right, the time-average cost of Algorithm~\ref{alg:det}, Algorithm~\ref{alg:frac_no_regret} and the regret bounds.} \label{f:gaussian} \end{figure} \section{Conclusion} This work studies polynomial-time low-regret online learning algorithms for Dynamic $k$-Clustering, an online learning problem capturing clustering settings with time-evolving clients for which no information on their locations over time is available. We show that, under some well-established conjectures, $O(1)$-regret cannot be achieved in polynomial time and we provide a $\Theta(\min(k,r))$-regret polynomial time algorithm with $r$ being the maximum number of clients appearing in a single round. At a technical level, we present a two-step approach where in the first step we provide a no-regret algorithm for the Fractional Dynamic $k$-Clustering while in the second step we provide online rounding scheme converting the sequence of fractional solutions, produced by the no-regret algorithm, into solutions of Dynamic $k$-Clustering. Applying the same approach to other combinatorial online learning problems is an interesting research direction. \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Systems in with important electrodiffusion and osmotic water flow are found throughout life \cite{pappenheimer1987silver,boron2008medical,davson1970textbook}. Such systems include brain ionic homeostasis \cite{somjen2004ions,kahle2009molecular}, fluid secretion by epithelial systems \cite{hill2008fluid}, electrolyte regulation in the kidney \cite{koeppen2007renal,layton2009mammalian}, fluid circulation in ocular systems \cite{mathias2007lens,fischbarg2005mathematical}, and water uptake by plants \cite{taiz2010plant}. Mathematical models of electrodiffusion and/or osmosis have been proposed and used in many physiological contexts, and have formed a central topic in biology for a very long time \cite{pappenheimer1987silver,Hille,aidley_physiology_1998}. Some are simple models using ordinary differential equations while others are more detailed in that they include partial differential equations (PDEs) describing the spatial variation of the concentration and flow fields \cite{KS,hoppensteadt2002modeling, weinstein1994mathematical,yi2003mathematical,shapiro2001osmotic, lee2008immersed,mathias1985steady}. In this paper, we propose a system of PDEs that describes ionic electrodiffusion and osmotic water flow at the cellular level. To the best of the authors' knowledge, this is the first model in which osmotic water flow and electrodiffusion have been treated within a unified framework including cells with deformable and capacitance-carrying membranes. A salient feature of our model is that it possesses a natural thermodynamic structure; it satisfies a free energy equality. As such, the present work may be viewed as a generalization of the classical treatment of osmosis and electrodiffusion in irreversible thermodynamics to spatially extended systems \cite{katzir1965nonequilibrium,kedem1958thermodynamic,kjelstrup2008non}. To introduce our approach, we first focus attention on uncharged systems. In Section \ref{diffosm}, we treat the case in which the diffusing chemical species carry no electric charge. We write down equations that are satisfied by the water velocity field $\mb{u}$, the chemical concentrations $c_k, k=1,\cdots,N$ and the membrane position $\mb{X}$. The model is shown to satisfy a free energy equality in which the sum of the entropic free energy and the elastic energy of the membrane is dissipated through viscous water flow, bulk diffusion, transmembrane chemical fluxes and osmotic water flow. One interesting consequence of this analysis is that the classical van t'Hoff law of osmotic pressure arises naturally from the requirement that osmotic water flow be dissipative. We note that models with the similar purpose of describing diffusing non-electrolytes and their interaction with osmotic water flow across moving membranes, have been proposed in the literature \cite{lee2008immersed,atzberger2009microfluidic,layton2006modeling} (in the Appendix \ref{leeatz}, we discuss the relationship of our model with that of \cite{lee2008immersed, atzberger2009microfluidic}). In Section \ref{elecdiff}, we extend the model of Section \ref{diffosm} to treat the case of ionic electrodiffusion. We introduce the electrostatic potential $\phi$ which satisfies the Poisson equation. The membrane now carries capacitance, which can result in a jump in the electrostatic potential across the membrane. We shall see that this model also satisfies a free energy equality. The free energy now includes an electrostatic contribution. The verification of the free energy equality in this case is not as straightforward as in the non-electrolyte case, and requires a careful examination of surface terms. In Section \ref{simple}, we discuss simplifications of our model. We make the system dimensionless and assess the relative magnitudes of the terms in the equations. An important simplification is obtained when we take the electroneutral limit. In this case, the electrostatic potential becomes a Lagrange multiplier that helps to enforce the electroneutrality condition. In Section \ref{animal}, we develop a computational scheme to simulate the limiting system obtained in the electroneutral limit, when the geometry of the cell is assumed spherical. As an application, we treat animal cell volume control. \section{Diffusion of Non-electrolytes and Osmotic Water Flow} \label{diffosm} \subsection{Model Formulation} Consider a bounded domain $\Omega\subset \mathbb{R}^3$ and a smooth closed surface $\Gamma\subset \Omega$. This closed surface divides $\Omega$ into two domains. Let $\Omega_i\subset \Omega$ be the region bounded by $\Gamma$, and let $\Omega_e=\Omega\backslash(\Omega_i\cup \Gamma)$. In the context of cell biology, $\Omega_i$ may be identified with the intracellular space and $\Omega_e$ the extracellular space. Although cell physiological systems of biological cells serve as our primary motivation for formulating the models of this paper, this identification is not necessary. In this section we formulate a system of PDEs that governs the diffusion of {\em non}-electrolytes and osmotic flow of water in the presence of membranes. In Section \ref{elecdiff}, we shall build upon this model to treat the electrolyte case. We consider $N$ non-electrolyte chemical species whose concentrations we call $c_k, k=1,\cdots, N$. Let $\omega$ be the entropic part of the free energy per unit volume of this solution. The following expression for $\omega$ is the most standard choice: \begin{equation} \omega_0=\sum_{k=1}^N k_BTc_k\ln c_k.\label{ent} \end{equation} This expression is valid when the ionic solution is sufficiently dilute and lead to linear diffusion of solute. Our calculations, however, do not depend on this choice of $\omega$. If the solution in question deviates significantly from ideality, other expressions for $\omega$ may be used in place of $\omega_0$. Given $\omega$, the chemical potential $\mu_k$ of the $k-$th chemical species is given as: \begin{equation} \mu_k=\sigma_k, \; \sigma_k\equiv \PD{\omega}{c_k}\label{mukc}. \end{equation} We have introduced two symbols $\mu_k$ and $\sigma_k$ in anticipation of the discussion of the electrolyte case, where $\mu_k$ and $\sigma_k$ are different. For water, it is convenient to consider the water potential $\psi_w$, the free energy per unit volume, rather than the $\mu_w$, the chemical potential (free energy per molecule). The water potential and water chemical potential are thus related by the relation $v_w\psi_w=\mu_w$ where $v_w$ is the volume of water per molecule. We define: \begin{equation} \psi_w\equiv \pi_w+p, \; \pi_w=\paren{\omega-\sum_{k=1}^N c_k\sigma_k} =\paren{\omega-\sum_{k=1}^N c_k\PD{\omega}{c_k}} \label{muw} \end{equation} where $p$ is the pressure. As we shall see, $p$ will be determined in our model via the equations of fluid flow (Eq. \eqref{stokes}). The entropic part of $\psi_w$ (or the {\em osmotic pressure}), $\pi_w$, is given as the {\em negative} multiple of the (semi)Legendre transform of $\omega$ with respect to all the ionic concentrations $c_k$. This expression for osmotic pressure can be found, for example, in \cite{doi1996introduction}. As we shall see in the proof of Theorem \ref{mainc}, the above definition of $\pi_w$ is forced upon us if we insist that our model satisfy a free energy dissipation principle. We begin by writing down the equations of ionic concentration dynamics. At any point in $\Omega_i$ or $\Omega_e$ \begin{equation} \PD{c_k}{t}+\nabla\cdot(\mb{u} c_k)=\nabla \cdot \paren{c_k\frac{D_k}{k_BT}\nabla \mu_k}\label{ckeq} \end{equation} where $D_k$ is the diffusion coefficient and $\mb{u}$ is the fluid velocity field. Ions thus diffuse down the chemical potential gradient and are advected with the local fluid velocity. We have assumed here that cross-diffusion (concentration gradient of one species driving the diffusion of another species) is negligible. We must supplement these equations with boundary conditions. Most formulations of non-equilibrium thermodynamic processes seem to be confined either to the bulk or to the interface between two bulk phases \cite{katzir1965nonequilibrium,degroot1962non,kjelstrup2008non}. Here we must couple the equations in the bulk and with boundary conditions at the interface, which as a whole give us a consistent thermodynamic treatment of diffusion and osmosis. On the outer boundary $\Gamma_\text{out}=\partial \Omega$, for simplicity, we impose no-flux boundary conditions. Let us now consider the interfacial boundary conditions on the membrane $\Gamma$. Since we want to account for osmotic water flow, the membrane $\Gamma$ will deform in time. Sometimes, we shall use the notation $\Gamma_t$ to make this time dependence explicit. Let $\Gamma_\text{ref}$ be the resting or reference configuration of $\Gamma$. The membrane will then be a smooth deformation of this reference surface. We may take some (local) coordinate system $\bm{\theta}$ on $\Gamma_\text{ref}$, which would serve as a material coordinate for $\Gamma_t$. The trajectory of a point that corresponds to $\bm{\theta}=\bm{\theta}_0$ is given by $\mb{X}(\bm{\theta}_0,t)\in \mathbb{R}^3$. For fixed $t$, $\mb{X}(\cdot,t)$ gives us the shape of the membrane $\Gamma_t$. Consider a point $\mb{x}=\mb{X}(\bm{\theta},t)$ on the membrane. Let $\mb{n}$ be the outward unit normal on $\Gamma$ at this point. The boundary conditions satisfied on the intracellular and extracellular faces of the membrane are given by: \begin{equation} c_k\paren{\mb{u}-\frac{D_k}{k_BT}\nabla \mu_k}\cdot \mb{n}=c_k\PD{\mb{X}}{t}\cdot \mb{n}+j_k+a_k \text{ on } \Gamma_i \text{ or } \Gamma_e.\label{ckbc} \end{equation} The expression ``on $\Gamma_{i,e}$'' indicates that the quantities are to be evaluated on the intracellular and extracellular faces of $\Gamma$ respectively. The term $j_k$ is the passive chemical flux that passes through the membrane and $a_k$ is the active flux. Fluxes going from $\Omega_i$ to $\Omega_e$ is taken to be positive. Equation \eqref{ckbc} is just a statement of conservation of ions at the moving membrane. It is easy to check that \eqref{ckeq} together with \eqref{ckbc} implies conservation of each species. The flux $j_k$ is in general a function of concentrations of all chemical species on both sides of the membrane. The chemicals are usually carried by channels and transporters, and the functional form of $j_k$ describe the kinetic features of these carriers. As we shall see in Section \ref{cross}, $j_k$ can also be a function of the difference in water chemical potential across the membrane. The passive nature of the $j_k$ is expressed by the following inequality: \begin{equation}\label{jkcond} [\mu_k]j_k \geq 0, \; [\mu_k]=\at{\mu_k}{\Gamma_i}-\at{\mu_k}{\Gamma_e} \end{equation} where $\at{\cdot}{\Gamma_{i,e}}$ expresses evaluation of quantities at the intracellular and extracellular faces of the membrane $\Gamma$ respectively. We shall see that this condition is consistent with the free energy identity \eqref{FE}. For any quantity $w$, $[w]=\at{w}{\Gamma_i}-\at{w}{\Gamma_e}$ will henceforth always denote the difference between $w$ evaluated on the intracellular and extracellular faces of $\Gamma$. A simple example of $j_k$ occurs when $j_k$ is a function only of $[\mu_k]$ and satisfies the following conditions: \begin{equation}\label{monotone} j_k=j_k([\mu_k]), \; j_k(0)=0, \; \PD{j_k}{[\mu_k]}\geq 0. \end{equation} It is easily checked that $j_k$ in this case satisfies \eqref{jkcond}. We shall see concrete examples in Section \ref{elecdiff}. Condition \eqref{jkcond} is somewhat restrictive in the sense that it is possible to have a ``passive'' flux that does not satisfy \eqref{jkcond} when multiple species flow and interact. We shall discuss this briefly in Section \ref{cross}. Condition \eqref{jkcond} needs to be relaxed to describe systems in which different chemical species flow through one channel or (passive) transporter. Those systems usually couple fluxes of different chemical species. They often couple (unidirectional) influx and efflux of the same species (symporters and antiporters) \cite{hille1982transport,Hille,tosteson1989membrane,boron2008medical,davson1970textbook}. The active flux $a_k$ is typically due to ionic pump currents often driven by ATP \cite{tosteson1989membrane,boron2008medical,davson1970textbook}. We now discuss force balance. We shall treat the cytosol as a viscous fluid and the cell membrane as an elastic surface. The cell membrane itself is just a lipid bilayer, and cannot support a large mechanical load. The cell membrane is often mechanically reinforced by an underlying actin cortex and overlying system of connective tissue, and in the case of plant cells, by an overlying cell wall. If we view these structures as part of the membrane, our treatment of the membrane being elastic may be a useful simplification. We could employ a more complete model of cell mechanics incorporating, in particular, the mechanical properties of the cytoskeleton and extracellular lamina. However, our emphasis here is on demonstrating how osmosis can be seamlessly combined with mechanics, and we intentionally keep the mechanical model simple to clarify the underlying ideas. Consider the equations of fluid flow. The flow field $\mb{u}$ satisfies the Stokes equation at any point in $\Omega_i$ or $\Omega_e$: \begin{equation} \nu\Delta \mb{u}-\nabla p =0, \; \nabla \cdot \mb{u}=0 \label{stokes} \end{equation} where $\nu$ is the viscosity of the electrolyte solution. Note that the above equations can also be written as follows: \begin{equation} \nabla \cdot \Sigma_m(\mb{u},p)=0,\; \nabla \cdot \mb{u}=0, \; \Sigma_m(\mb{u},p)=\nu(\nabla\mb{u}+(\nabla\mb{u})^T)-pI \end{equation} where $I$ is the $3\times 3$ identity matrix and $(\nabla\mb{u})^T$ is the transpose of $\nabla\mb{u}$. Here, $\Sigma_m$ is the mechanical stress tensor. It is possible to carry out much of the calculations to follow even if we retain inertial terms and work with the Navier-Stokes equations or use other constitutive relations for the mechanical stress. In particular, such modifications will not destroy the free energy identity to be discussed below. We do note, however, that incompressibility is important for our computations. We now turn to boundary conditions. We let $\mb{u}=0$ on the outer boundary $\Gamma_\text{out}$ for simplicity. On the cell membrane $\Gamma$, we have the following conditions. Take a point $\mb{x}=\mb{X}(\bm{\theta},t)$ on the boundary $\Gamma$, and let $\mb{n}$ be the unit outward normal on $\Gamma$ at this point. First, by force balance, we have: \begin{equation} [\Sigma_m(\mb{u},p)]=\mb{F}_\text{elas}.\label{forcebalance} \end{equation} Here, $\mb{F}_\text{elas}$ is the elastic force per unit area of membrane. We make some assumptions about the form of the elastic force. We assume that the membrane is a hyperelastic material in the sense that the elastic force can be derived from an elastic energy functional $E_\text{elas}$ that is a function only of the configuration $\mb{X}$: \begin{equation} E_\text{elas}(\mb{X})=\int_{\Gamma_\text{ref}} \mathcal{E}(\mb{X})dm_{\Gamma_\text{ref}} \label{elas} \end{equation} where $m_{\Gamma_\text{ref}}$ is the surface measure of $\Gamma_\text{ref}$ and $\mathcal{E}$ is the elastic energy density measured with respect to this measure. It is possible that $\mathcal{E}$ is a function of spatial derivatives of $\mb{X}$. The elastic force $\mb{F}_\text{elas}(\mb{x})$ satisfies the relation: \begin{equation} \begin{split} \at{\D{}{s}}{s=0}\int_{\Gamma_\text{ref}} \mathcal{E}(\mb{X}(\bm{\theta})+s\mb{Y}(\bm{\theta})) dm_{\Gamma_\text{ref}} &=-\int_{\Gamma} \mb{F}_\text{elas}(\mb{x})\cdot \mb{Y}(\mb{X}^{-1}(\mb{x}))dm_\Gamma \end{split} \end{equation} where $\mb{Y}$ is an arbitrary vector field defined on $\Gamma_\text{ref}$ and $m_\Gamma$ is the natural measure on the surface $\Gamma$ and is related to $m_{\Gamma_\text{ref}}$ by $dm_\Gamma=Qdm_{\Gamma_\text{ref}}$ where $Q$ is the Jacobian determinant relating $\Gamma_t$ to the reference configuration $\Gamma_\text{ref}$. The expression $\mb{X}^{-1}(\mb{x})$ is the inverse of the map $\mb{x}=\mb{X}(\bm{\theta})$. Thus, $\mb{F}_\text{elas}$ is given as the variational derivative of the elastic energy up to the Jacobian factor $Q$. Consequently, we have the following relation: \begin{equation} \D{}{t}E_\text{elas}(\mb{X}) =-\int_{\Gamma} \mb{F}_\text{elas}\cdot \PD{\mb{X}}{t}dm_\Gamma.\label{dtelas} \end{equation} In the above, $\PD{\mb{X}}{t}$ should be thought of as a function of $\mb{x}$, i.e., $\PD{\mb{X}}{t}=\PD{\mb{X}}{t}(\mb{X}^{-1}(\mb{x}))$. We shall henceforth abuse notation and let $\PD{\mb{X}}{t}$ be a function of $\mb{x}$ or $\bm{\theta}$ depending on the context of the expression. In addition to the force balance condition \eqref{forcebalance}, we need a continuity condition on the interface $\Gamma$. Since we are allowing for osmotic water flow, we have a slip between the movement of the membrane and the flow field. At a point $\mb{x}=\mb{X}(\bm{\theta},t)$ on the boundary $\Gamma$ we have: \begin{equation} \mb{u}-\PD{\mb{X}}{t}=j_w\mb{n}\label{cont} \end{equation} where $j_w$ is water flux through the membrane. We are thus assuming that water flow is always normal to the membrane and that there is no slip between the fluid and the membrane in the direction tangent to the membrane. Given that $\mb{n}$ is the outward normal, $j_w$ is positive when water is flowing out of the cell. Like $j_k$ in \eqref{ckbc}, we let $j_w$ be a passive flux in the following sense. We let: \begin{equation}\label{jwexp} j_w=j_w([\widehat{\psi_w}]),\; \at{\widehat{\psi_w}}{\Gamma_{i,e}}= \at{\pi_w}{\Gamma_{i,e}} -\at{((\Sigma_m(\mb{u},p))\cdot\mb{n})}{\Gamma_{i,e}} \end{equation} where $\pi_w$ was the entropic contribution to the water potential defined in \eqref{muw}. In general, $j_w$ can be a function of other variables (see Section \ref{cross}). The function $j_w$ satisfies the condition, analogous to \eqref{jkcond}: \begin{equation}\label{jwcond} [\widehat{\psi_w}]j_w\geq 0. \end{equation} This condition is clearly satisfied if: \begin{equation}\label{jwmonotone} j_w=j_w([\widehat{\psi_w}]),\; j_w(0)=0,\; \PD{j_w}{[\widehat{\psi_w}]}>0. \end{equation} Water flow across the membrane is thus driven by the difference in the entropic contribution to the chemical potential as well as the jump in the mechanical force across the cell membrane. When the flow field $\mb{u}$ is equal to $0$, the above expression for $\widehat{\psi_w}$ reduces to: \begin{equation} \at{\widehat{\psi_w}}{\Gamma_{i,e}}= \at{\pi_w}{\Gamma_{i,e}} +\at{p}{\Gamma_{i,e}}=\at{\psi_w}{\Gamma_{i,e}}. \end{equation} We may thus view $\widehat{\psi_w}$ as a modification of $\psi_w$ to take into account dynamic flow effects. Let $\omega=\omega_0$ given in \eqref{ent} in the definition \eqref{muw} of $\pi_w$. Then, we have, under zero flow conditions, \begin{equation} [\widehat{\psi_w}]=\jump{p-k_BT\sum_{k=1}^N c_k}.\label{muwclassical} \end{equation} We thus reproduce the standard statement that water flow across the membrane is driven by the difference in osmotic and mechanical pressure, where the osmotic pressure, $\pi_w$, is given by the van t'Hoff law. \subsection{Free Energy Identity} We now show that the system described above satisfies the following free energy identity. \begin{theorem}\label{mainc} Suppose $c_k,\mb{u}, p$ be smooth functions that satisfy \eqref{ckeq}, \eqref{stokes}, and in $\Omega_i$ and $\Omega_e$ and satisfy boundary conditions \eqref{ckbc}, \eqref{forcebalance}, \eqref{cont} on the membrane $\Gamma$. Suppose further that $c_k$ and satisfy no-flux boundary conditions and $\mb{u}=0$ on the outer boundary $\Gamma_\text{out}$. Then, $c_k,\mb{u},p$ and $\phi$ satisfy the following free energy identity. \begin{equation} \begin{split} \D{}{t}(G_S+E_\text{elas})&=-I_p-J_p-J_a\\ G_S&=\int_{\Omega_i\cup \Omega_e}\omega d\mb{x}\\ I_p&=\int_{\Omega_i\cup \Omega_e}\paren{\nu\abs{\nabla \mb{u}}^2 +\sum_{k=1}^Nc_k\frac{D_k}{k_BT}\abs{\nabla \mu_k}^2}d\mb{x}\\ J_p&=\int_{\Gamma} \paren{[\widehat{\psi_w}]j_w +\sum_{k=1}^N[\mu_k]j_k}dm_\Gamma\\ J_a&=\int_{\Gamma}\paren{\sum_{k=1}^N[\mu_k]a_k}dm_\Gamma. \end{split}\label{FE} \end{equation} Here, $E_\text{elas}$ was given in \eqref{elas} and $\abs{\nabla \mb{u}}$ is the Frobenius norm of the $3\times 3$ rate of deformation matrix $\nabla \mb{u}$. If $a_k\equiv 0$, then the free energy is monotone decreasing. \end{theorem} The free energy is given as a sum of the entropic contribution $G_S$ and the membrane elasticity term $E_\text{elas}$. This free energy is dissipated through bulk currents $I_p$ and membrane currents $J_p$. Dissipation in the bulk comes from ionic electrodiffusion and viscous dissipation. Dissipation at the membrane comes from ionic channel currents and transmembrane water flow. If active membrane currents are present, they may contribute to an increase in the free energy through the term $J_a$. It is sometimes useful to rewrite $J_p+I_p$ as: \begin{equation} \begin{split}\label{Fwc} J_p+I_p&=F_w+F_c,\\ F_w&=\int_{\Omega_i\cup \Omega_e}\nu\abs{\nabla \mb{u}}^2d\mb{x} +\int_{\Gamma}[\widehat{\psi_w}]j_wdm_\Gamma,\\ F_c&=\int_{\Omega_i\cup \Omega_e} \sum_{k=1}^Nc_k\frac{D_k}{k_BT}\abs{\nabla \mu_k}^2d\mb{x} +\int_{\Gamma}\sum_{k=1}^N[\mu_k]j_kdm_\Gamma, \end{split} \end{equation} where $F_w$ and $F_c$ are the dissipations due to water flow and solute diffusion respectively. In the statement of the Theorem, it is important that \eqref{jkcond} and \eqref{jwcond} are used only to conclude that $J_p$ be positive. Identity \eqref{FE} should be seen as giving us the definition of what a passive current should be. We now prove Theorem \ref{main}. An interesting point about the calculation to follow is how dissipation through transmembrane water flow comes from two different sources, equations for ionic concentration dynamics and the fluid equations. The former contributes the osmotic term $\pi_w$ term, and the latter contributes the mechanical term $p$, which together add up to the water potential $\psi_w$. \begin{proof}[Proof of Theorem \ref{mainc}] First, multiply \eqref{ckeq} with $\mu_k$ in \eqref{muk} and integrate over $\Omega_i$ and sum in $k$: \begin{equation} \sum_{k=1}^N\int_{\Omega_i}\mu_k\paren{\PD{c_k}{t}+\nabla\cdot (\mb{u} c_k)}d\mb{x} =\sum_{k=1}^N\int_{\Omega_i}\mu_k \nabla \cdot \paren{c_k\frac{D_k}{k_BT}\nabla \mu_k}d\mb{x}.\label{mkck} \end{equation} The summand in the right hand side becomes: \begin{equation} \begin{split} \int_{\Omega_i}\mu_k \nabla \cdot \paren{c_k\frac{D_k}{k_BT}\nabla \mu_k}d\mb{x} &=\int_{\Gamma_i}\paren{\mu_k {c_k\frac{D_k}{k_BT}\nabla \mu_k}\cdot \mb{n}}dm_\Gamma\\ &-\int_{\Omega_i}\paren{c_k\frac{D_k}{k_BT}\abs{\nabla \mu_k}^2}d\mb{x} \end{split} \end{equation} where $\mb{n}$ is the outward normal on $\Gamma$. Consider the left hand side of \eqref{mkck}. \begin{equation} \sum_{k=1}^N\mu_k\paren{\PD{c_k}{t}+\nabla \cdot (\mb{u} c_k)} =\sum_{k=1}^N\PD{\omega}{c_k}\paren{\PD{c_k}{t}+\mb{u}\cdot \nabla c_k} =\PD{\omega}{t}+\nabla \cdot (\mb{u}\omega),\label{mkcksum} \end{equation} where we used \eqref{mukc} and the incompressibility condition in \eqref{stokes}. Integrating the above over $\Omega_i$, we have: \begin{equation} \begin{split} \int_{\Omega_i} \paren{\PD{\omega}{t}+\nabla \cdot (\mb{u}\omega)}d\mb{x} &=\int_{\Omega_i} \PD{\omega}{t}d\mb{x} +\int_{\Gamma_i} \omega\mb{u}\cdot\mb{n}dm_\Gamma\\ &=\D{}{t}\int_{\Omega_i}\omega d\mb{x} +\int_{\Gamma_i}\omega\paren{\mb{u}-\PD{\mb{X}}{t}}\cdot\mb{n}dm_\Gamma \end{split} \end{equation} where we used the fact that $\mb{u}$ is divergence free in the first equality. The term involving $\PD{\mb{X}}{t}$ comes from the fact that the membrane $\Gamma$ is moving in time. Performing similar calculations on $\Omega_e$, and adding this to the above, we find: \begin{equation}\label{domega} \begin{split} &\D{}{t}\int_{\Omega_i\cup \Omega_e}\omega d\mb{x} +\int_{\Gamma}[\omega]j_w dm_\Gamma\\ =&\sum_{k=1}^N \int_{\Gamma}\jump{\mu_k {c_k\frac{D_k}{k_BT}\nabla \mu_k}\cdot \mb{n}}dm_\Gamma -\sum_{k=1}^N \int_{\Omega_i\cup \Omega_e}\paren{c_k\frac{D_k}{k_BT}\abs{\nabla \mu_k}^2}d\mb{x}. \end{split} \end{equation} where we used \eqref{cont}. Using \eqref{ckbc} and \eqref{cont}, we may rewrite the second boundary integral as follows: \begin{equation}\label{concbndry} \int_{\Gamma}\jump{\mu_k {c_k\frac{D_k}{k_BT}\nabla \mu_k} \cdot \mb{n}}dm_\Gamma =\int_{\Gamma}\paren{\jump{\mu_k c_k}j_w -[\mu_k](j_k+a_k)}dm_\Gamma. \end{equation} We now turn to equation \eqref{stokes}. Multiply this by $\mb{u}$ and integrate over $\Omega_i$: \begin{equation} \int_{\Omega_i} \mb{u}\cdot(\nu \Delta \mb{u} -\nabla p)d\mb{x} =\int_{\Gamma_i}\paren{\Sigma_m(\mb{u},p)\mb{n}}\cdot\mb{u}dm_\Gamma- \int_{\Omega_i}\mb{\nu}\abs{\nabla \mb{u}}^2d\mb{x}=0 \end{equation} Performing a similar calculation on $\Omega_e$ and adding this to the above, we have: \begin{equation} \int_{\Gamma}\jump{\paren{\Sigma_m(\mb{u},p)\mb{n}}}\cdot \mb{u}dm_\Gamma- \int_{\Omega_i\cup \Omega_e}\mb{\nu}\abs{\nabla \mb{u}}^2d\mb{x} =0 \end{equation} We may use \eqref{forcebalance}, \eqref{dtelas} and \eqref{cont} to find \begin{equation}\label{elasen} \D{}{t} E_\text{elas}(\mb{X})= \int_{\Gamma}\jump{\paren{\Sigma_m(\mb{u},p)\mb{n}}\cdot \mb{n}}j_wdm_\Gamma -\int_{\Omega_i\cup \Omega_e}\mb{\nu}\abs{\nabla \mb{u}}^2d\mb{x} \end{equation} Combining \eqref{domega}, \eqref{concbndry} and \eqref{elasen}, we have: \begin{equation} \begin{split} &\D{}{t}\paren{\int_{\Omega_i\cup \Omega_e}\omega d\mb{x}+E_\text{elas}(\mb{X})}\\ =&-\sum_{k=1}^N\int_{\Omega_i\cup \Omega_e}\paren{c_k\frac{D_k}{k_BT}\abs{\nabla \mu_k}^2}d\mb{x}-\int_{\Omega_i\cup \Omega_e}\mb{\nu}\abs{\nabla \mb{u}}^2d\mb{x}\\ &-\int_{\Gamma}[\mu_k](j_k+a_k)dm_\Gamma -\int_{\Gamma}\jump{\omega-c_k\sigma_k-(\Sigma_m(\mb{u},p)\mb{n})\cdot\mb{n}}j_wdm_\Gamma \end{split} \end{equation} Recalling the definition of $\widehat{\psi_w}$ in \eqref{jwexp}, we obtain the desired equality. In the absence of active currents $a_k$, is decreasing given that $j_k$ and $j_w$ satisfy conditions \eqref{jkcond} and \eqref{jwcond} respectively. \end{proof} In the last line of the above proof, note that the expression for $\widehat{\psi_w}$ arises naturally as a result of integrating by parts. In this sense, we may say that osmotic water flow arises as a natural consequence of requiring that the free energy be decreasing in time. \subsection{Cross Coefficients and Solvent Drag}\label{cross} As can be seen from \eqref{FE} or \eqref{FEE} the only condition we need to impose for the free energy to decrease with time in the absence of active currents is the following: \begin{equation}\label{jkjw} [\widehat{\psi_w}]j_w+\sum_{k=1}^N [\mu_k]j_k\geq 0. \end{equation} This condition is weaker than conditions \eqref{jkcond} and \eqref{jwcond} being satisfied separately by $j_k$ and $j_w$. We now discuss an important case in which $j_k$ and $j_w$ may not individually satisfy \eqref{jkcond} and \eqref{jwcond} but \eqref{jkjw} is satisfied nevertheless. This arises whenever fluxes are coupled as is usually the case for fluxes through transporters or (single filing) channels \cite{hille1982transport,Hille,tosteson1989membrane,boron2008medical, davson1970textbook}. We note that such cross-diffusion can be relevant even in bulk solution \cite{tyrrell1971diffusion,justice1983conductance,hoheisel1993theoretical, taylor1993multicomponent,accascina1959electrolytic}. If $\jump{\mu_k}$ and $[\widehat{\psi_w}]$ remain small, the dissipation $J_p$ in \eqref{FE} may be approximated by a quadratic form in the jumps: \begin{equation}\label{quad} \begin{split} J_p&=\int_\Gamma \jump{\bm{\mu}}\cdot\mb{j}dm_\Gamma=\int_\Gamma \jump{\bm{\mu}}\cdot(\mathcal{L}\jump{\bm{\mu}})dm_\Gamma,\\ \bm{\mu}&=(\mu_1,\cdots,\mu_N,\widehat{\psi_w})^T, \mb{j}=(j_1,\cdots,j_N,j_w)^T, \end{split} \end{equation} where $\mathcal{L}$ is a symmetric $(N+1)\times(N+1)$ matrix. Requiring that the free energy be decreasing implies that $\mathcal{L}$ must be positive definite. The {\em maximum dissipation principle} requires that $\mb{j}$ be given as variational derivatives of $J_p/2$ with respect to $[\bm{\mu}]$: \begin{equation} \mb{j}=\mathcal{L}\jump{\bm{\mu}}. \end{equation} Note that, without the maximum dissipation principle, \eqref{quad} only implies $\mb{j}=(\mathcal{L}+\tilde{\mathcal{L}})\jump{\bm{\mu}}$ where $\tilde{\mathcal{L}}$ is an arbitrary skew symmetric matrix. The symmetry of the coefficient matrix $\mathcal{L}$ relating $[\bm{\mu}]$ and $\mb{j}$ is an instance of the Onsager reciprocity relation \cite{degroot1962non,katzir1965nonequilibrium,kjelstrup2008non}. A lipid bilayer membrane is impermeable to many solutes, but only approximately so. In this case, a water flux may induce a solute flux, and this may be expressed as $\mathcal{L}_{kw}\neq 0$ where $\mathcal{L}_{kw}$ is the $(k,N+1)$ entry of the matrix $\mathcal{L}$. This is known as solvent drag. Given the presence of such cross coefficients, \eqref{jkcond} and \eqref{jwcond} are not necessary true, whereas condition \eqref{jkjw} is true by construction. \section{Electrodiffusion of Ions and Osmotic Water Flow}\label{elecdiff} \subsection{Model Formulation} Let us now consider the case in which the chemical species are electrically charged. As in the previous section, we let $c_k, k=1,\cdots, N$ be the concentrations of the ionic species. Given $\omega$, the entropic part of the free energy per unit volume, the chemical potential $\mu_k$ of the $k-$th species of ion is given as: \begin{equation} \mu_k=\PD{\omega}{c_k}+qz_k\phi= \sigma_k+qz_k\phi\label{muk}. \end{equation} The chemical potential is thus a sum of the entropic term $\sigma_k$ and the electrostatic term. In the electrostatic term, $q$ is the elementary charge, $z_k$ is the valence of the $k$-th species of ion, and $\phi$ is the electrostatic potential. The definition of the water potential, $\psi_w$ and $\widehat{\psi_w}$, remain the same. The ionic concentrations $c_k$ satisfy \eqref{ckeq} and \eqref{ckbc} except that we now use \eqref{muk} as our expression for the chemical potential. Ions are thus subject to drift by the electric field in addition to diffusion and advection by the local flow field. If the electrolyte solution is sufficiently dilute, the chemical potential $\mu_k$ is given by \eqref{muk} with $\omega$ equal to \eqref{ent}. However, deviations from ideality can be significant in electrolyte solutions, especially in higher concentrations \cite{fawcett2004liquids,lee2008molecular,kunz2010specific,fraenkel2010simplified, eisenberg2010crowded}. Cross-diffusion (or flux coupling) in the bulk can also be significant in electrolyte solutions \cite{tyrrell1971diffusion,justice1983conductance,hoheisel1993theoretical,taylor1993multicomponent,accascina1959electrolytic}. These effects are clearly important in describing the {\em molecular} physiology of ion channel pores and enzyme active sites at which ionic concentrations can reach tens of molars \cite{eisenberg2010crowded,zhang2010molecular}. The question of whether these effects are significant in formulating phenomenological models in {\em cellular} physiology, where the typical ionic concentrations are two orders of magnitude lower, is largely unexplored. This exploration is beyond the scope of the present paper, but we point out that our formalism allows the incorporation of such effects \cite{eisenberg2010energy}. The transmembrane flux $j_k$ is now a function of the membrane potential $[\phi]$ in addition to the dependencies discussed in the previous section. We require that $j_k$ satisfy condition \eqref{jkcond}. The electrostatic potential $\phi$ satisfies the Poisson equation: \begin{equation} -\nabla \cdot (\epsilon\nabla\phi)=\sum_{k=1}^N qz_kc_k\label{poisson} \end{equation} where $\epsilon$ is the dielectric constant. We shall assume that $\epsilon$ is constant in space and time. This restriction may be lifted, at the expense of introducing a relation that describes the evolution of $\epsilon$. We also assume that there is no fixed background charge. It is easy to generalize the calculations below to the case when the immobile charges, if present, always stay away from the moving membrane. Otherwise, one would need to introduce ``collision rules'' to determine what happens when the membrane hits the immobile charges. We impose Neumann boundary conditions for \eqref{poisson} on the outer boundary $\Gamma_\text{out}$ for simplicity. On the membrane $\Gamma$, we impose the following boundary condition: \begin{equation} -\at{\epsilon\PD{\phi}{\mb{n}}}{\Gamma_i} =-\at{\epsilon\PD{\phi}{\mb{n}}}{\Gamma_e}=C_m[\phi].\label{cap} \end{equation} where $C_m$ is the capacitance per unit area of membrane. The above is simply a statement about the continuity of the electric flux density. Since the membrane is moving, the capacitance $C_m$ is itself an evolving quantity. We assume the following family of constitutive laws for $C_m$. At $\mb{x}=\mb{X}(\bm{\theta},t)$, \begin{equation} C_m(\mb{x})=C_m(Q(\mb{X}))\label{CmQ} \end{equation} where $Q(\mb{X})$ is the Jacobian or metric determinant of the configuration $\Gamma_t$ at time $t$ with respect to the reference configuration $\Gamma_\text{ref}$. This factor describes the extent to which the membrane is stretched from the rest configuration. A simple example of \eqref{CmQ} would be: \begin{equation} C_m(\mb{x})=C_m^0=\text{const}.\label{Cm0} \end{equation} As another example, we may set: \begin{equation} C_m(\mb{x})=C_m^0 Q(\mb{X})\label{Cmscaling} \end{equation} where $C_m^0=\text{const}$ is the capacitance per unit area measured in the reference configuration. Relation \eqref{Cmscaling} is the natural scaling if we assume that the membrane is made of an incompressible material. For suppose the membrane is made of a material whose dielectric constant is $\epsilon_m$. If the thickness of the membrane at the point $\mb{x}=\mb{X}(\bm{\theta},t)$ is $d(\mb{x})$, the membrane capacitance there is given by $\epsilon_m/d(\mb{x})$. The incompressibility of the material implies that the local membrane volume remains constant in time: $d(\mb{x})Q(\mb{X})=\text{const}$. Thus, $C_m(\mb{x})$ must be proportional to $Q(\mb{X})$. Force balance must be modified to take into account electrostatic forces.The flow field $\mb{u}$ satisfies the Stokes equation in $\Omega_i$ or $\Omega_e$ with an electrostatic force term: \begin{equation} \nu\Delta \mb{u}-\nabla p-\paren{\sum_{k=1}^Nqz_kc_k}\nabla \phi =0, \; \nabla \cdot \mb{u}=0 \label{stokesE} \end{equation} Note that the above equations can also be written as follows: \begin{equation} \begin{split} \nabla \cdot (\Sigma_m(\mb{u},p)+\Sigma_e(\phi))&=0,\; \nabla \cdot \mb{u}=0,\\ \Sigma_m(\mb{u},p)&=\nu(\nabla{u}+(\nabla{u})^T)-pI,\\ \Sigma_e(\phi)&= \epsilon\paren{\nabla \phi\otimes \nabla \phi-\frac{1}{2}\abs{\nabla \phi}^2I}. \end{split} \end{equation} Here, $\Sigma_e$ is the Maxwell stress tensor generated by the electric field. Note that we have used \eqref{poisson} to rewrite the electrostatic force in \eqref{stokesE} in terms of $\Sigma_e$. We now turn to boundary conditions. We continue to let $\mb{u}=0$ on the outer boundary $\Gamma_\text{out}$. On the cell membrane $\Gamma$, we have the following conditions. First, by force balance, we have: \begin{equation} [(\Sigma_m(\mb{u},p)+\Sigma_e(\phi))\mb{n}] =\mb{F}_\text{elas}+\mb{F}_\text{cap}\label{forcebalanceE} \end{equation} In addition to $\mb{F}_\text{elas}$, we have an additional term $\mb{F}_\text{cap}$ which arises because the membrane carries capacitive energy. We shall call this the capacitive force, which is given as: \begin{equation} \mb{F}_\text{cap}=\tau_\text{cap}\kappa_\Gamma\mb{n} -\nabla_\Gamma \tau_\text{cap}, \; \tau_\text{cap} =\frac{1}{2}\paren{C_m+Q\PD{C_m}{Q}}[\phi]^2 \label{FcapCmQ} \end{equation} where $\kappa_\Gamma$ is the sum of the principal curvatures of the membrane $\Gamma$ and $\nabla_\Gamma=\nabla-\mb{n}(\mb{n}\cdot \nabla)$ is the surface gradient on $\Gamma$. The above expression shows that the capacitive force can be seen as a surface tension of strength $-\tau_\text{cap}$. The above capacitive force is chosen so that Theorem \ref{main} holds, and in this sense, the proof of Theorem \ref{main} provides a variational interpretation of this force. In Appendix \ref{capforce}, we give a physical interpretation of expression \eqref{FcapCmQ}. An interesting variant of \eqref{forcebalanceE} is the following. Suppose the membrane is incompressible in the sense that $Q\equiv 1$ for all time. We note that this condition of two-dimensional incompressibility is {\em not} the same as assuming that the membrane is made of a (three-dimensional) incompressible material. In the case of three-dimensional incompressibility, the membrane may stretch, but this would lead to a thinning of the membrane, leading to the constitutive law \eqref{Cmscaling} as we saw earlier. When $Q\equiv 1$ for all time, we let: \begin{equation} [(\Sigma_m(\mb{u},p)+\Sigma_e(\phi))\mb{n}] =\mb{F}_\text{elas}+\mb{F}_\text{p}\label{forcebalancep} \end{equation} where $\mb{F}_p$ is given as: \begin{equation}\label{Fp} \mb{F}_\text{p}=\lambda \kappa_\Gamma\mb{n}-\nabla_\Gamma \lambda. \end{equation} The above is a surface pressure and $\lambda$ is determined so that $Q\equiv 1$. Note that in \eqref{forcebalancep} we do not need a capacitive force since it can be absorbed into the surface pressure term. The continuity condition \eqref{cont} remains the same. We continue to require that the passive flux $j_k$ satisfy \eqref{jkcond}. An important difference, however, is that $j_k$ now depends strongly on the membrane potential $[\phi]$, given that $\mu_k$ depends on $\phi$. Ions usually flow through ionic channels, which often have open and closed states. The passive flux $j_k$ may also depend on such states, which are described by gating variables. This flux is the subject of the large experimental and theoretical work on ion channels \cite{Hille}. For the present work, it is appropriate to use the classical phenomenological treatments of flux, although their molecular underpinnings are not clear \cite{chen1997permeation, eisenberg1999structure,gillespie2002physical}. Some of popular choices $j_k$ include \cite{KS,Hille,HH}: \begin{align} j_k^\text{HH}&=g_k[\mu_k]=g_k\paren{z_k[\phi] +\ln\paren{\frac{\at{c_k}{\Gamma_i}}{\at{c_k}{\Gamma_e}}}},\label{HH}\\ j_k^\text{GHK}&=P_kz_k\phi'\paren{ \frac{\at{c_k}{\Gamma_i}\exp(z_k\phi')-\at{c_k}{\Gamma_e}} {\exp(z_k\phi')-1}}, \; \phi'=\frac{q\jump{\phi}}{k_BT},\label{GHK} \end{align} where $g_k$ and $P_k$ are positive and depend on the gating variables in certain modeling contexts. It is easily seen that both $j_k^\text{HH}$ and $j_k^\text{GHK}$ satisfy \eqref{jkcond}. We shall use expression \eqref{GHK} in our numerical computations in Section \ref{animal}. We remark that the model we just proposed is nothing other than the Poisson-Nernst-Planck-Stokes system if we let $\omega=\omega_0$ given in \eqref{ent} \cite{Rubinstein}. The novelty here is in the interface conditions at the membrane, \eqref{ckbc}, \eqref{cap}, \eqref{forcebalanceE} and \eqref{cont}. The Poisson Nernst-Planck system has received much attention in the field of semiconductors \cite{roosbroeck_theory_1950,Jerome_semiconductor, selberherr1984analysis}, ionic channels \cite{eisenberg1996computing}, ion exchange membranes and desalination \cite{Rubinstein} as well as physical chemistry \cite{bazant2004diffuse}. \subsection{Free Energy Identity} We now show that the system described in the previous section possesses a natural free energy. \begin{theorem}\label{main} Suppose $c_k,\mb{u},p$ and $\phi$ are smooth functions that satisfy \eqref{ckeq}, \eqref{stokesE}, and \eqref{poisson} in $\Omega_i$ and $\Omega_e$ and satisfy boundary conditions \eqref{ckbc}, \eqref{forcebalanceE}, \eqref{cont} and \eqref{cap} on the membrane $\Gamma$. Suppose further that $c_k$ and $\phi$ satisfy no-flux boundary conditions and $\mb{u}=0$ on the outer boundary $\Gamma_\text{out}$. Then, $c_k,\mb{u},p$ and $\phi$ satisfy the following free energy identity. \begin{equation} \begin{split} \D{}{t}(G_S+E_\text{elas}+E_\text{elec})&=-I_p-J_p-J_a\\ E_\text{elec}&=\int_{\Omega_e\cup \Omega_e}\frac{1}{2}\epsilon \abs{\nabla{\phi}}^2d\mb{x} +\int_{\Gamma}\frac{1}{2}C_m[\phi]^2dm_\Gamma \end{split}\label{FEE} \end{equation} Here, $G_S, E_\text{elas}, I_p, J_p, I_a$ are the same as in \eqref{FE}. The same identity holds if we require $Q\equiv 1$ and adopt \eqref{forcebalancep} instead of \eqref{forcebalanceE}. If $a_k\equiv 0$, the free energy is monotone decreasing. \end{theorem} In addition to the terms present in \eqref{FEE}, we now have an electrostatic term in the energy. Before proving Theorem \ref{main}, we collect some calculus results. We first introduce some notation. Take a point $\mb{x}=\mb{x}_0\in \Gamma$ at $t=t_0$. Let $\mb{X}^\mb{n}(t;x_0,t_0)$ be the space-time curve that goes through $\mb{x}=\mb{x}_0$ at time $t=t_0$ and is orthogonal to $\Gamma$ at each time instant. Equivalently, $\mb{X}^\mb{n}(t;\mb{x}_0,t_0)$ is the solution to the following ordinary differential equation: \begin{equation} \D{}{t}\mb{X}^\mb{n}(t;\mb{x}_0,t_0) =v_\Gamma(\mb{X}^\mb{n},t)\mb{n}(\mb{X}^\mb{n},t), \; \mb{X}^\mb{n}(t_0;,\mb{x}_0,t_0)=\mb{x}_0. \end{equation} Here, $\mb{n}(\mb{x},t)$ is the unit normal at the point $\mb{x}$ at time $t$ pointing from $\Omega_i$ into $\Omega_e$, and $v_\Gamma(\mb{x},t)\mb{n}(\mb{x},t)$ is the normal velocity of $\Gamma$ at that point. Consider a quantity $w(\mb{x},t)$ defined on the evolving surface $\Gamma$. Define: \begin{equation} (D^\mb{n}_t w)(\mb{x}_0,t_0)= \at{\D{}{t}w(\mb{X}^\mb{n}(t;\mb{x}_0,t_0),t)}{\mb{x}=\mb{x}_0,t=t_0}. \label{Dnt} \end{equation} The above expression is an analogue of the convective derivative on the surface $\Gamma$. We shall make use of the following well-known identity: \begin{equation} \D{}{t}\int_\Gamma wdm_\Gamma= \int_\Gamma \paren{D^\mb{n}_t w+\kappa_\Gamma w v_\Gamma}dm_\Gamma \label{Dtw} \end{equation} where $\kappa_\Gamma$ is the sum of the principal curvatures of $\Gamma$. We now state two calculus identities that we shall find useful in the proof of Theorem \ref{main}. \begin{lemma}\label{divongamma} Let $w(\mb{x},t)$ be a smooth function on $\Gamma_t$. We have: \begin{equation} \int_\Gamma \paren{wQ^{-1}\PD{Q}{t}}dm_\Gamma= \int_\Gamma \paren{\kappa_\Gamma w \mb{n}-(\nabla_\Gamma w)} \cdot\PD{\mb{X}}{t}dm_\Gamma.\label{Qt} \end{equation} where $Q$ is the Jacobian determinant of $\Gamma_t$ with respect to the reference configuration $\Gamma_\text{ref}$. \end{lemma} \begin{proof} Note that \begin{equation} \PD{w}{t}=D_t^\mb{n}w+(\nabla_\Gamma w)\cdot\PD{\mb{X}}{t}\label{wt} \end{equation} where the partial derivatives in $t$ is along material trajectories (constant $\bm{\theta}$). The validity of the above identity should be clear by considering the geometric relation between the orthogonal trajectory $\mb{X}^\mb{n}$ and the material trajectory $\mb{X}$. We also have the following relation for the time derivative of the integral of $w$ over $\Gamma$. \begin{equation} \begin{split} &\D{}{t}\int_\Gamma w dm_\Gamma=\D{}{t}\int_{\Gamma_{\text{ref}}} wQdm_{\Gamma_\text{ref}} =\int_{\Gamma_{\text{ref}}}\paren{\PD{w}{t}Q+w\PD{Q}{t}}dm_{\Gamma_\text{ref}}\\ =&\int_\Gamma\paren{\PD{w}{t}+wQ^{-1}\PD{Q}{t}}dm_\Gamma \end{split} \end{equation} Comparing this with \eqref{Dtw} (with $v_\Gamma=\PD{\mb{X}}{t}\cdot\mb{n}$) and using the identity \eqref{wt}, we obtain the desired result. \end{proof} \begin{lemma}\label{calcidentity} Suppose $w(\mb{x},t), \mb{x}\in (\Omega_i\cup \Gamma)$ is a smooth function defined in $\Omega_i$ whose derivatives are continuous up to the boundary $\Gamma$. Then, we have the following identity: \begin{equation} \begin{split} &\int_{\Gamma_i}\paren{w\PD{}{\mb{n}}\paren{\PD{w}{t}}+(w\Delta w) v_\Gamma} dm_\Gamma\\ =&\int_{\Gamma_i}\paren{w D_t^\mb{n}\paren{\PD{w}{\mb{n}}}+ \paren{\kappa_\Gamma w\PD{w}{\mb{n}}-\abs{\nabla_\Gamma w}^2} v_\Gamma}dm_\Gamma \label{identity} \end{split} \end{equation} where $\int_{\Gamma_i}$ denotes integration over the $\Omega_i$ face of $\Gamma$. A similar identity holds for functions defined in $\Omega_e\cup \Gamma$. \end{lemma} As we shall see, we only need $w$ to be defined in the vicinity of $\Gamma$ for the above to be true. \begin{proof} We only treat the $\Gamma_i$ case. The proof for $\Gamma_e$ is exactly the same. We decompose the integrand in the left hand side of \eqref{identity} into tangential and normal contributions. It is well-known that the Laplacian can be written as: \begin{equation} \Delta w=\PDD{2}{w}{\mb{n}}+\kappa_\Gamma \PD{w}{\mb{n}}+\Delta_\Gamma w \label{lapphi} \end{equation} where $\Delta_\Gamma$ is the Laplace-Beltrami operator of the surface $\Gamma$. We now rewrite $\PD{}{t}\paren{\PD{w}{\mb{n}}}$ in \eqref{identity} in an analogous fashion. For this, we first introduce the signed distance function $\psi(\mb{x},t)$ in a neighborhood of $\Gamma$: \begin{equation} \psi(\mb{x},t)= \begin{cases} \text{dist}(\mb{x},\Gamma_t) &\text{ if } \mb{x}\in \Omega_e,\\ 0 &\text{ if } \mb{x}\in \Gamma_t,\\ -\text{dist}(\mb{x},\Gamma_t) &\text{ if } \mb{x}\in \Omega_i, \end{cases} \end{equation} where $\text{dist}(\mb{x},\Gamma_t)$ is the distance between $\mb{x}$ and $\Gamma_t$. Clearly, $\nabla\psi$ evaluated at any point on $\Gamma$ gives the outward unit normal vector $\mb{n}$. Introduce the following vector field $\mb{v}$ defined in a neighborhood of $\Gamma$ where $\psi$ is smooth: \begin{equation} \mb{v}=v_\Gamma\mb{n} \text{ on } \Gamma,\; (\nabla \mb{v})\nabla \psi=0. \label{defv} \end{equation} The second condition above just says that $\mb{v}$ is constant along lines perpendicular to the level sets. It is well known that the signed distance function satisfies the following transport equation in a neighborhood of $\Gamma$: \begin{equation} D_v\psi\equiv \PD{\psi}{t}+\mb{v}\cdot\nabla \psi=0.\label{vtransport} \end{equation} Note that the above convective derivative evaluated on $\Gamma$ is equal to $D^\mb{n}_t$ defined in \eqref{Dnt}. For any point on $\Gamma$: \begin{equation} \PD{}{\mb{n}}\paren{\PD{w}{t}}=\nabla \psi\cdot \nabla w_t \end{equation} where the subscript $t$ indicates the partial derivative with respect to $t$. We now rewrite this expression as follows: \begin{equation} \begin{split} \nabla \psi\cdot \nabla w_t &=D_v(\nabla \psi\cdot \nabla w) -\nabla \psi_t\cdot \nabla w -\mb{v}\cdot \nabla (\nabla \psi\cdot \nabla w)\\ &=D_v(\nabla \psi\cdot \nabla w) +\nabla (\mb{v}\cdot \nabla \psi)\cdot \nabla w -\mb{v}\cdot \nabla (\nabla \psi\cdot \nabla w) \end{split}\label{psiphit} \end{equation} where we used \eqref{vtransport} in the last equality. Now, consider the second term in the last line: \begin{equation} \begin{split} \nabla (\mb{v}\cdot \nabla \psi)\cdot \nabla w =&\nabla_\Gamma(\mb{v}\cdot \nabla \psi)\cdot \nabla_\Gamma w +(\nabla \psi\cdot \nabla (\mb{v}\cdot \nabla \psi)) (\nabla \psi\cdot \nabla w)\\ =&\nabla_\Gamma(\mb{v}\cdot \nabla \psi)\cdot \nabla_\Gamma w +(\nabla \psi\cdot((\nabla \mb{v}) \nabla \psi)) (\nabla \psi\cdot \nabla w)\\ &+(\mb{v}\cdot((\nabla^2 \psi) \nabla \psi)) (\nabla \psi\cdot \nabla w) \end{split}\label{vpsiphi1} \end{equation} where $\nabla_\Gamma$ is the surface gradient on $\Gamma$. Note that $\nabla^2\psi$ is {\em not} the Laplacian but the matrix of second derivatives of $\psi$. The second to last term in \eqref{vpsiphi1} is $0$ by \eqref{defv}. The last term is also $0$, since: \begin{equation} (\nabla^2 \psi) \nabla \psi =\frac{1}{2}\nabla(\abs{\nabla \psi}^2)=0\label{d2psi0} \end{equation} where we used $\abs{\nabla\psi}^2=1$. Thus \eqref{vpsiphi1} reduces to \begin{equation} \nabla (\mb{v}\cdot \nabla \psi)\cdot \nabla w =\nabla_\Gamma(\mb{v}\cdot \nabla \psi)\cdot \nabla_\Gamma w \label{vpsiphi11}. \end{equation} Let us look at the final term in \eqref{psiphit}. \begin{equation} \mb{v}\cdot \nabla(\nabla \psi\cdot \nabla w) =(\mb{v}\cdot \nabla \psi)(\nabla \psi\cdot(\nabla \psi \cdot \nabla w)) =(\mb{v}\cdot \nabla \psi)(\nabla \psi\cdot((\nabla^2w)\nabla \psi)) \label{vpsiphi2} \end{equation} where we used \eqref{d2psi0} in the last equality. Combining \eqref{vpsiphi11}, \eqref{vpsiphi2} and \eqref{psiphit}, we have: \begin{equation} \nabla \psi\cdot \nabla w_t =D_v(\nabla \psi\cdot \nabla w) +\nabla_\Gamma(\mb{v}\cdot \nabla \psi)\cdot \nabla_\Gamma w -(\mb{v}\cdot \nabla \psi)(\nabla \psi\cdot((\nabla^2w)\nabla \psi)) \end{equation} or equivalently: \begin{equation} \PD{}{\mb{n}}\paren{\PD{w}{t}} =D_t^\mb{n}\paren{\PD{w}{\mb{n}}} +\nabla_\Gamma v_\Gamma \cdot \nabla_\Gamma w -v_\Gamma\paren{\PDD{2}{w}{\mb{n}}} \label{dtdphidn} \end{equation} where we used \eqref{defv}, $\mb{n}=\nabla \psi$ on $\Gamma$ and the equality of $D_t^\mb{n}$ and $D_v$ on $\Gamma$. Now, consider the integral: \begin{equation} \begin{split} &\int_{\Gamma_i}\paren{w\PD{}{\mb{n}}\paren{\PD{w}{t}} +(w\Delta w)v_\Gamma}dm_\Gamma\\ =&\int_{\Gamma_i}\paren{w D_t^\mb{n}\paren{\PD{w}{\mb{n}}} +w\nabla_\Gamma v_\Gamma\cdot \nabla_\Gamma w +w\paren{\Delta_\Gamma w+\kappa_\Gamma \PD{w}{\mb{n}}}v_\Gamma} dm_\Gamma\\ =&\int_{\Gamma_i}\paren{w D_t^\mb{n}\paren{\PD{w}{\mb{n}}} +\paren{\kappa_\Gamma w\PD{w}{\mb{n}} -\abs{\nabla_\Gamma w}^2}v_\Gamma}dm_\Gamma \end{split} \end{equation} where we used \eqref{lapphi} and \eqref{dtdphidn} in the first equality and integrated by parts along $\Gamma$ in the second equality. Note that there are no boundary terms since $\Gamma$ is a closed compact surface. This proves \eqref{identity}. \end{proof} We are now ready to prove Theorem \ref{main}. \begin{proof}[Proof of Theorem \ref{main}] First, multiply \eqref{ckeq} with $\mu_k$ in \eqref{muk} and integrate over $\Omega_i$ and sum in $k$: \begin{equation} \sum_{k=1}^N\int_{\Omega_i}\mu_k\paren{\PD{c_k}{t}+\nabla \cdot(\mb{u}c_k)}d\mb{x} =\sum_{k=1}^N\int_{\Omega_i}\mu_k \nabla \cdot \paren{c_k\frac{D_k}{k_BT}\nabla \mu_k}d\mb{x}.\label{mkcke} \end{equation} The summand in the right hand side becomes: \begin{equation} \begin{split} \int_{\Omega_i}\mu_k \nabla \cdot \paren{c_k\frac{D_k}{k_BT}\nabla \mu_k}d\mb{x} &=\int_{\Gamma_i}\paren{\mu_k {c_k\frac{D_k}{k_BT}\nabla \mu_k}\cdot \mb{n}}dm_\Gamma\\ &-\int_{\Omega_i}\paren{c_k\frac{D_k}{k_BT}\abs{\nabla \mu_k}^2}d\mb{x} \end{split} \end{equation} where $\mb{n}$ is the outward normal on $\Gamma$. Consider the left hand side of \eqref{mkcke}. \begin{equation} \begin{split} &\sum_{k=1}^N\mu_k\PD{c_k}{t}=\sum_{k=1}^N\paren{\sigma_k\PD{c_k}{t}+qz_k\phi\PD{c_k}{t}}\\ =&\sum_{k=1}^N\PD{\omega}{c_k}\PD{c_k}{t}+\phi\PD{}{t}\paren{\sum_{k=1}^N qz_kc_k} =\PD{\omega}{t}-\phi\PD{}{t}\paren{\nabla \cdot\paren{\epsilon\nabla \phi}} \end{split}\label{mkcksume} \end{equation} We used \eqref{muk} in the first equality and \eqref{poisson} in the last equality. Integrate final expression in \eqref{mkcksume} over $\Omega_i$. \begin{equation} \begin{split} &\int_{\Omega_i}\paren{\PD{\omega}{t}-\phi\nabla\cdot \paren{\epsilon\nabla\paren{\PD{\phi}{t}}}} d\mb{x}\\ =&\int_{\Gamma_i}\paren{-\phi\PD{}{\mb{n}}\paren{\epsilon\PD{\phi}{t}}}dm_\Gamma +\int_{\Omega_i} \PD{}{t}\paren{\omega+\frac{\epsilon}{2}\abs{\nabla \phi}^2}d\mb{x}. \end{split} \end{equation} For the second term in the left hand side of \eqref{mkcke}, we have, similarly to \eqref{mkcksume}: \begin{equation} \sum_{k=1}^N\mu_k\mb{u}\cdot \nabla c_k=\nabla \cdot (\mb{u}\omega) +\phi\cdot \nabla\paren{\mb{u} \sum_{k=1}^Nqz_kc_k}. \end{equation} Integrate the above expression over $\Omega_i$: \begin{equation} \begin{split} &\int_{\Omega_i}\paren{\nabla \cdot (\mb{u}\omega) +\phi\cdot \nabla\paren{\mb{u}\sum_{k=1}^Nqz_kc_k}}d\mb{x}\\ =&\int_{\Gamma_i}\paren{\omega+\phi\sum_{k=1}^Nqz_kc_k} \mb{u}\cdot\mb{n}dm_\Gamma -\int_{\Omega_i}\paren{\sum_{k=1}^Nqz_kc_k}\mb{u}\cdot \nabla \phi d\mb{x} \end{split} \end{equation} Collecting the above calculations, we have rewritten identity \eqref{mkcke} as: \begin{equation} \begin{split} &\int_{\Omega_i} \PD{}{t}\paren{\omega+\frac{\epsilon}{2}\abs{\nabla \phi}^2}d\mb{x} +\int_{\Gamma_i} \paren{-\phi\PD{}{\mb{n}}\paren{\epsilon\PD{\phi}{t}}} dm_\Gamma\\ =&-\int_{\Gamma_i} \paren{\paren{\omega-\sum_{k=1}^N c_k\sigma_k}\mb{u}\cdot\mb{n} +\sum_{k=1}^N \mu_k\paren{c_k\paren{\mb{u} -\frac{D_k}{k_BT}\nabla \mu_k}\cdot \mb{n}}} dm_\Gamma\\ +&\int_{\Omega_i}\paren{-\sum_{k=1}^Nc_k\frac{D_k}{k_BT}\abs{\nabla \mu_k}^2 +\paren{\sum_{k=1}^Nqz_kc_k}\mb{u}\cdot \nabla \phi}d\mb{x} \end{split} \end{equation} where we used $qz_k\phi=\mu_k-\sigma_k$ (Eq. \eqref{muk}) in the boundary integral after the equality. Performing a similar calculation on $\Omega_e$, and adding this to the above, we find: \begin{equation} \begin{split} &\D{}{t}\int_{\Omega_i\cup \Omega_e} \paren{\omega+\frac{\epsilon}{2}\abs{\nabla \phi}^2}d\mb{x} -\int_{\Gamma}\jump{\phi\PD{}{\mb{n}}\paren{\epsilon\PD{\phi}{t}}}dm_\Gamma\\ -&\int_{\Gamma}\paren{ \jump{\omega+\frac{\epsilon}{2}\abs{\nabla \phi}^2}\PD{\mb{X}}{t}\cdot\mb{n} }dm_\Gamma\\ =&-\int_{\Gamma}\paren{\jump{\pi_w}\mb{u}\cdot\mb{n} +\sum_{k=1}^N\paren{[\mu_k](j_k+a_k)+[c_k\mu_k]\PD{\mb{X}}{t}\cdot\mb{n}}} dm_\Gamma\\ +&\int_{\Omega_i}\paren{-\sum_{k=1}^Nc_k\frac{D_k}{k_BT}\abs{\nabla \mu_k}^2 +\paren{\sum_{k=1}^Nqz_kc_k}\mb{u}\cdot \nabla \phi}d\mb{x} \end{split} \end{equation} where we used \eqref{muw} and \eqref{ckbc} to rewrite the boundary integral after the equality. Note that the second boundary integral before the equality comes from the fact that the boundary $\Gamma$ is moving. Rearranging terms and using \eqref{cont}, we have: \begin{equation} \begin{split} &\D{}{t}\int_{\Omega_i\cup \Omega_e} \paren{\omega+\frac{\epsilon}{2}\abs{\nabla \phi}^2}d\mb{x}\\ =&\int_{\Gamma}\paren{ \jump{\phi\PD{}{\mb{n}}\paren{\epsilon\PD{\phi}{t}}} +\jump{\frac{\epsilon}{2}\abs{\nabla \phi}^2 +\phi\nabla\cdot(\epsilon\nabla \phi) }\PD{\mb{X}}{t}\cdot \mb{n}}dm_\Gamma\\ -&\int_{\Gamma}\paren{[\pi_w]j_w +\sum_{k=1}^N[\mu_k](j_k+a_k)}dm_\Gamma\\ +&\int_{\Omega_i\cup\Omega_e}\paren{-\sum_{k=1}^Nc_k\frac{D_k}{k_BT}\abs{\nabla \mu_k}^2 +\paren{\sum_{k=1}^Nqz_kc_k}\mb{u}\cdot \nabla \phi}d\mb{x}. \end{split}\label{main1} \end{equation} We used $\mu_k-\sigma_k=qz_k\phi$ and used \eqref{poisson} to rewrite the first boundary integral after the equality. Note that: \begin{equation} \begin{split} &\int_{\Gamma} \jump{\phi\PD{}{\mb{n}}\paren{\epsilon\PD{\phi}{t}} +\phi\nabla\cdot(\epsilon\nabla \phi) \PD{\mb{X}}{t}\cdot\mb{n}}dm_\Gamma\\ =&\int_{\Gamma} \jump{\phi D_t^\mb{n}\paren{\epsilon \PD{\phi}{\mb{n}}} +\paren{\kappa_\Gamma \phi\epsilon\PD{\phi}{\mb{n}} -\epsilon \abs{\nabla_\Gamma \phi}^2} \PD{\mb{X}}{t}\cdot\mb{n}}dm_\Gamma \end{split} \end{equation} where we used Lemma \ref{calcidentity} with $w=\phi$ and $v_\Gamma=\PD{\mb{X}}{t}\cdot \mb{n}$ in \eqref{identity}. Using this and the definition of $\nabla_\Gamma$, we may rewrite the first boundary integral in \eqref{main1} as: \begin{equation} \begin{split} &\int_{\Gamma}\paren{ \jump{\phi\PD{}{\mb{n}}\paren{\epsilon\PD{\phi}{t}}} +\jump{\frac{\epsilon}{2}\abs{\nabla \phi}^2 +\phi\nabla\cdot(\epsilon\nabla \phi) }\PD{\mb{X}}{t}\cdot\mb{n}}dm_\Gamma\\ =&-\int_{\Gamma}[\phi]D_t^\mb{n}(C_m[\phi])dm_\Gamma\\ &+\int_{\Gamma}\paren{-\kappa_\Gamma C_m[\phi]^2 +\jump{\frac{\epsilon}{2}\paren{\abs{\PD{\mb{\phi}}{\mb{n}}}^2 -\abs{\nabla_\Gamma \phi}^2}}}\PD{\mb{X}}{t}\cdot\mb{n} dm_\Gamma\label{phibterms} \end{split} \end{equation} where we used \eqref{cap}. We now turn to equation \eqref{stokesE}. Multiply this by $\mb{u}$ and integrate over $\Omega_i$: \begin{equation} \begin{split} &\int_{\Omega_i} \mb{u}\cdot(\nu \Delta \mb{u} -\nabla p)d\mb{x} -\int_{\Omega_i}\paren{\sum_{k=1}^Nqz_kc_k}\mb{u}\cdot \nabla \phi d\mb{x}\\ =&\int_{\Gamma_i}\paren{\Sigma(\mb{u},p)\mb{n}}\cdot\mb{u}dm_\Gamma- \int_{\Omega_i}\mb{\nu}\abs{\nabla \mb{u}}^2d\mb{x}\\ &-\int_{\Omega_i} \paren{\sum_{k=1}^Nqz_kc_k}\mb{u}\cdot \nabla \phi d\mb{x}=0 \end{split} \end{equation} Performing a similar calculation on $\Omega_e$ and by summation, we have: \begin{equation}\label{sigupgamma} \int_{\Gamma}\jump{\paren{\Sigma(\mb{u},p)\mb{n}}\cdot\mb{u}}dm_\Gamma- \int_{\Omega_i\cup \Omega_e}\mb{\nu}\abs{\nabla \mb{u}}^2d\mb{x} =\int_{\Omega_i\cup \Omega_e} \paren{\sum_{k=1}^Nqz_kc_k}\mb{u}\cdot \nabla \phi d\mb{x} \end{equation} Let us first assume \eqref{forcebalanceE} holds. First write $\Sigma_e(\phi)\mb{n}$ in the following form: \begin{equation} \begin{split} \Sigma_e(\phi)\mb{n}&=\epsilon\paren{\PD{\phi}{\mb{n}}\nabla \phi-\frac{1}{2}\abs{\nabla \phi}^2\mb{n}}\\ &=\epsilon\paren{ \frac{1}{2}\paren{\abs{\PD{\phi}{\mb{n}}}^2-\abs{\nabla_\Gamma\phi}^2}\mb{n} +\PD{\phi}{\mb{n}}\nabla_\Gamma\phi} \end{split} \end{equation} We may now write \eqref{forcebalanceE} as: \begin{equation} [\Sigma_m(\mb{u},p)\mb{n}]=\mb{F}_\text{elas}+\mb{F}_\text{cap}- \jump{\frac{\epsilon}{2}\paren{\abs{\PD{\phi}{\mb{n}}}^2 -\abs{\nabla_\Gamma\phi}^2}}\mb{n} +C_m[\phi]\nabla_\Gamma [\phi]\label{bup} \end{equation} where we used \eqref{cap} in the last term. Using \eqref{phibterms}, \eqref{sigupgamma} and \eqref{bup} in \eqref{main1} we have: \begin{equation} \begin{split} &\D{}{t}\int_{\Omega_i\cup \Omega_e} \paren{\omega+\frac{\epsilon}{2}\abs{\nabla \phi}^2}d\mb{x}\\ =&\int_{\Gamma}\paren{-[\phi]D_t^\mb{n}(C_m[\phi]) +\paren{-\kappa_\Gamma C_m[\phi]^2\mb{n}+C_m[\phi]\nabla_\Gamma[\phi]+\mb{F}_\text{cap}} \cdot \PD{\mb{X}}{t}}dm_\Gamma\\ +&\int_\Gamma \mb{F}_\text{elas}\cdot\PD{\mb{X}}{t}dm_\Gamma -\int_{\Gamma}\paren{[\widehat{\psi_w}]j_w +\sum_{k=1}^N[\mu_k](j_k+a_k)}dm_\Gamma\\ -&\int_{\Omega_i\cup\Omega_e}\paren{\sum_{k=1}^Nc_k\frac{D_k}{k_BT}\abs{\nabla \mu_k}^2 +\mb{\nu}\abs{\nabla \mb{u}}^2}d\mb{x}. \end{split}\label{main2} \end{equation} Comparing \eqref{main2}, \eqref{FEE} and using \eqref{dtelas}, we see that the proof of \eqref{FEE} rests on the evaluation of the first boundary integral after the equality in \eqref{main2}. \begin{equation} \begin{split} &\int_{\Gamma}[\phi]D_t^\mb{n}(C_m[\phi])dm_\Gamma =\int_{\Gamma} \paren{D_t^\mb{n}\paren{\frac{1}{2}C_m[\phi]^2} +\frac{1}{2}(D_t^\mb{n}C_m)[\phi]^2}dm_\Gamma\\ =&\D{}{t}\int_\Gamma \frac{1}{2}C_m[\phi]^2dm_\Gamma -\int_\Gamma\paren{ \paren{\frac{1}{2}C_m[\phi]^2}\kappa_\Gamma \PD{\mb{X}}{t}\cdot\mb{n} -\frac{1}{2}(D_t^\mb{n}C_m)[\phi]^2}dm_\Gamma\label{phiDtnphi} \end{split} \end{equation} where we used \eqref{Dtw} with $w=[\phi]$, $v_\Gamma=\PD{\mb{X}}{t}\cdot \mb{n}$ in the second equality. We also have: \begin{equation} \int_{\Gamma}(C_m[\phi]\nabla_\Gamma [\phi])dm_\Gamma =\int_{\Gamma}\paren{\nabla_\Gamma \paren{\frac{1}{2}C_m[\phi]^2}- \frac{1}{2}(\nabla_\Gamma C_m)[\phi]^2}dm_\Gamma\label{phiCmphi} \end{equation} Using \eqref{phiDtnphi} and \eqref{phiCmphi}, we have: \begin{equation} \begin{split} &\int_{\Gamma}\paren{-[\phi]D_t^\mb{n}(C_m[\phi]) +\paren{-\kappa_\Gamma C_m[\phi]^2\mb{n}+C_m[\phi]\nabla_\Gamma[\phi]}\cdot \PD{\mb{X}}{t}}dm_\Gamma\\ =&-\D{}{t}\int_\Gamma \frac{1}{2}C_m[\phi]^2dm_\Gamma -\int_\Gamma \frac{1}{2} \paren{D_t^\mb{n}C_m+(\nabla_\Gamma C_m)\cdot\PD{\mb{X}}{t}}[\phi]^2 dm_\Gamma \\ &+\int_\Gamma\paren{ -\paren{\frac{1}{2}C_m[\phi]^2}\kappa_\Gamma\mb{n}+ \nabla_\Gamma \paren{\frac{1}{2}C_m[\phi]^2}} \cdot \PD{\mb{X}}{t}dm_\Gamma\\ \end{split}\label{main3} \end{equation} Consider the second boundary integral after the equality. \begin{equation} \begin{split} &\int_\Gamma \frac{1}{2} \paren{D_t^\mb{n}C_m+(\nabla_\Gamma C_m)\cdot\PD{\mb{X}}{t}}[\phi]^2 dm_\Gamma\\ =&\int_\Gamma \paren{\frac{1}{2}\PD{C_m}{t}[\phi]^2}dm_\Gamma =\int_\Gamma \paren{\frac{1}{2}\PD{C_m}{Q}\PD{Q}{t}[\phi]^2}dm_\Gamma\\ =&\int_\Gamma \paren{\frac{1}{2}Q\PD{C_m}{Q}[\phi]^2\kappa_\Gamma\mb{n} -\nabla_\Gamma\paren{\frac{1}{2}Q\PD{C_m}{Q}[\phi]^2}}\cdot\PD{\mb{X}}{t} dm_\Gamma \end{split}\label{Cmtterm} \end{equation} where we used \eqref{wt} with $w=C_m$ in the first equality, and \eqref{Qt} with $w=\frac{1}{2}Q\PD{C_m}{Q}[\phi]^2$ in the last equality. From \eqref{Cmtterm}, \eqref{main3}, \eqref{main2} and expression \eqref{FcapCmQ} of $\mb{F}_\text{cap}$, we obtain the desired result. If $Q\equiv 1$ and \eqref{forcebalancep} holds, we may argue as follows. Equation \eqref{main2} remains valid with $\mb{F}_\text{cap}$ replaced by $\mb{F}_\text{p}$. Verification of \eqref{FEE} rests on the evaluation of the first boundary integral in \eqref{main2}. Proceeding as in the above, we have: \begin{equation} \begin{split} &\int_{\Gamma}\paren{-[\phi]D_t^\mb{n}(C_m[\phi]) +\paren{-\kappa_\Gamma C_m[\phi]^2\mb{n}+C_m[\phi]\nabla_\Gamma[\phi]+\mb{F}_p}\cdot \PD{\mb{X}}{t}}dm_\Gamma\\ =&-\D{}{t}\int_\Gamma \frac{1}{2}C_m[\phi]^2dm_\Gamma -\int_\Gamma \paren{\frac{1}{2}\PD{C_m}{Q}\PD{Q}{t}[\phi]^2}dm_\Gamma\\ &+\int_\Gamma\paren{ \paren{\lambda-\frac{1}{2}C_m[\phi]^2}\kappa_\Gamma\mb{n}- \nabla_\Gamma \paren{\lambda-\frac{1}{2}C_m[\phi]^2}} \cdot \PD{\mb{X}}{t}dm_\Gamma \end{split} \end{equation} where we used the \eqref{Fp}. Since $\PD{Q}{t}=0$, the second boundary integral after the equality is $0$. Using \eqref{Qt} with $w=\lambda-\frac{1}{2}C_m[\phi]^2$ and $\PD{Q}{t}=0$, we see that the last boundary integral is also $0$. In the absence of active currents $a_k$, the free energy is decreasing since $j_k$ and $j_w$ satisfying conditions \eqref{jkcond} and \eqref{jwcond} respectively. \end{proof} \section{Limiting Systems}\label{simple} We now discuss some limiting cases of the model we introduced in the previous section. For this purpose, we shall first make the equations dimensionless. In what follows, the primed symbols denote dimensionless variables. We introduce the following non-dimensionalization of space and time. \begin{equation} \mb{x}=L\mb{x}',\; \mb{X}=L\mb{X}',\; t=T_Dt',\; T_D=\frac{L^2}{D_0},\; D_k=D_0D_k', \end{equation} where $L$ is the characteristic length scale (for example the size of the domain $\Omega_i$) and $D_0$ is the characteristic diffusion coefficients of ions. We thus measure time with respect to the diffusive time scale of ions. For concentrations and the electrostatic potential, we let: \begin{equation} c_k=c_0c_k',\; \phi=\frac{k_BT}{q}\phi'. \end{equation} For pressure and the membrane elastic force, we let: \begin{equation} p=c_0k_BTp',\; \mb{F}_\text{elas}=c_0k_BT\mb{F}_\text{elas}'. \end{equation} For the characteristic fluid and membrane velocity, we turn to relation \eqref{cont}. Let $\zeta$ be the characteristic hydraulic permeability of the membrane, which we may take as follows: \begin{equation} \zeta=\at{\PD{j_w}{[\widehat{\psi_w}]}} {[\widehat{\psi_w}]=0}. \end{equation} Then, $\zeta c_0k_BT$ is the characteristic velocity generated by an osmotic gradient across the membrane. We thus let: \begin{equation} \mb{u}=\zeta c_0k_BT \mb{u}'. \end{equation} With the above dimensionless variables, we may rewrite our system as follows. For simplicity, we shall adopt expression \eqref{ent} as our definition of the entropic part of the free energy $\omega$, so that: \begin{equation} \mu_k'=z_k\phi'+\ln c_k'. \end{equation} In $\Omega_i$ and $\Omega_e$, we have: \begin{subequations}\label{dlessfull} \begin{align} \PD{c_k'}{t'}+\text{Pe}\nabla'\cdot (\mb{u}'c_k')&=-\nabla'\cdot \mb{f}_k',\; \mb{f}_k'=-D_k'(\nabla'c_k'+z_kc_k'\nabla'\phi'),\label{dlessck}\\ -\nabla'\cdot(\beta^2\nabla'\phi')&=\sum_{k=1}^N z_kc_k',\label{dlesspoisson}\\ \gamma\Delta' \mb{u}'&=\nabla'p'+\paren{\sum_{k=1}^N z_kc_k'}\nabla \phi', \; \nabla'\cdot \mb{u}'=0,\label{dlessstokes} \end{align} where $\nabla', \nabla'\cdot$ and $\Delta'$ are the gradient, divergence and Laplace operators in the $\mb{x}'$ coordinate and the dimensionless constants are: \begin{equation} \text{Pe}=\frac{\zeta c_0k_BT}{D/L},\; \beta=\frac{r_d}{L},\; r_d=\sqrt{\frac{\epsilon k_BT}{q^2c_0}},\; \gamma=\frac{\nu \zeta}{L}. \label{Pebeta} \end{equation} In the above, $\text{Pe}$ is the P\'eclet number which, in this case, measures the ratio between the fluid velocity induced by osmotic gradients and the characteristic diffusive velocity. The constant $\beta$ measures the ratio between $r_d$, the Debye length and $L$. The constant $\gamma$ is the ratio between the viscosity of water and the hydraulic resistance of the membrane. The boundary conditions at the membrane interface $\Gamma$ become: \begin{align} \at{\paren{\paren{c_k'\paren{\text{Pe}\mb{u}'-\PD{\mb{X}'}{t'}}+\mb{f}'_k} \cdot \mb{n}}}{\Gamma_{i,e}}&=\alpha (j_k'+a_k'),\label{dlessckbc}\\ -\at{\paren{\beta\nabla'{\phi'}\cdot \mb{n}}}{\Gamma_{i,e}}&= \theta C_m'\jump{\phi'},\label{dlessbcpoisson}\\ \mb{u}'-\frac{1}{\text{Pe}}\PD{\mb{X}'}{t'}&=j_w'\mb{n},\label{dlesscont}\\ \jump{\paren{\Sigma'_m(\mb{u}',p')+\beta^2\Sigma_e'(\phi')}\mb{n}} &=\mb{F}'_\text{elas}+\beta\theta\mb{F}'_\text{cap}.\label{dlessforce} \end{align} \end{subequations} In equation \eqref{dlessckbc}, $\alpha$ is a dimensionless constant given by the ratio of the characteristic membrane permeability $p_m$ and diffusion in the bulk: \begin{equation}\label{alphapm} \alpha=\frac{p_m}{D/L}, \; p_m=\sum_{k=1}^N\frac{k_BT}{c_0}\at{\PD{j_k}{\jump{\mu_k}}} {\jump{\mu_k}=0}. \end{equation} The currents $j_k$ and $a_k$ are scaled so that $j_k=p_mc_0j_k'$ and $a_k=p_mc_0a_k'$. In \eqref{dlessbcpoisson}: \begin{equation} C_m=C_m^0C_m', \; \theta=\frac{C_m^0 k_BT/q}{qc_0r_d}, \end{equation} where $C_m^0$ is the characteristic magnitude of the membrane capacitance per unit area (see \eqref{Cm0} or \eqref{Cmscaling}). The dimensionless constant $\theta$ is the ratio between the membrane charge and the total amount of charge in a layer of thickness on the order of the Debye length. In \eqref{dlesscont}, $j_w=\zeta c_0k_BT j_w'$. The variables in \eqref{dlessforce}, are defined by: \begin{align} \Sigma'_m(\mb{u}',p')&=\gamma(\nabla' \mb{u}'+(\nabla' \mb{u}')^T)-p'I,\\ \Sigma'_e(\phi')&=\nabla' \phi'\otimes \nabla \phi' -\frac{1}{2}\abs{\nabla'\phi'}^2I,\\ \mb{F}_\text{cap}&= \tau_\text{cap}'\kappa_\Gamma'\mb{n}-\nabla_\Gamma'\tau_\text{cap}', \; \tau_\text{cap}' =\frac{1}{2}\paren{C_m'+Q\PD{C_m'}{Q}}\jump{\phi'}^2 \end{align} where $\kappa_\Gamma'(=\kappa_\Gamma L)$ is the sum of the two principal curvatures of $\Gamma$ measured in the $\mb{x}'$ spatial variable and $\nabla_\Gamma'$ is the surface gradient operator with respect to $\mb{x}'$. Equations \eqref{dlessck}-\eqref{dlessstokes} and the boundary conditions \eqref{dlessckbc}-\eqref{dlessforce} constitute our dimensionless system. In the rest of this section we shall drop the $\prime$ in the variables with the understanding that all variables, unless otherwise stated, are dimensionless. The dimensionless system above possesses five dimensionless constants $\alpha, \beta, \gamma, \theta$ and $\text{Pe}$. We consider two limiting cases of the above system. First of all, consider the case when $\text{Pe}\ll 1$. Assuming all primed quantities are $\mathcal{O}(1)$ with respect to $\text{Pe}$, we see, from \eqref{dlesscont} that \begin{equation} \PD{\mb{X}}{t}=\mathcal{O}(\text{Pe}). \end{equation} Therefore, the membrane does not move to leading order. If we collect all leading order terms, we see that the equations \eqref{dlessck} and \eqref{dlesspoisson} decouple from \eqref{dlessstokes}. We thus obtain the following Poisson-Nernst-Planck system with interface boundary conditions: \begin{subequations}\label{Pe0} \begin{align} \PD{c_k}{t}&=-\nabla\cdot \mb{f}_k \text{ in } \Omega_{i,e}\\ -\nabla\cdot(\beta^2\nabla\phi)&=\sum_{k=1}^N z_kc_k \text{ in } \Omega_{i,e}\\ \at{(\mb{f}_k\cdot \mb{n})}{\Gamma_{i,e}}&=\alpha (j_k+a_k),\\ -\at{\paren{\beta\nabla{\phi}\cdot \mb{n}}}{\Gamma_{i,e}}&= \theta C_m\jump{\phi}, \end{align} \end{subequations} where the membrane $\Gamma$ is fixed in time. This model was introduced in \cite{leonetti_biomembrane_1998} (see also \cite{schaff1997general,choi1999electrodiffusion} for related models). For single cell systems, the P\'eclet number is about $\text{Pe}\approx 10^{-1}$ to $10^{-4}$. The above may thus be a good approximation to the full system in the $T_D$ time scale. In the context of multicellular systems, however, $L$ may be large and $\text{Pe}$ can reach unity, as can be seen from expression \eqref{Pebeta} of $\text{Pe}$. It should be pointed out that there are situations in which the representative fluid velocity is not dictated by the osmotic pressure, in which case one should adopt a different definition for the P\'eclet number. For example, if we are interested in blood cells in a flow environment, the ambient hemodynamic flow velocity should be taken as the representative velocity. We note that \eqref{Pe0} also satisfies a free energy equality. \begin{proposition} Suppose $c_k$ and $\phi$ are smooth functions that satisfy \eqref{Pe0}. Then, the following equality holds: \begin{equation} \D{}{t}\paren{G_S+E_\text{elec}}=-F_c-J_a.\label{PeFE} \end{equation} In the above, $G_S, E_\text{elec}, F_c$ and $J_a$ are dimensionless versions of the corresponding quantities in \eqref{FE}, \eqref{Fwc} and \eqref{FEE}. \end{proposition} \begin{proof} This follows from a simple calculation. \end{proof} In this sense, system \eqref{Pe0} may be seen as the system associated with the energy law \eqref{PeFE} where the mechanical energy and dissipation in \eqref{FEE} is discarded. We next consider the limit when $\beta\ll 1$. This limit is motivated by the fact that the Debye length $r_d$ is approximately $1$nm in typical physiological systems, far smaller than the typical length scale of interest. By formally letting $\beta \to 0$ in \eqref{dlessfull} we obtain the following system of equations: \begin{subequations}\label{beta0} \begin{align} \PD{c_k}{t}+\text{Pe}\mb{u}\cdot\nabla c_k&=-\nabla\cdot \mb{f}_k \text{ in } \Omega_{i,e} \label{dlessckbeta0}\\ \quad \sum_{k=1}^N z_kc_k&=0 \text{ in } \Omega_{i,e},\label{dlessEN}\\ \gamma\Delta \mb{u}-\nabla p&=0, \quad \nabla\cdot \mb{u}=0 \text{ in } \Omega_{i,e},\label{beta0stokes}\\ \alpha (j_k+a_k)&=\at{\paren{\paren{c_k\paren{\text{Pe}\mb{u}-\PD{\mb{X}}{t}}+\mb{f}_k} \cdot \mb{n}}}{\Gamma_{i,e}},\label{dlessbeta0ckbc}\\ \mb{u}-\frac{1}{\text{Pe}}\PD{\mb{X}}{t}&=j_w\mb{n},\quad \jump{\Sigma_m(\mb{u},p)\mb{n}}=\mb{F}_\text{elas} \text{ on } \Gamma. \label{beta0stokesbc} \end{align} \end{subequations} We have discarded all terms in \eqref{dlessfull} that involve $\beta$ and have eliminated the boundary condition \eqref{dlessbcpoisson}. The most important feature of the above system is that we have, in place of the Poisson equation \eqref{dlesspoisson}, the electroneutrality condition \eqref{dlessEN}. The electrostatic potential $\phi$ thus evolves so that the electroneutrality constraint \eqref{dlessEN} is satisfied at each time instant. Although $\phi$ is thus determined only implicitly through the electroneutrality condition, it is possible to obtain a PDE satisfied by $\phi$ by taking the derivative of \eqref{dlessEN} with respect to $t$ and using \eqref{dlessckbeta0}: \begin{equation}\label{phieqbeta0} \begin{split} 0&=\nabla \cdot(a\nabla \phi+\mb{b})\\ a&=\sum_{k=1}^N z_k^2D_kc_k, \; \mb{b}=\sum_{k=1}^N z_kD_k\nabla c_k. \end{split} \end{equation} We point out that the electroneutrality condition does {\em not} imply that $\Delta \phi=0$ as may be erroneously inferred from \eqref{dlesspoisson}. In fact, as $\beta\to 0$, $\Delta \phi$ may remain order $1$ with respect to $\beta$ while the right hand side of \eqref{dlesspoisson} will go to $0$ like $\beta^2$. This is a common fallacy in applications of the electroneutral limit. The boundary condition for this elliptic equation can be obtained by taking the sum in $k$ in boundary condition \eqref{dlessbeta0ckbc}: \begin{equation}\label{phieqbeta0bc} -\at{\paren{a\nabla \phi+\tilde{\mb{b}}}\cdot \mb{n}}{\Gamma_{i,e}} =\sum_{k=1}^N\alpha z_k(j_k+a_k). \end{equation} Suppose \begin{equation}\label{currdownhill} \PD{}{\jump{\phi}}\sum_{k=1}^Nz_kj_k>0. \end{equation} The above inequality states that current flowing out of the cell should increase if $\jump{\phi}$ increases, and is thus satisfied by biophysically reasonable expressions for $j_k$. This inequality is clearly satisfied if $j_k$ are of the form \eqref{HH} or \eqref{GHK} (see also \eqref{monotone}). Condition \eqref{currdownhill} is necessary for the boundary value problem \eqref{phieqbeta0} and \eqref{phieqbeta0bc} to be uniquely solvable (up to an arbitrary constant), assuming $a_k$ is a given function of $\mb{x}$ (and $t$). In connection to \eqref{phieqbeta0} and \eqref{phieqbeta0bc}, we perform the following calculation to illuminate the nature of system \eqref{beta0} as it relates to \eqref{dlessfull}. Suppose $\phi$ satisfies \eqref{dlessfull}. By taking the time derivative of \eqref{dlesspoisson} with respect to $t$, we obtain: \begin{equation}\label{phieq} \begin{split} \nabla \cdot 0&=\paren{\beta^2\nabla\PD{\phi}{t}+a\nabla \phi+\widetilde{\mb{b}}},\\ a&=\sum_{k=1}^N z_k^2D_kc_k, \; \widetilde{\mb{b}}= \sum_{k=1}^N \paren{-\text{Pe}z_kc_k\mb{u}+z_kD_k\nabla c_k}, \end{split} \end{equation} We used \eqref{dlessck} in deriving the above. At the boundary, we may use \eqref{dlessbcpoisson} and \eqref{dlessckbc} to find that: \begin{equation}\label{phieqbc} \begin{split} -&\at{\paren{\beta^2\nabla\PD{\phi}{t}+a\nabla \phi+\widetilde{\mb{b}}}\cdot \mb{n}} {\Gamma_{i,e}}\\ =&\beta\theta\PD{}{t}\paren{C_m\jump{\phi}} +\sum_{k=1}^N \paren{z_kc_k\PD{\mb{X}}{t}\cdot\mb{n}+\alpha z_k(j_k+a_k)}. \end{split} \end{equation} If we formally let $\beta\to 0$ in \eqref{phieq} and \eqref{phieqbc}, we obtain \eqref{phieqbeta0} and \eqref{phieqbeta0bc} respectively. For the above limit to be justified, we must require that $\PD{\phi}{t}$ and $\PD{[\phi]}{t}$ remain order $1$ with respect to $\beta$ as $\beta\to 0$. It is thus only when the evolution of $\phi$ and $[\phi]$ is sufficiently slow that we can reliably use system \eqref{beta0} as as an approximation to \eqref{dlessfull}. We see from \eqref{phieq} and \eqref{phieqbc} that there are two other time scales in the system besides the diffusive time scale $T_D$. The first is the Debye time scale, $\beta^2T_D$. This is the relaxation time scale of deviations from electroneutrality. This Debye time scale is too small to be of physiological interest, and we may safely ignore the $\beta^2\PD{\phi}{t}$ terms except for the very short initial layer that may exist depending on initial conditions. The other time scale, $\beta\theta T_D$, which we shall call the cable time scale, is the time scale over which the membrane potential $[\phi]$ can change. In excitable tissue, the ionic currents $j_k$ can change on a time scale comparable to $\beta\theta T_D$. The interaction of rapid changes in $j_k$ and the capacitive current $\beta\theta\PD{}{t}(C_m[\phi])$ term lead to cable effects, including the propagation of action potentials, which is essential in describing a wide range of electrophysiological behavior. Such phenomena are usually described by the cable model, in which the intracellular and extracellular media are treated as ohmic resistive media \cite{HH,KS}. Setting $\beta\theta\PD{}{t}(C_m[\phi])$ term to $0$ could thus be problematic for certain applications. It is thus of interest to develop a model in which the term $\beta^2\PD{\phi}{t}$ is ignored but $\beta\theta\PD{}{t}(C_m[\phi])$ is not, while retaining electrodiffusive and osmotic effects contained in the full model. Such a model (without osmotic effects) is proposed in \cite{Sinica} and was applied in \cite{mori_ephaptic_2008} to a problem in cardiology. A key ingredient in the derivation of such a model is an analysis of a boundary layer that forms at the membrane interface $\Gamma$. This boundary layer, in physical terms, correspond to charge accumulation on both sides of the membrane. We refer the reader to \cite{morithesis,mori2009three} for this model and its relationship to conventional cable models. An important feature of system \eqref{beta0} is that it satisfies the following energy equality. \begin{proposition} Suppose $c_k, \phi, \mb{u}$ and $p$ are smooth functions that satisy system \eqref{beta0}. Then, the following equality holds: \begin{equation} \D{}{t}\paren{G_S+E_\text{elas}}=-I_p-J_p-J_a.\label{ENFE} \end{equation} In the above, $G_S, E_\text{elas}, I_p, J_p$ and $J_a$ are dimensionless versions of corresponding quantities in \eqref{FE} and \eqref{FEE}. \end{proposition} \begin{proof} The proof follows from a calculation similar to the proof of Theorem \ref{mainc}. \end{proof} Thus, system \eqref{beta0} may be seen as the system associated with the energy principle \eqref{ENFE} in which the electrostatic energy in \eqref{FEE} is discarded. \section{Numerical Simulation of Animal Cell Volume Control}\label{animal} In this section, we take the problem of cell volume control to illustrate some aspects of the model we introduced above. Cells contain a large number of organic molecules that do not leak out through membrane. This results in excess intracellular osmotic pressure, which may cause the cell to burst. Cells have developed countermeasures to prevent this from happening. We shall use the electroneutral system \eqref{beta0} to study cell volume control. We continue to work with the dimensionless equations. To simplify matters, we suppose that the cell membrane $\Gamma$ and the outer boundary $\Gamma_{\text{out}}=\partial \Omega$ are concentric spheres for all time and that the velocity field $\mb{u}$ only has a radial component. Assuming the boundary condition $\mb{u}=\mb{0}$ on $\Gamma_\text{out}$ we immediately see that $\mb{u}=\mb{0}$ throughout $\Omega_i\cup \Omega_e$. We can thus drop equation \eqref{beta0stokes} and set $\mb{u}=\mb{0}$ wherever $\mb{u}$ appears in system \eqref{beta0}. Assuming further that $c_k$ and $\phi$ are functions only of the (dimensionless) radial coordinate $r$, we have: \begin{subequations}\label{radial} \begin{align} \PD{c_k}{t}& =-\frac{1}{r^2}\PD{}{r}\paren{r^2f_k}, \; f_k=-\paren{D_k\PD{c_k}{r}+z_kc_k\PD{\phi}{r}},\label{radialck}\\ \sum_{k=1}^{N}z_kc_k&=0, \end{align} for $0<r<R$ and $R<r<R_\text{out}$ where $R(t)$ is the radius of the membrane sphere $\Gamma$ and $R_\text{out}=\text{const}.$ is a the radius of the outer boundary sphere $\Gamma_\text{out}$. The boundary conditions are: \begin{align} f_k=&\begin{cases} 0 &\text{ at } r=0,\\ c_k\PD{R}{t}+\alpha(j_k+a_k) &\text{ at } r=R\pm,\label{radialckbc} \end{cases}\\ -\frac{1}{\text{Pe}}\PD{R}{t}&=j_w, \; \jump{p}=F_\text{elas} \text{ at } r=R,\label{radialflow} \end{align} \end{subequations} where $r=R\pm$ denote limiting values as $r$ approaches $R$ from above or below. Boundary conditions at $R=R_\text{out}$, will be specified later. The elastic force $F_\text{elas}$ can now be viewed as a scalar quantity since the force is only in the radial direction. We now develop a numerical algorithm to simulate system \eqref{radial} and apply this to animal cell volume control as a demonstrative example. We first discuss the numerical algorithm used to simulate system \eqref{radial}. Consider \eqref{radial} in the region $a<r<b$. First, suppose $b<R$ or $a>R$. Then, we have: \begin{equation} \D{}{t}\int_a^b r^2c_kdr= a^2f_k(a)-b^2f_k(b).\label{fkab} \end{equation} If we let $b=R(t)$ in the above, we must account for the fact that $R(t)$ is changing in time. Using \eqref{radialck} and \eqref{radialckbc}, we have: \begin{equation}\label{fkmem} \D{}{t}\int_a^{R(t)}r^2c_kdr=a^2f_k(a)-R^2(t)\alpha(j_k+a_k). \end{equation} A similar expression is true when $a=R(t)$. The above conservation relations will be the basis for our discretization. Let $\Delta t$ be the time step, and let $R^n$ be the position of the membrane at $t=n\Delta t$. We divide $0<r<R^n$ and $R^n<r<R_\text{out}$ into $N_v$ equal segments. Let \begin{equation} r_l^n= \begin{cases} \frac{kR^n}{N_v}, &\text{ if } 0\leq l\leq N_v\\ R^n+\frac{R_\text{out}-R^n}{N_v}, &\text{ if } N_v+1\leq l\leq 2N_v. \end{cases} \end{equation} The $l$-th segment is given by $r_{l-1}<r<r_l$. Of the $2N_v$ segments, segments $1\leq l\leq N_v$ are in the interior of the cell, whereas the the rest are in the exterior of the cell. In each segment, we have the concentrations $c_{k,l}^n$ and the electrostatic potential $\phi_l^n$. Suppose we are to advance from time $(n-1)\Delta t$ to $n\Delta t$. We use a splitting scheme. Each time step is divided into two substeps. In the first substep, we advance membrane position: \begin{equation} R^n=R^{n-1}-\text{Pe}j_{w}^{n-1}\Delta t. \end{equation} In evaluating $j_w$, we need the osmotic pressure as well as the elastic force $F_\text{elas}$, both of which are evaluated using at time $(n-1)\Delta t$. For concentrations of ions at the intracellular and extracellular sides of the membrane, we use $c_{k,N_v}^{n-1}$ and $c_{k,N_v+1}^{n-1}$ respectively. In the second substep, we update the concentrations and compute the electrostatic potential. We use one step of a backward Euler discretization. We first describe our discretization for the intracellular region. Define: \begin{equation}\label{ckinr} c_{k,i}^n(r)= \begin{cases} c_{k,l}^n &\text{ if } r_{l-1}^n\leq r<r_l^n, \\ 0 &\text{ if } r\geq r_{N_v}^n=R^n. \end{cases} \end{equation} Suppose first that $R^n\leq R^{n-1}$. For $1\leq l\leq N_v-1$, we discretize \eqref{fkab} to obtain an equation for $c_{k,l}^n$: \begin{equation} \begin{split} \frac{4\pi}{3}((r_l^n)^3-(r_{l-1}^n)^3)c_{k,l}^n&= \int_{r_{l-1}^n}^{r_l^n} 4\pi r^2c_{k,i}^{n-1}(r)dr\\ &+4\pi \paren{(r_{l-1}^n)^2f_{k,l-1}^n-(r_l^n)^2f_{k,l}^n}\Delta t.\label{ckln} \end{split} \end{equation} where $f_{k,l}^n$ is set to $0$ for $l=0$ and \begin{equation} f_{k,l}^n= -D_k\paren{\frac{c_{k,l}^n-c_{k,l-1}^n}{\Delta x_i} +\frac{c_{k,l}^n+c_{k,l}^n}{2}\frac{\phi_{k,l}^n-\phi_{k,l-1}^n}{\Delta x_i}}, \text{ for } 1\leq l\leq N_v-1, \end{equation} where $\Delta x_i=R^n/N_v$. Note that the integral in \eqref{ckln} can be evaluated exactly given expression \eqref{ckinr}. As for segment $l=N_v$, we view the endpoint $r_{N_v}^n=R^n$ as having evolved from $R^{n-1}$, and thus discretize \eqref{fkmem}. We have: \begin{equation}\label{ckmem} \begin{split} \frac{4\pi}{3}((r_{N_v-1}^n)^3-(R^n)^3)c_{k,N_v}^n&= \int_{r_{N_v-1}^n}^{R^{n-1}} 4\pi r^2c_{k,i}^{n-1}(r)dr\\ &+4\pi \paren{(r_{N_v-1}^n)^2f_{k,N_v-1}^n-(R^n)^2\alpha(a_k^n+j_k^n)}\Delta t. \end{split} \end{equation} The important point here is that the upper end point of the above integral is $R^{n-1}$ and not $R^n$. The total membrane fluxes $(R^n)^2a_k^n$ and $(R^n)^2j_k^n$ are evaluated at time $n\Delta t$, and are thus functions of $c_{k,N_v}^n, c_{k,N_v+1}^n$ and $[\phi]^n=\phi_{N_v}^n-\phi_{N_v+1}^{n}$. If $R^n>R^{n-1}$, the discretized equations are the same as \eqref{ckln} and \eqref{ckmem} except that in \eqref{ckmem} the upper endpoint of the integral in is $R^n$ instead of $R^{n-1}$. The fact that the endpoint of the integral is time-dependent in \eqref{fkmem} is taken into account by the $0$ extension of $c_{k,i}^{n-1}(r)$ when $r\geq R^{n-1}$ (see \eqref{ckinr}). The final equation we impose is that electroneutrality be satisfied in each segment: \begin{equation}\label{discEN} \sum_{k=1}^N z_kc_{k,l}^n=0 \text{ for all } l. \end{equation} For the extracellular segments $N_v+1\leq l\leq 2N_v$, we essentially use the same discretization as in the intracellular segments. The only difference is in treating boundary conditions at the $l=2N_v$ segment. We impose either no-flux or Dirichlet boundary conditions. For no-flux boundary conditions, we simply let $f_{k,N_v}^n=0$ in \eqref{ckln} for $l=2N_v$. Suppose the Dirichlet boundary conditions are given by: \begin{equation} c_k(R_\text{out},t)=c_{k,e}. \end{equation} In this case, we set: \begin{equation}\label{discbc} c_{k,e}^n=c_{k,e}, \; c_{k,e}^n =\frac{3}{2}c_{k,2N_v}^n-\frac{1}{2}c_{k,2N_v-1}^n. \end{equation} For either boundary condition, the electrostatic potential is determined only up to an additive constant, and we thus set $\phi_e^n=3\phi_{2N_v}^n/2-\phi_{2N_v-1}^n/2=0$. For the second substep, we thus have equations \eqref{ckln}, \eqref{ckmem} and \eqref{discEN} with suitable boundary conditions at $r^n_{2N_v}=R_\text{out}$, which we must solve for $c_{k,l}^n$ and $\phi_l^n$. This system of nonlinear algebraic equations is solved using a Newton iteration where the Jacobian matrix is computed analytically. In all simulations reported here, we obtained convergence to within a relative tolerance of $10^{-12}$ within less than $4$ iterations. In particular, the electroneutrality condition at each time step was satisfied at each point to within $6\times 10^{-14}$mmol/$\ell$ for all simulation results shown below. Note that the discretization is conservative. For example, we have: \begin{equation} \sum_{l=1}^{2N_v} \frac{4\pi}{3}((r_l^n)^3-(r_{l-1}^n)^3)c_{k,l}^n=\text{const} \end{equation} so long as we impose the no-flux boundary condition at $r=R^{\text{out}}$. We have checked this property numerically, we achieve conservation of ions to $14$ to $15$ digits. This property is very important in studying long time behavior. We would also like to comment on our use of the backward Euler scheme and the Newton iteration in the second substep of each time step. Rather than use a backward Euler step, we may split the second substep further into two substeps. In the first substep, one compute the updates of $\phi$ given values of $c_k$ at time $(n-1)\Delta t$ and in the second substep, we update $c_k$ using the updated $\phi$. A variant of this scheme is to use the above as one step of a fixed point iteration to solve the backward Euler problem. An advantage of these schemes is that the associated matrix problem is much simpler and smaller than that of a full Newton iteration we use in this paper. This was indeed the first algorithm we used in our attempt to simulate the system. This algorithm, however, turned out to have serious stability and convergence issues and led to large pile-up of charges close to the membrane. This difficulty was clearly caused by the moving membrane. Indeed, a similar algorithm was successfully used in \cite{CAMCoS} to simulate a similar but higher dimensional system, in which the membrane was stationary. We also found that if $\Delta t$ or the membrane velocity is very small, the fixed-point algorithm does produce computational results in agreement with those obtained using a Newton iteration. We do point out that even the backward Euler, Newton scheme, that we use here was not unconditionally stable, though the time step restriction was never serious. A more stable algorithm may be possible by developing a scheme in which the membrane position and concentrations (and electrostatic potential) are computed simultaneously. We now describe the model example we simulate. The cell membrane of animal cells is not mechanically strong enough to resist osmotic pressure due to the presence of organic solutes in the cell. Cell volume control is achieved by actively maintaining a concentration gradient of ions across the cell membrane. Many modeling studies have been performed to study cell volume control in animal cells. To the best of our knowledge, all such studies use ODE systems in which the cellular and extracellular concentrations are assumed to have no spatial variation \cite{KS,hoppensteadt2002modeling,tosteson1960regulation,tosteson1964regulation,jakobsson1980interactions}. The novelty here is that we use the PDE system \eqref{radial}, a field theory, to study cell volume control. We consider a generic spherical animal cell whose sodium and potassium concentration differences across the membrane is maintained by the presence of the Na-K ATPase. Henceforth, we shall use variables with their original dimensions, since we will be dealing with a concrete biophysical setup. We consider four species of ion, Na$^+$, K$^+$, Cl$^-$ and the organic anions, which we index as $k=1,\cdots, 4$ in this order. The diffusion coefficients of the four species is given in the Table \ref{ionparams}. We make the simplification that the organic anions are a homogeneous species with a single diffusion coefficient. The diffusion coefficient for the organic anion is somewhat arbitrary, one order of magnitude smaller than the small inorganic ions. We take the initial radius of the spherical cell to be $R_0$. We let the outer edge of the simulation domain $R_\text{out}=2R_0$. We assume that the membrane does not generate any mechanical force, so that $F_\text{elas}=0$. Passive water flow across the membrane is proportional to the water chemical potential. Given that $F_\text{elas}=0$, water flow across the cell membrane is driven osmotic pressure difference across the membrane: \begin{equation} j_w=\zeta N_Ak_BT\sum_{k=1}^4 \jump{c_k} \end{equation} where $N_A$ is the Avogadro constant (so that $N_Ak_B$ is the ideal gas constant), $T$ is the absolute temperature, $c_k$ is measured in mmol$/\ell$ and $\zeta$ is measured in velocity per pressure. For the passive membrane flux $j_k$, we take expression \eqref{GHK}: \begin{equation} j_k=\frac{R_0^2}{R(t)^2}P_kz_k\phi'\paren{ \frac{\at{c_k}{R-}\exp(z_k\phi')-\at{c_k}{R+}} {\exp(z_k\phi')-1}}, \; \phi'=\frac{q\jump{\phi}}{k_BT}. \end{equation} where the subscript $R-$ and $R+$ denote evaluation at the inner and outer faces of the membrane. This choice is standard for cell volume studies \cite{strieter_volume-activated_1990,lew_behaviour_1979}. The number $P_k$ is measured in cm/sec and is the permeability of a unit area of membrane for ionic species $k$ when the radius of the cell is $R_0$. Assuming that this permeability is determined by the presence of ionic channels and that the number of ionic channels remains constant, $j_k$ must be made inversely proportional to the membrane area. For sodium, potassium and chloride, $P_k$ is positive but we set the permeability for organic solutes to $0$. We follow \cite{lew_behaviour_1979} to use the following expression for the Na-K ATPase flux: \begin{equation} a_1=A_p\paren{\frac{\at{c_1}{R-}}{\at{c_1}{R-}+K_{Na}}}^3 \paren{\frac{\at{c_2}{R+}}{\at{c_2}{R+}+K_K}}^2, \; a_2=-\frac{2}{3}a_1. \end{equation} Recall here that $a_1, c_1$ are the active Na$^+$ flux and concentration respectively and $a_2, c_2$ are the active K$^+$ flux and concentration respectively. The exponents of $3, 2$ and the factor of $-2/3$ reflects the $3:2$ stoichiometry of the NaK ATPase in pumping Na$^+$ out and K$^+$ into the cell. The constants $K_{Na}$ and $K_k$ are given in Table \ref{miscparams}. All constants and initial conditions are given in Tables \ref{miscparams} and \ref{ionparams}. Initial concentrations are assumed spatially uniform. The constants that are not listed in the tables are computed so that the initial state is a stationary state under no-flux boundary conditions at $R=R_\text{out}$. This is similar to what is done in \cite{lew_behaviour_1979}. This procedure determines the initial intracellular Cl$^-$ concentration, Na-K ATPase maximal pump rate $A_p$, K$^+$ permeability $p_2$, initial intracellular organic solute concentration, and the organic solute valence $z_4$. We point out that $\jump{\phi}^\text{init}$, the initial value of the membrane potential is only needed to compute the initial conditions. Once all the concentrations are known, the concentrations serve as the initial conditions and there is no need to know $[\phi]$ at the initial time to evolve the system forward. \begin{table} \begin{center} \begin{tabular}{|c|c||c|c|} \hline $T$ (K)& $273.15+37$ & $K_k$ (mmol/$\ell$)& $0.75$ \cite{lew_behaviour_1979}\\ $\zeta$ (cm/s/mPa) & $5.2507\times 10^{-13}$ \cite{strieter_volume-activated_1990} &$K_{Na}$ (mmol/$\ell)$ & $3.5$ \cite{lew_behaviour_1979}\\ $R_0$ (mm)& $0.5$ & $A_p$ (cm/s) & -\\ $R_\text{out}$ (mm)& $1$ & $\jump{\phi}^\text{init}$ (mV)& $-70$ \\ \hline \end{tabular} \end{center} \caption{Constants used in the numerical simulation. $\jump{\phi}^\text{init}$ is the initial membrane voltage. Symbols labeled with '-' are determined so that the initial condition is a stationary state (see main text). The ion related constants are listed in Table \ref{ionparams}.} \label{miscparams} \end{table} \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline & $z_k$ & $D_k$ (cm$^2$/s) & $P_k$ (cm/s) & $c_{k,\text{int}}^\text{init}$ & $c_{k,\text{ext}}^\text{init}$\\ \hline Na$^+$&$+1$&$1.33\times 10^{-5}$\cite{koch_biophysics_1999}&$1.0\times 10^{-7}$\cite{hernandez_modeling_1998}&$10$&$145$\\ K$^+$ &$+1$&$1.96\times 10^{-5}$\cite{koch_biophysics_1999}& - &$140$&$5$\\ Cl$^-$&$-1$&$2.03\times 10^{-5}$\cite{koch_biophysics_1999}&$1.0\times 10^{-7}$\cite{hernandez_modeling_1998}&-&$150$ \\ O.A. &-& $1.0\times 10^{-6}$& 0 &-&$0$\\ \hline \end{tabular} \end{center} \caption{Parameters related to ionic concentrations. O.A. stands for organic anions. The initial intracellular and extracellular concentrations are given by $c_{k,\text{int}}^\text{init}$ and $c_{k,\text{ext}}^\text{init}$ respectively (listed here in mmol/$\ell$). Symbols labeled with '-' are determined so that the initial condition is a stationary state (see main text). Other parameters are listed in Table \ref{miscparams}.} \label{ionparams} \end{table} In the simulations to follow, we took $N_v=100$ and the time step $\Delta t=500$ms. We perform the following numerical experiments. Starting with the initial conditions specified above with no-flux boundary conditions, we set the following Dirichlet boundary conditions for $t\geq 10$s: \begin{equation} c_{1,e}=100, \; c_{2,e}=50, \; c_{3,e}=150,\label{hKcb} \end{equation} where the units are in mmol/$\ell$. The boundary concentrations are thus isotonic with the initial concentrations, but the extracellular K$^+$ concentration is now increased $10$-fold. Such a stimulus should lead to immediate depolarization together with expansion of the cell. The computational results are given in Figure \ref{highK}. What is interesting here is that there is a transient drop in the cell radius, followed by an expected gradual increase. This transient drop is due to the following. After a sudden change in the boundary condition, Na$^+$ ions diffuse out whereas K$^+$ ions should diffuse in from $R=R_\text{out}$. Since K$^+$ diffuses faster than Na$^+$, there is a transient increase in total ionic concentration near the membrane, leading to excess osmotic pressure immediately outside the cell compared to the inside. This gives rise to a transient drop in the cell radius. However, as the ionic concentration becomes spatially uniform within the extracellular and intracellular domains, the cell starts to expand. \begin{figure} \begin{centering} \includegraphics[width=\textwidth]{fig1.eps} \end{centering} \caption{Computational results under a high K$^+$ stimulus (see \eqref{hKcb}). The first five figures are the snapshots of the ionic concentrations and the electrostatic potential at $t=50$s. The horizontal axis represents the radius $r$. The last figure plots the cell radius $R(t)$ as a function of time.} \label{highK} \end{figure} The next computational results describe a hypotonic shock. We set the boundary conditions to the following for $t\geq 10$s: \begin{equation} c_{1,e}=100, \; c_{2,e}=5, \; c_{3,e}=105, \label{hypcb} \end{equation} where the concentrations are in mmol/$\ell$. A snapshot of the computational results are given in Figure \ref{hypotonic}. The cell expands due to the hypotonic shock but tends to a new stationary state with time. \begin{figure} \begin{centering} \includegraphics[width=\textwidth]{fig2.eps} \end{centering} \caption{Computational results under hypotonic stimulus (see \eqref{hypcb}). The first five figures are the snapshots of the ionic concentrations and the electrostatic potential at $t=100$s. The horizontal axis represents the radius $r$. The last figure plots the cell radius $R(t)$ as a function of time.} \label{hypotonic} \end{figure} \section{Conclusion}\label{conclusion} We introduced a PDE system of electrodiffusion and osmotic water flow in the presence of deformable capacitance-carrying membranes. The salient feature of the model is that it satisfies an energy equality, and thus possesses a natural thermodynamic structure. We discussed simplifications of the model and applied the electroneutral limit to the problem of cell volume control. In the proof of Theorem \ref{mainc}, we showed that the van t'Hoff expression for osmotic pressure arises naturally, simply through an integration by parts argument. This observation seems to be new. It is interesting that, in expression \eqref{FE}, the mechanical pressure $p$ and osmotic pressure $\pi_w$ only appear in the combination $\psi_w=p+\pi_w$. This is consistent with experimental results indicating that the effect of osmotic pressure on transmembrane water flow is indistinguishable from that of mechanical pressure \cite{finkelstein1987water}. The models introduced here are {\em sharp interface} models in the sense that the membrane is treated as a surface without thickness and the physical quantities of interest are allowed to have discontinuities across $\Gamma$. This is in contrast to {\em diffuse interface} models in which the membrane has some small but finite thickness and the physical quantities transition rapidly but smoothly across the interface. It should be possible to obtain at least parts of the model by taking the thin interface limit of an appropriate diffuse interface (or finite thickness) model. This may lead to a simpler verification of the energy identity of Theorem \ref{main}. Establishing such a connection may also help in understanding the physical nature of the capacitive force \eqref{FcapCmQ}. The calculations in Appendix \ref{capforce} may be seen as an initial step in establishing this relationship. Given the natural thermodynamic structure of the problem, it is almost certainly the case that our model has a {\em variational} structure. A variational principle for dissipative systems dates back to \cite{onsager1931reciprocal}. This procedure has been used successfully in deriving dynamic equations for soft matter systems \cite{doi1988theory,doi2009gel}. In \cite{eisenberg2010energy,hyon2010energetic}, a model for non-ideal electrolyte solutions is derived by combining, in the spirit of Rayleigh (see \cite{goldstein1980classical}), the principle of least action with the above variational principle for dissipative system. The energy identities introduced here provide a natural apriori estimate for our PDE system. For related systems, the corresponding free energy identity has been used successfully to prove stability of steady states \cite{biler2000long,ryham2009existence}. It would be interesting to see if similar analytical results can be obtained for the system proposed here. We hope that our model has wide-ranging applications in cellular physiology. In principle, our model is applicable to most problems of classical physiology \cite{davson1970textbook,boron2008medical,pappenheimer1987silver}. As we saw in Section \ref{simple}, our model admits simplifications when certain dimensionless parameters are small. In the short time scale when water movement is not significant, the system is reduced to the Poisson-Nernst-Planck model with interface boundary conditions. This and related models have been successfully applied in \cite{ leonetti_biomembrane_1998,leonetti_pattern_2004,mori_ephaptic_2008}. If the physiological processes of interest are slow and happen over a long time scale, the electroneutral limit may be taken. This was applied to the problem of cell volume control in Section \ref{animal} of this paper. Any serious application of our model will require the development of an efficient numerical algorithm. The electrodiffusive part of the problem with stationary membranes (without fluid flow) has been treated successfully in \cite{CAMCoS, brera2010conservative} in a two-dimensional setting. In the model presented here, the membrane interface is dynamic. We must therefore solve an electrodiffusive problem in a domain with a moving interface across which physical quantities experience discontinuities. We have presented successful computations in one-dimension for the electroneutral limit in Section \ref{animal}, but simulations are bound to be more challenging in higher dimension. If a regular mesh is to be used, immersed boundary or immersed interface schemes could be a major component of the algorithm \cite{layton2006modeling,ibmethod,li_immersed_2006}. Many physiological phenomena in which both electrodiffusion and osmosis play an important role take place over spatial scales of whole tissues or organs rather than the cellular spatial scale we focused on in this paper. Such systems include ocular fluid circulation, electrolyte regulation in the kidney or brain ionic homeostasis. For such systems, it is important to develop an appropriate homogenized model. In the context of cable models, this is known as the bidomain model, and has found great utility in many contexts, especially in cardiac electrophysiology \cite{KS,eisenberg_three-dimensional_1970,eisenberg1979electrical, mathias1979electrical,neu1993hst}. We shall report on a such a multidomain model in a future publication.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{ Introduction and preliminaries.} We consider the boundary value problem of the form \begin{equation} \label{1)} l\left(y\right):=-y''+\left[2\lambda p\left(x\right)+q\left(x\right)\right]y=\lambda ^{2} \delta \left(x\right)y,\, \, x\in \left[0,\pi \right]/\left\{a_{1} ,a_{2} \right\} \end{equation} with the boundary conditions \begin{equation} \label{2)} y'\left(0\right)=0,y\left(\pi \right)=0 \end{equation} and the jump conditions \begin{equation} \label{3)} y\left(a_{1} +0\right)=\alpha _{1} y\left(a_{1} -0\right) \end{equation} \begin{equation} \label{4)} y'\left(a_{1} +0\right)=\beta _{1} y'\left(a_{1} -0\right)+i\lambda \gamma _{1} y\left(a_{1} -0\right) \end{equation} \begin{equation} \label{5)} y\left(a_{2} +0\right)=\alpha _{2} y\left(a_{2} -0\right) \end{equation} \begin{equation} \label{6)} y'\left(a_{2} +0\right)=\beta _{2} y'\left(a_{2} -0\right)+i\lambda \gamma _{2} y\left(a_{2} -0\right) \end{equation} Where $\lambda $ is a spectral parameter, $p(x)\in W_{2}^{1} \left[0,\pi \right]$, $q(x)\in L_{2} \left[0,\pi \right]$ are real valued functions, $a_{1} \in \left[0,\frac{\pi }{2} \right]$, $a_{2} \in \left[\frac{\pi }{2} ,\pi \right]$ , $\alpha _{1} ,\alpha _{2} ,\gamma _{1} ,\gamma _{2} $ are real numbers, $\left|\alpha _{i} -1\right|^{2} +\gamma _{i} ^{2} \ne 0\, \, \left(\alpha _{i} >0;i=1,2\right)$, $\beta _{i} =\frac{1}{\alpha _{i} } \left(i=1,2\right)$ and\\ $\delta \left(x\right)=\left\{\begin{array}{l} {\alpha ^{2} ,\, \, \, \, x\in \left(0,\frac{\pi }{2} \right)} \\ {\beta ^{2} ,\, \, \, \, x\in \left(\frac{\pi }{2} ,\pi \right)} \end{array}\right. $ where $0<\alpha <\beta <1$,$\alpha +\beta >1$. \noindent The inverse problems consist in recoverint the coefficients of an operator from their spectral characteristics. A lot of study were done the inverse spectral problem for Sturm-Liouville operators and diffusion operators \cite{Acan,Gala,Amirov,Carlson,Ergün-1,F.Yang-1,Gesztes,Hryniv,Huang,Keldysh,Levin,Markus,Sakhnovich,Yang,Yang-1,Yurko,alpay,hald,hochstadt,Gala-1,koyunbakan,levitan,ozkan,wei,10}. The first results an inverse problems theory of Sturm-Liouville operators where given by Ambarzumyan $\left[2\right]$. The half inverse problems for Sturm-Liouville equations; the known potential in half interval is determined by the help of a one spectrum over the interval. First the obtained results the half inverse problem by Hochstadt and Lieberman \cite{hochstadt}. They proved that spectrum of the problem \[-y''+q\left(x\right)y=\lambda y,\, \, x\in \left[0,1\right]\] \[y'\left(0\right)-hy\left(0\right)=0\] \[y'\left(1\right)+Hy\left(1\right)=0\] and potential $q\left(x\right)$ on the $\left(\frac{1}{2} ,1\right)$uniquely determine the potential $q\left(x\right)$ on the whole interval $\left[0,1\right]$ almost everywhere. Hald \cite{hald} proved similar results in the case when there exists a impulsive conditions inside the interval. Many studies have been done by different authours for half invers problems using this methods \cite{koyunbakan,Sakhnovich}. In the work \cite{Sakhnovich} studied the existence of the solution for he half-inverse problem of Sturm-Liouville problems and gave method of reconstructing this solution under same conditions by Sakhnovich $\left[16\right]$. Recently, same new uniqueness results on the inverse or half inverse spectral analysis of differential operators have been given. Koyunbakan and Panakhov \cite{koyunbakan} proved the half inverse problem for diffusion operator on the finite interval $\left[0,\pi \right]$. Ran Zhang, Xiao-Chuan Xu, Chuan-Fu Yang and Natalia Pavlovna Bondarenko, proved the determination of the impulsive Sturm-Liouville operator from a set of eigenvalues \cite{10} . \noindent Purpose of this study is to prove half inverse problem by using the Hocstadt- Lieberman and Yang-Zettl methods for the following equations \begin{equation} \label{7)} \tilde{l}\left(y\right):=-y''+\left[2\lambda \tilde{p}\left(x\right)+\tilde{q}\left(x\right)\right]y=\lambda ^{2} \tilde{\delta }\left(x\right)y,\, \, x\in \left[0,\pi \right]/\left\{a_{1} ,a_{2} \right\} \end{equation} \begin{equation} \label{8)} y'\left(0\right)=0,y\left(\pi \right)=0 \end{equation} \begin{equation} \label{9)} y\left(a_{1} +0\right)=\tilde{\alpha }_{1} y\left(a_{1} -0\right) \end{equation} \begin{equation} \label{10)} y'\left(a_{1} +0\right)=\tilde{\beta }_{1} y'\left(a_{1} -0\right)+i\lambda \tilde{\gamma }_{1} y\left(a_{1} -0\right) \end{equation} \begin{equation} \label{11)} y\left(a_{2} +0\right)=\tilde{\alpha }_{2} y\left(a_{2} -0\right) \end{equation} \begin{equation} \label{12)} y'\left(a_{2} +0\right)=\tilde{\beta }_{2} y'\left(a_{2} -0\right)+i\lambda \tilde{\gamma }_{2} y\left(a_{2} -0\right). \end{equation} \begin{lemma}\label{lem:1} Let $p\left(x\right)\in W_{2}^{1} \left(0,\pi \right)$ ,$q\left(x\right)\in L_{2} \left(0,\pi \right)$. \textbf{$M\left(x,t\right)$ },\textbf{$N\left(x,t\right)$} are summable functions on $\left[0,\pi \right]$ such that the representation for each $x\in \left[0,\pi \right]/\left\{a_{1} ,a_{2} \right\}$. $\; \varphi \left(x,\lambda \right)$ solution of the equations $\left(1.1\right)$ , providing boundary conditions $\left(1.2\right)$ and discontinuity conditions $\left(1.3\right)-\left(1.6\right)$ \[\varphi \left(x,\lambda \right)=\varphi _{0} \left(x,\lambda \right)+\int _{0}^{x}M\left(x,t\right) \cos \lambda tdt+\int _{0}^{x}N\left(x,t\right) \sin \lambda tdt\] is satisfied, \noindent for $0<x<\frac{\pi }{2} $, \begin{equation} \label{13)} \begin{array}{l} {\varphi _{0} \left(x,\lambda \right)=} \\ {\left(\beta _{1} ^{+} +\frac{\gamma _{1} }{2\alpha } \right)\cos \left[\lambda \xi ^{+} \left(x\right)-\frac{1}{\alpha } \int _{a_{1} }^{x}p\left(t\right)dt \right]+\left(\beta _{1} ^{-} -\frac{\gamma _{1} }{2\alpha } \right)\cos \left[\lambda \xi ^{-} \left(x\right)+\frac{1}{\alpha } \int _{a_{1} }^{x}p\left(t\right)dt \right]} \end{array} \end{equation} for $\frac{\pi }{2} <x\le \pi $, \begin{equation} \label{14)} \begin{array}{l} {\varphi _{0} \left(x,\lambda \right)=\left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda k^{+} \left(\pi \right)-\frac{1}{\beta } \int _{a_{2} }^{\pi }p\left(t\right)dt \right]} \\ {+\left(\beta _{2} ^{-} +\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda k^{-} \left(\pi \right)-\frac{1}{\beta } \int _{a_{2} }^{\pi }p\left(t\right)dt \right]} \\ {+\left(\beta _{2} ^{-} -\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda s^{+} \left(\pi \right)+\frac{1}{\beta } \int _{a_{2} }^{\pi }p\left(t\right)dt \right]} \\ {+\left(\beta _{2} ^{+} -\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda s^{-} \left(\pi \right)+\frac{1}{\beta } \int _{a_{2} }^{\pi }p\left(t\right)dt \right]} \end{array} \end{equation} where $\xi ^{\pm } \left(x\right)=\pm \alpha x\mp \alpha a_{1} +a_{1} $ , $k^{\pm } \left(x\right)=\xi ^{+} \left(a_{2} \right)\pm \beta x\mp \beta a_{2} $,\\ $s^{\pm } \left(x\right)=\xi ^{-} \left(a_{2} \right)\pm \beta x\mp \beta a_{2} $,$\beta _{1} ^{\mp } =\frac{1}{2} \left(\alpha _{1} \mp \frac{\beta _{1} }{\alpha } \right)$ , $\beta _{2} ^{\mp } =\frac{1}{2} \left(\alpha _{2} \mp \frac{\alpha \beta _{2} }{\beta } \right)$ . \noindent Thus, following the relations hold; \noindent If $p\left(x\right)\in W_{2}^{2} \left(0,\pi \right),\, q\left(x\right)\in W_{2}^{1} \left(0,\pi \right)$ \[\left\{\begin{array}{l} {\frac{\partial ^{2} M\left(x,t\right)}{\partial x^{2} } -\rho \left(x\right)\frac{\partial ^{2} M\left(x,t\right)}{\partial t^{2} } =2p\left(x\right)\frac{\partial N\left(x,t\right)}{\partial t} +q\left(x\right)M\left(x,t\right)} \\ {\frac{\partial ^{2} N\left(x,t\right)}{\partial x^{2} } -\rho \left(x\right)\frac{\partial ^{2} N\left(x,t\right)}{\partial t^{2} } =-2p\left(x\right)\frac{\partial M\left(x,t\right)}{\partial t} +q\left(x\right)N\left(x,t\right)} \end{array}\right. \, \] \[M\left(x,\varsigma ^{+} \left(x\right)\right)\cos \frac{\beta \left(x\right)}{\alpha } +N\left(x,\varsigma ^{+} \left(x\right)\right)\sin \frac{\beta \left(x\right)}{\alpha } =\left(\beta _{1} ^{+} +\frac{\gamma _{1} }{2\alpha } \right)\int _{0}^{x}\left(q\left(t\right)+\frac{p^{2} \left(t\right)}{\alpha ^{2} } \right) dt\, \] \[M\left(x,\varsigma ^{+} \left(x\right)\right)\sin \frac{\beta \left(x\right)}{\alpha } -N\left(x,\varsigma ^{+} \left(x\right)\right)\cos \frac{\beta \left(x\right)}{\alpha } =\left(\beta _{1} ^{+} +\frac{\gamma _{1} }{2\alpha } \right)\left(p\left(x\right)-p\left(0\right)\right)\, \] \[\begin{array}{l} {M\left(x,k^{+} \left(x\right)+0\right)-M\left(x,k^{+} \left(x\right)-0\right)=} \\ {-\left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)\left(p\left(x\right)-p\left(0\right)\right)\, \sin \frac{\omega \left(x\right)}{\beta } -\left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)\int _{0}^{x}\left(q\left(t\right)+\frac{p^{2} \left(t\right)}{\beta ^{2} } \right) dt\, \cos \frac{\omega \left(x\right)}{\beta } } \end{array}\] \[\begin{array}{l} {N\left(x,k^{+} \left(x\right)+0\right)-N\left(x,k^{+} \left(x\right)-0\right)=} \\ {\left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)\left(p\left(x\right)-p\left(0\right)\right)\, \cos \frac{\omega \left(x\right)}{\beta } -\left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)\int _{0}^{x}\left(q\left(t\right)+\frac{p^{2} \left(t\right)}{\beta ^{2} } \right) dt\, \sin \frac{\omega \left(x\right)}{\beta } } \end{array}\] \[\left. \frac{\partial M\left(x,t\right)}{\partial t} \right|_{t=0} =N\left(x,0\right)=0\] where $\beta \left(x\right)=\int _{0}^{x}p\left(t\right)dt $,$\omega \left(x\right)=\int _{a_{2} }^{x}p\left(t\right)dt +\int _{0}^{a_{1} }p\left(t\right)dt $. \noindent The proof is done as in \cite{Ergün-1}. \end{lemma} \textbf{Definition.} The function $\Delta \left(\lambda \right)$ is called the characteristic function of the eigenvalues $\left\{\lambda _{n} \right\}$of the problem $\left(1.1\right)-\left(1.6\right)$. $\tilde{\Delta }\left(\lambda \right)$ is called the characteristic function of the eigenvalues $\left\{\tilde{\lambda }_{n} \right\}$of the problem $\left(1.7\right)-\left(1.12\right)$. \noindent Let $\lambda =s^{2} ,s=\sigma +i\tau \, ,\, \sigma ,\tau \in {\rm R}$. The solution $\varphi \left(x,\lambda \right)$ of $\left(1.1\right)-\left(1.6\right)$ have the following asymptotic formulas hold on for $\left|\lambda \right|\to \infty $, \noindent for $0<x<\frac{\pi }{2} $, \[\varphi \left(x,\lambda \right)=\frac{1}{2} \left(\frac{\alpha _{1} }{2} \mp \frac{\beta _{1} }{2\alpha } +\frac{\gamma _{1} }{2\alpha } \right)\exp \left(-i\left(\lambda \xi ^{+} \left(x\right)-\frac{v\left(x\right)}{\alpha } \right)\right)\left(1+O\left(\frac{1}{\lambda } \right)\right)\] for $\frac{\pi }{2} <x\le \pi $ , \[\varphi \left(x,\lambda \right)=\frac{1}{2} \left(\frac{\alpha _{2} }{2} +\frac{\alpha \beta _{2} }{2\beta } +\frac{\gamma _{2} }{2\beta } \right)\exp \left(-i\left(\lambda k^{+} \left(x\right)-\frac{t\left(x\right)}{\beta } \right)\right)\left(1+O\left(\frac{1}{\lambda } \right)\right).\] where $v\left(x\right)=\int _{a_{1} }^{x}p\left(t\right)dt $, $t\left(x\right)=\int _{a_{2} }^{x}p\left(t\right)dt $. \noindent \noindent \noindent In this study, if $q\left(x\right)$ and $p\left(x\right)$ to be known almost everywhere $\left(\frac{\pi }{2} ,\pi \right)$, sufficient to determine uniquely $p\left(x\right)$ and $q\left(x\right)$ whole interval $\left(0,\pi \right)$ . \section{main result} \noindent If $\varphi_{0} \left(x,\lambda \right)$ a nontrivial solution of equation $\left(1.1\right)$ with conditions $\left(1.2\right)$-$\left(1.6\right)$, then $\lambda _{0} $ is called eigenvalue. Additionally, $\varphi_{0} \left(x,\lambda \right)$ is called the eigenfunction of the problem corresponding to the eigenvalue $\lambda _{0} $. $\left\{\lambda _{n} \right\}$ are eigenvalues of the problem. \begin{lemma}\label{lem:1} If $\lambda _{n} =\tilde{\lambda }_{n} $, $\frac{\alpha }{\tilde{\alpha }} =\frac{\beta }{\tilde{\beta }} $ then $\alpha =\tilde{\alpha }$ and $\beta =\tilde{\beta }$ for all $n\in {\rm N}$. \end{lemma} \begin{proof} \noindent Since $\lambda _{n} =\tilde{\lambda }_{n} $ and $\Delta \left(\lambda \right),\, \tilde{\Delta }\left(\lambda \right)$are entire functions in $\lambda $ of order one by Hadamard factorization theorem for $\lambda \in {\rm C}$ \[\Delta \left(\lambda \right)\equiv C\, \tilde{\Delta }\left(\lambda \right)\] On the other hand, $\left(1.1\right)$ can be written as \[\Delta _{0} \left(\lambda \right)-C\, \tilde{\Delta }_{0} \left(\lambda \right)=C\left[\tilde{\Delta }\left(\lambda \right)-\, \tilde{\Delta }_{0} \left(\lambda \right)\right]-\left[\Delta \left(\lambda \right)-\, \Delta _{0} \left(\lambda \right)\right]\] Hence \begin{equation} \label{15)} \begin{array}{l} {C\left[\tilde{\Delta }\left(\lambda \right)-\, \tilde{\Delta }_{0} \left(\lambda \right)\right]-\left[\Delta \left(\lambda \right)-\, \Delta _{0} \left(\lambda \right)\right]=} \\ {\left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda k^{+} \left(\pi \right)-\frac{w\left(\pi \right)}{\beta } \right]+\left(\beta _{2} ^{-} +\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda k^{-} \left(\pi \right)-\frac{w\left(\pi \right)}{\beta } \right]} \\ {+\left(\beta _{2} ^{-} -\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda s^{+} \left(\pi \right)+\frac{w\left(\pi \right)}{\beta } \right]+\left(\beta _{2} ^{+} -\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda s^{-} \left(\pi \right)+\frac{w\left(\pi \right)}{\beta } \right]} \\ {-C\left(\tilde{\beta }_{2} ^{+} +\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)\cos \left[\lambda k^{+} \left(\pi \right)-\frac{\tilde{w}\left(\pi \right)}{\tilde{\beta }} \right]-C\left(\tilde{\beta }_{2} ^{-} +\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)\cos \left[\lambda k^{-} \left(\pi \right)-\frac{\tilde{w}\left(\pi \right)}{\tilde{\beta }} \right]} \\ {-C\left(\tilde{\beta }_{2} ^{-} -\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)\cos \left[\lambda s^{+} \left(\pi \right)+\frac{\tilde{w}\left(\pi \right)}{\tilde{\beta }} \right]-C\left(\tilde{\beta }_{2} ^{+} -\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)\cos \left[\lambda s^{-} \left(\pi \right)+\frac{\tilde{w}\left(\pi \right)}{\beta } \right]} \end{array} \end{equation} If we multiply both sides of $\left(2.1\right)$ by $\cos \left[\lambda k^{+} \left(\pi \right)-\frac{w\left(\pi \right)}{\beta } \right]$ and integrate with respect to $\lambda $ in $\left(\varepsilon ,T\right)$, ($\varepsilon $ is sufficiently small positive number) for any positive real number $T$, then we get \noindent \[\begin{array}{l} {\int _{\varepsilon }^{T}\left(C\left[\tilde{\Delta }\left(\lambda \right)-\, \tilde{\Delta }_{0} \left(\lambda \right)\right]-\left[\Delta \left(\lambda \right)-\, \Delta _{0} \left(\lambda \right)\right]\right)\cos \left[\lambda k^{+} \left(\pi \right)-\frac{w\left(\pi \right)}{\beta } \right] d\lambda =} \\ {+\int _{\varepsilon }^{T}\left\{\left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda k^{+} \left(\pi \right)-\frac{w\left(\pi \right)}{\beta } \right]+\left(\beta _{2} ^{-} +\frac{\gamma _{2} }{2\beta } \right)\cos \right. \left[\lambda k^{-} \left(\pi \right)-\frac{w\left(\pi \right)}{\beta } \right] } \\ {+\left(\beta _{2} ^{-} -\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda s^{+} \left(\pi \right)+\frac{w\left(\pi \right)}{\beta } \right]+\left(\beta _{2} ^{+} -\frac{\gamma _{2} }{2\beta } \right)\cos \left[\lambda s^{-} \left(\pi \right)+\frac{w\left(\pi \right)}{\beta } \right]} \\ {-C\left(\tilde{\beta }_{2} ^{+} +\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)\cos \left[\lambda k^{+} \left(\pi \right)-\frac{\tilde{w}\left(\pi \right)}{\tilde{\beta }} \right]-C\left(\tilde{\beta }_{2} ^{-} +\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)\cos \left[\lambda k^{-} \left(\pi \right)-\frac{\tilde{w}\left(\pi \right)}{\tilde{\beta }} \right]} \\ {\left. -C\left(\tilde{\beta }_{2} ^{-} -\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)\cos \left[\lambda s^{+} \left(\pi \right)+\frac{\tilde{w}\left(\pi \right)}{\tilde{\beta }} \right]-C\left(\tilde{\beta }_{2} ^{+} -\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)\cos \left[\lambda s^{-} \left(\pi \right)+\frac{\tilde{w}\left(\pi \right)}{\beta } \right]\right\}d\lambda } \end{array}\] And so \[\begin{array}{l} {\int _{\varepsilon }^{T}\left(C\left[\tilde{\Delta }\left(\lambda \right)-\, \tilde{\Delta }_{0} \left(\lambda \right)\right]-\left[\Delta \left(\lambda \right)-\, \Delta _{0} \left(\lambda \right)\right]\right)\cos \left[\lambda k^{+} \left(\pi \right)-\frac{w\left(\pi \right)}{\beta } \right] d\lambda =} \\ {\int _{\varepsilon }^{T}\left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)\cos ^{2} \left[\lambda k^{+} \left(\pi \right)-\frac{w\left(\pi \right)}{\beta } \right] d\lambda } \\ {-C\int _{\varepsilon }^{T}\left(\tilde{\beta }_{2} ^{+} +\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)\cos \left[\lambda k^{+} \left(\pi \right)-\frac{w\left(\pi \right)}{\beta } \right]\cos \left[\lambda k^{+} \left(\pi \right)-\frac{\tilde{w}\left(\pi \right)}{\tilde{\beta }} \right] d\lambda } \end{array}\] \[\begin{array}{l} {=\int _{\varepsilon }^{T}\frac{1}{2} \left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)+\frac{1}{2} \left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)\cos \left[2\lambda k^{+} \left(\pi \right)-\frac{2w\left(\pi \right)}{\beta } \right] d\lambda } \\ {-C\int _{\varepsilon }^{T}\frac{1}{2} \left(\tilde{\beta }_{2} ^{+} +\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)\left(\cos \left[2\lambda k^{+} \left(\pi \right)-\frac{\tilde{w}\left(\pi \right)+w\left(\pi \right)}{\beta } \right]+\cos \left[\frac{w\left(\pi \right)-\tilde{w}\left(\pi \right)}{\tilde{\beta }} \right]\right) d\lambda } \end{array}\] $\Delta \left(\lambda \right)-\Delta _{0} \left(\lambda \right)=O\left(\frac{1}{\left|\lambda \right|} e^{\left|Im\lambda \right|k^{+} \left(\pi \right)} \right)$, $\tilde{\Delta }\left(\lambda \right)-\tilde{\Delta }_{0} \left(\lambda \right)=O\left(\frac{1}{\left|\lambda \right|} e^{\left|Im\lambda \right|k^{+} \left(\pi \right)} \right)$ for all $\lambda $ in $\left(\varepsilon ,T\right)$. \[\frac{C}{2} \left(\tilde{\beta }_{2} ^{+} +\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} \right)-\frac{1}{2} \left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)=O\left(\frac{1}{T} \right)\] By letting $T$ tend to infinity we see that \[C=\frac{\tilde{\beta }_{2} ^{+} +\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} }{\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } } \] Similarly, if we multiply both side of $\left(2.1\right)$ $\cos \left[\lambda k^{-} \left(\pi \right)-\frac{w\left(\pi \right)}{\beta } \right]$ and integrate again with respect to $\lambda $ in $\left(\varepsilon ,T\right)$ and by letting $T$ tend to infinity, then we get \[C=\frac{\tilde{\beta }_{2} ^{-} +\frac{\tilde{\gamma }_{2} }{2\tilde{\beta }} }{\beta _{2} ^{-} +\frac{\gamma _{2} }{2\beta } } \] But since $\alpha ,\beta $ and $\tilde{\alpha },\tilde{\beta }$ are positive, since $w^{+} \left(\pi \right)-\tilde{w}^{+} \left(\pi \right)=w^{-} \left(\pi \right)-\tilde{w}^{-} \left(\pi \right)$ we conclude that $C=1$. Hence $\frac{\tilde{\beta }_{2} ^{+} }{\beta _{2} ^{+} } =\frac{\tilde{\beta }_{2} ^{-} }{\beta _{2} ^{-} } $ is obtained. We have therefore proved since $\alpha =\tilde{\alpha }$ that $\beta =\tilde{\beta }$. \noindent The proof is completed. \end{proof} \begin{lemma}\label{lem:1}If $\lambda _{n} =\tilde{\lambda }_{n} $ then $\alpha _{i} =\tilde{\alpha }_{i} $ and $\gamma _{i} =\tilde{\gamma }_{i} $ $\left(i=1,2\right)$for all $n\in {\rm N}$. \noindent The proof is done as in \cite{Ergün-1}. \end{lemma} \noindent \begin{theorem}\label{1} Let $\left\{\lambda _{n} \right\}$ a eigenvalues of both problem $\left(1.1\right)-\left(1.6\right)$ and $\left(1.7\right)-\left(1.12\right)$. If $p\left(x\right)=\tilde{p}\left(x\right)$ and $q\left(x\right)=\tilde{q}\left(x\right)$ on $\left[\frac{\pi }{2} ,\pi \right]$ , then $p\left(x\right)=\tilde{p}\left(x\right)$ and $q\left(x\right)=\tilde{q}\left(x\right)$ almost everywhere on $\left[0,\pi \right]$. \end{theorem} \begin{proof}[Proof of Theorem {1}] Let function $\varphi \left(x,\lambda \right)$ the solution of equation $\left(1.1\right)$ under the conditions $\left(1.2\right)-\left(1.6\right)$ and the function $\tilde{\varphi }\left(x,\lambda \right)$ the solution of equation $\left(1.7\right)$ under the conditions $\left(1.8\right)-\left(1.12\right)$in $\left[0,\frac{\pi }{2} \right]$. The integral forms of the functions $\varphi \left(x,\lambda \right)$ and $\tilde{\varphi }\left(x,\lambda \right)$ can be obtained as follows \noindent \begin{equation} \label{16)} \begin{array}{l} {\varphi \left(x,\lambda \right)=\left(\beta _{1} ^{+} +\frac{\gamma _{1} }{2\alpha } \right)\cos \left[\lambda \xi ^{+} \left(x\right)-\frac{1}{\alpha } \int _{a_{1} }^{x}p\left(t\right)dt \right]} \\ {+\left(\beta _{1} ^{-} -\frac{\gamma _{1} }{2\alpha } \right)\cos \left[\lambda \xi ^{-} \left(x\right)+\frac{1}{\alpha } \int _{a_{1} }^{x}p\left(t\right)dt \right]+\int _{0}^{x}M\left(x,t\right)\cos \lambda tdt +\int _{0}^{x}N\left(x,t\right)\sin \lambda tdt } \end{array} \end{equation} and \begin{equation} \label{17)} \begin{array}{l} {\tilde{\varphi }\left(x,\lambda \right)=\left(\tilde{\beta }_{1} ^{+} +\frac{\tilde{\gamma }_{1} }{2\alpha } \right)\cos \left[\lambda \xi ^{+} \left(x\right)-\frac{1}{\alpha } \int _{a_{1} }^{x}\tilde{p}\left(t\right)dt \right]} \\ {+\left(\tilde{\beta }_{1} ^{-} -\frac{\tilde{\gamma }_{1} }{2\alpha } \right)\cos \left[\lambda \xi ^{-} \left(x\right)+\frac{1}{\alpha } \int _{a_{1} }^{x}\tilde{p}\left(t\right)dt \right]+\int _{0}^{x}\tilde{M}\left(x,t\right)\cos \lambda tdt +\int _{0}^{x}\tilde{N}\left(x,t\right)\sin \lambda tdt } \end{array} \end{equation} If we multiply equations $\left(2.2\right)$ and $\left(2.3\right)$\\ $\begin{array}{l} {\varphi \left(x,\lambda \right)\cdot \tilde{\varphi }\left(x,\lambda \right)=\frac{S^{+} \tilde{S}^{+} }{2} \left[\cos \left(2\lambda \xi ^{+} \left(x\right)-K\left(x\right)\right)+\cos L\left(x\right)\right]} \\ {+\frac{S^{+} \tilde{S}^{-} }{2} \left[\cos \left(2\lambda a_{1} t-L\left(x\right)\right)+\cos \left(2\lambda \alpha \left(x-a_{1} \right)-K\left(x\right)\right)\right]} \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \left[\cos \left(2\lambda a_{1} +L\left(x\right)\right)+\cos \left(2\lambda \alpha \left(x-a_{1} \right)+K\left(x\right)\right)\right]} \\ {+\frac{S^{-} \tilde{S}^{-} }{2} \left[\cos \left(2\lambda \xi ^{-} \left(x\right)+L\left(x\right)\right)+\cos K\left(x\right)\right]} \\ {+S^{+} \int _{0}^{x}\tilde{M}\left(x,t\right)\cos \left[\lambda \xi ^{+} \left(x\right)-\frac{t\left(x\right)}{\alpha } \right] \cos \lambda tdt} \\ {+S^{+} \int _{0}^{x}\tilde{N}\left(x,t\right)\cos \left[\lambda \xi ^{+} \left(x\right)-\frac{t\left(x\right)}{\alpha } \right] \sin \lambda tdt} \\ {+S^{-} \int _{0}^{x}\tilde{M}\left(x,t\right)\cos \left[\lambda \xi ^{-} \left(x\right)+\frac{t\left(x\right)}{\alpha } \right] \cos \lambda tdt} \\ {+S^{-} \int _{0}^{x}\tilde{N}\left(x,t\right)\cos \left[\lambda \xi ^{-} \left(x\right)+\frac{t\left(x\right)}{\alpha } \right] \sin \lambda tdt} \\ {+\tilde{S}^{+} \int _{0}^{x}M\left(x,t\right)\cos \left[\lambda \xi ^{+} \left(x\right)-\frac{\tilde{t}\left(x\right)}{\alpha } \right] \cos \lambda tdt} \\ {+\tilde{S}^{+} \int _{0}^{x}N\left(x,t\right)\cos \left[\lambda \xi ^{+} \left(x\right)-\frac{\tilde{t}\left(x\right)}{\alpha } \right] \sin \lambda tdt} \\ {+\tilde{S}^{-} \int _{0}^{x}M\left(x,t\right)\cos \left[\lambda \xi ^{-} \left(x\right)+\frac{\tilde{t}\left(x\right)}{\alpha } \right] \cos \lambda tdt} \\ {+\tilde{S}^{-} \int _{0}^{x}N\left(x,t\right)\cos \left[\lambda \xi ^{-} \left(x\right)+\frac{\tilde{t}\left(x\right)}{\alpha } \right] \sin \lambda tdt} \\ {+\left(\int _{0}^{x}M\left(x,t\right)\cos \lambda tdt \right)\left(\int _{0}^{x}\tilde{M}\left(x,t\right)\cos \lambda tdt \right)} \\ {+\left(\int _{0}^{x}N\left(x,t\right)\sin \lambda tdt \right)\left(\int _{0}^{x}\tilde{N}\left(x,t\right)\sin \lambda tdt \right)} \\ {+\left(\int _{0}^{x}M\left(x,t\right)\cos \lambda tdt \right)\left(\int _{0}^{x}\tilde{N}\left(x,t\right)\sin \lambda tdt \right)} \\ {+\left(\int _{0}^{x}\tilde{M}\left(x,t\right)\cos \lambda tdt \right)\left(\int _{0}^{x}N\left(x,t\right)\sin \lambda tdt \right)} \end{array}$ \noindent \begin{equation} \label{18)} \begin{array}{l} {\varphi \left(x,\lambda \right)\cdot \tilde{\varphi }\left(x,\lambda \right)=\frac{S^{+} \tilde{S}^{+} }{2} \left[\cos \left(2\lambda \xi ^{+} \left(x\right)-K\left(x\right)\right)+\cos L\left(x\right)\right]} \\ {+\frac{S^{+} \tilde{S}^{-} }{2} \left[\cos \left(2\lambda a_{1} t-L\left(x\right)\right)+\cos \left(2\lambda \alpha \left(x-a_{1} \right)-K\left(x\right)\right)\right]} \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \left[\cos \left(2\lambda a_{1} +L\left(x\right)\right)+\cos \left(2\lambda \alpha \left(x-a_{1} \right)+K\left(x\right)\right)\right]} \\ {+\frac{S^{-} \tilde{S}^{-} }{2} \left[\cos \left(2\lambda \xi ^{-} \left(x\right)+L\left(x\right)\right)+\cos K\left(x\right)\right]} \\ {+\frac{1}{2} \left\{\int _{0}^{x}U_{c} \left(x,t\right)\cos \left(2\lambda t-K\left(t\right)\right)dt -\int _{0}^{x}U_{s} \left(x,t\right)\sin \left(2\lambda t-K\left(t\right)\right)dt \right\}} \end{array} \end{equation} is obtained, being $S^{\pm } =\left(\beta _{1} ^{\pm } \mp \frac{\gamma _{1} }{2\alpha } \right)$, $\tilde{S}^{\pm } =\left(\tilde{\beta }_{1} ^{\pm } \mp \frac{\tilde{\gamma }_{1} }{2\alpha } \right)$, $K\left(x\right)=\frac{t\left(x\right)+\tilde{t}\left(x\right)}{2} $, $L\left(x\right)=\frac{t\left(x\right)-\tilde{t}\left(x\right)}{2} $,\\ $\begin{array}{l} {U_{c} \left(x,t\right)=S^{+} \tilde{M}\left(x,\xi ^{+} \left(x\right)-2t\right)\cos \left(K\left(t\right)-\frac{t\left(x\right)}{\alpha } \right)} \\ {+S^{-} \tilde{M}\left(x,\xi ^{-} \left(x\right)-2t\right)\cos \left(K\left(t\right)-\frac{t\left(x\right)}{\alpha } \right)} \\ {+\tilde{S}^{+} M\left(x,\xi ^{+} \left(x\right)-2t\right)\cos \left(K\left(t\right)-\frac{\tilde{t}\left(x\right)}{\alpha } \right)} \\ {+\tilde{S}^{-} M\left(x,\xi ^{-} \left(x\right)-2t\right)\sin \left(K\left(t\right)-\frac{\tilde{t}\left(x\right)}{\alpha } \right)} \\ {-S^{-} \tilde{N}\left(x,\xi ^{+} \left(x\right)-2t\right)\sin \left(K\left(t\right)-\frac{t\left(x\right)}{\alpha } \right)} \\ {-S^{-} \tilde{N}\left(x,\xi ^{-} \left(x\right)-2t\right)\sin \left(K\left(t\right)-\frac{t\left(x\right)}{\alpha } \right)} \\ {-\tilde{S}^{+} N\left(x,\xi ^{+} \left(x\right)-2t\right)\sin \left(K\left(t\right)-\frac{\tilde{t}\left(x\right)}{\alpha } \right)} \\ {-\tilde{S}^{-} N\left(x,\xi ^{-} \left(x\right)-2t\right)\sin \left(K\left(t\right)-\frac{\tilde{t}\left(x\right)}{\alpha } \right)} \\ {+K_{1} \left(x,t\right)\cos K\left(t\right)+K_{2} \left(x,t\right)\cos K\left(t\right)} \\ {+M_{1} \left(x,t\right)\sin K\left(t\right)+M_{2} \left(x,t\right)\sin K\left(t\right)} \end{array}$ \\ $\begin{array}{l} {U_{s} \left(x,t\right)=S^{+} \tilde{M}\left(x,\xi ^{+} \left(x\right)-2t\right)\sin \left(K\left(t\right)-\frac{t\left(x\right)}{\alpha } \right)} \\ {+S^{-} \tilde{M}\left(x,\xi ^{-} \left(x\right)-2t\right)\sin \left(K\left(t\right)-\frac{t\left(x\right)}{\alpha } \right)} \\ {+\tilde{S}^{+} M\left(x,\xi ^{+} \left(x\right)-2t\right)\sin \left(K\left(t\right)-\frac{\tilde{t}\left(x\right)}{\alpha } \right)} \\ {+\tilde{S}^{-} M\left(x,\xi ^{-} \left(x\right)-2t\right)\sin \left(K\left(t\right)-\frac{\tilde{t}\left(x\right)}{\alpha } \right)} \\ {+S^{+} \tilde{N}\left(x,\xi ^{+} \left(x\right)-2t\right)\cos \left(K\left(t\right)-\frac{t\left(x\right)}{\alpha } \right)} \\ {+S^{-} \tilde{N}\left(x,\xi ^{-} \left(x\right)-2t\right)\cos \left(K\left(t\right)-\frac{t\left(x\right)}{\alpha } \right)} \\ {+\tilde{S}^{+} N\left(x,\xi ^{+} \left(x\right)-2t\right)\cos \left(K\left(t\right)-\frac{\tilde{t}\left(x\right)}{\alpha } \right)} \\ {+\tilde{S}^{-} N\left(x,\xi ^{-} \left(x\right)-2t\right)\cos \left(K\left(t\right)-\frac{\tilde{t}\left(x\right)}{\alpha } \right)} \\ {+K_{1} \left(x,t\right)\sin K\left(t\right)+K_{2} \left(x,t\right)\sin K\left(t\right)} \\ {-M_{1} \left(x,t\right)\cos K\left(t\right)-M_{2} \left(x,t\right)\cos K\left(t\right)} \end{array}$ \[K_{1} \left(x,t\right)=\int _{-x}^{x-2t}M\left(x,s\right)\tilde{M}\left(x,s+2t\right) ds+\int _{2t-x}^{x}M\left(x,s\right)\tilde{M}\left(x,s+2t\right) ds\] \[K_{2} \left(x,t\right)=\int _{-x}^{x-2t}N\left(x,s\right)\tilde{N}\left(x,s+2t\right) ds+\int _{2t-x}^{x}n\left(x,s\right)\tilde{N}\left(x,s+2t\right) ds\] \[M_{1} \left(x,t\right)=\int _{-x}^{x-2t}M\left(x,s\right)\tilde{N}\left(x,s+2t\right) ds-\int _{2t-x}^{x}M\left(x,s\right)\tilde{N}\left(x,s+2t\right) ds\] \[M_{2} \left(x,t\right)=-\int _{-x}^{x-2t}N\left(x,s\right)\tilde{M}\left(x,s+2t\right) ds+\int _{2t-x}^{x}N\left(x,s\right)\tilde{M}\left(x,s+2t\right) ds\] Let $\varphi \left(x,\lambda \right)$ and $\tilde{\varphi }\left(x,\lambda \right)$ are substituted into $\left(1.1\right)$ and $\left(1.7\right)$, \begin{equation} \label{19)} -\varphi ''\left(x,\lambda \right)+\left(2\lambda p\left(x\right)+q\left(x\right)\right)\varphi \left(x,\lambda \right)=\lambda ^{2} \rho \left(x\right)\varphi \left(x,\lambda \right) \end{equation} \begin{equation} \label{20)} -\tilde{\varphi }''\left(x,\lambda \right)+\left(2\lambda p\left(x\right)+q\left(x\right)\right)\tilde{\varphi }\left(x,\lambda \right)=\lambda ^{2} \rho \left(x\right)\tilde{\varphi }\left(x,\lambda \right) \end{equation} The following equations is obtained $\left(2.5\right)$ and $\left(2.6\right)$ \\ $\begin{array}{l} {\int _{0}^{\frac{\pi }{2} }\varphi \left(x,\lambda \right)\tilde{\varphi }\left(x,\lambda \right)\left[2\lambda \left(p\left(x\right)-\tilde{p}\left(x\right)\right)+\left(q\left(x\right)-\tilde{q}\left(x\right)\right)\right] dx} \\ {=\left[\tilde{\varphi }'\left(x,\lambda \right)\varphi \left(x,\lambda \right)-\varphi '\left(x,\lambda \right)\tilde{\varphi }\left(x,\lambda \right)\right]_{0}^{\frac{\pi }{2} } +\left. \right|_{\frac{\pi }{2} }^{\pi } } \end{array}$ \begin{equation} \label{21)} \begin{array}{l} {\int _{0}^{\frac{\pi }{2} }\varphi \left(x,\lambda \right)\tilde{\varphi }\left(x,\lambda \right)\left[2\lambda \left(p\left(x\right)-\tilde{p}\left(x\right)\right)+\left(q\left(x\right)-\tilde{q}\left(x\right)\right)\right] dx} \\ {+\tilde{\varphi }'\left(\pi ,\lambda \right)\varphi \left(\pi ,\lambda \right)-\varphi '\left(\pi ,\lambda \right)\tilde{\varphi }\left(\pi ,\lambda \right)=0} \end{array} \end{equation} Let $Q\left(x\right)=q\left(x\right)-\tilde{q}\left(x\right)$ and $P\left(x\right)=p\left(x\right)-\tilde{p}\left(x\right)$ \[U\left(\lambda \right)=\int _{0}^{\frac{\pi }{2} }\left[2\lambda P\left(x\right)+Q\left(x\right)\right] \varphi \left(x,\lambda \right)\tilde{\varphi }\left(x,\lambda \right)dx\] It is obvious that the functions $\varphi \left(x,\lambda \right)$ and $\tilde{\varphi }\left(x,\lambda \right)$are the solutions which satisfy boundary value conditions of $\left(1.2\right)$ and $\left(1.8\right)$, recpectively, then if we consider this facts in equation $\left(2.7\right)$, we obtain the following equation \begin{equation} \label{22)} U\left(\lambda _{n} \right)=0 \end{equation} for each eigenvalue $\lambda _{n} $. Let us marked \[U_{1} \left(\lambda \right)=\int _{0}^{\frac{\pi }{2} }P\left(x\right) \varphi \left(x,\lambda \right)\tilde{\varphi }\left(x,\lambda \right)dx, U_{2} \left(\lambda \right)=\int _{0}^{\frac{\pi }{2} }Q\left(x\right) \varphi \left(x,\lambda \right)\tilde{\varphi }\left(x,\lambda \right)dx\] Then equations $\left(2.7\right)$ can be rewritten as \[2\lambda _{n} U_{1} \left(\lambda _{n} \right)+U_{2} \left(\lambda _{n} \right)=0.\] From $\left(2.4\right)$ ve $\left(2.7\right)$ we obtain \begin{equation} \label{23)} \left|U\left(\lambda \right)\right|\le \left(C_{1} +C_{2} \left|\lambda \right|\right)\exp \left(\tau \pi \right) \end{equation} $C_{1} ,C_{2} >0$ are constants. \noindent For all complex $\lambda $. Because $\lambda _{n} =\tilde{\lambda }_{n} $, $\Delta \left(\lambda \right)=\varphi \left(\pi ,\lambda \right)=\tilde{\varphi }\left(\pi ,\lambda \right)$. Thus, \[U\left(\lambda \right)=\int _{0}^{\frac{\pi }{2} }\left[2\lambda P\left(x\right)+Q\left(x\right)\right] \varphi \left(x,\lambda \right)\tilde{\varphi }\left(x,\lambda \right)dx=\Delta \left(\lambda \right)\left[\varphi \left(\pi ,\lambda \right)-\tilde{\varphi }\left(\pi ,\lambda \right)\right] .\] The function $\phi \left(\lambda \right)=\frac{U\left(\lambda \right)}{\Delta \left(\lambda \right)} $ is an entire function with respect to $\lambda $. \noindent It follows from $\Delta \left(\lambda \right)\ge \left(\left|\lambda \beta \right|-C\right)\exp \left(\tau \xi ^{+} \left(x\right)\right)$ and $\left(2.9\right)$, $\phi \left(\lambda \right)=O\left(1\right)$ for sufficient large $\left|\lambda \right|$. We obtain $\phi \left(\lambda \right)=C$, for all $\lambda $ by Liouville's Theorem.\\ $U\left(\lambda \right)=C\Delta \left(\lambda \right)$ \\ $\begin{array}{l} {\int _{0}^{\frac{\pi }{2} }\varphi \left(x,\lambda \right)\tilde{\varphi }\left(x,\lambda \right)\left[2\lambda P\left(x\right)+Q\left(x\right)\right] dx=} \\ {=C\left[\left(\beta _{2} ^{+} +\frac{\gamma _{2} }{2\beta } \right)R_{1} \left(a_{2} \right)\cos \left[\lambda k^{+} \left(\pi \right)-\frac{1}{\beta } \int _{a_{2} }^{\pi }p\left(t\right)dt \right]\right. } \\ {+\left(\beta _{2} ^{-} +\frac{\gamma _{2} }{2\beta } \right)R_{2} \left(a_{2} \right)\cos \left[\lambda k^{-} \left(\pi \right)-\frac{1}{\beta } \int _{a_{2} }^{\pi }p\left(t\right)dt \right]} \\ {+\left(\beta _{2} ^{-} -\frac{\gamma _{2} }{2\beta } \right)R_{1} \left(a_{2} \right)\cos \left[\lambda s^{+} \left(\pi \right)+\frac{1}{\beta } \int _{a_{2} }^{\pi }p\left(t\right)dt \right]} \\ {\left. +\left(\beta _{2} ^{+} -\frac{\gamma _{2} }{2\beta } \right)R_{2} \left(a_{2} \right)\cos \left[\lambda s^{-} \left(\pi \right)+\frac{1}{\beta } \int _{a_{2} }^{\pi }p\left(t\right)dt \right]\right]+O\left(\exp \left(\tau k^{+} \left(\pi \right)\right)\right)} \end{array}$\\ By the Riemann-Lebesgue lemma, for $\lambda \to \infty $ , $\lambda \in {\rm R}$ we get $C=0$. Then,\\ $\begin{array}{l} {2U_{1} \left(\lambda \right)=S^{+} \tilde{S}^{+} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos \left(2\lambda \xi ^{+} \left(x\right)-K\left(x\right)\right)dx } \\ {+S^{+} \tilde{S}^{+} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos L\left(x\right) dx} \\ {+S^{+} \tilde{S}^{-} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos \left(2\lambda a_{1} t-L\left(x\right)\right)dx } \\ {+S^{+} \tilde{S}^{-} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos \left(2\lambda \alpha \left(x-a_{1} \right)-K\left(x\right)\right)dx } \\ {+S^{-} \tilde{S}^{+} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos \left(2\lambda a_{1} +L\left(x\right)\right)dx } \\ {+S^{-} \tilde{S}^{+} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos \cos \left(2\lambda \alpha \left(x-a_{1} \right)+K\left(x\right)\right)dx } \\ {+S^{-} \tilde{S}^{-} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos \left(2\lambda \xi ^{-} \left(x\right)+L\left(x\right)\right)dx } \\ {+S^{-} \tilde{S}^{-} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos K\left(x\right)dx } \\ {+\int _{0}^{\frac{\pi }{2} }P\left(x\right)\left(\int _{0}^{x}U_{c} \left(x,t\right)\cos \left(2\lambda t-K\left(t\right)\right)dt \right) dx} \\ {-\int _{0}^{\frac{\pi }{2} }P\left(x\right)\left(\int _{0}^{x}U_{s} \left(x,t\right)\sin \left(2\lambda t-K\left(t\right)\right)dt \right) dx.} \end{array}$\\ where $\xi ^{\pm } \left(x\right)=\pm \alpha x\mp \alpha a_{1} +a_{1} $ , $k^{\pm } \left(x\right)=\mu ^{+} \left(a_{2} \right)\pm \beta x\mp \beta a_{2} $, \\ $s^{\pm } \left(x\right)=\mu ^{-} \left(a_{2} \right)\pm \beta x\mp \beta a_{2} ,\beta _{1} ^{\mp } =\frac{1}{2} \left(\alpha _{1} \mp \frac{\beta _{1} }{\alpha } \right) , \beta _{2} ^{\mp } =\frac{1}{2} \left(\alpha _{2} \mp \frac{\alpha \beta _{2} }{\beta } \right) .$ \\ $\begin{array}{l} {2U_{1} \left(\lambda \right)=\frac{S^{+} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{-i\left(K\left(t\right)\right)} e^{i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt +\frac{S^{+} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{i\left(K\left(t\right)\right)} e^{-i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt } \\ {+\frac{S^{+} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{-i\left(L\left(t\right)\right)} e^{i\left(2\lambda a_{1} t\right)} dt +\frac{S^{+} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{i\left(L\left(t\right)\right)} e^{-i\left(2\lambda a_{1} t\right)} dt } \\ {+\frac{S^{+} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{-i\left(K\left(t\right)\right)} e^{i\left(2\lambda \alpha \left(t-a_{1} \right)\right)} dt ++\frac{S^{+} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{i\left(K\left(t\right)\right)} e^{-i\left(2\lambda \alpha \left(t-a_{1} \right)\right)} dt } \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{i\left(L\left(t\right)\right)} e^{i\left(2\lambda a_{1} t\right)} dt +\frac{S^{-} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{-i\left(L\left(t\right)\right)} e^{i\left(2\lambda a_{1} t\right)} dt } \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{i\left(K\left(t\right)\right)} e^{i\left(2\lambda \alpha \left(t-a_{1} \right)\right)} dt +\frac{S^{-} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{-i\left(K\left(t\right)\right)} e^{i\left(2\lambda \alpha \left(t-a_{1} \right)\right)} dt } \\ {+\frac{S^{-} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{i\left(L\left(t\right)\right)} e^{-i\left(2\lambda \xi ^{-} \left(t\right)\right)} dt +\frac{S^{-} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }P\left(t\right)e^{-i\left(L\left(t\right)\right)} e^{i\left(2\lambda \xi ^{-} \left(t\right)\right)} dt } \\ {+S^{+} \tilde{S}^{+} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos L\left(x\right) dx+S^{-} \tilde{S}^{-} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos K\left(x\right)dx } \\ {+\int _{0}^{\frac{\pi }{2} }P\left(x\right)\left(\int _{0}^{x}U_{c} \left(x,t\right)\cos \left(2\lambda t-K\left(t\right)\right)dt \right) dx} \\ {-\int _{0}^{\frac{\pi }{2} }P\left(x\right)\left(\int _{0}^{x}U_{s} \left(x,t\right)\sin \left(2\lambda t-K\left(t\right)\right)dt \right) dx} \end{array}$ if necessary operations are performed and integrals are calculated\\ $\begin{array}{l} {2U_{1} \left(\lambda \right)=\frac{S^{+} \tilde{S}^{+} }{2} \left[\frac{T_{1} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{i\left(2\lambda \xi ^{+} \left(\frac{\pi }{2} \right)\right)} -\frac{T_{1} \left(0\right)}{2i\lambda \alpha } e^{2i\lambda \left(\alpha a_{1} +a_{1} \right)} -\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{1} ^{{'} } \left(t\right)e^{i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt \right]} \\ {+\frac{S^{+} \tilde{S}^{+} }{2} \left[-\frac{T_{2} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{-i\left(2\lambda \xi ^{+} \left(\frac{\pi }{2} \right)\right)} +\frac{T_{2} \left(0\right)}{2i\lambda \alpha } e^{-2i\lambda \left(\alpha a_{1} +a_{1} \right)} +\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{2} ^{{'} } \left(t\right)e^{-i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt \right]} \\ {+\frac{S^{+} \tilde{S}^{-} }{2} \left[\frac{T_{3} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{i\lambda a_{1} } -\frac{T_{3} \left(0\right)}{2i\lambda \alpha } -\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{3} ^{{'} } \left(t\right)e^{2ia_{1} t} dt \right]} \\ {+\frac{S^{+} \tilde{S}^{-} }{2} \left[-\frac{T_{4} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{i\lambda a_{1} } +\frac{T_{4} \left(0\right)}{2i\lambda \alpha } +\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{4} ^{{'} } \left(t\right)e^{-2ia_{1} t} dt \right]} \\ {+\frac{S^{+} S^{-} }{2} \left[\frac{T_{1} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{2i\lambda \alpha \left(\frac{\pi }{2} -a_{1} \right)} -\frac{T_{1} \left(0\right)}{2i\lambda \alpha } e^{-2i\lambda \alpha a_{1} } -\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{1} ^{{'} } \left(t\right)e^{2i\lambda \alpha \left(t-a_{1} \right)} dt \right]} \\ {+\frac{S^{+} S^{-} }{2} \left[-\frac{T_{2} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{-2i\lambda \alpha \left(\frac{\pi }{2} -a_{1} \right)} +\frac{T_{2} \left(0\right)}{2i\lambda \alpha } e^{2i\lambda \alpha a_{1} } +\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{2} ^{{'} } \left(t\right)e^{-2i\lambda \alpha \left(t-a_{1} \right)} dt \right]} \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \left[-\frac{T_{3} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{-i\lambda a_{1} \pi } +\frac{T_{3} \left(0\right)}{2i\lambda \alpha } +\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{3} ^{{'} } \left(t\right)e^{-2ia_{1} t} dt \right]} \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \left[\frac{T_{4} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{i\lambda a_{1} \pi } -\frac{T_{4} \left(0\right)}{2i\lambda \alpha } -\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{4} ^{{'} } \left(t\right)e^{2ia_{1} t} dt \right]} \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \left[-\frac{T_{1} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{-2i\lambda \alpha \left(\frac{\pi }{2} -a_{1} \right)} +\frac{T_{1} \left(0\right)}{2i\lambda \alpha } e^{2i\lambda \alpha a_{1} } +\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{1} ^{{'} } \left(t\right)e^{-2i\lambda \alpha \left(t-a_{1} \right)} dt \right]} \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \left[\frac{T_{2} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{2i\lambda \alpha \left(\frac{\pi }{2} -a_{1} \right)} -\frac{T_{2} \left(0\right)}{2i\lambda \alpha } e^{-2i\lambda \alpha a_{1} } -\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{2} ^{{'} } \left(t\right)e^{2i\lambda \alpha \left(t-a_{1} \right)} dt \right]} \\ {+\frac{S^{-} \tilde{S}^{-} }{2} \left[-\frac{T_{4} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{i\left(2\lambda \xi ^{-} \left(\frac{\pi }{2} \right)\right)} +\frac{T_{4} \left(0\right)}{2i\lambda \alpha } e^{2i\lambda \left(\alpha a_{1} +a_{1} \right)} +\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{4} ^{{'} } \left(t\right)e^{i\left(2\lambda \xi ^{-} \left(t\right)\right)} dt \right]} \\ {+\frac{S^{-} \tilde{S}^{-} }{2} \left[\frac{T_{3} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda \alpha } e^{-i\left(2\lambda \xi ^{-} \left(\frac{\pi }{2} \right)\right)} -\frac{T_{3} \left(0\right)}{2i\lambda \alpha } e^{2i\lambda \left(\alpha a_{1} -a_{1} \right)} -\frac{1}{2i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{3} ^{{'} } \left(t\right)e^{-i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt \right]} \\ {+S^{+} \tilde{S}^{+} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos L\left(x\right) dx+S^{-} \tilde{S}^{-} \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos K\left(x\right)dx } \\ {+\left[\frac{T_{5} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda } e^{i\pi \lambda } -\frac{T_{5} \left(0\right)}{2i\lambda } -\frac{1}{2i\lambda } \int _{0}^{\frac{\pi }{2} }T'_{1} \left(t\right)e^{2i\lambda t} dt \right]} \\ {+\left[-\frac{T_{6} \left({\raise0.7ex\hbox{$ \pi $}\!\mathord{\left/ {\vphantom {\pi 2}} \right. \kern-\nulldelimiterspace}\!\lower0.7ex\hbox{$ 2 $}} \right)}{2i\lambda } e^{-i\pi \lambda } +\frac{T_{6} \left(0\right)}{2i\lambda } +\frac{1}{2i\lambda } \int _{0}^{\frac{\pi }{2} }T'_{6} \left(t\right)e^{-2i\lambda t} dt \right]} \end{array}$ \noindent where $T_{1} \left(t\right)=P\left(t\right)e^{-i\left(K\left(t\right)\right)} $, $T_{2} \left(t\right)=P\left(t\right)e^{i\left(K\left(t\right)\right)} $, $T_{3} \left(t\right)=P\left(t\right)e^{-i\left(L\left(t\right)\right)} $, $T_{4} \left(t\right)=P\left(t\right)e^{i\left(L\left(t\right)\right)} $, $P_{1} \left(t\right)=\int _{t}^{\frac{\pi }{2} }P\left(x\right)U_{c} \left(x,t\right) dx, P_{2} \left(t\right)=\int _{t}^{\frac{\pi }{2} }P\left(x\right)U_{s} \left(x,t\right) dx$\\ $ , T_{5} \left(t\right)=\frac{P_{1} \left(t\right)+iP_{2} \left(t\right)}{2} e^{-iK\left(t\right)} ,T_{6} \left(t\right)=\frac{P_{1} \left(t\right)-iP_{2} \left(t\right)}{2} e^{iK\left(t\right)} $ \noindent By the Riemann-Lebesgue lemma $\int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos L\left(x\right) dx=0\, \, ,\, \,\\ \int _{0}^{\frac{\pi }{2} }P\left(x\right)\cos K\left(x\right)dx =0$ and $P\left(\frac{\pi }{2} \right)=0$ for $\lambda \to \infty $.\\ Thus, \begin{equation} \label{24)} \begin{array}{l} {2U_{1} \left(\lambda \right)=-\frac{S^{+} \tilde{S}^{+} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{1} ^{{'} } \left(t\right)e^{i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt +\frac{S^{+} \tilde{S}^{+} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{2} ^{{'} } \left(t\right)e^{-i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt } \\ {-\frac{S^{+} \tilde{S}^{-} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{3} ^{{'} } \left(t\right)e^{2ia_{1} t} dt +\frac{S^{+} \tilde{S}^{-} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{4} ^{{'} } \left(t\right)e^{-2ia_{1} t} dt } \\ {-\frac{S^{+} S^{-} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{1} ^{{'} } \left(t\right)e^{2i\lambda \alpha \left(t-a_{1} \right)} dt +\frac{S^{+} S^{-} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{2} ^{{'} } \left(t\right)e^{-2i\lambda \alpha \left(t-a_{1} \right)} dt } \\ {+\frac{S^{-} \tilde{S}^{+} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{3} ^{{'} } \left(t\right)e^{-2ia_{1} t} dt -\frac{S^{-} \tilde{S}^{+} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{4} ^{{'} } \left(t\right)e^{2ia_{1} t} dt } \\ {+\frac{S^{-} \tilde{S}^{+} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{1} ^{{'} } \left(t\right)e^{-2i\lambda \alpha \left(t-a_{1} \right)} dt +\frac{S^{-} \tilde{S}^{+} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{2} ^{{'} } \left(t\right)e^{2i\lambda \alpha \left(t-a_{1} \right)} dt } \\ {+\frac{S^{-} \tilde{S}^{-} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{4} ^{{'} } \left(t\right)e^{i\left(2\lambda \xi ^{-} \left(t\right)\right)} dt -\frac{S^{-} \tilde{S}^{-} }{4i\lambda \alpha } \int _{0}^{\frac{\pi }{2} }T_{3} ^{{'} } \left(t\right)e^{-i\left(2\lambda \xi ^{-} \left(t\right)\right)} dt } \\ {+\frac{i}{2\lambda } \int _{0}^{\frac{\pi }{2} }T'_{5} \left(t\right)e^{2i\lambda t} dt -\frac{i}{2\lambda } \int _{0}^{\frac{\pi }{2} }T'_{6} \left(t\right)e^{-2i\lambda t} dt } \end{array} \end{equation} $\begin{array}{l} {2U_{2} \left(\lambda \right)=S^{+} \tilde{S}^{+} \int _{0}^{\frac{\pi }{2} }Q\left(x\right)\left(\frac{e^{i\left(2\lambda \xi ^{+} \left(x\right)-K\left(x\right)\right)} +e^{-i\left(2\lambda \xi ^{+} \left(x\right)-K\left(x\right)\right)} }{2} \right)dx } \\ {+S^{+} \tilde{S}^{-} \int _{0}^{\frac{\pi }{2} }Q\left(x\right)\left(\frac{e^{i\left(2\lambda a_{1} t-L\left(x\right)\right)} +e^{-i\left(2\lambda a_{1} t-L\left(x\right)\right)} }{2} \right)dx } \\ {+S^{+} \tilde{S}^{-} \int _{0}^{\frac{\pi }{2} }Q\left(x\right)\left(\frac{e^{i\left(2\lambda \alpha \left(x-a_{1} \right)-K\left(x\right)\right)} +e^{-i\left(2\lambda \alpha \left(x-a_{1} \right)-K\left(x\right)\right)} }{2} \right)dx } \\ {+S^{-} \tilde{S}^{+} \int _{0}^{\frac{\pi }{2} }Q\left(x\right)\left(\frac{e^{i\left(2\lambda a_{1} t+L\left(x\right)\right)} +e^{-i\left(2\lambda a_{1} t+L\left(x\right)\right)} }{2} \right)dx } \\ {+S^{-} \tilde{S}^{+} \int _{0}^{\frac{\pi }{2} }Q\left(x\right)\left(\frac{e^{i\left(2\lambda \alpha \left(x-a_{1} \right)+K\left(x\right)\right)} +e^{-i\left(2\lambda \alpha \left(x-a_{1} \right)+K\left(x\right)\right)} }{2} \right)dx } \\ {+S^{-} \tilde{S}^{-} \int _{0}^{\frac{\pi }{2} }Q\left(x\right)\left(\frac{e^{i\left(2\lambda \xi ^{-} \left(x\right)+L\left(x\right)\right)} +e^{-i\left(2\lambda \xi ^{-} \left(x\right)+L\left(x\right)\right)} }{2} \right)dx } \\ {+S^{+} \tilde{S}^{+} \int _{0}^{\frac{\pi }{2} }Q\left(x\right)\cos L\left(x\right) dx+S^{-} \tilde{S}^{-} \int _{0}^{\frac{\pi }{2} }Q\left(x\right)\cos K\left(x\right)dx } \\ {+\int _{0}^{\frac{\pi }{2} }Q\left(x\right)\left(\int _{0}^{x}U_{c} \left(x,t\right)\cos \left(2\lambda t-K\left(t\right)\right)dt \right) dx} \\ {-\int _{0}^{\frac{\pi }{2} }Q\left(x\right)\left(\int _{0}^{x}U_{s} \left(x,t\right)\sin \left(2\lambda t-K\left(t\right)\right)dt \right) dx} \end{array}$ \noindent where $R_{1} \left(t\right)=Q\left(t\right)e^{-i\left(K\left(t\right)\right)} $, $R_{2} \left(t\right)=Q\left(t\right)e^{i\left(K\left(t\right)\right)} $, $R_{3} \left(t\right)=Q\left(t\right)e^{-i\left(L\left(t\right)\right)} $, $R_{4} \left(t\right)=Q\left(t\right)e^{i\left(L\left(t\right)\right)} ,Q_{1} \left(t\right)=\int _{t}^{\frac{\pi }{2} }P\left(x\right)U_{c} \left(x,t\right) dx,$\\ $ Q_{2} \left(t\right)=\int _{t}^{\frac{\pi }{2} }P\left(x\right)U_{s} \left(x,t\right) dx, R_{5} \left(t\right)=\frac{Q_{1} \left(t\right)+iQ_{2} \left(t\right)}{2} e^{-iK\left(t\right)} , $ $R_{6} \left(t\right)=\frac{Q_{1} \left(t\right)-iQ_{2} \left(t\right)}{2} e^{iK\left(t\right)} $\\ By the Riemann-Lebesgue lemma\\ $\int _{0}^{\frac{\pi }{2} }Q\left(x\right)\cos L\left(x\right) dx=0\, \, ,\, \, \int _{0}^{\frac{\pi }{2} }Q\left(x\right)\cos K\left(x\right)dx =0$. Thus, \begin{equation} \label{25)} \begin{array}{l} {2U_{2} \left(\lambda \right)=\frac{S^{+} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }R_{1} \left(t\right)e^{i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt +\frac{S^{+} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }R_{2} \left(t\right)e^{-i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt } \\ {+\frac{S^{+} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }R_{3} \left(t\right)e^{2ia_{1} t} dt +\frac{S^{+} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }R_{4} \left(t\right)e^{-2ia_{1} t} dt } \\ {+\frac{S^{+} S^{-} }{2} \int _{0}^{\frac{\pi }{2} }R_{1} \left(t\right)e^{2i\lambda \alpha \left(t-a_{1} \right)} dt +\frac{S^{+} S^{-} }{2} \int _{0}^{\frac{\pi }{2} }R_{2} \left(t\right)e^{-2i\lambda \alpha \left(t-a_{1} \right)} dt } \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }R_{3} \left(t\right)e^{-2ia_{1} t} dt +\frac{S^{-} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }R_{4} \left(t\right)e^{2ia_{1} t} dt } \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }R_{1} \left(t\right)e^{-2i\lambda \alpha \left(t-a_{1} \right)} dt +\frac{S^{-} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }R_{2} \left(t\right)e^{2i\lambda \alpha \left(t-a_{1} \right)} dt } \\ {+\frac{S^{-} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }R_{4} \left(t\right)e^{i\left(2\lambda \xi ^{-} \left(t\right)\right)} dt +\frac{S^{-} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }R_{3} \left(t\right)e^{-i\left(2\lambda \xi ^{-} \left(t\right)\right)} dt } \\ {+\frac{i}{2\lambda } \int _{0}^{\frac{\pi }{2} }R_{5} \left(t\right)e^{2i\lambda t} dt +\frac{i}{2\lambda } \int _{0}^{\frac{\pi }{2} }R_{6} \left(t\right)e^{-2i\lambda t} dt } \end{array} \end{equation} \begin{equation} \label{26)} 2\lambda U_{1} \left(\lambda \right)+U_{2} \left(\lambda \right)=0. \end{equation} If $\left(2.10\right)$ and $\left(2.11\right)$ are substituted into $\left(2.12\right)$, we get\\ $\begin{array}{l} {\frac{S^{+} \tilde{S}^{+} }{2\alpha } \int _{0}^{\frac{\pi }{2} }\left(R_{1} \left(t\right)+iT'_{1} \left(t\right)\right)e^{i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt +\frac{S^{+} \tilde{S}^{+} }{2\alpha } \int _{0}^{\frac{\pi }{2} }\left(R_{2} \left(t\right)-iT'_{2} \left(t\right)\right)e^{-i\left(2\lambda \xi ^{+} \left(t\right)\right)} dt } \\ {+\frac{S^{+} \tilde{S}^{-} }{2\alpha } \int _{0}^{\frac{\pi }{2} }\left(R_{3} \left(t\right)+iT'_{3} \left(t\right)\right)e^{2ia_{1} t} dt +\frac{S^{+} \tilde{S}^{-} }{2\alpha } \int _{0}^{\frac{\pi }{2} }\left(R_{4} \left(t\right)-iT'_{4} \left(t\right)\right)e^{-2ia_{1} t} dt } \\ {+\frac{S^{+} S^{-} }{2\alpha } \int _{0}^{\frac{\pi }{2} }\left(R_{1} \left(t\right)+iT'_{1} \left(t\right)\right)e^{2i\lambda \alpha \left(t-a_{1} \right)} dt +\frac{S^{+} S^{-} }{2\alpha } \int _{0}^{\frac{\pi }{2} }\left(R_{2} \left(t\right)-iT'_{2} \left(t\right)\right)e^{-2i\lambda \alpha \left(t-a_{1} \right)} dt } \\ {+\frac{S^{-} \tilde{S}^{+} }{2\alpha } \int _{0}^{\frac{\pi }{2} }\left(R_{4} \left(t\right)+iT'_{4} \left(t\right)\right)e^{-2ia_{1} t} dt +\frac{S^{-} \tilde{S}^{+} }{2\alpha } \int _{0}^{\frac{\pi }{2} }\left(R_{3} \left(t\right)-iT'_{3} \left(t\right)\right)e^{2ia_{1} t} dt } \\ {+\frac{S^{-} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }\left(R_{2} \left(t\right)+iT'_{2} \left(t\right)\right)e^{2i\lambda \alpha \left(t-a_{1} \right)} dt +\frac{S^{-} \tilde{S}^{+} }{2} \int _{0}^{\frac{\pi }{2} }\left(R_{1} \left(t\right)-iT'_{1} \left(t\right)\right)e^{-2i\lambda \alpha \left(t-a_{1} \right)} dt } \\ {+\frac{S^{-} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }\left(R_{4} \left(t\right)-iT'_{4} \left(t\right)\right)e^{i\left(2\lambda \xi ^{-} \left(t\right)\right)} dt +\frac{S^{-} \tilde{S}^{-} }{2} \int _{0}^{\frac{\pi }{2} }\left(R_{3} \left(t\right)+iT'_{3} \left(t\right)\right)e^{-i\left(2\lambda \xi ^{-} \left(t\right)\right)} dt } \\ {+\int _{0}^{\frac{\pi }{2} }\left(R_{5} \left(t\right)+iT'_{5} \left(t\right)\right)e^{2i\lambda t} dt +\int _{0}^{\frac{\pi }{2} }\left(R_{6} \left(t\right)-iT'_{6} \left(t\right)\right)e^{-2i\lambda t} dt =0} \end{array}$\\ Since the systems $\left\{e^{\pm 2i\lambda \xi ^{+} \left(t\right)} :\, \, \lambda \in {\rm R}\right\}$ , $\left\{e^{\pm 2i\lambda a_{1} t} :\, \, \lambda \in {\rm R}\right\}$, $\left\{e^{\pm 2i\lambda \alpha \left(t-a_{1} \right)} :\, \, \lambda \in {\rm R}\right\}$ and $\left\{e^{\pm 2i\lambda t} :\, \, \lambda \in {\rm R}\right\}$ are entire in $L_{2} \left(-\frac{\pi }{2} ,\frac{\pi }{2} \right)$, it follows \[\begin{array}{l} {R_{1} \left(t\right)+iT'_{1} \left(t\right)=0\, \, ,\, \, R_{2} \left(t\right)-iT'_{2} \left(t\right)=0\, \, ,\, \, R_{3} \left(t\right)+iT'_{3} \left(t\right)=0} \\ {R_{4} \left(t\right)-iT'_{4} \left(t\right)=0\, \, ,\, \, R_{1} \left(t\right)+iT'_{1} \left(t\right)=0\, \, ,\, \, R_{2} \left(t\right)-iT'_{2} \left(t\right)=0} \\ {R_{4} \left(t\right)+iT'_{4} \left(t\right)=0\, \, ,\, \, R_{3} \left(t\right)-iT'_{3} \left(t\right)=0\, \, ,\, \, R_{2} \left(t\right)+iT'_{2} \left(t\right)=0} \\ {R_{1} \left(t\right)-iT'_{1} \left(t\right)=0\, \, ,\, \, R_{4} \left(t\right)-iT'_{4} \left(t\right)=0\, \, ,\, \, \, R_{3} \left(t\right)+iT'_{3} \left(t\right)=0} \\ {R_{5} \left(t\right)+iT'_{5} \left(t\right)=0\, \, ,\, \, R_{6} \left(t\right)-iT'_{6} \left(t\right)=0} \end{array}\] Then, we get the following system. \[\begin{array}{l} {R_{5} \left(t\right)+iT'_{5} \left(t\right)=0\, \, } \\ {R_{6} \left(t\right)-iT'_{6} \left(t\right)=0} \end{array}\] and hence, \[\left\{\begin{array}{l} {\left[Q_{1} \left(t\right)+P_{1} \left(t\right)K'\left(t\right)-P_{2} ^{{'} } \left(t\right)\right]+i\left[Q_{2} \left(t\right)+P_{2} \left(t\right)K'\left(t\right)+P_{1} ^{{'} } \left(t\right)\right]=0} \\ {\left[Q_{1} \left(t\right)+P_{1} \left(t\right)K'\left(t\right)-P_{2} ^{{'} } \left(t\right)\right]-i\left[Q_{2} \left(t\right)+P_{2} \left(t\right)K'\left(t\right)+P_{1} ^{{'} } \left(t\right)\right]=0} \end{array}\right. \] and hence, \[\left\{\begin{array}{l} {Q_{1} \left(t\right)+P_{1} \left(t\right)K'\left(t\right)-P_{2} ^{{'} } \left(t\right)=0} \\ {Q_{2} \left(t\right)+P_{2} \left(t\right)K'\left(t\right)+P_{1} ^{{'} } \left(t\right)=0} \end{array}\right. \] \begin{equation} \label{27)} \left\{\begin{array}{l} {P'\left(t\right)=U_{c} \left(t,t\right)P\left(t\right)} \\ {-\int _{t}^{\frac{\pi }{2} }U_{s} \left(x,t\right)Q\left(x\right)dx-\int _{t}^{\frac{\pi }{2} }\left(K'\left(t\right)U_{s} \left(x,t\right)+\frac{\partial H_{s} \left(x,t\right)}{\partial t} \right)P\left(x\right)dx } \\ {} \\ {P\left(t\right)=-\int _{t}^{\frac{\pi }{2} }P'\left(x\right)dx } \\ {} \\ {Q\left(t\right)=-\left(K'\left(t\right)+U_{s} \left(t,t\right)\right)P\left(t\right)} \\ {-\int _{t}^{\frac{\pi }{2} }U_{c} \left(x,t\right)Q\left(x\right)dx-\int _{t}^{\frac{\pi }{2} }\left(K'\left(t\right)U_{c} \left(x,t\right)-\frac{\partial H_{s} \left(x,t\right)}{\partial t} \right)P\left(x\right)dx } \end{array}\right. \end{equation} If we mark this \[S\left(t\right)=\left(Q\left(t\right),P\left(t\right),P'\left(t\right)\right)^{T} \] and \[K\left(x,t\right)=\left(\begin{array}{ccc} {U_{c} \left(x,t\right)} & {K'\left(t\right)U_{c} \left(x,t\right)-\frac{\partial U_{s} \left(x,t\right)}{\partial t} } & {-\left(K'\left(t\right)+U_{s} \left(t,t\right)\right)} \\ {0} & {0} & {1} \\ {U_{s} \left(x,t\right)} & {K'\left(t\right)U_{s} \left(x,t\right)+\frac{\partial U_{s} \left(x,t\right)}{\partial t} } & {U_{c} \left(x,t\right)} \end{array}\right)\] Equations $\left(2.13\right)$ can be reduced to a vector from \begin{equation} \label{28)} S\left(t\right)+\int _{t}^{\frac{\pi }{2} }K\left(x,t\right)S\left(x\right)dx=0 \end{equation} for $0<t<\frac{\pi }{2} $. Since the equation $\left(2.14\right)$is a homogenous Volterra integral equations. Equation $\left(2.14\right)$ only has the trivial solution. Thus, we obtain \noindent $S\left(t\right)=0$ for $0<t<\frac{\pi }{2} $. \noindent This gives us \noindent $Q\left(t\right)=P\left(t\right)=0$ for $0<t<\frac{\pi }{2} $. \noindent Thus, we obtain $q\left(x\right)=\tilde{q}\left(x\right)$ and $p\left(x\right)=\tilde{p}\left(x\right)$ on $\left(0,\pi \right)$. The proof is comleted. \end{proof} \section*{Acknowledgement} Not applicable.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In order to extract the contributing resonances in photoproduction experiments, partial wave analyses need to be performed. A complete experiment is required to determine the contributing amplitudes. This involves the measurement of single and double polarization observables. For single pseudoscalar meson photoproduction using a linearly polarized photon beam and a transversely polarized target, the cross section can be written in the form \begin{equation} \frac{\ensuremath{\mathrm{d}}\sigma}{\ensuremath{\mathrm{d}}\Omega} = \left(\frac{\ensuremath{\mathrm{d}}\sigma}{\ensuremath{\mathrm{d}}\Omega}\right)_0 \left[1-P_{\ensuremath{\mathrm{\gamma}}}\Sigma\cos(2\phi) - P_xP_{\ensuremath{\mathrm{\gamma}}}H\sin(2\phi) - P_y\left(P_{\ensuremath{\mathrm{\gamma}}}P\cos(2\phi)-T\right)\right] \end{equation} where $\left(\frac{\ensuremath{\mathrm{d}}\sigma}{\ensuremath{\mathrm{d}}\Omega}\right)_0$ is the unpolarized cross section, $\Sigma$, $T$, $P$, and $H$ are the occurring polarization observables \cite{barker:1975}, $P_{\ensuremath{\mathrm{\gamma}}}$ the degree of linear photon polarization, and $\phi$ the azimuthal angle of the photon polarization plane with respect to the reaction plane. $P_x$ and $P_y$ are the degrees of target polarization in the direction of the produced meson and perpendicular to that, respectively. \section{Data analysis and preliminary results} The data presented has been obtained with the CBELSA/TAPS experiment at ELSA \cite{hillert:2006}. The detector system consists of two electromagnetic calorimeters, the Crystal Barrel \cite{aker:1992} and the MiniTAPS detector \cite{novotny:1991}, together covering the polar angle range from $1^\circ$ to $156^\circ$ and the full azimuthal angle. For charged particle identification, a three-layer scintillating fiber detector \cite{suft:2005} surrounding the target, and plastic scintillators in forward direction were used. The frozen spin butanol target \cite{bradtke:1999} was operated with a superconducting saddle coil providing a homogeneous magnetic field perpendicular to the beam direction, reaching an average target polarization of $\unit[74]{\%}$. Data has been taken with two opposite settings of the target polarization direction (named $\uparrow$ and $\downarrow$). For this analysis, a data set obtained with a primary electron energy of $\unit[3.2]{GeV}$ was used. The energy tagged photon beam was linearly polarized by means of coherent bremsstrahlung \cite{elsner:2009} with a maximum polarization of $\unit[65]{\%}$ at $E_\gamma = \unit[830]{MeV}$. Two perpendicular settings of the polarization plane were used (named $\parallel$ and $\perp$). The data sample was selected for events with three distinct calorimeter hits, with one of them charged and two uncharged. Further kinematic cuts to ensure momentum conservation were applied, and the $\vec{\ensuremath{\mathrm{\gamma}}}\,\vec{\ensuremath{\mathrm{p}}} \to \ensuremath{\mathrm{p}} \ensuremath{\mathrm{\pi^0}}$ and $\vec{\ensuremath{\mathrm{\gamma}}}\,\vec{\ensuremath{\mathrm{p}}} \to \ensuremath{\mathrm{p}} \ensuremath{\mathrm{\eta}}$ events were selected by applying a cut on the $\ensuremath{\mathrm{\gamma}}\Pgg$ invariant mass. This results in a final event sample containing a total of $1.4$ million $\ensuremath{\mathrm{p}}\ensuremath{\mathrm{\pi^0}}$ and $140000$ $\ensuremath{\mathrm{p}}\ensuremath{\mathrm{\eta}}$ events with a background contribution of $<\unit[1]{\%}$ and $<\unit[5]{\%}$, respectively. The selected events for each of the four combinations of beam and target polarization directions were normalized w.r.t. the number of events and average polarization. The target asymmetry $T$ can be determined using \begin{equation} \Delta N(\phi) = \frac{1}{f \cdot P_t} \cdot \frac{N_\uparrow-N_\downarrow}{N_\uparrow+N_\downarrow} = T \cdot \sin(\beta-\phi) ;\quad f(E_{\ensuremath{\mathrm{\gamma}}},\theta) = \frac{N_{butanol} - N_{carbon}}{N_{butanol}} \end{equation} with average target polarization $P_t$, $\beta=99^\circ$ being the direction of the target polarization in the $\uparrow$ setting, and the effective dilution factor $f$, which arises from the fact that not all protons in the butanol target are polarized. In order to determine $f$ for each $E_{\ensuremath{\mathrm{\gamma}}}$ and $\cos\theta$ bin, two additional data samples with unpolarized LH$_2$ and carbon targets were used. This data was normalized in such a way that the butanol data agrees with the sum of LH$_2$ and carbon data. The target asymmetry is determined by a fit to the $\Delta N(\phi)$ distributions as shown by the examples in Fig.\ref{fig:phidistr}. Two additional observables are accessible by using linear beam polarization and a transversely polarized target. In addition to the observable $H$, one also has access to the recoil polarization $P$, without the difficulties associated with a direct measurement of the recoil proton polarization. In order to extract both observables from the data, all four combinations of beam and target polarization settings are used: \begin{eqnarray} \Delta N(\phi) &=& \frac{1}{f \cdot P_{\ensuremath{\mathrm{\gamma}}} P_t} \cdot \frac{(N_{\perp\uparrow}-N_{\perp\downarrow})-(N_{\parallel\uparrow}-N_{\parallel\downarrow})}{(N_{\perp\uparrow}+N_{\perp\downarrow})+(N_{\parallel\uparrow}+N_{\parallel\downarrow})} \\ &=& P \sin(\beta-\phi)\cos(2(\alpha-\phi)) + H \cos(\beta-\phi)\sin(2(\alpha-\phi)) \notag \end{eqnarray} with average beam polarization $P_{\ensuremath{\mathrm{\gamma}}}$ and $\alpha=45^\circ$ being the direction of the polarization plane in the $\parallel$ setting. Again, the observables can easily be determined by a fit to the $\Delta N(\phi)$ distributions, as shown by the example in Fig.\ref{fig:phidistr}. \begin{figure} \begin{overpic}[width=0.33\textwidth]{fit_T_pi0} \put(10,6.5){\includegraphics[width=0.27\textwidth]{preliminary}} \put(31,2){\begin{tiny}$\beta$\end{tiny}} \put(10,65){\begin{tiny}$E_{\ensuremath{\mathrm{\gamma}}} = \unit[800]{MeV}$\end{tiny}} \put(35,65){\begin{tiny}$\cos\theta = 0.25$\end{tiny}} \put(0,65.5){\begin{footnotesize}(a)\end{footnotesize}} \end{overpic} \begin{overpic}[width=0.33\textwidth]{fit_T_eta} \put(10,6.5){\includegraphics[width=0.27\textwidth]{preliminary}} \put(31,2){\begin{tiny}$\beta$\end{tiny}} \put(10,65){\begin{tiny}$E_{\ensuremath{\mathrm{\gamma}}} = \unit[825]{MeV}$\end{tiny}} \put(35,65){\begin{tiny}$\cos\theta = 0.25$\end{tiny}} \put(0,65.5){\begin{footnotesize}(b)\end{footnotesize}} \end{overpic} \begin{overpic}[width=0.33\textwidth]{fit_PH_pi0} \put(10,6.5){\includegraphics[width=0.27\textwidth]{preliminary}} \put(18.5,2.2){\begin{tiny}$\alpha$\end{tiny}} \put(31,2){\begin{tiny}$\beta$\end{tiny}} \put(10,65){\begin{tiny}$E_{\ensuremath{\mathrm{\gamma}}} = \unit[800]{MeV}$\end{tiny}} \put(35,65){\begin{tiny}$\cos\theta = 0.25$\end{tiny}} \put(0,65.5){\begin{footnotesize}(c)\end{footnotesize}} \end{overpic} \label{fig:phidistr} \caption{Examples for measured $\phi$-distributions used to extract the target asymmetry $T$ in $\ensuremath{\mathrm{\pi^0}}$ (a) and $\ensuremath{\mathrm{\eta}}$ photoproduction (b), and $P$ and $H$ in $\ensuremath{\mathrm{\pi^0}}$ photoproduction (c).} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{pi0_50MeV} \caption{Preliminary distributions of $T$ (top 3 rows), $P$ (4th row), and $H$ (bottom row) for the reaction $\vec{\ensuremath{\mathrm{\gamma}}}\,\vec{\ensuremath{\mathrm{p}}}\to\ensuremath{\mathrm{p}}\ensuremath{\mathrm{\pi^0}}$ (black) as a function of the $\ensuremath{\mathrm{\pi^0}}$ CMS angle, compared to previous measurements of $T$ \cite{booth:1977} (gray) and the predictions of the BnGa \cite{bnga} (solid), SAID \cite{said:2009} (dotted), and MAID \cite{maid:2007} (dashed) analyses.} \label{fig:pi0} \end{figure} \begin{figure} \includegraphics[width=\textwidth,trim=0cm 0cm 0cm 8.66cm,clip]{eta_50MeV} \caption{Very preliminary distributions of $T$ for the reaction $\ensuremath{\mathrm{\gamma}}\,\vec{\ensuremath{\mathrm{p}}}\to\ensuremath{\mathrm{p}}\ensuremath{\mathrm{\eta}}$ (black) as a function of the $\ensuremath{\mathrm{\eta}}$ CMS angle, compared to previous measurements \cite{bock:1998} (gray) and the predictions of the BnGa \cite{bnga} (solid), SAID \cite{said:2009} (dotted), and MAID \cite{maid:2007} (dashed) analyses.} \label{fig:eta} \end{figure} Preliminary results for the observables $T$, $P$, and $H$ are shown in Fig. \ref{fig:pi0} for $\ensuremath{\mathrm{\pi^0}}$ photoproduction. The shown error bars so far only include statistical uncertainties. The agreement with previous measurements of $T$ performed at the Daresbury $\unit[5]{GeV}$ electron synchrotron \cite{booth:1977} is quite good. Very perliminary results for the target asymmetry $T$ in $\ensuremath{\mathrm{\eta}}$ photoproduction are shown in Fig. \ref{fig:eta}. The results seem to be inconsistent with previous measurements done at PHOENICS \cite{bock:1998}, but a detailed study of the systematic uncertainties needs to be done before a final conclusion can be drawn. \section{Summary} Data has been taken with the CBELSA/TAPS experiment using the newly developed transversely polarized target and a linearly polarized photon beam. The preliminary results show the excellent quality of the data for the reaction $\vec{\ensuremath{\mathrm{\gamma}}}\,\vec{\ensuremath{\mathrm{p}}} \to \ensuremath{\mathrm{p}}\ensuremath{\mathrm{\pi^0}}$. Further measurements are planned to increase the statistics to investigate other reactions and higher energies. Together with the measurements with a longitudinally polarized target and both linearly \cite{thiel:2010} or circularly polarized photon beams this is an important step towards a complete experiment and will provide further constraints for the partial wave analysis. \begin{theacknowledgments} This work was supported by the \emph{Deutsche Forschungsgemeinschaft} within SFB/TR-16. \end{theacknowledgments} \bibliographystyle{aipproc}
{ "redpajama_set_name": "RedPajamaArXiv" }